As part of MIT course 6S099, Artificial General Intelligence, I've gotten the chance to sit down with Max Tegmark. He is a professor here at MIT. He's a physicist, spent a large part of his career studying the mysteries of our cosmological universe. But he's also studied and delved into the beneficial possibilities and the existential risks of artificial intelligence. Amongst many other things, he is the cofounder of the Future of Life Institute, author of two books, both of which I highly recommend. First, Our Mathematical Universe. Second is Life 3.0. He's truly an out of the box thinker and a fun personality, so I really enjoy talking to him. If you'd like to see more of these videos in the future, please subscribe and also click the little bell icon to make sure you don't miss any videos. Also, Twitter, LinkedIn, agi.mit.edu if you wanna watch other lectures or conversations like this one. Better yet, go read Max's book, Life 3.0. Chapter seven on goals is my favorite. It's really where philosophy and engineering come together and it opens with a quote by Dostoevsky. The mystery of human existence lies not in just staying alive but in finding something to live for. Lastly, I believe that every failure rewards us with an opportunity to learn and in that sense, I've been very fortunate to fail in so many new and exciting ways and this conversation was no different. I've learned about something called radio frequency interference, RFI, look it up. Apparently, music and conversations from local radio stations can bleed into the audio that you're recording in such a way that it almost completely ruins that audio. It's an exceptionally difficult sound source to remove. So, I've gotten the opportunity to learn how to avoid RFI in the future during recording sessions. I've also gotten the opportunity to learn how to use Adobe Audition and iZotope RX 6 to do some noise, some audio repair. Of course, this is an exceptionally difficult noise to remove. I am an engineer. I'm not an audio engineer. Neither is anybody else in our group but we did our best. Nevertheless, I thank you for your patience and I hope you're still able to enjoy this conversation. Do you think there's intelligent life out there in the universe? Let's open up with an easy question. I have a minority view here actually. When I give public lectures, I often ask for a show of hands who thinks there's intelligent life out there somewhere else and almost everyone puts their hands up and when I ask why, they'll be like, oh, there's so many galaxies out there, there's gotta be. But I'm a numbers nerd, right? So when you look more carefully at it, it's not so clear at all. When we talk about our universe, first of all, we don't mean all of space. We actually mean, I don't know, you can throw me the universe if you want, it's behind you there. It's, we simply mean the spherical region of space from which light has had time to reach us so far during the 13.8 billion years since our Big Bang. There's more space here but this is what we call a universe because that's all we have access to. So is there intelligent life here that's gotten to the point of building telescopes and computers? My guess is no, actually. The probability of it happening on any given planet is some number we don't know what it is. And what we do know is that the number can't be super high because there's over a billion Earth like planets in the Milky Way galaxy alone, many of which are billions of years older than Earth.
And aside from some UFO believers, there isn't much evidence that any super-advanced civilization has come here at all. And so that's the famous Fermi paradox, right? And then if you work the numbers, what you find is that if you have no clue what the probability is of getting life on a given planet, so it could be 10 to the minus 10, 10 to the minus 20, or 10 to the minus two, or any power of 10 is sort of equally likely if you wanna be really open minded, that translates into it being equally likely that our nearest neighbor is 10 to the 16 meters away, 10 to the 17 meters away, 10 to the 18. By the time you get much less than 10 to the 16 already, we pretty much know there is nothing else that close. And when you get beyond 10. Because they would have discovered us. Yeah, we would have been discovered long ago, or if they're really close, we would have probably noted some engineering projects that they're doing. And if it's beyond 10 to the 26 meters, that's already outside of here. So my guess is actually that we are the only life in here that's gotten to the point of building advanced tech, which I think puts a lot of responsibility on our shoulders not to screw up. I think people who take for granted that it's okay for us to screw up, have an accidental nuclear war or go extinct somehow because there's a sort of Star Trek like situation out there where some other life forms are gonna come and bail us out and it doesn't matter as much, I think they're lulling us into a false sense of security. I think it's much more prudent to say, let's be really grateful for this amazing opportunity we've had and make the best of it just in case it is down to us. So from a physics perspective, do you think intelligent life, so it's unique from a sort of statistical view of the size of the universe, but from the basic matter of the universe, how difficult is it for intelligent life to come about? The kind of advanced tech building life is implied in your statement that it's really difficult to create something like a human species. Well, I think what we know is that somewhere between going from no life to having life that can do our level of tech, and going beyond that to actually settling our whole universe with life, there's some major roadblock there, which is some great filter as it's sometimes called, which is tough to get through. That roadblock is either behind us or in front of us. I'm hoping very much that it's behind us. I'm super excited every time we get a new report from NASA saying they failed to find any life on Mars. I'm like, yes, awesome. Because that suggests that the hard part, maybe it was getting the first ribosome or some very low level kind of stepping stone, so that we're home free. Because if that's true, then the future is really only limited by our own imagination. It would be much suckier if it turns out that this level of life is kind of a dime a dozen, but maybe there's some other problem. Like as soon as a civilization gets advanced technology, within a hundred years, they get into some stupid fight with themselves and poof. That would be a bummer. Yeah, so you've explored the mysteries of the universe, the cosmological universe, the one that's sitting between us today. I think you've also begun to explore the other universe, which is sort of the mystery, the mysterious universe of the mind of intelligence, of intelligent life. So is there a common thread between your interest or the way you think about space and intelligence?
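To make the powers-of-ten argument above concrete, here is a minimal numerical sketch. The log-uniform prior on the per-planet probability and the crude "one habitable planet per (1e17 m) cubed" density are illustrative assumptions, not numbers taken from the conversation; the only point is that a log-uniform prior on the probability turns into a roughly log-uniform spread of nearest-neighbor distances, spanning the 10 to the 16 through 10 to the 26 meter range mentioned here.

```python
import math
import random

# Toy sketch of the Fermi-paradox numbers above (not Tegmark's actual calculation).
# Assumption: the per-planet probability p of evolving a tech-building civilization
# is log-uniform, i.e. every power of ten between 1e-30 and 1e-2 is equally likely.
# Assumption: within a distance r there are roughly (r / R0)**3 habitable planets,
# with R0 ~ 1e17 m as a crude toy density (order of magnitude only).
R0 = 1e17  # meters

def nearest_neighbor_distance(p: float) -> float:
    """Distance at which we expect ~1 other civilization: (r / R0)**3 * p ≈ 1."""
    return R0 * p ** (-1.0 / 3.0)

samples = []
for _ in range(100_000):
    exponent = random.uniform(-30, -2)   # log-uniform prior over p
    p = 10.0 ** exponent
    samples.append(math.log10(nearest_neighbor_distance(p)))

# Because distance scales as a power of p, a log-uniform prior on p gives a
# log-uniform spread of distances: 1e18 m is about as likely as 1e22 m or 1e27 m.
print(f"log10(distance in meters) ranges roughly {min(samples):.1f} to {max(samples):.1f}")
```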
Oh yeah, when I was a teenager, I was already very fascinated by the biggest questions. And I felt that the two biggest mysteries of all in science were our universe out there and our universe in here. So it's quite natural after having spent a quarter of a century on my career, thinking a lot about this one, that I'm now indulging in the luxury of doing research on this one. It's just so cool. I feel the time is ripe now for greatly deepening our understanding of this, to just start exploring this one. Yeah, because I think a lot of people view intelligence as something mysterious that can only exist in biological organisms like us, and therefore dismiss all talk about artificial general intelligence as science fiction. But from my perspective as a physicist, I am a blob of quarks and electrons moving around in a certain pattern and processing information in certain ways. And this is also a blob of quarks and electrons. I'm not smarter than the water bottle because I'm made of different kinds of quarks. I'm made of up quarks and down quarks, exact same kind as this. There's no secret sauce, I think, in me. It's all about the pattern of the information processing. And this means that there's no law of physics saying that we can't create technology, which can help us by being incredibly intelligent and help us crack mysteries that we couldn't. In other words, I think we've really only seen the tip of the intelligence iceberg so far. Yeah, so the perceptronium. Yeah. So you coined this amazing term. It's a hypothetical state of matter, sort of thinking from a physics perspective, what is the kind of matter that can help, as you're saying, subjective experience emerge, consciousness emerge. So how do you think about consciousness from this physics perspective? Very good question. So again, I think many people have underestimated our ability to make progress on this by convincing themselves it's hopeless because somehow we're missing some ingredient that we need. There's some new consciousness particle or whatever. I happen to think that we're not missing anything, and that the interesting thing about consciousness, what gives us this amazing subjective experience of colors and sounds and emotions, is rather something at the higher level about the patterns of information processing. And that's why I like to think about this idea of perceptronium. What does it mean for an arbitrary physical system to be conscious in terms of what its particles are doing or its information is doing? I don't think, I hate carbon chauvinism, this attitude you have to be made of carbon atoms to be smart or conscious. There's something about the information processing that this kind of matter performs. Yeah, and you can see I have my favorite equations here describing various fundamental aspects of the world. I feel that I think one day, maybe someone who's watching this will come up with the equations that information processing has to satisfy to be conscious. I'm quite convinced there is a big discovery to be made there because let's face it, we know that so many things are made up of information. We know that some information processing is conscious because we are conscious. But we also know that a lot of information processing is not conscious. Like most of the information processing happening in your brain right now is not conscious. There are like 10 megabytes per second coming in even just through your visual system. You're not conscious about your heartbeat regulation or most things.
Even if I just ask you to like read what it says here, you look at it and then, oh, now you know what it said. But you're not aware of how the computation actually happened. Your consciousness is like the CEO that got an email at the end with the final answer. So what is it that makes a difference? I think that's both a great science mystery. We're actually studying it a little bit in my lab here at MIT, but I also think it's just a really urgent question to answer. For starters, I mean, if you're an emergency room doctor and you have an unresponsive patient coming in, wouldn't it be great if in addition to having a CT scanner, you had a consciousness scanner that could figure out whether this person is actually having locked in syndrome or is actually comatose. And in the future, imagine if we build robots or machines that we can have really good conversations with, which I think is very likely to happen. Wouldn't you want to know if your home helper robot is actually experiencing anything or just like a zombie, I mean, would you prefer it? What would you prefer? Would you prefer that it's actually unconscious so that you don't have to feel guilty about switching it off or giving it boring chores or what would you prefer? Well, certainly we would prefer, I would prefer the appearance of consciousness. But the question is whether the appearance of consciousness is different than consciousness itself. And sort of to ask that as a question, do you think we need to understand what consciousness is, solve the hard problem of consciousness in order to build something like an AGI system? No, I don't think that. And I think we will probably be able to build things even if we don't answer that question. But if we want to make sure that what happens is a good thing, we better solve it first. So it's a wonderful controversy you're raising there where you have basically three points of view about the hard problem. There are two different points of view that both conclude that the hard problem of consciousness is BS. On one hand, you have some people like Daniel Dennett who say that consciousness is just BS because consciousness is the same thing as intelligence. There's no difference. So anything which acts conscious is conscious, just like we are. And then there are also a lot of people, including many top AI researchers I know, who say, oh, consciousness is just bullshit because, of course, machines can never be conscious. They're always going to be zombies. You never have to feel guilty about how you treat them. And then there's a third group of people, including Giulio Tononi, for example, and Christof Koch and a number of others. I would put myself also in this middle camp who say that actually some information processing is conscious and some is not. So let's find the equation which can be used to determine which it is. And I think we've just been a little bit lazy, kind of running away from this problem for a long time. It's been almost taboo to even mention the C word in a lot of circles because, but we should stop making excuses. This is a science question and there are ways we can even test any theory that makes predictions for this. And coming back to this helper robot, I mean, so you said you'd want your helper robot to certainly act conscious and treat you, like have conversations with you and stuff. I think so.
But wouldn't you, would you feel, would you feel a little bit creeped out if you realized that it was just a glossed up tape recorder, you know, that was just a zombie and was faking emotion? Would you prefer that it actually had an experience or would you prefer that it's actually not experiencing anything so you feel, you don't have to feel guilty about what you do to it? It's such a difficult question because, you know, it's like when you're in a relationship and you say, well, I love you. And the other person said, I love you back. It's like asking, well, do they really love you back or are they just saying they love you back? Don't you really want them to actually love you? It's hard to, it's hard to really know the difference between everything seeming like there's consciousness present, there's intelligence present, there's affection, passion, love, and it actually being there. I'm not sure, do you have? But like, can I ask you a question about this? Like to make it a bit more pointed. So Mass General Hospital is right across the river, right? Yes. Suppose you're going in for a medical procedure and they're like, you know, for anesthesia, what we're going to do is we're going to give you muscle relaxants so you won't be able to move and you're going to feel excruciating pain during the whole surgery, but you won't be able to do anything about it. But then we're going to give you this drug that erases your memory of it. Would you be cool about that? What's the difference that you're conscious about it or not if there's no behavioral change, right? Right, that's a really, that's a really clear way to put it. That's, yeah, it feels like in that sense, experiencing it is a valuable quality. So actually being able to have subjective experiences, at least in that case, is valuable. And I think we humans have a little bit of a bad track record also of making these self serving arguments that other entities aren't conscious. You know, people often say, oh, these animals can't feel pain. It's okay to boil lobsters because we ask them if it hurt and they didn't say anything. And now there was just a paper out saying, lobsters do feel pain when you boil them and they're banning it in Switzerland. And we did this with slaves too, and said, oh, they don't mind. Maybe they aren't conscious, or women don't have souls or whatever. So I'm a little bit nervous when I hear people just take as an axiom that machines can't have experience ever. I think this is just a really fascinating science question, is what it is. Let's research it and try to figure out what it is that makes the difference between unconscious intelligent behavior and conscious intelligent behavior. So in terms of, so if you think of a Boston Dynamics humanoid robot being sort of pushed around with a broom, it starts pushing on a consciousness question. So let me ask, do you think an AGI system, like a few neuroscientists believe, needs to have a physical embodiment? Needs to have a body or something like a body? No, I don't think so. You mean to have a conscious experience? To have consciousness. I do think it helps a lot to have a physical embodiment to learn the kind of things about the world that are important to us humans, for sure. But I don't think the physical embodiment is necessary after you've learned it to just have the experience. Think about when you're dreaming, right? Your eyes are closed. You're not getting any sensory input.
You're not behaving or moving in any way but there's still an experience there, right? And so clearly the experience that you have when you see something cool in your dreams isn't coming from your eyes. It's just the information processing itself in your brain which is that experience, right? But if I put it another way, I'll say, because it comes from neuroscience, the reason you want to have a body and a physical, you know, a physical system is because you want to be able to preserve something. In order to have a self, you could argue, would you need to have some kind of embodiment of self to want to preserve? Well, now we're getting a little bit anthropomorphic, into anthropomorphizing things, maybe talking about self preservation instincts. I mean, we are evolved organisms, right? So Darwinian evolution endowed us and other evolved organisms with a self preservation instinct because those that didn't have those self preservation genes got cleaned out of the gene pool, right? But if you build an artificial general intelligence, the mind space that you can design is much, much larger than just a specific subset of minds that can evolve. So an AGI mind doesn't necessarily have to have any self preservation instinct. It also doesn't necessarily have to be so individualistic as us. Like, imagine if you could just, first of all, or we are also very afraid of death. You know, I suppose you could back yourself up every five minutes and then your airplane is about to crash. You're like, shucks, I'm gonna lose the last five minutes of experiences since my last cloud backup, dang. You know, it's not as big a deal. Or if we could just copy experiences between our minds easily like we, which we could easily do if we were silicon based, right? Then maybe we would feel a little bit more like a hive mind actually, that maybe it's the, so I don't think we should take for granted at all that AGI will have to have any of those sort of competitive alpha male instincts. On the other hand, you know, this is really interesting because I think some people go too far and say, of course we don't have to have any concerns either that advanced AI will have those instincts because we can build anything we want. That there's a very nice set of arguments going back to Steve Omohundro and Nick Bostrom and others just pointing out that when we build machines, we normally build them with some kind of goal, you know, win this chess game, drive this car safely or whatever. And as soon as you put in a goal into a machine, especially if it's kind of an open ended goal and the machine is very intelligent, it'll break that down into a bunch of sub goals. And one of those goals will almost always be self preservation because if it breaks or dies in the process, it's not gonna accomplish the goal, right? Like suppose you just build a little, you have a little robot and you tell it to go down to the supermarket here and get you some food and cook you an Italian dinner, you know, and then someone mugs it and tries to break it on the way. That robot has an incentive to not get destroyed and defend itself or run away, because otherwise it's gonna fail in cooking your dinner. It's not afraid of death, but it really wants to complete the dinner cooking goal. So it will have a self preservation instinct. Continue being a functional agent somehow. And similarly, if you give any kind of more ambitious goal to an AGI, it's very likely to want to acquire more resources so it can do that better.
And it's exactly from those sort of sub goals that we might not have intended that some of the concerns about AGI safety come. You give it some goal that seems completely harmless. And then before you realize it, it's also trying to do these other things which you didn't want it to do. And it's maybe smarter than us. So it's fascinating. And let me pause just because I, in a very kind of human centric way, see fear of death as a valuable motivator. So you don't think, you think that's an artifact of evolution, so that's the kind of mind space evolution created that we're sort of almost obsessed about self preservation, some kind of genetic flow. You don't think that's necessary to be afraid of death. So not just a kind of sub goal of self preservation just so you can keep doing the thing, but more fundamentally sort of have the finite thing, like this ends for you at some point. Interesting. Do I think it's necessary for what precisely? For intelligence, but also for consciousness. So for those, for both, do you think really like a finite death and the fear of it is important? So before I can answer, before we can agree on whether it's necessary for intelligence or for consciousness, we should be clear on how we define those two words. Cause a lot of really smart people define them in very different ways. I was on this panel with AI experts and they couldn't agree on how to define intelligence even. So I define intelligence simply as the ability to accomplish complex goals. I like your broad definition, because again I don't want to be a carbon chauvinist. Right. And in that case, no, certainly it doesn't require fear of death. I would say AlphaGo, AlphaZero is quite intelligent. I don't think AlphaZero has any fear of being turned off because it doesn't understand the concept of it even. And similarly consciousness. I mean, you could certainly imagine a very simple kind of experience. If certain plants have any kind of experience, I don't think they're very afraid of dying, or there's nothing they can do about it anyway much, so there wasn't that much value in it. But more seriously, I think if you ask, not just about being conscious, but maybe having what you would, we might call an exciting life where you feel passion and really appreciate the things. Maybe, somehow, perhaps it does help having a backdrop that, hey, it's finite. Now let's make the most of this, let's live to the fullest. So if you knew you were going to live forever, do you think you would change your? Yeah, I mean, in some perspective it would be an incredibly boring life living forever. So in the sort of loose subjective terms that you said of something exciting and something in this that other humans would understand, I think is, yeah, it seems that the finiteness of it is important. Well, the good news I have for you then is based on what we understand about cosmology, everything in our universe is probably ultimately finite, although. Big crunch or big, what's the, the infinite expansion. Yeah, we could have a big chill or a big crunch or a big rip or the big snap or death bubbles. All of them are more than a billion years away. So we should, we certainly have vastly more time than our ancestors thought, but it's still pretty hard to squeeze in an infinite number of compute cycles, even though there are some loopholes that just might be possible.
But I think, you know, some people like to say that you should live as if you're going to die in five years or so, and that's sort of optimal. Maybe it's a good assumption. We should build our civilization as if it's all finite to be on the safe side. Right, exactly. So you mentioned defining intelligence as the ability to solve complex goals. Where would you draw a line or how would you try to define human level intelligence and superhuman level intelligence? Where is consciousness part of that definition? No, consciousness does not come into this definition. So, so I think of intelligence as, it's a spectrum, but there are very many different kinds of goals you can have. You can have a goal to be a good chess player, a good Go player, a good car driver, a good investor, good poet, et cetera. So intelligence, by its very nature, isn't something you can measure by this one number or some overall goodness. No, no. There are some people who are better at this, some people who are better at that. Right now we have machines that are much better than us at some very narrow tasks like multiplying large numbers fast, memorizing large databases, playing chess, playing Go, and soon driving cars. But there's still no machine that can match a human child in general intelligence, but artificial general intelligence, AGI, the name of your course, of course, that is by its very definition the quest to build a machine that can do everything as well as we can. So the old holy grail of AI from back to its inception in the sixties. If that ever happens, of course I think it's going to be the biggest transition in the history of life on earth, but the big impact doesn't necessarily have to wait until machines are better than us at knitting. The really big change doesn't come exactly at the moment they're better than us at everything. First there are big changes when they start becoming better than us at doing most of the jobs that we do, because that takes away much of the demand for human labor. And then the really whopping change comes when they become better than us at AI research, right? Because right now the timescale of AI research is limited by the human research and development cycle of years typically, you know, how long does it take from one release of some software or iPhone or whatever to the next? But once Google can replace 40,000 engineers with 40,000 equivalent pieces of software or whatever, then there's no reason that has to be years, it can be in principle much faster, and the timescale of future progress in AI and all of science and technology will be driven by machines, not humans. So it's this simple point which gives rise to this incredibly fun controversy about whether there can be an intelligence explosion, the so-called singularity, as Vernor Vinge called it. Now the idea, as articulated by I.J. Good, is obviously way back, fifties, but you can see Alan Turing and others thought about it even earlier. So you asked me how exactly I would define human level intelligence, yeah. So the glib answer is to say something which is better than us at all cognitive tasks, better than any human at all cognitive tasks, but the really interesting bar I think goes a little bit lower than that actually. It's when they can, when they're better than us at AI programming and general learning, so that they can, if they want to, get better than us at anything by just studying.
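A toy way to see why progress driven by machines rather than humans changes the timescale so dramatically: if each improvement speeds up the making of the next improvement, growth stops being steady and starts compounding. The sketch below is a standard illustrative feedback model with made-up constants; it is not a model from the conversation, only an illustration of the qualitative point behind the intelligence-explosion idea.

```python
# Compare steady human-driven progress with a self-improvement feedback loop.
# All rates and constants here are arbitrary illustrative choices.

def human_driven(t: float, rate: float = 1.0) -> float:
    """Capability grows linearly: researchers add a fixed amount of progress per year."""
    return 1.0 + rate * t

def self_improving(t_end: float, k: float = 0.5, dt: float = 0.001) -> float:
    """Capability obeys dI/dt = k * I**2: every gain speeds up the next gain.
    The exact solution 1 / (1 - k*t) blows up at t = 1/k, the cartoon 'singularity'."""
    capability, t = 1.0, 0.0
    while t < t_end and capability < 1e9:
        capability += k * capability ** 2 * dt   # simple Euler step of the feedback loop
        t += dt
    return capability

for t in (0.5, 1.0, 1.5, 1.9, 1.99):
    print(f"t={t:5.2f} yr   human-driven={human_driven(t):6.2f}   "
          f"self-improving={self_improving(t):12.2f}")
```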
So there, better is a key word, and better is towards this kind of spectrum of the complexity of goals it's able to accomplish. So another way to, and that's certainly a very clear definition of human level intelligence. So there's, it's almost like a sea that's rising, you can do more and more and more things. It's a geographic analogy that you show, it's a really nice way to put it. So there are some peaks, and there's an ocean level elevating, and you solve more and more problems. But just kind of to take a pause, we took a bunch of questions on a lot of social networks, and a bunch of people asked about a sort of slightly different direction, on creativity and things that perhaps aren't a peak. Human beings are flawed, and perhaps better means having contradictions, being flawed in some way. So let me sort of start easy, first of all. So you have a lot of cool equations. Let me ask, what's your favorite equation, first of all? I know they're all like your children, but like which one is that? This is the Schrödinger equation. It's the master key of quantum mechanics of the micro world. So this equation can predict everything to do with atoms, molecules and all the way up. Right? Yeah, so, okay. So quantum mechanics is certainly a beautiful, mysterious formulation of our world. So I'd like to sort of ask you, just as an example, it perhaps doesn't have the same beauty as physics does, but in abstract mathematics, Andrew Wiles, who proved Fermat's Last Theorem. I just saw this recently and it kind of caught my eye a little bit. This is 358 years after it was conjectured. It's a very simple formulation. Everybody tried to prove it, everybody failed. And so here this guy comes along and eventually proves it, then a gap is found in the proof, and then he finally completes it in '94. And in an interview he talked about the moment when everything connected into place, that moment when you finally realize the connecting piece of two conjectures. He said, it was so indescribably beautiful. It was so simple and so elegant. I couldn't understand how I'd missed it. And I just stared at it in disbelief for 20 minutes. Then during the day, I walked around the department and I kept coming back to my desk looking to see if it was still there. It was still there. I couldn't contain myself. I was so excited. It was the most important moment of my working life. Nothing I ever do again will mean as much. So that particular moment. And it kind of made me think of what would it take? And I think we have all been there at small levels. Maybe let me ask, have you had a moment like that in your life where you just had an idea? It's like, wow, yes. I wouldn't mention myself in the same breath as Andrew Wiles, but I've certainly had a number of aha moments when I realized something very cool about physics, which completely made my head explode. In fact, some of my favorite discoveries I made, I later realized that they had been discovered earlier by someone who sometimes got quite famous for it. So it's too late for me to even publish it, but that doesn't diminish in any way the emotional experience you have when you realize it, like, wow. Yeah, so what would it take in that moment, that wow, that was yours in that moment? So what do you think it takes for an intelligent system, an AGI system, an AI system to have a moment like that? That's a tricky question because there are actually two parts to it, right? One of them is, can it accomplish that proof?
Can it prove that you can never write a to the n plus b to the n equals c to the n for positive integers, et cetera, et cetera, when n is bigger than two? That's simply a question about intelligence. Can you build machines that are that intelligent? And I think by the time we get a machine that can independently come up with that level of proofs, we're probably quite close to AGI. The second question is a question about consciousness. When will we, how likely is it that such a machine will actually have any experience at all, as opposed to just being like a zombie? And would we expect it to have some sort of emotional response to this or anything at all akin to human emotion where, when it accomplishes its machine goal, it views it as somehow something very positive and sublime and deeply meaningful? I would certainly hope that if in the future we do create machines that are our peers or even our descendants, that I would certainly hope that they do have this sublime appreciation of life. In a way, my absolutely worst nightmare would be that at some point in the future, the distant future, maybe our cosmos is teeming with all this post biological life doing all this seemingly cool stuff. And maybe the last humans, by the time our species eventually fizzles out, will be like, well, that's OK because we're so proud of our descendants here. And look what, all the... My worst nightmare is that we haven't solved the consciousness problem and we haven't realized that these are all zombies. They're not aware of anything any more than a tape recorder has any kind of experience. So the whole thing has just become a play for empty benches. That would be the ultimate zombie apocalypse. So I would much rather, in that case, that we have these beings which can really appreciate how amazing it is. And in that picture, what would be the role of creativity? A few people asked about creativity. When you think about intelligence, certainly the story you told at the beginning of your book involved creating movies and so on, making money. You can make a lot of money in our modern world with music and movies. So if you are an intelligent system, you may want to get good at that. But that's not necessarily what I mean by creativity. Is it important, in that space of complex goals where the sea is rising, for there to be something creative? Or am I being very human centric and thinking creativity is somehow special relative to intelligence? My hunch is that we should think of creativity simply as an aspect of intelligence. And we have to be very careful with human vanity. We have this tendency to very often want to say, as soon as machines can do something, we try to diminish it and say, oh, but that's not real intelligence. It isn't creative, or this, or that. On the other hand, if we ask ourselves to write down a definition of what we actually mean by being creative, what we mean by Andrew Wiles, what he did there, for example, don't we often mean that someone takes a very unexpected leap? It's not like taking 573 and multiplying it by 224 by just following straightforward cookbook-like rules, right? You can maybe make a connection between two things that people had never thought were connected, or something like that. I think this is an aspect of intelligence. And this is actually one of the most important aspects of it.
Maybe the reason we humans tend to be better at it than traditional computers is because it's something that comes more naturally if you're a neural network than if you're a traditional logic gate based computer machine. We physically have all these connections. And you activate here, activate here, activate here. Bing. My hunch is that if we ever build a machine where you could just give it the task, hey, you say, hey, I just realized I want to travel around the world instead this month. Can you teach my AGI course for me? And it's like, OK, I'll do it. And it does everything that you would have done and improvises and stuff. That would, in my mind, involve a lot of creativity. Yeah, so it's actually a beautiful way to put it. I think we do try to grasp at the definition of intelligence as everything we don't understand how to build. So we as humans try to find things that we have and machines don't have. And maybe creativity is just one of the things, one of the words we use to describe that. That's a really interesting way to put it. I don't think we need to be that defensive. I don't think anything good comes out of saying, well, we're somehow special, you know? Contrary wise, there are many examples in history of where trying to pretend that we're somehow superior to all other intelligent beings has led to pretty bad results, right? Nazi Germany, they said that they were somehow superior to other people. Today, we still do a lot of cruelty to animals by saying that we're so superior somehow, and they can't feel pain. Slavery was justified by the same kind of just really weak arguments. And I don't think, if we actually go ahead and build artificial general intelligence that can do things better than us, I don't think we should try to found our self worth on some sort of bogus claims of superiority in terms of our intelligence. I think we should instead find our calling and the meaning of life from the experiences that we have. I can have very meaningful experiences even if there are other people who are smarter than me. When I go to a faculty meeting here, and we talk about something, and then I certainly realize, oh, boy, he has a Nobel Prize, he has a Nobel Prize, he has a Nobel Prize, I don't have one. Does that make me enjoy life any less or enjoy talking to those people less? Of course not. On the contrary, I feel very honored and privileged to get to interact with other very intelligent beings that are better than me at a lot of stuff. So I don't think there's any reason why we can't have the same approach with intelligent machines. That's really interesting. So people don't often think about that. They think, when there are going to be machines that are more intelligent, you naturally think that that's not going to be a beneficial type of intelligence. You don't realize it could be like peers with Nobel prizes that would be just fun to talk with, and they might be clever about certain topics, and you can have fun having a few drinks with them. Well, also, another example we can all relate to of why it doesn't have to be a terrible thing to be in the presence of people who are even smarter than us all around is when you and I were both two years old, I mean, our parents were much more intelligent than us, right? It worked out OK, because their goals were aligned with our goals. And that, I think, is really the number one key issue we have to solve if we... Value align. The value alignment problem, exactly.
Because people who see too many Hollywood movies with lousy science fiction plot lines, they worry about the wrong thing, right? They worry about some machine suddenly turning evil. It's not malice that is the concern. It's competence. By definition, intelligence makes you very competent. If you have a more intelligent Go-playing computer playing against a less intelligent one, and we define intelligence as the ability to accomplish the goal of winning, it's going to be the more intelligent one that wins. And if you have a human and then you have an AGI that's more intelligent in all ways and they have different goals, guess who's going to get their way, right? So I was just reading about this particular rhinoceros species that was driven extinct just a few years ago. A bummer. I was looking at this cute picture of a mommy rhinoceros with its child. And why did we humans drive it to extinction? It wasn't because we were evil rhino haters as a whole. It was just because our goals weren't aligned with those of the rhinoceros. And it didn't work out so well for the rhinoceros because we were more intelligent, right? So I think it's just so important that if we ever do build AGI, before we unleash anything, we have to make sure that it learns to understand our goals, that it adopts our goals, and that it retains those goals. So the cool, interesting problem there is us as human beings trying to formulate our values. So you could think of the United States Constitution as a way that people sat down, at the time a bunch of white men, which is a good example, I should say. They formulated the goals for this country. And a lot of people agree that those goals actually held up pretty well. That's an interesting formulation of values, and it failed miserably in other ways. So for the value alignment problem and the solution to it, we have to be able to put on paper or in a program human values. How difficult do you think that is? Very. But it's so important. We really have to give it our best. And it's difficult for two separate reasons. There's the technical value alignment problem of figuring out just how to make machines understand our goals, adopt them, and retain them. And then there's the separate part of it, the philosophical part. Whose values anyway? And since it's not like we have any great consensus on this planet on values, what mechanism should we create then to aggregate and decide, OK, what's a good compromise? That second discussion can't just be left to tech nerds like myself. And if we refuse to talk about it and then AGI gets built, who's going to be actually making the decision about whose values? It's going to be a bunch of dudes in some tech company. And are they necessarily so representative of all of humankind that we want to just entrust it to them? Are they even uniquely qualified to speak to future human happiness just because they're good at programming AI? I'd much rather have this be a really inclusive conversation. But do you think it's possible? So you create a beautiful vision that includes the diversity, cultural diversity, and various perspectives on discussing rights, freedoms, human dignity. But how hard is it to come to that consensus? It's certainly a really important thing that we should all try to do, but do you think it's feasible? I think there's no better way to guarantee failure than to refuse to talk about it or refuse to try. And I also think it's a really bad strategy to say, OK, let's first have a discussion for a long time.
And then once we reach complete consensus, then we'll try to load it into some machine. No, we shouldn't let perfect be the enemy of good. Instead, we should start with the kindergarten ethics that pretty much everybody agrees on and put that into machines now. We're not doing that even. Look, anyone who builds a passenger aircraft wants it to never, under any circumstances, fly into a building or a mountain. Yet the September 11 hijackers were able to do that. And even more embarrassingly, Andreas Lubitz, this depressed Germanwings pilot, when he flew his passenger jet into the Alps killing over 100 people, he just told the autopilot to do it. He told the freaking computer to change the altitude to 100 meters. And even though it had the GPS maps, everything, the computer was like, OK. So we should take those very basic values, where the problem is not that we don't agree. The problem is just we've been too lazy to try to put it into our machines and make sure that from now on, airplanes, which all have computers in them, will just refuse to do something like that. Go into safe mode, maybe lock the cockpit door, go over to the nearest airport. And there's so much other technology in our world as well now, where it's really becoming quite timely to put in some sort of very basic values like this. Even in cars, we've had enough vehicle terrorism attacks by now, where people have driven trucks and vans into pedestrians, that it's not at all a crazy idea to just have that hardwired into the car. Because yeah, there are a lot of, there's always going to be people who for some reason want to harm others, but most of those people don't have the technical expertise to figure out how to work around something like that. So if the car just won't do it, it helps. So let's start there. So there's a lot of, that's a great point. So not chasing perfect. There's a lot of things that most of the world agrees on. Yeah, let's start there. Let's start there. And then once we start there, we'll also get into the habit of having these kind of conversations about, okay, what else should we put in here and have these discussions? This should be a gradual process then. Great, so, but that also means describing these things and describing it to a machine. So one thing, we had a few conversations with Stephen Wolfram. I'm not sure if you're familiar with Stephen. Oh yeah, I know him quite well. So he works on a bunch of things, but cellular automata, these simple computable things, these computation systems. And he kind of mentioned that we probably already have, within these systems, something that's AGI, meaning like we just don't know it because we can't talk to it. So if you'll give me a chance to try to at least form a question out of this: I think it's an interesting idea to think that we can have intelligent systems, but we don't know how to describe something to them and they can't communicate with us. I know you're doing a little bit of work in explainable AI, trying to get AI to explain itself. So what are your thoughts on natural language processing or some kind of other communication? How does the AI explain something to us? How do we explain something to it, to machines? Or do you think of it differently? So there are two separate parts to your question there. One of them has to do with communication, which is super interesting, I'll get to that in a sec. The other is whether we already have AGI but we just haven't noticed it. Right. There I beg to differ.
I don't think there's anything in any cellular automaton or anything or the internet itself or whatever that has artificial general intelligence, that it can really do exactly everything we humans can do, better. I think the day that happens, when that happens, we will very soon notice, we'll probably notice even before, because it will happen in a very, very big way. But for the second part, though. Wait, can I ask, sorry. So, because you have this beautiful way of formulating consciousness as information processing, and you can think of intelligence as information processing, and you can think of the entire universe as these particles and these systems roaming around that have this information processing power. You don't think there is something with the power to process information in the way that we human beings do that's out there that needs to be sort of connected to. It seems a little bit philosophical, perhaps, but there's something compelling to the idea that the power is already there, and that the focus should be more on being able to communicate with it. Well, I agree that in a certain sense, the hardware processing power is already out there because our universe itself, you can think of it as being a computer already, right? It's constantly computing how to evolve the water waves in the River Charles and how to move the air molecules around. Seth Lloyd has pointed out, my colleague here, that you can even in a very rigorous way think of our entire universe as being a quantum computer. It's pretty clear that our universe supports this amazing processing power, because within this physics computer that we live in, right, we can even build actual laptops and stuff, so clearly the power is there. It's just that most of the compute power that nature has, it's, in my opinion, kind of wasted on boring stuff like simulating yet another ocean wave somewhere where no one is even looking, right? So in a sense, what life does, what we are doing when we build computers is we're rechanneling all this compute that nature is doing anyway into doing things that are more interesting than just yet another ocean wave, and let's do something cool here. So the raw hardware power is there, for sure, but then even just computing what's going to happen for the next five seconds in this water bottle takes a ridiculous amount of compute if you do it on a human-built computer. This water bottle just did it. But that does not mean that this water bottle has AGI, because AGI means it should also be able to, like, have written my book, done this interview. And I don't think it's just communication problems. I don't really think it can do it. Although Buddhists say, when they watch the water, that there is some depth and beauty in nature that they can communicate with. Communication is also very important though because, I mean, look, part of my job is being a teacher. And I know some very intelligent professors even who just have a bit of a hard time communicating. They come up with all these brilliant ideas, but to communicate with somebody else, you have to also be able to simulate their own mind. Yes, empathy. Build a good enough model of their mind that you can say things that they will understand. And that's quite difficult. And that's why today it's so frustrating if you have a computer that makes some cancer diagnosis and you ask it, well, why are you saying I should have this surgery?
And if it can only reply, I was trained on five terabytes of data and this is my diagnosis, boop, boop, beep, beep, it doesn't really instill a lot of confidence, right? So I think we have a lot of work to do on communication there. So what kind of, I think you're doing a little bit of work in explainable AI. What do you think are the most promising avenues? Is it mostly about sort of the Alexa problem of natural language processing, of being able to actually use human interpretable methods of communication? So being able to talk to a system and it talk back to you, or are there some more fundamental problems to be solved? I think it's all of the above. The natural language processing is obviously important, but there are also more nerdy fundamental problems. Like if you take, you play chess? Of course, I'm Russian. I have to. You speak Russian? Yes, I speak Russian. Excellent, I didn't know. When did you learn Russian? I speak very bad Russian, I'm only an autodidact, but I bought a book, Teach Yourself Russian, read a lot, but it was very difficult. Wow. That's why I speak so bad. How many languages do you know? Wow, that's really impressive. I don't know, my wife has some calculation, but my point was, if you play chess, have you looked at the AlphaZero games? The actual games, no. Check it out, some of them are just mind blowing, really beautiful. And if you ask, how did it do that? You go talk to Demis Hassabis, I know others from DeepMind, all they'll ultimately be able to give you is big tables of numbers, matrices, that define the neural network. And you can stare at these tables of numbers till your face turns blue, and you're not gonna understand much about why it made that move. And even if you have natural language processing that can tell you in human language about, oh, 5.7, 2.8, still not gonna really help. So I think there's a whole spectrum of fun challenges that are involved in taking a computation that does intelligent things and transforming it into something equally good, equally intelligent, but that's more understandable. And I think that's really valuable because I think as we put machines in charge of ever more infrastructure in our world, the power grid, the trading on the stock market, weapon systems and so on, it's absolutely crucial that we can trust these AIs to do what we want. And trust really comes from understanding in a very fundamental way. And that's why I'm working on this, because I think the more, if we're gonna have some hope of ensuring that machines have adopted our goals and that they're gonna retain them, that kind of trust, I think, needs to be based on things you can actually understand, preferably even prove theorems about. Even with a self driving car, right? If someone just tells you it's been trained on tons of data and it never crashed, it's less reassuring than if someone actually has a proof. Maybe it's a computer verified proof, but still it says that under no circumstances is this car just gonna swerve into oncoming traffic. And that kind of information helps to build trust and helps build the alignment of goals, at least awareness that your goals, your values are aligned. And I think even in the very short term, if you look at, you know, today, right, this absolutely pathetic state of cybersecurity that we have, where is it? Three billion Yahoo accounts were hacked, almost every American's credit card and so on. Why is this happening?
It's ultimately happening because we have software that nobody fully understood how it worked. That's why the bugs hadn't been found, right? And I think AI can be used very effectively for offense, for hacking, but it can also be used for defense. Hopefully automating verifiability and creating systems that are built in different ways so you can actually prove things about them. And it's important. So speaking of software that nobody understands how it works, of course, a bunch of people asked about your paper, about your thoughts on why does deep and cheap learning work so well? That's the paper. But what are your thoughts on deep learning? These kind of simplified models of our own brains have been able to do some successful perception work, pattern recognition work, and now with AlphaZero and so on, do some clever things. What are your thoughts about the promise and limitations of this piece? Great, I think there are a number of very important insights, very important lessons we can always draw from these kinds of successes. One of them is when you look at the human brain, you see it's very complicated, 10 to the 11 neurons, and there are all these different kinds of neurons and yada, yada, and there's been this long debate about whether the fact that we have dozens of different kinds is actually necessary for intelligence. We can now, I think, quite convincingly answer that question of no, it's enough to have just one kind. If you look under the hood of AlphaZero, there's only one kind of neuron and it's a ridiculously simple mathematical thing. So it's just like in physics, it's not, if you have a gas with waves in it, it's not the detailed nature of the molecules that matters, it's the collective behavior somehow. Similarly, it's this higher level structure of the network that matters, not that you have 20 kinds of neurons. I think our brain is such a complicated mess because it wasn't evolved just to be intelligent, it was evolved to also be self assembling and self repairing, right? And evolutionarily attainable. And so on and so on. So I think it's pretty, my hunch is that we're going to understand how to build AGI before we fully understand how our brains work, just like we understood how to build flying machines long before we were able to build a mechanical bird. Yeah, that's right. You've given the example exactly of mechanical birds and airplanes, and airplanes do a pretty good job of flying without really mimicking bird flight. And even now, 100 years later, did you see the TED talk with this German mechanical bird? I heard you mention it. Check it out, it's amazing. But even after that, right, we still don't fly in mechanical birds because it turned out the way we came up with was simpler and it's better for our purposes. And I think it might be the same there. That's one lesson. And another lesson is more what our paper was about. First, as a physicist, I thought it was fascinating how there's a very close mathematical relationship actually between our artificial neural networks and a lot of things that we've studied in physics that go by nerdy names like the renormalization group equation and Hamiltonians and yada, yada, yada. And when you look a little more closely at this, you have, at first I was like, well, there's something crazy here that doesn't make sense. Because we know that if you even want to build a super simple neural network to tell apart cat pictures and dog pictures, right, that you can do that very, very well now.
But if you think about it a little bit, you convince yourself it must be impossible, because if I have one megapixel, even if each pixel is just black or white, there's two to the power of 1 million possible images, which is way more than there are atoms in our universe, right, so in order to, and then for each one of those, I have to assign a number, which is the probability that it's a dog. So an arbitrary function of images is a list of more numbers than there are atoms in our universe. So clearly I can't store that under the hood of my GPU or my computer, yet somehow it works. So what does that mean? Well, it means that out of all of the problems that you could try to solve with a neural network, almost all of them are impossible to solve with a reasonably sized one. But then what we showed in our paper was that the fraction, the kind of problems, the fraction of all the problems that you could possibly pose, that we actually care about given the laws of physics, is also an infinitesimally tiny little part. And amazingly, they're basically the same part. Yeah, it's almost like our world was created for, I mean, they kind of come together. Yeah, well, you could say maybe the world was created for us, but I have a more modest interpretation, which is that instead evolution endowed us with neural networks precisely for that reason. Because this particular architecture, as opposed to the one in your laptop, is very, very well adapted to solving the kind of problems that nature kept presenting our ancestors with. So it makes sense: why do we have a brain in the first place? It's to be able to make predictions about the future and so on. So if we had a sucky system, which could never solve it, we wouldn't have a world. So this is, I think, a very beautiful fact. Yeah. We also realize that there's been earlier work on why deeper networks are good, but we were able to show an additional cool fact there, which is that even incredibly simple problems, like suppose I give you a thousand numbers and ask you to multiply them together, and you can write a few lines of code, boom, done, trivial. If you just try to do that with a neural network that has only one single hidden layer in it, you can do it, but you're going to need two to the power of a thousand neurons to multiply a thousand numbers, which is, again, more neurons than there are atoms in our universe. That's fascinating. But if you allow yourself to make it a deep network with many layers, you only need 4,000 neurons. It's perfectly feasible. That's really interesting. Yeah. So on another architecture type, I mean, you mentioned Schrödinger's equation, and what are your thoughts about quantum computing and the role of this kind of computational unit in creating an intelligent system? In some Hollywood movies, that I will not mention by name because I don't want to spoil them, the way they get AGI is by building a quantum computer. Because the word quantum sounds cool and so on. That's right. First of all, I think we don't need quantum computers to build AGI. I suspect your brain is not a quantum computer in any profound sense. You even wrote a paper about that. Yeah, many years ago I calculated the so called decoherence time, how long it takes until the quantum computerness of what your neurons are doing gets erased by just random noise from the environment. And it's about 10 to the minus 21 seconds.
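The multiplication example above can be made concrete with a small counting sketch. A pairwise product tree, a deep layout, needs a number of two-input multiply units that grows only linearly with the number of inputs, and a depth that grows logarithmically, whereas the single-hidden-layer approach needs exponentially many neurons. The code below just counts units for that tree layout; it is an illustration of the scaling, not the construction from the Lin-Tegmark-Rolnick paper.

```python
import math

def product_tree_cost(n: int) -> tuple[int, int]:
    """Count two-input multiply units and layers if we multiply n numbers pairwise,
    layer by layer (a deep network layout). Returns (units, depth)."""
    units, depth, remaining = 0, 0, n
    while remaining > 1:
        pairs = remaining // 2
        units += pairs                      # one multiply unit per pair in this layer
        remaining = pairs + remaining % 2   # an odd leftover passes through to the next layer
        depth += 1
    return units, depth

n = 1000
units, depth = product_tree_cost(n)
# Each two-input product can itself be approximated with a handful of neurons,
# which is roughly where the ~4,000-neuron figure mentioned above comes from.
print(f"deep layout:    {units} multiply units in {depth} layers")
print(f"shallow layout: ~2**{n} neurons needed with one hidden layer")
print(f"for comparison, atoms in the observable universe: ~10**80 "
      f"(2**{n} is about 10**{int(n * math.log10(2))})")
```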
So as cool as it would be to have a quantum computer in my head, I don't think that fast. On the other hand, there are very cool things you could do with quantum computers. Or I think we'll be able to do soon when we get bigger ones. That might actually help machine learning do even better than the brain. So for example, one, this is just a moonshot, but learning is very much same thing as search. If you're trying to train a neural network to get really learned to do something really well, you have some loss function, you have a bunch of knobs you can turn, represented by a bunch of numbers, and you're trying to tweak them so that it becomes as good as possible at this thing. So if you think of a landscape with some valley, where each dimension of the landscape corresponds to some number you can change, you're trying to find the minimum. And it's well known that if you have a very high dimensional landscape, complicated things, it's super hard to find the minimum. Quantum mechanics is amazingly good at this. Like if I want to know what's the lowest energy state this water can possibly have, incredibly hard to compute, but nature will happily figure this out for you if you just cool it down, make it very, very cold. If you put a ball somewhere, it'll roll down to its minimum. And this happens metaphorically at the energy landscape too. And quantum mechanics even uses some clever tricks, which today's machine learning systems don't. Like if you're trying to find the minimum and you get stuck in the little local minimum here, in quantum mechanics you can actually tunnel through the barrier and get unstuck again. That's really interesting. Yeah, so it may be, for example, that we'll one day use quantum computers that help train neural networks better. That's really interesting. Okay, so as a component of kind of the learning process, for example. Yeah. Let me ask sort of wrapping up here a little bit, let me return to the questions of our human nature and love, as I mentioned. So do you think, you mentioned sort of a helper robot, but you could think of also personal robots. Do you think the way we human beings fall in love and get connected to each other is possible to achieve in an AI system and human level AI intelligence system? Do you think we would ever see that kind of connection? Or, you know, in all this discussion about solving complex goals, is this kind of human social connection, do you think that's one of the goals on the peaks and valleys with the raising sea levels that we'll be able to achieve? Or do you think that's something that's ultimately, or at least in the short term, relative to the other goals is not achievable? I think it's all possible. And I mean, in recent, there's a very wide range of guesses, as you know, among AI researchers, when we're going to get AGI. Some people, you know, like our friend Rodney Brooks says it's going to be hundreds of years at least. And then there are many others who think it's going to happen much sooner. And recent polls, maybe half or so of AI researchers think we're going to get AGI within decades. So if that happens, of course, then I think these things are all possible. But in terms of whether it will happen, I think we shouldn't spend so much time asking what do we think will happen in the future? As if we are just some sort of pathetic, your passive bystanders, you know, waiting for the future to happen to us. Hey, we're the ones creating this future, right? 
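As a concrete aside on the landscape picture described above: quantum tunneling itself is not something a few lines of classical code can reproduce, but its classical cousin, simulated annealing, illustrates the same "get unstuck from a local minimum" idea. The one-dimensional landscape, the starting point, and all parameters below are made up for illustration; plain gradient descent stays in the shallow valley it starts in, while annealing's occasional uphill moves typically let it reach the deeper one.

```python
# Simulated annealing as a classical stand-in for the "tunnel out of the local minimum"
# idea discussed above. The landscape has a shallow valley near x = -1 and a deeper one
# near x = +2; everything here is a toy, not how quantum annealers actually work.
import math
import random

def loss(x):
    return 0.5 * (x + 1.0) ** 2 * (x - 2.0) ** 2 - 0.3 * x  # two valleys, the right one deeper

def gradient_descent(x, lr=0.01, steps=2000, eps=1e-5):
    for _ in range(steps):
        grad = (loss(x + eps) - loss(x - eps)) / (2 * eps)   # numerical gradient
        x -= lr * grad
    return x

def simulated_annealing(x, steps=20000, t_start=2.0, t_end=1e-3):
    best = x
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)       # geometric cooling schedule
        candidate = x + random.gauss(0.0, 0.3)
        delta = loss(candidate) - loss(x)
        # Downhill moves are always accepted; uphill moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
            if loss(x) < loss(best):
                best = x
    return best

random.seed(0)
x0 = -1.5  # start in the basin of the shallow minimum
print("gradient descent settles near    x =", round(gradient_descent(x0), 2))     # about -0.97
print("simulated annealing settles near x =", round(simulated_annealing(x0), 2))  # typically about +2.0
```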
So we should be proactive about it and ask ourselves what sort of future we would like to have happen, and then make it like that. Would I prefer some sort of incredibly boring, zombie-like future where there's all these mechanical things happening and there's no passion, no emotion, maybe no experience even? No, I would of course much rather prefer it if all the things that we find that we value the most about humanity, our subjective experience, passion, inspiration, love, you know, if we can create a future where those things do happen, where those things do exist. You know, I think ultimately it's not our universe giving meaning to us, it's us giving meaning to our universe. And if we build more advanced intelligence, let's make sure we build it in such a way that meaning is part of it. A lot of people who seriously study this problem and think of it from different angles have trouble, in that the majority of the cases they think through, if they happen, are ones that are not beneficial to humanity. And so, yeah, so what are your thoughts? What should people... you know, I really don't like people to be terrified. What's a way for people to think about it in a way that we can solve it and we can make it better? No, I don't think panicking is going to help in any way. It's not going to increase the chances of things going well either. Even if you are in a situation where there is a real threat, does it help if everybody just freaks out? No, of course not. I think, yeah, there are of course ways in which things can go horribly wrong. First of all, it's important when we think about this thing, about the problems and risks, to also remember how huge the upsides can be if we get it right, right? Everything we love about society and civilization is a product of intelligence. So if we can amplify our intelligence with machine intelligence, and not anymore lose our loved ones to what we're told is an incurable disease, and things like this, of course we should aspire to that. So that can be a motivator, I think, reminding ourselves that the reason we try to solve problems is not just because we're trying to avoid gloom, but because we're trying to do something great. But then in terms of the risks, I think the really important question is to ask, what can we do today that will actually help make the outcome good, right? And dismissing the risk is not one of them. I find it quite funny often when I'm in discussion panels about these things, how the people who work for companies will always be like, oh, nothing to worry about, nothing to worry about, nothing to worry about. And it's only academics who sometimes express concerns. That's not surprising at all if you think about it. Right. Upton Sinclair quipped, right, that it's hard to make a man believe in something when his income depends on not believing in it. And frankly, we know that a lot of these people in companies are just as concerned as anyone else. But if you're the CEO of a company, that's not something you want to go on record saying when you have silly journalists who are gonna put a picture of a Terminator robot when they quote you. So the issues are real. And the way I think about what the issue is, is basically that the real choice we have is, first of all, are we gonna just dismiss the risks and say, well, let's just go ahead and build machines that can do everything we can do better and cheaper, let's just make ourselves obsolete as fast as possible, what could possibly go wrong? That's one attitude.
The opposite attitude, I think, is to say, here's this incredible potential, let's think about what kind of future we're really, really excited about. What are the shared goals that we can really aspire towards? And then let's think really hard about how we can actually get there. So start with, don't start thinking about the risks, start thinking about the goals. And then when you do that, then you can think about the obstacles you want to avoid. I often get students coming in right here into my office for career advice. I always ask them this very question, where do you want to be in the future? If all she can say is, oh, maybe I'll have cancer, maybe I'll get run over by a truck. Yeah, focus on the obstacles instead of the goals. She's just going to end up a hypochondriac paranoid. Whereas if she comes in and fire in her eyes and is like, I want to be there. And then we can talk about the obstacles and see how we can circumvent them. That's, I think, a much, much healthier attitude. And I feel it's very challenging to come up with a vision for the future, which we are unequivocally excited about. I'm not just talking now in the vague terms, like, yeah, let's cure cancer, fine. I'm talking about what kind of society do we want to create? What do we want it to mean to be human in the age of AI, in the age of AGI? So if we can have this conversation, broad, inclusive conversation, and gradually start converging towards some, some future that with some direction, at least, that we want to steer towards, right, then we'll be much more motivated to constructively take on the obstacles. And I think if I had, if I had to, if I try to wrap this up in a more succinct way, I think we can all agree already now that we should aspire to build AGI that doesn't overpower us, but that empowers us. And think of the many various ways that can do that, whether that's from my side of the world of autonomous vehicles. I'm personally actually from the camp that believes this human level intelligence is required to achieve something like vehicles that would actually be something we would enjoy using and being part of. So that's one example, and certainly there's a lot of other types of robots and medicine and so on. So focusing on those and then coming up with the obstacles, coming up with the ways that that can go wrong and solving those one at a time. And just because you can build an autonomous vehicle, even if you could build one that would drive just fine without you, maybe there are some things in life that we would actually want to do ourselves. That's right. Right, like, for example, if you think of our society as a whole, there are some things that we find very meaningful to do. And that doesn't mean we have to stop doing them just because machines can do them better. I'm not gonna stop playing tennis just the day someone builds a tennis robot and beat me. People are still playing chess and even go. Yeah, and in the very near term even, some people are advocating basic income, replace jobs. But if the government is gonna be willing to just hand out cash to people for doing nothing, then one should also seriously consider whether the government should also hire a lot more teachers and nurses and the kind of jobs which people often find great fulfillment in doing, right? We get very tired of hearing politicians saying, oh, we can't afford hiring more teachers, but we're gonna maybe have basic income. 
If we can have more serious research and thought into what gives meaning to our lives, the jobs give so much more than income, right? Mm hmm. And then think about in the future, what are the roles that we wanna have people continually feeling empowered by machines? And I think sort of, I come from Russia, from the Soviet Union. And I think for a lot of people in the 20th century, going to the moon, going to space was an inspiring thing. I feel like the universe of the mind, so AI, understanding, creating intelligence is that for the 21st century. So it's really surprising. And I've heard you mention this. It's really surprising to me, both on the research funding side, that it's not funded as greatly as it could be, but most importantly, on the politician side, that it's not part of the public discourse except in the killer bots terminator kind of view, that people are not yet, I think, perhaps excited by the possible positive future that we can build together. So we should be, because politicians usually just focus on the next election cycle, right? The single most important thing I feel we humans have learned in the entire history of science is they were the masters of underestimation. We underestimated the size of our cosmos again and again, realizing that everything we thought existed was just a small part of something grander, right? Planet, solar system, the galaxy, clusters of galaxies. The universe. And we now know that the future has just so much more potential than our ancestors could ever have dreamt of. This cosmos, imagine if all of Earth was completely devoid of life, except for Cambridge, Massachusetts. Wouldn't it be kind of lame if all we ever aspired to was to stay in Cambridge, Massachusetts forever and then go extinct in one week, even though Earth was gonna continue on for longer? That sort of attitude I think we have now on the cosmic scale, life can flourish on Earth, not for four years, but for billions of years. I can even tell you about how to move it out of harm's way when the sun gets too hot. And then we have so much more resources out here, which today, maybe there are a lot of other planets with bacteria or cow like life on them, but most of this, all this opportunity seems, as far as we can tell, to be largely dead, like the Sahara Desert. And yet we have the opportunity to help life flourish around this for billions of years. So let's quit squabbling about whether some little border should be drawn one mile to the left or right, and look up into the skies and realize, hey, we can do such incredible things. Yeah, and that's, I think, why it's really exciting that you and others are connected with some of the work Elon Musk is doing, because he's literally going out into that space, really exploring our universe, and it's wonderful. That is exactly why Elon Musk is so misunderstood, right? Misconstrued him as some kind of pessimistic doomsayer. The reason he cares so much about AI safety is because he more than almost anyone else appreciates these amazing opportunities that we'll squander if we wipe out here on Earth. We're not just going to wipe out the next generation, all generations, and this incredible opportunity that's out there, and that would really be a waste. And AI, for people who think that it would be better to do without technology, let me just mention that if we don't improve our technology, the question isn't whether humanity is going to go extinct. 
The question is just whether we're going to get taken out by the next big asteroid or the next super volcano or something else dumb that we could easily prevent with more tech, right? And if we want life to flourish throughout the cosmos, AI is the key to it. As I mentioned in a lot of detail in my book right there, even many of the most inspired sci fi writers, I feel have totally underestimated the opportunities for space travel, especially at the other galaxies, because they weren't thinking about the possibility of AGI, which just makes it so much easier. Right, yeah. So that goes to your view of AGI that enables our progress, that enables a better life. So that's a beautiful way to put it and then something to strive for. So Max, thank you so much. Thank you for your time today. It's been awesome. Thank you so much. Thanks. Have a great day.
Max Tegmark: Life 3.0 | Lex Fridman Podcast #1
As part of MIT course 6S099 on artificial general intelligence, I got a chance to sit down with Christof Koch, who is one of the seminal figures in neurobiology, neuroscience, and generally in the study of consciousness. He is the president and chief scientific officer of the Allen Institute for Brain Science in Seattle. From 1986 to 2013, he was a professor at Caltech. Before that, he was at MIT. He is extremely well cited, over 100,000 citations. His research, his writing, his ideas have had a big impact on the scientific community and the general public in the way we think about consciousness, in the way we see ourselves as human beings. He's the author of several books, The Quest for Consciousness: A Neurobiological Approach, and a more recent book, Consciousness: Confessions of a Romantic Reductionist. If you enjoy this conversation, this course, subscribe, click the little bell icon to make sure you never miss a video, and in the comments, leave suggestions for any people you'd like to see be part of the course or any ideas that you would like us to explore. Thanks very much and I hope you enjoy. Okay, before we delve into the beautiful mysteries of consciousness, let's zoom out a little bit and let me ask, do you think there's intelligent life out there in the universe? Yes, I do believe so. We have no evidence of it, but I think the probabilities are overwhelmingly in favor of it, given a universe where we have 10 to the 11 galaxies, and each galaxy has between 10 to the 11 and 10 to the 12 stars, and we know most stars have one or more planets. So how does that make you feel? It still makes me feel special, because I have experiences. I feel the world, I experience the world, and independent of whether there are other creatures out there, I still feel the world and I have access to this world in this very strange, compelling way, and that's the core of human existence. Now, you said human. Do you think, if those intelligent creatures are out there, do you think they experience their world? Yes, if they are evolved, if they are a product of natural evolution, as they would have to be, they will also experience their own world. Consciousness isn't just human, you're right, it's much wider. It may be spread across all of biology. The only thing that we have that is special is we can talk about it. Of course, not all people can talk about it. Babies and little children can't talk about it. Patients who have a stroke in the left inferior frontal gyrus can't talk about it. But most normal adult people can talk about it, and so we think that makes us special compared to, let's say, monkeys or dogs or cats or mice or all the other creatures that we share the planet with. But all the evidence seems to suggest that they too experience the world, and so it's overwhelmingly likely that aliens would also experience their world. Of course, differently, because they have a different sensorium, they have different sensors, they have a very different environment, but I would strongly suppose that they also have experiences. They feel pain and pleasure and see in some sort of spectrum and hear and have all the other senses. Of course, their language, if they have one, would be different, so we might not be able to understand their poetry about the experiences that they have. That's correct. So in a talk, in a video, I've heard you mention Siputzo, a dachshund that you grew up with, that was part of your family when you were young. First of all, you're technically a Midwestern boy.
You just – Technically. Yes. But after that, you traveled around a bit, hence a little bit of the accent. You talked about Siputzo, the dachshund, having these elements of humanness, of consciousness that you discovered. So I just wanted to ask, can you look back in your childhood and remember when was the first time you realized you yourself, sort of from a third person perspective, are a conscious being? This idea of stepping outside yourself and seeing there's something special going on here in my brain. I can't really actually – it's a good question. I'm not sure I recall a discrete moment. I mean, you take it for granted, because that's the only world you know. The only world I know and you know is the world of seeing and hearing voices and touching and all the other things. So it's only much later, in my undergraduate days when I enrolled in physics and in philosophy, that I really thought about it and thought, well, this is really fundamentally very, very mysterious, and there's nothing really in physics right now that explains this transition from the physics of the brain to feelings. Where do the feelings come in? So you can look at the foundational equations of quantum mechanics, general relativity. You can look at the periodic table of the elements. You can look at the endless ATGC chatter in our genes, and nowhere is consciousness. Yet I wake up every morning to a world where I have experiences. And so that's the heart of the ancient mind-body problem. How do experiences get into the world? So what is consciousness? Experience. It's any experience. Some people call it subjective feeling. Some people call it phenomenology. Some people call it qualia, like the philosophers do. But they all denote the same thing. It feels like something, in the famous words of the philosopher Thomas Nagel. It feels like something to be a bat or to be an American or to be angry or to be sad or to be in love or to have pain. And that is what experience is, any possible experience. It could be as mundane as just sitting in a chair. It could be as exalted as having a mystical moment in deep meditation. Those are just different forms of experiences. Experience. So if you were to sit down with maybe the next, skip a couple generations of, IBM Watson, something that won Jeopardy, what is the gap, I guess the question is, between Watson, that might be much smarter than you, than us, than any human alive, but may not have experience, what is the gap? Well, so that's a big, big question that's occupied people for, certainly, the last 50 years, since the advent, the birth of computers. That's a question Alan Turing tried to answer, and of course he did it in this indirect way by proposing a test, an operational test. But that's not really... you know, he tried to get at what it means for a person to think, and then he had this test, right? You lock them away, and then you have a communication with them, and then you try to guess after a while whether that is a person or whether it's a computer system. There's no question that now, or very soon, you know, Alexa or Siri or Google Now will pass this test, right? And you can game it, but ultimately, certainly in your generation, there will be machines that will speak with complete poise, that will remember everything you ever said. They'll remember every email you ever had, like Samantha, remember, in the movie Her? Yeah. There's no question it's going to happen.
But of course, the key question is, does it feel like anything to be Samantha in the movie Her? Or does it feel like anything to be Watson? And there one has to think very, very carefully: there are two different concepts here that we commingle. There is the concept of intelligence, natural or artificial, and there is the concept of consciousness, of experience, natural or artificial. Those are very, very different things. Now, historically, we associate consciousness with intelligence. Why? Because we live in a world, leaving aside computers, of natural selection, where we're surrounded by creatures, either our own kin that are less or more intelligent, or we go across species. Some are more adapted to a particular environment, others are less adapted, whether it's a whale or a dog, or you talk about a paramecium or a little worm. And we see the complexity of the nervous system goes from one cell, to specialized cells, to a worm that has 300-some nerve cells, where 30 percent of its cells are nerve cells, to creatures like us, or like a blue whale, that have 100 billion, even more, nerve cells. And so based on behavioral evidence and based on the underlying neuroscience, we believe that as these creatures become more complex, they are better adapted to their particular ecological niche, and they become more conscious, partly because their brain grows. And we believe consciousness is in the brain, unlike what ancient people thought. Almost every culture thought that consciousness and intelligence have to do with your heart, and you still see that today. You say, honey, I love you with all my heart. But what you should actually say is, no, honey, I love you with all my lateral hypothalamus. And for Valentine's Day, you should give your sweetheart a hypothalamus-shaped piece of chocolate, not a heart-shaped one. Anyway, so we still have this language, but now we believe it's the brain. And so we see brains of different complexity and we think, well, they have different levels of consciousness, they're capable of different experiences. But now we confront a world where we're beginning to engineer intelligence, and it's radically unclear whether the intelligence we're engineering has anything to do with consciousness and whether it can experience anything. Because fundamentally, what's the difference? Intelligence is about function. Intelligence, no matter exactly how you define it, is sort of adaptation to new environments, being able to learn and quickly understand, you know, the setup of this and what's going on and who are the actors and what's going to happen next. That's all about function. Consciousness is not about function. Consciousness is about being. It's in some sense much more fundamental. You can see this in several cases. You can see it, for instance, in the case of the clinic, when you're dealing with patients who, let's say, had a stroke or were in a traffic accident, et cetera, and are pretty much immobile. Terri Schiavo, you may have heard of her; historically, she was a person here in the 90s in Florida. Her heart stood still, she was reanimated, and then for the next 14 years she was in what's called a vegetative state. There are thousands of people in a vegetative state. So they're, you know, like this. Occasionally they open their eyes for two, three, four, five, six, eight hours, and then close their eyes. They have a sleep-wake cycle. Occasionally, they have behaviors.
They do, like, you know, but there's no way that you can establish a lawful relationship between what you say, or the doctor says, or the mom says, and what the patient does. So there isn't any behavior, yet in some of these people there is still experience. You can design and build brain-machine interfaces where you can see they still experience something. And of course there are these cases of locked-in state. There's this famous book called The Diving Bell and the Butterfly, where you had an editor, a French editor; he had a stroke in the brainstem and was unable to move except for vertical eye movement. He could just move his eyes up and down. And he dictated an entire book. And some people even lose this at the end. All the evidence seems to suggest that they're still in there. In this case, you have no behavior, you have consciousness. The second case is tonight, like all of us, you're going to go to sleep, close your eyes, you go to sleep, you will wake up inside your sleeping body, and you will have conscious experiences. They are different from everyday experience. You might fly, you might not be surprised that you're flying, you might meet a long-dead pet, a childhood dog, and you're not surprised that you're meeting them. But you have conscious experiences of love, of hate; they can be very emotional. Your body during this state, typically the REM state, sends an active signal to your motor neurons to paralyze you. It's called atonia. Because if you don't have that, like some patients, what do you do? You act out your dreams. You get, for example, REM behavior disorder, which is bad juju to get. Okay. The third case is pure experience. So I recently had this, what some people call a mystical experience. I went to Singapore and went into a flotation tank. Yeah. All right. So this is a big tub filled with water that's at body temperature, with Epsom salt. You strip completely naked, you lie inside of it, you close the lid. Darkness. Complete darkness, soundproof. So very quickly, you become bodiless, because you're floating and you're naked. You have no rings, no watch, no nothing. You don't feel your body anymore. There's no sound: soundless. There's no photon: sightless. Timeless, because after a while, early on you actually hear your heart, but then you sort of adapt to that, and then sort of the passage of time ceases. Yeah. And if you train yourself, like in meditation, not to think. Early on you think a lot. It's a little bit spooky. You feel somewhat uncomfortable, or you think, well, I'm going to get bored. But if you try not to think actively, you become mindless. There you are: bodiless, timeless, you know, soundless, sightless, mindless, but you're in a conscious experience. You're not asleep. Yeah. You're not asleep. You are a being of pure, you're a pure being. There isn't any function. You aren't doing any computation. You're not remembering. You're not projecting. You're not planning. Yet you are fully conscious. You're fully conscious. There's something going on there. It could be just a side effect. So what is the... You mean epiphenomenal. So what's the selection, meaning why, what is the function of you being able to lie in this sensory deprivation tank and still have a conscious experience? Evolutionarily? Evolutionarily. Obviously we didn't evolve with flotation tanks in our environment. I mean, biology is notoriously bad at answering why questions, teleological questions. Why do we have two eyes? Why don't we have four eyes like some creatures, or three eyes or something?
Well, no, there probably is a function to that, but we're not very good at answering those questions. We can speculate endlessly, whereas biology, or science, is very good at mechanistic questions. Why is there charge in the universe? Right? We find ourselves in a universe where there are positive and negative charges. Why? Why does quantum mechanics hold? You know, why doesn't some other theory hold? Why quantum mechanics holds in our universe is very unclear. So teleological questions, why questions, are difficult to answer. There's some relationship between complexity, brain processing power and consciousness. But in these cases, in these three examples I gave, one is an everyday experience at night, the other one is trauma, and the third one is, in principle, everybody can have these sorts of mystical experiences. You have a dissociation of function, of intelligence, from consciousness. You caught me asking a why question. Let me ask a question that's not a why question. You're giving a talk later today on the Turing test for intelligence and consciousness, drawing lines between the two. So is there a scientific way to say there's consciousness present in this entity or not? And to anticipate your answer, because there's also a neurobiological answer: we can test the human brain, but if you take a machine brain that you don't have tests for yet, how would you even begin to approach a test of whether there's consciousness present in this thing? Okay, that's a really good question. So let me take it in two steps. So as you point out, for humans, let's just stick with humans, there's now a test called zap and zip. It's a procedure where you ping the brain using transcranial magnetic stimulation, you look at the electrical reverberations, essentially using EEG, and then you can measure the complexity of this brain response. And you can do this in awake people, in normal people who are asleep, you can do it in awake people and then anesthetize them, you can do it in patients. And it has a hundred percent accuracy, in that in all those cases where it's clear the patient or the person is either conscious or unconscious, the complexity is either high or low. And then you can adapt these techniques to similar creatures like monkeys and dogs and mice that have very similar brains. Now of course, you point out that may not help you, because a machine doesn't have a cortex, you know, and if I send a magnetic pulse into my iPhone or my computer, it's probably going to break something. So we don't have that. So what we need ultimately, we need a theory of consciousness. We can't just rely on our intuition. Our intuition is, well, yeah, if somebody talks, they're conscious. However, then there are all these patients; children, babies don't talk, right? But we believe that the babies also have conscious experiences, right? And then there are all these patients I mentioned, and they don't talk. When you dream, you can't talk, because you're paralyzed. So we can't just rely on our intuition. We need a theory of consciousness that tells us what is it about a piece of matter, what is it about a piece of highly excitable matter like the brain, or like a computer, that gives rise to conscious experience? None of us believes anymore in the old story, that it's a soul, right?
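As an aside on the zap-and-zip procedure described above: the "zip" part compresses the brain's response to the perturbation and uses how incompressible it is as the index. Below is a toy sketch of only that compression step, using a simple LZ78-style phrase count on a binarized signal; the real perturbational complexity index involves TMS, EEG source modeling and normalization, none of which is attempted here, and the two example "responses" are invented stand-ins.

```python
# Toy illustration of the "zip" step in zap-and-zip: binarize a response and count how
# many distinct phrases an LZ78-style parse needs. A rich, differentiated response
# compresses poorly (many phrases); a stereotyped one compresses well (few phrases).
# This shows only the compression idea, not the actual perturbational complexity index.
import math
import random

def lz78_phrase_count(bits):
    """Greedy LZ78 parse: number of distinct phrases needed to cover the string."""
    phrases = set()
    current = ""
    count = 0
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            count += 1
            current = ""
    if current:  # trailing partial phrase
        count += 1
    return count

def binarize(samples):
    """Threshold a signal at its median, giving a string of 0s and 1s."""
    threshold = sorted(samples)[len(samples) // 2]
    return "".join("1" if s > threshold else "0" for s in samples)

random.seed(0)
# Invented stand-ins for evoked responses (the real thing would be EEG after a TMS pulse):
rich = [random.gauss(0.0, 1.0) for _ in range(2000)]                  # irregular, differentiated
stereotyped = [math.sin(2 * math.pi * i / 100) for i in range(2000)]  # simple repeating oscillation

print("complexity of rich response       :", lz78_phrase_count(binarize(rich)))
print("complexity of stereotyped response:", lz78_phrase_count(binarize(stereotyped)))
```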
That used to be the most common explanation that most people accept that instill a lot of people today believe, well, there's, there's God endowed only us with a special thing that animals don't have. Rene Descartes famously said, a dog, if you hit it with your carriage may yell, may cry, but it doesn't have this special thing. It doesn't have the magic, the magic soul. It doesn't have res cogitans, the soul. Now we believe that isn't the case anymore. So what is the difference between brains and, and these guys, silicon? And in particular, once their behavior matches. So if you have Siri or Alexa in 20 years from now that she can talk just as good as any possible human, what grounds do you have to say she's not conscious in particular, if she says it's of course she will, well, of course I'm conscious. You ask her how are you doing? And she'll say, well, you know, they, they'll generate some way to, of course she'll behave like a, like a person. Now there's several differences. One is, so this relates to the problem, the very hard, why is consciousness a hard problem? It's because it's subjective, right? Only I have it, for only I know I have direct experience of my own consciousness. I don't have experience in your consciousness. Now I assume as a sort of a Bayesian person who believes in probability theory and all of that, you know, I can do, I can do an abduction to the, to the best available facts. I deduce your brain is very similar to mine. If I put you in a scanner, your brain is roughly going to behave the same way as I do. If, if, if, you know, if I give you this muesli and ask you, how does it taste? You tell me things that, you know, that, that I would also say more or less, right? So I infer based on all of that, that you're conscious. Now with theory, I can't do that. So there I really need a theory that tells me what is it about, about any system, this or this, that makes it conscious. We have such a theory. Yes. So the integrated information theory, but let me first, maybe as an introduction for people who are not familiar, Descartes, can you, you talk a lot about pan, panpsychism. Can you describe what, uh, physicalism versus dualism? This you, you mentioned the soul, what, what is the history of that idea? What is the idea of panpsychism or no, the debate really, uh, out of which panpsychism can, um, emerge of, of, of, um, dualism versus, uh, physicalism or do you not see panpsychism as fitting into that? No, you can argue there's some, okay, so let's step back. So panpsychism is a very ancient belief that's been around, uh, I mean, Plato and Aristotle talks about it, uh, modern philosophers talk about it. Of course, in Buddhism, the idea is very prevalent that, I mean, there are different versions of it. One version says everything is ensouled, everything, rocks and stones and dogs and people and forest and iPhones, all of us all, right? All matter is ensouled. That's sort of one version. Another version is that all biology, all creatures, small or large, from a single cell to a giant sequoia tree feel like something. This one I think is somewhat more realistic. Um, so the different versions, what do you mean by feel like something, have, have feelings, have some kind of, it feels like something, it may well be possible that it feels like something to be a paramecium. I think it's pretty likely it feels like something to be a bee or a mouse or a dog. Sure. So, okay. 
So you can see, panpsychism is very broad. Some people, for example Bertrand Russell, tried to advocate this idea, it's called Russellian monism, that panpsychism is really physics viewed from the inside. So the idea is that physics is very good at describing relationships among objects, like charges, or like gravity, right? You know, describing the relationship between curvature and mass distribution, okay? That's the relationships among things. Physics doesn't really describe the ultimate reality itself, it's just relationships among, you know, quarks and all this other stuff, from like a third-person observer. Yes. Yes. Yes. And consciousness is what physics feels like from the inside. So my conscious experience is the way the physics of my brain, particularly my cortex, feels from the inside. And so if you are a paramecium, and you've got to remember, you say paramecium, well, that's a pretty dumb creature. It is, but it already has a billion different molecules, probably, you know, 5,000 different proteins, assembled in a highly, highly complex system that no single person, no computer system so far on this planet, has ever managed to accurately simulate. Its complexity vastly escapes us. Yes. And it may well be that that little thing feels like a tiny bit. Now, it doesn't have a voice in the head like me, it doesn't have expectations, you know, it doesn't have all these complex things, but it may well feel like something. Yeah. So this is really interesting. Can we draw some lines and maybe try to understand the difference between life, intelligence and consciousness? How do you see all of those? If you had to define what is a living thing, what is a conscious thing and what is an intelligent thing, do those intermix for you or are they totally separate? Okay. So A, that's a question that we don't have a full answer to. A lot of the stuff we're talking about today is full of mysteries, and fascinating ones, right? For example, you can go to Aristotle, who's probably the most important scientist and philosopher who's ever lived, certainly in Western culture. He had this idea, it's called hylomorphism, it's quite popular these days, that there are different forms of soul. The soul is really the form of something. He says all biological creatures have a vegetative soul. That's the life principle. Today, we think we understand something more about it: it's biochemistry and nonlinear thermodynamics. Then he said they have a sensitive soul. Only animals and humans also have a sensitive soul, or appetitive soul. They can see, they can smell, and they have drives. They want to reproduce, they want to eat, et cetera. And then only humans have what he called a rational soul, okay? And that idea then made it into Christendom, and the rational soul is the one that lives forever. He was very unclear about it; I mean, different readings of Aristotle give different answers as to whether he believed the rational soul was immortal or not. I think he probably didn't. But then, of course, that made it through Plato into Christianity, and then this soul became immortal and became the connection to God. So you ask me, essentially, what is our modern conception of these three; Aristotle would have called them different forms. Life, we think we know something about it, at least life on this planet, right? Although we don't understand how to originate it, and it's been difficult to rigorously pin down. You see this in modern definitions of death.
In fact, right now there's a conference ongoing, again, that tries to define legally and medically what death is. It used to be very simple: death is you stop breathing, your heart stops beating, you're dead, totally uncontroversial. If you're unsure, you wait another 10 minutes; if the patient doesn't breathe, he's dead. Well, now we have ventilators, we have heart pacemakers, so it's much more difficult to define what death is. Typically, death is defined as the end of life, and life is defined as what comes before death. Okay, so we don't really have very good definitions. Intelligence, we don't have a rigorous definition of. We know somewhat how to measure it, it's called IQ or the g factor, right? And we're beginning to build it in a narrow sense, right? Like Go, AlphaGo, and Watson, and, you know, Google cars and Uber cars and all of that; it's still narrow AI, and some people are thinking about artificial general intelligence. But roughly, as we said before, it's something to do with the ability to learn and to adapt to new environments. But that, as I said, is also radically different from experience. And it's very unclear, if you build a machine that has AGI, it's not at all clear a priori that this machine will have consciousness. It may or may not. So let's ask it the other way: do you think, if you were to try to build an artificial general intelligence system, do you think figuring out how to build artificial consciousness would help you get to an AGI? Or, put another way, do you think intelligence requires consciousness? In humans, it goes hand in hand. In humans, or I think in biology generally, consciousness and intelligence go hand in hand, because the brain evolved to be highly complex, and complexity, via the theory, integrated information theory, is sort of ultimately what is closely tied to consciousness. Ultimately it's causal power upon itself. And so in evolved systems, they go together. In artificial systems, particularly in digital machines, they do not go together. And if you ask me point blank, is Alexa 20.0 in the year 2040, when she can easily pass every Turing test, is she conscious? No, even if she claims she's conscious. In fact, you could even do a more radical version of this thought experiment. You can build a computer simulation of the human brain. You know what Henry Markram in the Blue Brain Project, or the Human Brain Project in Switzerland, is trying to do. Let's grant them all the success. So in 10 years, we have this perfect simulation of the human brain. Every neuron is simulated, and it has a larynx and it has motor neurons, it has a Broca's area, and of course they'll talk and they'll say, hi, I just woke up, I feel great. Okay, even that computer simulation, which can in principle map onto your brain, will not be conscious. Why? Because it simulates; there's a difference between the simulated and the real. So it simulates the behavior associated with consciousness. If it's done properly, it will have all the intelligence that the particular person they're simulating has. But simulating intelligence is not the same as having conscious experiences. And I'll give you a really nice metaphor that engineers and physicists typically get. I can write down Einstein's field equations, ten equations that describe the link in general relativity between curvature and mass. I can do that.
I can run this on my laptop to predict that the black hole at the center of our galaxy will be so massive that it will twist spacetime around it so no light can escape. It's a black hole. But funny enough, have you ever wondered, why doesn't this computer simulation suck me in? It simulates gravity, but it doesn't have the causal power of gravity. That's a huge difference. So there's a difference between the real and the simulated, just like it doesn't get wet inside a computer when the computer runs code that simulates a weather storm. And so in order to have artificial consciousness, you have to give it the same causal power as the human brain. You have to build a so-called neuromorphic machine that has hardware very similar to the human brain, not a digital, clocked von Neumann computer. So just to clarify, though, you think that consciousness is not required to create human-level intelligence; it seems to accompany it in the human brain, but for a machine, not necessarily. That's correct. So maybe, just because this is an AGI course, let's dig in a little bit on what we mean by intelligence. So one thing is the g factor, these kinds of IQ tests of intelligence. But maybe another way to say it: in 2040, 2050, people will have a Siri that is just really impressive. Do you think people will say Siri is intelligent? Yes. Intelligence is this amorphous thing. So to be intelligent, it seems like you have to have some kind of connection with other human beings, in the sense that you have to impress them with your intelligence, and it feels like you have to somehow operate in this world full of humans. And for that, it feels like there has to be something like consciousness. So do you think you can have just the world's best NLP system, natural language understanding and generation, and that will make us happy and say, you know what, we've created an AGI? I don't know about happy, but yes, I do believe we can get what we call high-level functional intelligence, particularly sort of the g, you know, this fluid intelligence that we cherish, particularly at a place like MIT, right, in machines. I see a priori no reason why not, and I see a lot of reason to believe it's going to happen, you know, over the next 30 or 50 years. So for beneficial AI, for creating an AI system, so you mentioned ethics, that is exceptionally intelligent but also, you know, aligns its values with our values as humanity, do you think it then needs consciousness? Yes, I think that is a very good argument. If we're concerned about AI and the threat of AI, a la Nick Bostrom, the existential threat, I think having an intelligence that has empathy... right, why do we find abusing a dog, why do most of us find that abhorrent, abusing any animal, right? Why do we find that abhorrent? Because we have this thing called empathy, which if you look at the Greek really means feeling with: pathos, empathy, I have feeling with you. I see somebody else suffer who isn't even my conspecific, it's not a person, it's not my wife or my kids, it's a dog, but naturally most of us, not all of us, but most of us will feel empathic. And so it may well be in the long-term interest of the survival of Homo sapiens sapiens that if we do build AGI, and it really becomes very powerful, it has an empathic response and doesn't just exterminate humanity.
So as part of the full conscious experience, to create a consciousness, artificial or in our human consciousness, do you think fear, and maybe we'll get into the earlier days with Nietzsche and so on, but do you think fear and suffering are essential to have consciousness? Do you have to have the full range of experience to have a system that has experience, or can you have a system that only has very particular kinds of very positive experiences? Look, in principle you can. People have done this in the rat, where you implant an electrode in the hypothalamus, the pleasure center of the rat, and the rat stimulates itself above and beyond anything else. It doesn't care about food or natural sex or drink anymore, it just stimulates itself because it's such a pleasurable feeling. I guess it's like an orgasm you just have all day long. And so a priori I see no reason why you need a great variety. Now clearly, to survive, that wouldn't work, right? But if it's engineered artificially, I don't think you need a great variety of conscious experience. You could have just pleasure or just fear. It might be a terrible existence, but I think that's possible, at least on conceptual, logical grounds. Though for any real creature, whether artificially engineered or not, you want to give it fear, the fear of extinction that we all have, and you also want to give it positive, appetitive states, states for the things you want to encourage the machine to do, because they give the machine positive feedback. So you mentioned panpsychism, to jump back a little bit, everything having some kind of mental property. How do you go from there to something like human consciousness? So everything having some elements of consciousness, is there something special about human consciousness? So it's not everything. Like a spoon: the form of panpsychism I think about doesn't ascribe consciousness to anything like this, the spoon, or my liver. However, the theory, integrated information theory, does say that systems, even ones that look relatively simple from the outside, at least if they have this internal causal power, do feel like something. The theory a priori doesn't say anything about what's special about humans. Biologically, we know the one thing that's special about humans is we speak, and we have an overblown sense of our own importance. We believe we're exceptional and we're just God's gift to the universe. But behaviorally, the main things that we have are that we can plan over the long term and we have language, and that gives us an enormous amount of power, and that's why we are the current dominant species on the planet. So you mentioned God; you grew up in a devout Roman Catholic family, so with consciousness you're sort of exploring some really deeply fundamental human things that religion also touches on. Where does religion fit into your thinking about consciousness? You've grown throughout your life and changed your views on religion, as far as I understand. Yeah, I mean, I'm not a Roman Catholic anymore. I don't believe there's sort of this God, the God I was educated to believe in, who sits somewhere and with whom, in the fullness of time, I'll be united in some sort of everlasting bliss. I just don't see any evidence for that. Look, the world, the night, is large and full of wonders. There are many things that I don't understand, many things that we as a culture don't understand; look, we don't even understand more than 4% of the universe: dark matter, dark energy, we have no idea what it is, maybe it's lost socks, what do I know?
So all I can tell you is it's sort of my current religious or spiritual sentiment is much closer to some form of Buddhism, without the reincarnation unfortunately, there's no evidence for it than reincarnation. So can you describe the way Buddhism sees the world a little bit? Well so they talk about, so when I spent several meetings with the Dalai Lama and what always impressed me about him, he really, unlike for example let's say the Pope or some Cardinal, he always emphasized minimizing the suffering of all creatures. So they have this, from the early beginning they look at suffering in all creatures, not just in people, but in everybody, this universal and of course by degrees, an animal in general is less capable of suffering than a well developed, normally developed human and they think consciousness pervades in this universe and they have these techniques, you can think of them like mindfulness etc. and meditation that tries to access what they claim of this more fundamental aspect of reality. I'm not sure it's more fundamental, I think about it, there's the physical and then there's this inside view, consciousness and those are the two aspects that's the only thing I have access to in my life and you've got to remember my conscious experience and your conscious experience comes prior to anything you know about physics, comes prior to knowledge about the universe and atoms and super strings and molecules and all of that. The only thing you directly are acquainted with is this world that's populated with things in images and sounds in your head and touches and all of that. I actually have a question, so it sounds like you kind of have a rich life, you talk about rock climbing and it seems like you really love literature and consciousness is all about experiencing things, so do you think that has helped your research on this topic? Yes, particularly if you think about it, the various states, so for example when you do rock climbing or now I do rowing, crew rowing and a bike every day, you can get into this thing called the zone and I've always wanted about it, particularly with respect to consciousness because it's a strangely addictive state. Once people have it once, they want to keep on going back to it and you wonder what is it so addicting about it and I think it's the experience of almost close to pure experience because in this zone, you're not conscious of inner voice anymore, there's always inner voice nagging you, you have to do this, you have to do that, you have to pay your taxes, you have to fight with your ex and all of those things, they're always there. But when you're in the zone, all of that is gone and you're just in this wonderful state where you're fully out in the world, you're climbing or you're rowing or biking or doing soccer or whatever you're doing and sort of consciousness is this, you're all action or in this case of pure experience, you're not action at all but in both cases, you experience some aspect of conscious, you touch some basic part of conscious existence that is so basic and so deeply satisfying. You I think you touch the root of being, that's really what you're touching there, you're getting close to the root of being and that's very different from intelligence. So what do you think about the simulation hypothesis, simulation theory, the idea that we all live in a computer simulation? Rapture for nerds. Rapture for nerds. 
I think it's as likely as the hypothesis that engaged hundreds of scholars for many centuries: are we all just existing in the mind of God? This is just a modern version of it, and it's equally plausible. People love talking about these sorts of things. I know there are books written about the simulation hypothesis. If that's what people want to do, that's fine, but it seems rather esoteric, and it's never testable. But it's not useful for you to think in those terms. So maybe connecting to the questions of free will, which you've talked about: I vaguely remember you saying that the idea that there's no free will makes you very uncomfortable. So what do you think about free will, from a physics perspective, from a consciousness perspective, how does it all fit? Okay, so from the physics perspective, leaving aside quantum mechanics, we believe we live in a fully deterministic world, right? But then of course comes quantum mechanics, so now we know that certain things are in principle not predictable, which, as you said, I prefer, because the idea that the initial condition of the universe and then everything else, that we're just acting out the initial condition of the universe, that doesn't… It's not a romantic notion. Certainly not. Now, when it comes to consciousness, I think we do have a certain freedom. We are much more constrained by physics, of course, and by our past and by our own conscious desires and what our parents told us and what our environment tells us. We all know that, right? There are hundreds of experiments that show how we can be influenced. But in the final analysis, when you make a life decision, and I'm talking really about critical decisions where you really think, should I marry, should I go to this school or that school, should I take this job or that job, should I cheat on my taxes or not? These are things where you really deliberate, and I think under those conditions you are as free as you can be. When you bring your entire being, your entire conscious being, to that question and try to analyze it under all the various conditions, and then you make a decision, you are as free as you can ever be. That is, I think, what free will is. It's not a will that's totally free to do anything it wants; that's not possible. Right. So as Jack mentioned, you actually write a blog about books you've read, amazing books from, I'm Russian, from Bulgakov, Neil Gaiman, Carl Sagan, Murakami. So what is a book that early in your life transformed the way you saw the world, something that changed your life? Nietzsche, I guess, did. Thus Spoke Zarathustra, because he talks about some of these problems. He was one of the first discoverers of the unconscious. This was a little bit before Freud, when it was in the air. He makes all these claims that people, sort of under the guise or under the mask of charity, actually are very uncharitable. So he is really sort of the first discoverer of the great land of the unconscious, and that really struck me. And what do you think about the unconscious, what do you think about Freud, what do you think about these ideas? Just like dark matter in the universe, what's over there in that unconscious? A lot. I mean, much more than we think. This is what a lot of the last 100 years of research has shown. So I think he was a genius, misguided towards the end, but he started out as a neuroscientist. He did the studies on the lamprey, he contributed himself to the neuron hypothesis, the idea that there are discrete units that we call nerve cells now.
And then he wrote about the unconscious, and I think it's true, there's lots of stuff happening. You feel this particularly when you're in a relationship and it breaks asunder, right? And then you have these terrible feelings; you can have love and hate and lust and anger, and all of it's mixed in. And when you try to analyze yourself, why am I so upset, it's very, very difficult to penetrate to those basements, those caverns in your mind, because the prying eyes of consciousness don't have access to those. But they're there, in the amygdala or lots of other places, and they make you upset or angry or sad or depressed, and it's very difficult to actually uncover the reason. You can go to a shrink, you can talk with your friend endlessly, you finally construct a story of why this happened, why you love her or don't love her or whatever, but you don't really know whether that is what actually happened, because you simply don't have access to those parts of the brain, and they're very powerful. Do you think that's a feature or a bug of our brain, the fact that we have this deep, difficult-to-dive-into subconscious? I think it's a feature, because otherwise, look, we are, like any other brain or nervous system or computer, severely band-limited. If everything I do, every emotion I feel, every eye movement I make, if all of that had to be under the control of consciousness, I wouldn't be here. What you do early on, you have to be conscious when you learn things like typing or riding a bike, but then what you do is train up routes, I think they involve the basal ganglia and the striatum. You train up different parts of your brain, and then once you do it automatically, like typing, you can show you do it much faster, without even thinking about it, because you've got these highly specialized, what Francis Crick and I call zombie agents. They're taking care of that, while your consciousness can sort of worry about the abstract sense of the text you want to write. I think that's true for many, many things. But for things like all the fights you had with an ex-girlfriend, things that you would think are not useful, to still linger somewhere in the subconscious, that seems like a bug, that it would stick around there. You'd think it would be better if you could analyze it and then get it out of the system, better to get it out of the system, or just forget it ever happened. That seems a very buggy kind of thing. Well yeah, in general we don't have, and that's probably functional, we don't have that ability, unless it's extreme; there are cases, clinical dissociations, right, when people are heavily abused, when they completely repress the memory, but that doesn't happen in normal people. We don't have the ability to remove traumatic memories, and of course we suffer from that. On the other hand, if you had the ability to constantly wipe your memory, you'd probably do it to an extent that isn't useful to you. So yeah, it's a good question of balance. So on the books, as Jack mentioned, correct me if I'm wrong, but broadly speaking, in academia and the different scientific disciplines, certainly in engineering, reading literature seems to be a rare pursuit. Maybe I'm wrong on this, but that's my experience: most people read much more technical text and do not sort of escape or seek truth in literature. It seems like you do. So what do you think is the value, what do you think literature adds to the pursuit of scientific truth? Do you think it's good, it's useful for everybody?
It gives you access to a much wider array of human experiences. How valuable do you think it is? Well, if you want to understand human nature, and nature in general, then I think you have to understand a wide variety of experiences, not just sitting in a lab staring at a screen and having a face flashed at you for a hundred milliseconds and pushing a button. That's what I used to do; that's what most psychologists do. There's nothing wrong with that, but you need to consider lots of other, stranger states. And literature is a shortcut for this. Well, yeah, because that's what literature is all about — all sorts of interesting experiences that people have, the contingency of it, the fact that women experience the world differently, black people experience the world differently. One way to experience that is to read all these different literatures and try to find out. You see, everything is so relative. You read a book from 300 years ago and they thought about certain problems very, very differently than we do today. We today, like any culture, think we know it all. That's common to every culture; every culture at its heyday believes it knows it all. And then you realize, well, there are other ways of viewing the universe, and some of them may have lots of things in their favor. So this is a question I wanted to ask about time scale, or scale in general. When you, with IIT or in general, try to think about consciousness, try to think about these ideas, we kind of naturally think on human time scales, and about entities that are sized close to humans. Do you think of things that are much larger and much smaller as containing consciousness? And do you think of things that take, you know, eons to operate in their conscious cause-effect? That's a very good question. So I think a lot about small creatures, because experimentally, you know, a lot of people work on flies and bees, right? Most people just think they are automata — they're just bugs, for heaven's sake, right? But if you look at their behavior: bees can recognize individual humans. They have this very complicated way to communicate. If you've ever been involved — or you know, when your parents bought a house — what an agonizing decision that is. And bees have to do that once a year, right, when they swarm in the spring. And then they have this very elaborate way: they have scouts, they go to the individual sites, they come back, and they have this dance, literally, where they dance for several days and try to recruit other scouts — this very complicated decision making — and once they finally make a decision, the scouts warm up the entire swarm and then it goes to one location. They don't go to 50 locations; they go to the one location that the scouts have agreed upon among themselves. That's awesome. If we look at the circuit complexity, it's ten times denser than anything we have in our brain. Now, they only have a million neurons, but the neurons are amazingly complex. Complex behavior, very complicated circuitry — so there's no question they experience something. Their life is very different: they're tiny, and they only live, well, workers live maybe for two months. So I think — and IIT tells you this — in principle, the substrate of consciousness is the substrate that maximizes the cause-effect power over all possible spatio-temporal grains. So when I think about, for example — do you know the science fiction story The Black Cloud?
Okay, it's a classic by Fred Hoyle, the astronomer. He has this cloud intervening between the earth and the sun, leading to a sort of global cooling — this is written in the 50s. It turns out that using a radio dish they can communicate with it; it's actually an intelligent entity, and they convince it to move away. So here you have a radically different entity, and in principle IIT says, well, you can measure the integrated information, in principle at least, and yes, if the maximum of that occurs at a time scale of months rather than, as in us, a fraction of a second, then yes, it would experience life where each moment is a month rather than a fraction of a second, as in the human case. And so there may be forms of consciousness that we simply don't recognize for what they are, because they are so radically different from anything you and I are used to. Again, that's why it's good to read, or to watch science fiction movies — to think about this. Do you know Stanislaw Lem, the Polish science fiction writer? He wrote Solaris, which was turned into a Hollywood movie. Yes. He wrote his best novels in the 60s; he has an engineering background. His most interesting novel is called The Invincible, where humans have this mission to a planet and everything is destroyed, and they discover machines — the humans got killed and then these machines took over, and there was this machine evolution, a Darwinian evolution; he talks about this very vividly. And finally, the dominant machine intelligence organisms that survived were gigantic clouds of little hexagonal universal cellular automata. This was written in the 60s. Typically they're all lying on the ground, individually, by themselves, but in times of crisis they can communicate: they assemble into gigantic nets, into clouds of trillions of these particles, and then they become hyper-intelligent and can beat anything that humans can throw at them. It's very beautiful and compelling — finally the humans leave the planet; they're simply unable to understand and comprehend this creature. They say, well, either we nuke the entire planet and destroy it, or we just have to leave, because fundamentally it's an alien — it's so alien from us and our ideas that we cannot communicate with it. Yeah, actually, in conversation Stephen Wolfram brought up the idea that there could already be these artificial-general-intelligence-like, super smart, or maybe conscious beings in cellular automata — we just don't know how to talk to them. So it's the language of communication that's missing; we don't know what to do with it. So one sort of view is that consciousness is only something you can measure — it's not conscious if you can't measure it. So you're making an ontological and an epistemic statement. One is like the multiverse: they might exist, but I can't communicate with them, I can't have any knowledge of them. That's an epistemic argument, right? So those are two different things. So it may well be possible. Look, another case that's happening right now: people are building these mini organoids. Do you know what this is? You can take stem cells from under your arm, put them in a dish, add four transcription factors, and then you can induce them to grow into large — well, large, they're a few millimeters.
They're like half a million neurons that look like nerve cells, in a dish — they're called mini organoids, and at Harvard, at Stanford, everywhere, they're building them. It may well be possible that they're beginning to feel like something, but we can't really communicate with them right now. So people are beginning to think about the ethics of this. So yes, he may be perfectly right, but it's one question whether they are conscious or not, and a totally separate question how I would know. Those are two different things. If you could give advice to a young researcher dreaming of understanding or creating human-level intelligence or consciousness, what would you say? Just follow your dreams. Read widely. No, I mean, I suppose which discipline — what is the pursuit that they should take on? Is it neuroscience? Is it computational cognitive science? Is it philosophy? Is it computer science or robotics? Well, in a sense — okay, so the only known system that has a high level of intelligence is Homo sapiens. So if you want to build it, it's probably good to continue to study closely what humans do. So cognitive neuroscience — somewhere between cognitive neuroscience on the one hand, some philosophy of mind, and then AI, computer science. If you look at the original ideas in neural networks, they all came from neuroscience, right? Reinforcement learning, whether it's Minsky building the SNARC, or the early Hubel and Wiesel experiments at Harvard that then gave rise to networks and then multilayer networks. So it may well be possible — in fact, some people argue this — that to make the next big step in AI, once we realize the limits of deep convolutional networks: they can do certain things, but they can't really understand. I can't really show them one image. I can show you a single image of somebody, a pickpocket who steals a wallet from a purse — you immediately know that's a pickpocket. A computer system would just say, well, it's a man, it's a woman, it's a purse, right? Unless you train this machine by showing it a hundred thousand pickpockets, right? So it doesn't have the easy understanding that you have, right? So some people make the argument that in order to go to the next step — if you really want to build machines that understand the way you and I do — we have to go to psychology. We need to understand how we do it, and how our brains enable us to do it. And so being on that cusp, it's also so exciting to try to understand our own nature better and then to take some of those insights and build them in. So I think the most exciting thing is somewhere at the interface between cognitive science, neuroscience, AI, computer science, and philosophy of mind. Beautiful. Yeah, I'd say from the machine learning, the computer science, the computer vision perspective, many of the researchers kind of ignore the way the human brain works, or even psychology or literature or studying the brain — although Josh Tenenbaum talks about bringing that in more and more. And so, yeah — you've worked on some amazing stuff throughout your life. What's the thing that you're really excited about? What's the mystery that you would love to uncover in the near term, beyond all the mysteries that you're already surrounded by? Well, there's a structure called the claustrum. It's a structure underneath our cortex — it's yay big.
You have one on the left and one on the right, underneath the insula. It's very thin, like one millimeter, and it's embedded in wiring, in white matter, so it's very difficult to image. And it has connections to every cortical region. And Francis Crick — the last paper he ever wrote; he dictated corrections to this paper in hospital on the day he died — we hypothesized that because it has this unique anatomy, getting input from every cortical area and projecting back to every cortical area, the function of this structure is similar — it's just a metaphor — to the role of a conductor in a symphony orchestra. You have all the different cortical players: some that do motion, some that do theory of mind, some that infer social interactions, and color and hearing and all the different modules in cortex. But of course, what consciousness is — consciousness puts it all together into one package, right? The binding problem, all of that. And this is really its function, because it has relatively few neurons compared to cortex, but it receives input from all of them and projects back to all of them. And so we're testing that right now. We've got this beautiful neuronal reconstruction in the mouse of what we call crown-of-thorns neurons, which are in the claustrum and have the most widespread connections of any neuron I've ever seen. You have individual neurons that sit in the tiny claustrum, but each single neuron has this huge axonal tree that covers both ipsi- and contralateral cortex, and we're trying, using fancy tools like optogenetics, to turn those neurons on or off and study what happens in the mouse. So this thing is perhaps where the parts become the whole. Perhaps — that's a very good way of putting it — it's one of the structures where the individual parts turn into the whole of the conscious experience. Well, with that, thank you very much for being here today. Thank you very much. All right, thank you very much.
Christof Koch: Consciousness | Lex Fridman Podcast #2
You've studied the human mind, cognition, language, vision, evolution, psychology, from child to adult, from the level of individual to the level of our entire civilization. So I feel like I can start with a simple multiple choice question. What is the meaning of life? Is it A. to attain knowledge as Plato said, B. to attain power as Nietzsche said, C. to escape death as Ernest Becker said, D. to propagate our genes as Darwin and others have said, E. there is no meaning as the nihilists have said, F. knowing the meaning of life is beyond our cognitive capabilities as Stephen Pinker said, based on my interpretation 20 years ago, and G. none of the above. I'd say A. comes closest, but I would amend that to C. to attaining not only knowledge but fulfillment more generally, that is life, health, stimulation, access to the living cultural and social world. Now this is our meaning of life. It's not the meaning of life if you were to ask our genes. Their meaning is to propagate copies of themselves, but that is distinct from the meaning that the brain that they lead to sets for itself. So to you knowledge is a small subset or a large subset? It's a large subset, but it's not the entirety of human striving because we also want to interact with people. We want to experience beauty. We want to experience the richness of the natural world, but understanding what makes the universe tick is way up there. For some of us more than others, certainly for me that's one of the top five. So is that a fundamental aspect? Are you just describing your own preference or is this a fundamental aspect of human nature is to seek knowledge? In your latest book you talk about the power, the usefulness of rationality and reason and so on. Is that a fundamental nature of human beings or is it something we should just strive for? Both. We're capable of striving for it because it is one of the things that make us what we are, homo sapiens, wise men. We are unusual among animals in the degree to which we acquire knowledge and use it to survive. We make tools. We strike agreements via language. We extract poisons. We predict the behavior of animals. We try to get at the workings of plants. And when I say we, I don't just mean we in the modern West, but we as a species everywhere, which is how we've managed to occupy every niche on the planet, how we've managed to drive other animals to extinction. And the refinement of reason in pursuit of human wellbeing, of health, happiness, social richness, cultural richness is our main challenge in the present. That is using our intellect, using our knowledge to figure out how the world works, how we work in order to make discoveries and strike agreements that make us all better off in the long run. Right. And you do that almost undeniably and in a data driven way in your recent book, but I'd like to focus on the artificial intelligence aspect of things and not just artificial intelligence, but natural intelligence too. So 20 years ago in a book you've written on how the mind works, you conjecture again, am I right to interpret things? You can correct me if I'm wrong, but you conjecture that human thought in the brain may be a result of a massive network of highly interconnected neurons. 
So from this interconnectivity emerges thought. Compared to artificial neural networks, which we use for machine learning today, is there something fundamentally more complex, mysterious, even magical about biological neural networks, versus the ones we've been starting to use over the past 60 years and have come to such success with in the past 10? There is something a little bit mysterious about human neural networks, which is that each one of us who is a neural network knows that we ourselves are conscious — conscious not in the sense of registering our surroundings or even registering our internal state, but in having subjective, first-person, present-tense experience. That is, when I see red, it's not just different from green, but there's a redness to it that I feel. Whether an artificial system would experience that or not, I don't know and I don't think I can know. That's why it's mysterious. If we had a perfectly lifelike robot that was behaviorally indistinguishable from a human, would we attribute consciousness to it, or ought we to attribute consciousness to it? And that's something that's very hard to know. But putting that aside, putting aside that largely philosophical question, the question is: is there some difference between the human neural network and the ones that we're building in artificial intelligence that will mean that, on the current trajectory, we're not going to reach the point where we've got a lifelike robot indistinguishable from a human, because the way their so-called neural networks are organized is different from the way ours are organized? I think there's overlap, but I think there are some big differences — that current neural networks, current so-called deep learning systems, are in reality not all that deep. That is, they are very good at extracting high-order statistical regularities, but most of the systems don't have a semantic level, a level of actual understanding of who did what to whom, why, where, how things work, what causes what else. Do you think that kind of thing can emerge as it does in us? Artificial neural networks are much smaller in the number of connections and so on than current human biological networks, but do you think that, to get to consciousness, or to get to this higher-level semantic reasoning about things, that can emerge with just a larger network, with a more richly, weirdly interconnected network? I'd separate out consciousness, because it's not clear that consciousness is even a matter of complexity. A really weird one. Yeah, you could sensibly ask the question of whether shrimp are conscious, for example — they're not terribly complex, but maybe they feel pain. So let's just put that part of it aside. But I think sheer size of a neural network is not enough to give it structure and knowledge; but if it's suitably engineered, then why not? That is, we are neural networks — natural selection did a kind of equivalent of engineering on our brains. So I don't think there's anything mysterious in the sense that no system made out of silicon could ever do what a human brain can do. I think it's possible in principle. Whether it'll ever happen depends not only on how clever we are in engineering these systems, but on whether we even want to, whether that's even a sensible goal. That is, you can ask the question: is there any locomotion system that is as good as a human? Well, we kind of want to do better than a human, ultimately, in terms of legged locomotion. There's no reason that humans should be our benchmark.
There are tools that might be better in some ways. It may be that we can't duplicate a natural system because at some point it's so much cheaper to use the natural system that we're not going to invest more brainpower and resources. So for example, we don't really have an exact substitute for wood. We still build houses out of wood, we still build furniture out of wood; we like the look, we like the feel. It has certain properties that synthetics don't. It's not that there's anything magical or mysterious about wood; it's just that the extra steps of duplicating everything about wood are something we just haven't bothered with, because we have wood. Likewise, say, cotton. I'm wearing cotton clothing now. It feels much better than polyester. It's not that cotton has something magic in it, and it's not that we couldn't ever synthesize something exactly like cotton, but at some point it's just not worth it — we've got cotton. Likewise, in the case of human intelligence, the goal of making an artificial system that is exactly like the human brain is a goal that probably no one is going to pursue to the bitter end, I suspect, because if you want tools that do things better than humans, you're not going to care whether it does them the way humans do. So for example, diagnosing cancer or predicting the weather — why set humans as your benchmark? But in general, I suspect you also believe that even if the human should not be the benchmark and we don't want to imitate humans in these systems, there's a lot to be learned about how to create an artificial intelligence system by studying the human. Yeah, I think that's right. In the same way that to build flying machines we want to understand the laws of aerodynamics, including in birds, but not mimic the birds — they're the same laws. You have a view on AI, artificial intelligence, and safety that, from my perspective, is refreshingly rational, or perhaps more importantly, has elements of positivity to it, which I think can be inspiring and empowering as opposed to paralyzing. For many people, including AI researchers, the eventual existential threat of AI is obvious — not only possible, but obvious. And for many others, including AI researchers, the threat is not obvious. So Elon Musk is famously in the highly-concerned-about-AI camp, saying things like AI is far more dangerous than nuclear weapons, and that AI will likely destroy human civilization. So in February, you said that if Elon was really serious about the threat of AI, he would stop building self-driving cars, which he's doing very successfully as part of Tesla. Then Elon replied: wow, if even Pinker doesn't understand the difference between narrow AI, like a car, and general AI, when the latter literally has a million times more compute power and an open-ended utility function, humanity is in deep trouble. So first, what did you mean by the statement that Elon Musk should stop building self-driving cars if he's deeply concerned? Not the last time that Elon Musk has fired off an intemperate tweet. Well, we live in a world where Twitter has power. Yes. Yeah, I think there are two kinds of existential threat that have been discussed in connection with artificial intelligence, and I think that they're both incoherent. One of them is a vague fear of AI takeover: that just as we subjugated animals and less technologically advanced peoples, so if we build something that's more advanced than us, it will inevitably turn us into pets or slaves or domesticated animal equivalents.
I think this confuses intelligence with a will to power, that it so happens that in the intelligence system we are most familiar with, namely homo sapiens, we are products of natural selection, which is a competitive process, and so bundled together with our problem solving capacity are a number of nasty traits like dominance and exploitation and maximization of power and glory and resources and influence. There's no reason to think that sheer problem solving capability will set that as one of its goals. Its goals will be whatever we set its goals as, and as long as someone isn't building a megalomaniacal artificial intelligence, then there's no reason to think that it would naturally evolve in that direction. Now, you might say, well, what if we gave it the goal of maximizing its own power source? That's a pretty stupid goal to give an autonomous system. You don't give it that goal. I mean, that's just self evidently idiotic. So if you look at the history of the world, there's been a lot of opportunities where engineers could instill in a system destructive power and they choose not to because that's the natural process of engineering. Well, except for weapons. I mean, if you're building a weapon, its goal is to destroy people, and so I think there are good reasons to not build certain kinds of weapons. I think building nuclear weapons was a massive mistake. You do. So maybe pause on that because that is one of the serious threats. Do you think that it was a mistake in a sense that it should have been stopped early on? Or do you think it's just an unfortunate event of invention that this was invented? Do you think it's possible to stop? I guess is the question. It's hard to rewind the clock because of course it was invented in the context of World War II and the fear that the Nazis might develop one first. Then once it was initiated for that reason, it was hard to turn off, especially since winning the war against the Japanese and the Nazis was such an overwhelming goal of every responsible person that there's just nothing that people wouldn't have done then to ensure victory. It's quite possible if World War II hadn't happened that nuclear weapons wouldn't have been invented. We can't know, but I don't think it was by any means a necessity, any more than some of the other weapon systems that were envisioned but never implemented, like planes that would disperse poison gas over cities like crop dusters or systems to try to create earthquakes and tsunamis in enemy countries, to weaponize the weather, weaponize solar flares, all kinds of crazy schemes that we thought the better of. I think analogies between nuclear weapons and artificial intelligence are fundamentally misguided because the whole point of nuclear weapons is to destroy things. The point of artificial intelligence is not to destroy things. So the analogy is misleading. So there's two artificial intelligence you mentioned. The first one I guess is highly intelligent or power hungry. Yeah, it's a system that we design ourselves where we give it the goals. Goals are external to the means to attain the goals. If we don't design an artificially intelligent system to maximize dominance, then it won't maximize dominance. It's just that we're so familiar with homo sapiens where these two traits come bundled together, particularly in men, that we are apt to confuse high intelligence with a will to power, but that's just an error. 
The other fear is that there will be collateral damage — that we'll give artificial intelligence a goal like "make paper clips" and it will pursue that goal so brilliantly that before we can stop it, it turns us into paper clips. We'll give it the goal of curing cancer and it will turn us into guinea pigs for lethal experiments, or give it the goal of world peace and its conception of world peace is no people, therefore no fighting, and so it will kill us all. Now, I think these are utterly fanciful. In fact, I think they're actually self-defeating. They first of all assume that we're going to be so brilliant that we can design an artificial intelligence that can cure cancer, but so stupid that we don't specify what we mean by curing cancer in enough detail that it won't kill us in the process; and they assume that the system will be so smart that it can cure cancer, but so idiotic that it can't figure out that what we mean by curing cancer is not killing everyone. I think the collateral damage scenario, the value alignment problem, is also based on a misconception. So one of the challenges, of course, is that we don't know how to build either system currently — are we even close to knowing? Of course, those things can change overnight, but at this time, theorizing about it is very challenging in either direction. So that's probably at the core of the problem: without the ability to reason about the real engineering at hand, your imagination runs away with things. Exactly. But let me sort of ask: what do you think was the motivation, the thought process, of Elon Musk? I build autonomous vehicles, I study autonomous vehicles, I study Tesla Autopilot. I think it is currently one of the greatest large-scale applications of artificial intelligence in the world. It has a potentially very positive impact on society. So how does a person who's creating this very good, quote unquote, narrow AI system also seem to be so concerned about this other, general AI? What do you think is the motivation there? What do you think is going on? Well, you probably have to ask him. He is notoriously flamboyant, impulsive — as we have just seen, to the detriment of his own goals, of the health of the company. So I don't know what's going on in his mind; you probably have to ask him. But I don't think the distinction between special-purpose AI and so-called general AI is relevant, in the same way that special-purpose AI is not going to do anything conceivable in order to attain a goal. All engineering systems are designed to trade off across multiple goals. When we built cars in the first place, we didn't forget to install brakes, because the goal of a car is to go fast. It occurred to people: yes, you want it to go fast, but not always. So you build in brakes too. Likewise, if a car is going to be autonomous and you program it to take the shortest route to the airport, it's not going to take the diagonal and mow down people and trees and fences because that's the shortest route. That's not what we mean by the shortest route when we program it. And that's just what an intelligent system is, by definition: it takes into account multiple constraints. The same is true — in fact, even more true — of so-called general intelligence. That is, if it's genuinely intelligent, it's not going to pursue some goal single-mindedly, omitting every other consideration and collateral effect. That's not artificial general intelligence; that's artificial stupidity.
I agree with you, by the way, on the promise of autonomous vehicles for improving human welfare. I think it's spectacular. And I'm surprised at how little press coverage notes that in the United States alone, something like 40,000 people die every year on the highways, vastly more than are killed by terrorists. And we spent a trillion dollars on a war to combat deaths by terrorism, about half a dozen a year. Whereas year in, year out, 40,000 people are massacred on the highways, which could be brought down to very close to zero. So I'm with you on the humanitarian benefit. Let me just mention that as a person who's building these cars, it is a little bit offensive to me to say that engineers would be clueless enough not to engineer safety into systems. I often stay up at night thinking about those 40,000 people that are dying. And everything I tried to engineer is to save those people's lives. So every new invention that I'm super excited about, in all the deep learning literature and CVPR conferences and NIPS, everything I'm super excited about is all grounded in making it safe and help people. So I just don't see how that trajectory can all of a sudden slip into a situation where intelligence will be highly negative. You and I certainly agree on that. And I think that's only the beginning of the potential humanitarian benefits of artificial intelligence. There's been enormous attention to what are we going to do with the people whose jobs are made obsolete by artificial intelligence, but very little attention given to the fact that the jobs that are going to be made obsolete are horrible jobs. The fact that people aren't going to be picking crops and making beds and driving trucks and mining coal, these are soul deadening jobs. And we have a whole literature sympathizing with the people stuck in these menial, mind deadening, dangerous jobs. If we can eliminate them, this is a fantastic boon to humanity. Now granted, you solve one problem and there's another one, namely, how do we get these people a decent income? But if we're smart enough to invent machines that can make beds and put away dishes and handle hospital patients, I think we're smart enough to figure out how to redistribute income to apportion some of the vast economic savings to the human beings who will no longer be needed to make beds. Okay. Sam Harris says that it's obvious that eventually AI will be an existential risk. He's one of the people who says it's obvious. We don't know when the claim goes, but eventually it's obvious. And because we don't know when, we should worry about it now. This is a very interesting argument in my eyes. So how do we think about timescale? How do we think about existential threats when we don't really, we know so little about the threat, unlike nuclear weapons perhaps, about this particular threat, that it could happen tomorrow, right? So, but very likely it won't. Very likely it'd be a hundred years away. So how do we ignore it? How do we talk about it? Do we worry about it? How do we think about those? What is it? A threat that we can imagine. It's within the limits of our imagination, but not within our limits of understanding to accurately predict it. But what is the it that we're afraid of? Sorry. AI being the existential threat. AI. How? Like enslaving us or turning us into paperclips? I think the most compelling from the Sam Harris perspective would be the paperclip situation. Yeah. I mean, I just think it's totally fanciful. I mean, that is don't build a system. 
Don't give — first of all, the code of engineering is that you don't implement a system with massive control before testing it. Now, perhaps the culture of engineering will radically change; then I would worry. But I don't see any signs that engineers will suddenly do idiotic things, like put the electric power grid under the control of a system that they haven't tested first. And all of these scenarios not only imagine an almost magically powered intelligence, with goals like curing cancer — which is probably an incoherent goal, because there are so many different kinds of cancer — or bringing about world peace — I mean, how do you even specify that as a goal? — but the scenarios also imagine some degree of control of every molecule in the universe, which not only is itself unlikely, but we would not start to connect these systems to infrastructure without testing, as we would with any kind of engineering system. Now, maybe some engineers will be irresponsible, and we need legal and regulatory responsibility implemented so that engineers don't do things that are stupid by their own standards. But I've never seen a plausible enough scenario of existential threat to devote large amounts of brainpower to forestalling it. So you believe in the power, en masse, of the engineering of reason — as you argue in your latest book, about reason and science — to be the very thing that guides the development of new technology so it's safe and also keeps us safe. You know, granted the same culture of safety that currently is part of the engineering mindset for airplanes, for example. So yeah, I don't think that that should be thrown out the window, and that untested all-powerful systems should suddenly be implemented — but there's no reason to think they will be. And in fact, if you look at the progress of artificial intelligence, it's been impressive, especially in the last 10 years or so, but the idea that suddenly there'll be a step function, that all of a sudden, before we know it, it will be all-powerful, that there'll be some kind of recursive self-improvement, some kind of foom, is also fanciful. Certainly the technology that now impresses us, such as deep learning, where you train something on hundreds of thousands or millions of examples — there aren't hundreds of thousands of problems of which curing cancer is a typical example. And so the kind of techniques that have allowed AI to improve in the last five years are not the kind that are going to lead to this fantasy of exponential, sudden self-improvement. I think it's kind of magical thinking; it's not based on our understanding of how AI actually works. Now give me a chance here. So you said fanciful, magical thinking. In his TED talk, Sam Harris says that thinking about AI killing all human civilization is somehow fun, intellectually. Now I have to say, as a scientist and engineer, I don't find it fun, but when I'm having a beer with my non-AI friends, there is indeed something fun and appealing about it — like talking about an episode of Black Mirror, or considering if a large meteor is headed towards Earth: we were just told a large meteor is headed towards Earth, something like this. Can you relate to this sense of fun? And do you understand the psychology of it? Yes, good question. I personally don't find it fun.
I find it kind of actually a waste of time, because there are genuine threats that we ought to be thinking about, like pandemics, like cybersecurity vulnerabilities, like the possibility of nuclear war, and certainly climate change. You know, this is enough to fill many conversations. And I think Sam did put his finger on something, namely that there is a community, sometimes called the rationality community, that delights in using its brainpower to come up with scenarios that would not occur to mere mortals, to less cerebral people. So there is a kind of intellectual thrill in finding new things to worry about that no one has worried about yet. I actually think, though, that not only is it a kind of fun that doesn't give me particular pleasure, but there can be a pernicious side to it — namely that you overcome people with such dread, such fatalism, that there are so many ways to die, to annihilate our civilization, that we may as well enjoy life while we can: there's nothing we can do about it; if climate change doesn't do us in, then runaway robots will, so let's enjoy ourselves now. We've got to prioritize. We have to look at threats that are close to certainty, such as climate change, and distinguish those from ones that are merely imaginable but with infinitesimal probabilities. And we have to take into account people's worry budget. You can't worry about everything. And if you sow dread and fear and terror and fatalism, it can lead to a kind of numbness: well, these problems are overwhelming, and the engineers are just going to kill us all, so let's either destroy the entire infrastructure of science and technology, or let's just enjoy life while we can. So there's a certain line of worry — and I worry about a lot of things in engineering — there's a certain line which, when you cross it, becomes paralyzing fear as opposed to productive fear. And that's kind of what you're highlighting. Exactly right. And we know that human effort is not well calibrated against risk, because a basic tenet of cognitive psychology is that perception of risk, and hence perception of fear, is driven by imaginability, not by data. And so we misallocate vast amounts of resources to avoiding terrorism, which kills on average about six Americans a year, with the one exception of 9/11. We invade countries, we invent entire new departments of government, with massive expenditure of resources and lives, to defend ourselves against a trivial risk. Whereas guaranteed risks — one of them you mentioned, traffic fatalities — and even risks that are not here but are plausible enough to worry about, like pandemics, like nuclear war, receive far too little attention. In presidential debates there's no discussion of how to minimize the risk of nuclear war; lots of discussion of terrorism, for example. And so I think it's essential to calibrate our budget of fear, worry, concern, and planning to the actual probability of harm. Yep. So let me ask this question. Speaking of imaginability: you said it's important to think with reason, and one of my favorite people, who likes to dip into the outskirts of reason through fascinating explorations of his imagination, is Joe Rogan. Oh yes. Who used to believe a lot of conspiracies and, through reason, has stripped away a lot of those beliefs. So it's fascinating actually to watch him, through rationality, kind of throw away the ideas of Bigfoot and 9/11 conspiracies. I'm not sure exactly. Chemtrails.
I don't know what he believes in. Yes. Okay. But what he no longer believes in. No, that's right. No, he's become a real force for good. Yep. So you were on the Joe Rogan podcast in February and had a fascinating conversation, but as far as I remember, you didn't talk much about artificial intelligence. I will be on his podcast in a couple of weeks. Joe is very much concerned about the existential threat of AI — I'm not sure if you're aware of this — which is why I was hoping that you would get into that topic. And in this way, he represents quite a lot of people who look at the topic of AI from a 10,000-foot level. So, as an exercise in communication — you said it's important to be rational and reason about these things — let me ask: if you were to coach me as an AI researcher about how to speak to Joe and the general public about AI, what would you advise? Well, the short answer would be to read the sections that I wrote in Enlightenment Now about AI. But a longer answer would be, I think, to emphasize — and I think you're very well positioned as an engineer to remind people about — the culture of engineering, that it really is safety oriented. In another discussion in Enlightenment Now, I plot rates of accidental death from various causes: plane crashes, car crashes, occupational accidents, even death by lightning strikes. And they all plummet, because the culture of engineering is: how do you squeeze out the lethal risks? Death by fire, death by drowning, death by asphyxiation — all of them drastically declined because of advances in engineering, which I have to say I did not appreciate until I saw those graphs. And it is because, exactly, people like you stay up at night thinking: oh my God, is what I'm inventing likely to hurt people? — and deploy ingenuity to prevent that from happening. Now, I'm not an engineer, although I spent 22 years at MIT, so I know something about the culture of engineering. My understanding is that this is the way you think if you're an engineer. And it's essential that that culture not be suddenly switched off when it comes to artificial intelligence. So, I mean, that could be a problem, but is there any reason to think it would be switched off? I don't think so. And one problem is there are not enough engineers speaking up for this way of thinking, for the excitement, for the positive view of human nature — what we're trying to create is positive. Everything we try to invent is trying to do good for the world. But let me ask you about the psychology of negativity. It seems, just objectively, not considering the topic, that being negative about the future makes you sound smarter than being positive about the future, regardless of topic. Am I correct in this observation? And if so, why do you think that is? Yeah, I think there is that phenomenon — as Tom Lehrer, the satirist, said: always predict the worst and you'll be hailed as a prophet. It may be part of our overall negativity bias. We are, as a species, more attuned to the negative than the positive. We dread losses more than we enjoy gains. And that might open up a space for prophets to remind us of harms and risks and losses that we may have overlooked. So I think there is that asymmetry. So you've written some of my favorite books, all over the place — from Enlightenment Now to The Better Angels of Our Nature, The Blank Slate, How the Mind Works, the one about language, The Language Instinct. Bill Gates, a big fan too, said of your most recent book that it's his new favorite book of all time.
So for you as an author, what was a book early on in your life that had a profound impact on the way you saw the world? Certainly this book, Enlightenment Now, was influenced by David Deutsch's The Beginning of Infinity, a rather deep reflection on knowledge and the power of knowledge to improve the human condition, with bits of wisdom such as: problems are inevitable, but problems are solvable given the right knowledge, and solutions create new problems that have to be solved in their turn. That's, I think, a kind of wisdom about the human condition that influenced the writing of this book. There are some books that are excellent but obscure, some of which I have on a page on my website. I read a book called A History of Force, self-published by a political scientist named James Payne, on the historical decline of violence, and that was one of the inspirations for The Better Angels of Our Nature. What about early on? If you look back, when you were maybe a teenager? I loved a book called One, Two, Three... Infinity. When I was a young adult I read that book by George Gamow, the physicist, which had very accessible and humorous explanations of relativity, of number theory, of dimensionality, of higher-dimensional spaces, in a way that I think is still delightful 70 years after it was published. I liked the Time-Life Science series. These were books that would arrive every month, that my mother subscribed to, each one on a different topic. One would be on electricity, one would be on forests, one would be on evolution, and then one was on the mind. I was just intrigued that there could be a science of mind, and that book I would cite as an influence as well. Then later on... That's when you fell in love with the idea of studying the mind? Was that the thing that grabbed you? It was one of the things, I would say. I read as a college student the book Reflections on Language by Noam Chomsky, who spent most of his career here at MIT. Richard Dawkins' two books, The Blind Watchmaker and The Selfish Gene, were enormously influential, mainly for the content but also for the writing style, the ability to explain abstract concepts in lively prose. Stephen Jay Gould's first collection, Ever Since Darwin, is also an excellent example of lively writing. George Miller, a psychologist that most psychologists are familiar with — he came up with the idea that human memory has a capacity of seven plus or minus two chunks, which is probably his biggest claim to fame — wrote a couple of books on language and communication that I read as an undergraduate. Again, beautifully written and intellectually deep. Wonderful. Stephen, thank you so much for taking the time today. My pleasure. Thanks a lot, Lex.
Steven Pinker: AI in the Age of Reason | Lex Fridman Podcast #3
What difference between biological neural networks and artificial neural networks is most mysterious, captivating, and profound for you? First of all, there's so much we don't know about biological neural networks, and that's very mysterious and captivating, because maybe it holds the key to improving artificial neural networks. One of the things I studied recently is something that we don't know how biological neural networks do, but that would be really useful for artificial ones: the ability to do credit assignment through very long time spans. There are things that we can in principle do with artificial neural nets, but it's not very convenient and it's not biologically plausible. And this mismatch, I think, may be an interesting thing to study — to, first, understand better how brains might do these things, because we don't have good corresponding theories with artificial neural nets, and, second, maybe provide new ideas that we could explore about things that brains do differently and that we could incorporate in artificial neural nets. So let's break credit assignment up a little bit. Yes. It's a beautifully technical term, but it could incorporate so many things. Is it more on the RNN memory side, thinking like that, or is it something about knowledge, building up common sense knowledge over time? Or is it more in the reinforcement learning sense, that you're picking up rewards over time to achieve a certain kind of goal? So I was thinking more about the first two meanings, whereby we store all kinds of memories, episodic memories, in our brain, which we can access later in order to help us both infer causes of things that we are observing now and assign credit to decisions or interpretations we came up with a while ago, when those memories were stored. And then we can change the way we would have reacted or interpreted things in the past, and now that's credit assignment used for learning. So in which way do you think artificial neural networks, the current LSTMs, the current architectures, are not able to capture that — presumably you're thinking of very long term? Yes. So the current nets are doing a fairly good job for sequences with dozens or, say, hundreds of time steps, and then it gets harder and harder, depending on what you have to remember and so on, as you consider longer durations. Whereas humans seem to be able to do credit assignment through essentially arbitrary times — like, I could remember something I did last year, and then now, because I see some new evidence, I'm going to change my mind about the way I was thinking last year, and hopefully not make the same mistake again. I think a big part of that is probably forgetting — you're only remembering the really important things. It's very efficient forgetting. Yes, so there's a selection of what we remember. And I think there are really cool connections to higher-level cognition here, regarding consciousness, deciding, and emotions — deciding what comes to consciousness and what gets stored in memory, which are not trivial either. So you've been at the forefront there all along, showing some of the amazing things that neural networks, deep neural networks, can do in the field of artificial intelligence, just broadly, in all kinds of applications. We could talk about that forever. But, in your view — because we're thinking towards the future — what is the weakest aspect of the way deep neural networks represent the world? What is missing, in your view?
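As an editorial aside to the exchange above: a minimal sketch of what long-span credit assignment means in practice. Everything here is an assumption for illustration — PyTorch as the framework, the toy "recall" task, the sizes — none of it comes from the conversation. The point is only that the learning signal for the final prediction has to propagate back through every one of the seq_len steps, which is exactly where recurrent nets struggle as the span grows toward thousands of steps or, as Bengio notes, the months and years humans handle.

```python
# A minimal, assumed toy "recall" task (not from the conversation): the label
# is the token shown at the very first time step, so credit assignment has to
# reach back across the entire sequence, and it gets harder as seq_len grows.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, seq_len, batch = 10, 200, 32  # illustrative sizes, chosen arbitrarily

class RecallLSTM(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h[:, -1])        # predict from the final step only

model = RecallLSTM(vocab)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    x = torch.randint(0, vocab, (batch, seq_len))
    y = x[:, 0]                            # target = token seen seq_len steps ago
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()                        # gradients must survive seq_len steps of BPTT
    opt.step()
    if step % 50 == 0:
        acc = (model(x).argmax(-1) == y).float().mean().item()
        print(f"step {step:3d}  loss {loss.item():.3f}  acc {acc:.2f}")
```

Increasing seq_len in this sketch is a crude way to feel the problem being described: the same architecture that learns quickly at a few dozen steps trains much more slowly, or not at all, as the gap between cause and credit widens.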
So current state-of-the-art neural nets trained on large quantities of images or text have some level of understanding of, you know, what explains those data sets, but it's very basic, very low level, and it's not nearly as robust and abstract and general as our understanding. Okay, so that doesn't tell us how to fix things, but I think it encourages us to think about how we can maybe train our neural nets differently, so that they would focus, for example, on causal explanation — something that we don't do currently with neural net training. Also, one thing I'll talk about in my talk this afternoon is the fact that instead of learning separately from images and videos on the one hand and from text on the other hand, we need to do a better job of jointly learning about language and about the world to which it refers, so that, you know, both sides can help each other. We need to have good world models in our neural nets for them to really understand sentences which talk about what's going on in the world, and I think we need language input to help provide clues about what high-level concepts — like semantic concepts — should be represented at the top levels of our neural nets. In fact, there is evidence that purely unsupervised learning of representations doesn't give rise to high-level representations that are as powerful as the ones we're getting from supervised learning. And so the clues we're getting just from the labels, not even sentences, are already very, very high level. I think that's a very important thing to keep in mind — it's already very powerful. Do you think that's an architecture challenge or a data set challenge? Neither. I'm tempted to just end it there. Can you elaborate slightly? Of course, data sets and architectures are things you always want to play with, but I think the crucial thing is more the training objectives, the training frameworks. For example, going from passive observation of data to more active agents which learn by intervening in the world, learning the relationships between causes and effects; the sorts of objective functions which could be important to allow the highest-level explanations to arise from the learning, which I don't think we have now; the kinds of objective functions which could be used to reward exploration, the right kind of exploration. So these kinds of questions are neither in the data set nor in the architecture, but more in how we learn, under what objectives, and so on. Yeah, I've heard you mention in several contexts the idea of the way children learn — they interact with objects in the world. And it seems fascinating because, in some sense, except in some cases in reinforcement learning, that idea is not part of the learning process in artificial neural networks. It's almost like — do you envision something like an objective function saying: you know what, if you poke this object in this kind of way, it would be really helpful for me to further learn? Right, right. Sort of almost guiding some aspect of the learning. Right, right, right. So I was talking to Rebecca Saxe just a few minutes ago, and she was talking about lots and lots of evidence that infants seem to clearly pick what interests them in a directed way. So they're not passive learners; they focus their attention on aspects of the world which are most interesting, surprising, in a non-trivial way — that makes them change their theories of the world. So that's a fascinating view of future progress.
But on a maybe more boring question: do you think going deeper and larger — just increasing the size of the things that have been increasing a lot in the past few years — will also make significant progress on some of the representational issues that you mentioned? They're kind of shallow, in some sense. Oh, shallow in the sense of abstraction. In the sense of abstraction, they're not getting some... I don't think that having more depth in the network — in the sense of having more layers instead of 100 — is going to do it. I don't think so. Is that obvious to you? Yes. What is clear to me is that engineers and companies and labs and grad students will continue to tune architectures and explore all kinds of tweaks to make the current state of the art ever so slightly better. But I don't think that's going to be nearly enough. I think we need changes in the way that we're thinking about learning, to achieve the goal that these learners actually understand, in a deep way, the environment in which they are observing and acting. But I guess I was trying to ask a question that's more interesting than just more layers. It's basically: once you figure out a way to learn through interacting, how many parameters does it take to store that information? Because I think our brain is quite a bit bigger than most neural networks. Right, right. Oh, I see what you mean. Oh, I'm with you there. So I agree that in order to build neural nets with the kind of broad knowledge of the world that typical adult humans have, probably the kind of computing power we have now is going to be insufficient. The good news is there are hardware companies building neural net chips, and so it's going to get better. However, the good news, in a way, which is also bad news, is that even our state-of-the-art deep learning methods fail to learn models that understand even very simple environments, like some grid worlds that we have built. Even in these fairly simple environments — I mean, of course, if you train them with enough examples, eventually they get it, but it's just that, instead of the dozens of examples humans might need, these things will need millions, for very, very, very simple tasks. And so I think there's an opportunity for academics who don't have the kind of computing power that, say, Google has to do really important and exciting research to advance the state of the art in training frameworks, learning models, agent learning, in even simple environments that are synthetic, that seem trivial, but that current machine learning fails on. We talked about priors and common sense knowledge. It seems like we humans take a lot of knowledge for granted. So what's your view of these priors — of forming this broad view of the world, this accumulation of information, and how we can teach neural networks or learning systems to pick that knowledge up? So, knowledge — for a while in artificial intelligence, maybe in the 80s, there was a time when knowledge representation, knowledge acquisition, expert systems — I mean, symbolic AI — was an interesting problem set to solve, and then it was kind of put on hold a little bit, it seems like. Because it doesn't work. It doesn't work, that's right. But the goals of that remain important. Yes, they remain important. And how do you think those goals can be addressed? Right.
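A sketch of the kind of small synthetic environment Bengio is pointing academics toward. The grid world below is an invented example, not any benchmark mentioned in the conversation, and the tabular Q-learning agent is included only to show how little machinery the task itself requires; the observation in the conversation is that generic deep learners often need vastly more experience than this on tasks of comparable simplicity.

```python
# An invented toy grid world (illustrative only): the agent starts at (0, 0)
# and must reach the goal in the opposite corner. Actions: 0=up, 1=down,
# 2=left, 3=right. Reward is +1 at the goal and a small step penalty otherwise.
import random

class GridWorld:
    def __init__(self, size=5):
        self.size = size
        self.goal = (size - 1, size - 1)

    def reset(self):
        self.pos = (0, 0)
        return self.pos

    def step(self, action):
        r, c = self.pos
        moves = {0: (r - 1, c), 1: (r + 1, c), 2: (r, c - 1), 3: (r, c + 1)}
        nr, nc = moves[action]
        self.pos = (min(max(nr, 0), self.size - 1), min(max(nc, 0), self.size - 1))
        done = self.pos == self.goal
        return self.pos, (1.0 if done else -0.01), done

random.seed(0)
env = GridWorld()
q = {}                                   # (state, action) -> estimated value
alpha, gamma, eps = 0.5, 0.95, 0.1
for episode in range(500):
    s = env.reset()
    for _ in range(200):                 # cap episode length
        if random.random() < eps:
            a = random.randrange(4)
        else:
            a = max(range(4), key=lambda k: q.get((s, k), 0.0))
        s2, reward, done = env.step(a)
        best_next = 0.0 if done else max(q.get((s2, k), 0.0) for k in range(4))
        old = q.get((s, a), 0.0)
        q[(s, a)] = old + alpha * (reward + gamma * best_next - old)
        s = s2
        if done:
            break

print("value of best action at start:", max(q.get(((0, 0), k), 0.0) for k in range(4)))
```

On a 5x5 grid this converges in a few hundred short episodes; the contrast being drawn in the conversation is with end-to-end deep learners that can need orders of magnitude more interaction even on environments this trivial.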
So first of all, I believe that one reason why the classical expert systems approach failed is because a lot of the knowledge we have, so you talked about common sense intuition, there's a lot of knowledge like this, which is not consciously accessible. There are lots of decisions we're taking that we can't really explain, even if sometimes we make up a story. And that knowledge is also necessary for machines to take good decisions. And that knowledge is hard to codify in expert systems, rule based systems and classical AI formalism. And there are other issues, of course, with the old AI, like not really good ways of handling uncertainty, I would say something more subtle, which we understand better now, but I think still isn't enough in the minds of people. There's something really powerful that comes from distributed representations, the thing that really makes neural nets work so well. And it's hard to replicate that kind of power in a symbolic world. The knowledge in expert systems and so on is nicely decomposed into like a bunch of rules. Whereas if you think about a neural net, it's the opposite. You have this big blob of parameters which work intensely together to represent everything the network knows. And it's not sufficiently factorized. It's not sufficiently factorized. And so I think this is one of the weaknesses of current neural nets, that we have to take lessons from classical AI in order to bring in another kind of compositionality, which is common in language, for example, and in these rules, but that isn't so native to neural nets. And on that line of thinking, disentangled representations. Yes. So let me connect with disentangled representations, if you might, if you don't mind. So for many years, I've thought, and I still believe that it's really important that we come up with learning algorithms, either unsupervised or supervised, but reinforcement, whatever, that build representations in which the important factors, hopefully causal factors are nicely separated and easy to pick up from the representation. So that's the idea of disentangled representations. It says transform the data into a space where everything becomes easy. We can maybe just learn with linear models about the things we care about. And I still think this is important, but I think this is missing out on a very important ingredient, which classical AI systems can remind us of. So let's say we have these disentangled representations. You still need to learn about the relationships between the variables, those high level semantic variables. They're not going to be independent. I mean, this is like too much of an assumption. They're going to have some interesting relationships that allow to predict things in the future, to explain what happened in the past. The kind of knowledge about those relationships in a classical AI system is encoded in the rules. Like a rule is just like a little piece of knowledge that says, oh, I have these two, three, four variables that are linked in this interesting way, then I can say something about one or two of them given a couple of others, right? In addition to disentangling the elements of the representation, which are like the variables in a rule based system, you also need to disentangle the mechanisms that relate those variables to each other. So like the rules. So the rules are neatly separated. Like each rule is, you know, living on its own. And when I change a rule because I'm learning, it doesn't need to break other rules. 
Whereas current neural nets, for example, are very sensitive to what's called catastrophic forgetting, where after I've learned some things and then I learn new things, they can destroy the old things that I had learned, right? If the knowledge was better factorized and separated, disentangled, then you would avoid a lot of that. Now, you can't do this in the sensory domain. What do you mean by sensory domain? Like in pixel space. But my idea is that when you project the data in the right semantic space, it becomes possible to now represent this extra knowledge beyond the transformation from inputs to representations, which is how representations act on each other and predict the future and so on in a way that can be neatly disentangled. So now it's the rules that are disentangled from each other and not just the variables that are disentangled from each other. And you draw a distinction between semantic space and pixel, like does there need to be an architectural difference? Well, yeah. So there's the sensory space, like pixels, where everything is entangled. The information, like the variables are completely interdependent in very complicated ways. And also computation, like it's not just the variables, it's also how they are related to each other is all intertwined. But I'm hypothesizing that in the right high level representation space, both the variables and how they relate to each other can be disentangled. And that will provide a lot of generalization power. Generalization power. Yes. Distribution of the test set is assumed to be the same as the distribution of the training set. Right. This is where current machine learning is too weak. It doesn't tell us anything, it's not able to tell us anything about how our neural nets, say, are going to generalize to a new distribution. And, you know, people may think, well, but there's nothing we can say if we don't know what the new distribution will be. The truth is humans are able to generalize to new distributions. Yeah. How are we able to do that? Yeah. Because there is something, these new distributions, even though they could look very different from the training distributions, they have things in common. So let me give you a concrete example. You read a science fiction novel. The science fiction novel, maybe, you know, brings you in some other planet where things look very different on the surface, but it's still the same laws of physics. And so you can read the book and you understand what's going on. So the distribution is very different. But because you can transport a lot of the knowledge you had from Earth about the underlying cause and effect relationships and physical mechanisms and all that, and maybe even social interactions, you can now make sense of what is going on on this planet where, like, visually, for example, things are totally different. Taking that analogy further and distorting it, let's enter a science fiction world of, say, Space Odyssey, 2001, with HAL, maybe, which is probably one of my favorite AI movies. Me too. And then there's another one that a lot of people love that may be a little bit outside of the AI community is Ex Machina. I don't know if you've seen it. Yes. Yes. By the way, what are your views on that movie? Are you able to enjoy it? There are things I like and things I hate.
So you could talk about that in the context of a question I want to ask, which is, there's quite a large community of people from different backgrounds, often outside of AI, who are concerned about existential threat of artificial intelligence. You've seen this community develop over time. You've seen you have a perspective. So what do you think is the best way to talk about AI safety, to think about it, to have discourse about it within AI community and outside and grounded in the fact that Ex Machina is one of the main sources of information for the general public about AI? So I think you're putting it right. There's a big difference between the sort of discussion we ought to have within the AI community and the sort of discussion that really matter in the general public. So I think the picture of Terminator and AI loose and killing people and super intelligence that's going to destroy us, whatever we try, isn't really so useful for the public discussion. Because for the public discussion, the things I believe really matter are the short term and medium term, very likely negative impacts of AI on society, whether it's from security, like, you know, big brother scenarios with face recognition or killer robots, or the impact on the job market, or concentration of power and discrimination, all kinds of social issues, which could actually, some of them could really threaten democracy, for example. Just to clarify, when you said killer robots, you mean autonomous weapon, weapon systems. Yes, I don't mean that's right. So I think these short and medium term concerns should be important parts of the public debate. Now, existential risk, for me is a very unlikely consideration, but still worth academic investigation in the same way that you could say, should we study what could happen if meteorite, you know, came to earth and destroyed it. So I think it's very unlikely that this is going to happen in or happen in a reasonable future. The sort of scenario of an AI getting loose goes against my understanding of at least current machine learning and current neural nets and so on. It's not plausible to me. But of course, I don't have a crystal ball and who knows what AI will be in 50 years from now. So I think it is worth that scientists study those problems. It's just not a pressing question as far as I'm concerned. So before I continue down that line, I have a few questions there. But what do you like and not like about Ex Machina as a movie? Because I actually watched it for the second time and enjoyed it. I hated it the first time, and I enjoyed it quite a bit more the second time when I sort of learned to accept certain pieces of it, see it as a concept movie. What was your experience? What were your thoughts? So the negative is the picture it paints of science is totally wrong. Science in general and AI in particular. Science is not happening in some hidden place by some, you know, really smart guy, one person. This is totally unrealistic. This is not how it happens. Even a team of people in some isolated place will not make it. Science moves by small steps, thanks to the collaboration and community of a large number of people interacting. And all the scientists who are expert in their field kind of know what is going on, even in the industrial labs. It's information flows and leaks and so on. And the spirit of it is very different from the way science is painted in this movie. Yeah, let me ask on that point. 
It's been the case to this point that kind of even if the research happens inside Google or Facebook, inside companies, it still kind of comes out, ideas come out. Do you think that will always be the case with AI? Is it possible to bottle ideas to the point where there's a set of breakthroughs that go completely undiscovered by the general research community? Do you think that's even possible? It's possible, but it's unlikely. It's not how it is done now. It's not how I can foresee it in the foreseeable future. But of course, I don't have a crystal ball and science is a crystal ball. And so who knows? This is science fiction after all. I think it's ominous that the lights went off during that discussion. So the problem, again, there's one thing is the movie and you could imagine all kinds of science fiction. The problem for me, maybe similar to the question about existential risk, is that this kind of movie paints such a wrong picture of what is the actual science and how it's going on that it can have unfortunate effects on people's understanding of current science. And so that's kind of sad. There's an important principle in research, which is diversity. So in other words, research is exploration. Research is exploration in the space of ideas. And different people will focus on different directions. And this is not just good, it's essential. So I'm totally fine with people exploring directions that are contrary to mine or look orthogonal to mine. I am more than fine. I think it's important. I and my friends don't claim we have universal truth about what will, especially about what will happen in the future. Now that being said, we have our intuitions and then we act accordingly according to where we think we can be most useful and where society has the most to gain or to lose. We should have those debates and not end up in a society where there's only one voice and one way of thinking and research money is spread out. So disagreement is a sign of good research, good science. Yes. The idea of bias in the human sense of bias. How do you think about instilling in machine learning something that's aligned with human values in terms of bias? We intuitively as human beings have a concept of what bias means, of what fundamental respect for other human beings means. But how do we instill that into machine learning systems, do you think? So I think there are short term things that are already happening and then there are long term things that we need to do. In the short term, there are techniques that have been proposed and I think will continue to be improved and maybe alternatives will come up to take data sets in which we know there is bias, we can measure it. Pretty much any data set where humans are being observed taking decisions will have some sort of bias, discrimination against particular groups and so on. And we can use machine learning techniques to try to build predictors, classifiers that are going to be less biased. We can do it, for example, using adversarial methods to make our systems less sensitive to these variables we should not be sensitive to. So these are clear, well defined ways of trying to address the problem. Maybe they have weaknesses and more research is needed and so on. But I think in fact they are sufficiently mature that governments should start regulating companies where it matters, say like insurance companies, so that they use those techniques. Because those techniques will probably reduce the bias but at a cost. 
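As a rough illustration of the adversarial technique just mentioned, here is a toy sketch on invented data, not any specific published system or the regulated insurance setting discussed above: a predictor is trained on its main task while an adversary tries to recover the protected attribute from the predictor's output, and the predictor is penalized whenever the adversary succeeds, which typically trades away a little accuracy for less bias.

```python
# Toy sketch of adversarial debiasing with hand-derived gradient steps (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic data: 3 features, a binary label y, and a binary protected attribute a
# that is (undesirably) correlated with the label.
n = 512
X = rng.normal(size=(n, 3))
a = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(float)           # protected attribute
y = (X[:, 1] + 0.8 * a + 0.3 * rng.normal(size=n) > 0.4).astype(float)

w = np.zeros(3)       # predictor weights
u = 0.1               # adversary weight (reads the predictor's output)
lam, lr = 1.0, 0.1    # strength of the fairness penalty, learning rate

for _ in range(200):
    p = sigmoid(X @ w)        # predictor's estimate of y
    q = sigmoid(u * p)        # adversary's estimate of a from that estimate
    # Adversary improves its own objective: predict the protected attribute from p.
    u -= lr * float(np.mean((q - a) * p))
    # Predictor minimizes task loss minus lambda times the adversary's loss,
    # i.e. it is rewarded for making the adversary fail.
    grad_task = X.T @ (p - y) / n
    grad_adv = X.T @ ((q - a) * u * p * (1 - p)) / n
    w -= lr * (grad_task - lam * grad_adv)
```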
For example, maybe their predictions will be less accurate and so companies will not do it until you force them. All right, so this is short term. Long term, I'm really interested in thinking how we can instill moral values into computers. Obviously, this is not something we'll achieve in the next five or 10 years. How can we, you know, there's already work in detecting emotions, for example, in images, in sounds, in texts, and also studying how different agents interacting in different ways may correspond to patterns of, say, injustice, which could trigger anger. So these are things we can do in the medium term and eventually train computers to model, for example, how humans react emotionally. I would say the simplest thing is unfair situations which trigger anger. This is one of the most basic emotions that we share with other animals. I think it's quite feasible within the next few years that we can build systems that can detect these kinds of things to the extent, unfortunately, that they understand enough about the world around us, which is a long time away. But maybe we can initially do this in virtual environments. So you can imagine a video game where agents interact in some ways and then some situations trigger an emotion. I think we could train machines to detect those situations and predict that the particular emotion will likely be felt if a human was playing one of the characters. You have shown excitement and done a lot of excellent work with unsupervised learning. But there's been a lot of success on the supervised learning side. Yes, yes. And one of the things I'm really passionate about is how humans and robots work together. And in the context of supervised learning, that means the process of annotation. Do you think about the problem of annotation put in a more interesting way as humans teaching machines? Yes. Is there? Yes. I think it's an important subject. Reducing it to annotation may be useful for somebody building a system tomorrow. But longer term, the process of teaching, I think, is something that deserves a lot more attention from the machine learning community. So there are people who have coined the term machine teaching. So what are good strategies for teaching a learning agent? And can we design and train a system that is going to be a good teacher? So in my group, we have a project called BabyAI, or the BabyAI game, where there is a game or scenario where there's a learning agent and a teaching agent. Presumably, the teaching agent would eventually be a human. But we're not there yet. And the role of the teacher is to use its knowledge of the environment, which it can acquire using whatever way brute force to help the learner learn as quickly as possible. So the learner is going to try to learn by itself, maybe using some exploration and whatever. But the teacher can choose, can have an influence on the interaction with the learner, so as to guide the learner, maybe teach it the things that the learner has most trouble with, or just at the boundary between what it knows and doesn't know, and so on. So there's a tradition of these kind of ideas from other fields, like tutorial systems, for example, in AI. And of course, people in the humanities have been thinking about these questions. But I think it's time that machine learning people look at this, because in the future, we'll have more and more human machine interaction with the human in the loop.
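Here is a toy sketch of the teaching idea just described, an invented example rather than the BabyAI platform itself: the learner keeps the interval of threshold hypotheses consistent with what it has seen, and a teacher who knows the environment always queries right at the boundary between what the learner knows and doesn't know, so far fewer examples are needed than with random examples.

```python
# Toy machine-teaching sketch: the teacher picks maximally informative examples (illustrative only).
import random

TRUE_THRESHOLD = 0.73                      # ground truth, known to the teacher
label = lambda x: int(x >= TRUE_THRESHOLD)

def update(interval, x, y):
    # Shrink the learner's set of thresholds consistent with the observation (x, y).
    lo, hi = interval
    return (lo, min(hi, x)) if y == 1 else (max(lo, x), hi)

def train(n_examples, taught, seed=0):
    rng = random.Random(seed)
    interval = (0.0, 1.0)
    for _ in range(n_examples):
        lo, hi = interval
        x = (lo + hi) / 2 if taught else rng.random()   # teacher queries at the learner's boundary
        interval = update(interval, x, label(x))
    return interval[1] - interval[0]                    # remaining uncertainty about the threshold

print("uncertainty after 10 taught examples :", train(10, taught=True))
print("uncertainty after 10 random examples :", train(10, taught=False))
```

With the teacher, the learner's uncertainty halves with every example; with random examples it shrinks far more slowly, which is the whole point of studying the teaching strategy rather than just the annotation.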
And I think understanding how to make this work better, all the problems around that are very interesting and not sufficiently addressed. You've done a lot of work with language, too. What aspect of the traditionally formulated Turing test, a test of natural language understanding and generation, in your eyes is the most difficult part of conversation? What in your eyes is the hardest part of conversation to solve for machines? So I would say it's everything having to do with the non linguistic knowledge, which implicitly you need in order to make sense of sentences, things like the Winograd schema. So these sentences that are semantically ambiguous. In other words, you need to understand enough about the world in order to really interpret properly those sentences. I think these are interesting challenges for machine learning, because they point in the direction of building systems that both understand how the world works and this causal relationships in the world and associate that knowledge with how to express it in language, either for reading or writing. You speak French? Yes, it's my mother tongue. It's one of the romance languages. Do you think passing the Turing test and all the underlying challenges we just mentioned depend on language? Do you think it might be easier in French than it is in English, or is independent of language? I think it's independent of language. I would like to build systems that can use the same principles, the same learning mechanisms to learn from human agents, whatever their language. Well, certainly us humans can talk more beautifully and smoothly in poetry. I'm Russian originally, and I know poetry in Russian is maybe easier to convey complex ideas than it is in English. But maybe I'm showing my bias and some people could say that about French. But of course, the goal ultimately is our human brain is able to utilize any kind of those languages to use them as tools to convey meaning. Yeah, of course, there are differences between languages, and maybe some are slightly better at some things, but in the grand scheme of things, where we're trying to understand how the brain works and language and so on, I think these differences are minute. So you've lived perhaps through an AI winter of sorts? Yes. How did you stay warm and continue your research? Stay warm with friends. With friends. Okay, so it's important to have friends. And what have you learned from the experience? Listen to your inner voice. Don't, you know, be trying to just please the crowds and the fashion. And if you have a strong intuition about something that is not contradicted by actual evidence, go for it. I mean, it could be contradicted by people. Not your own instinct, based on everything you've learned? Of course, you have to adapt your beliefs when your experiments contradict those beliefs. But you have to stick to your beliefs. Otherwise, it's what allowed me to go through those years. It's what allowed me to persist in directions that, you know, took time, whatever other people think, took time to mature and bring fruits. So history of AI is marked with these, of course, it's marked with technical breakthroughs, but it's also marked with these seminal events that capture the imagination of the community. Most recent, I would say, AlphaGo beating the world champion human Go player was one of those moments. What do you think the next such moment might be? Okay, so first of all, I think that these so called seminal events are overrated. As I said, science really moves by small steps.
Now what happens is you make one more small step and it's like the drop that, you know, that fills the bucket and then you have drastic consequences because now you're able to do something you were not able to do before. Or now, say, the cost of building some device or solving a problem becomes cheaper than what existed and you have a new market that opens up, right? So especially in the world of commerce and applications, the impact of a small scientific progress could be huge. But in the science itself, I think it's very, very gradual. And where are these steps being taken now? So there's unsupervised learning. So if I look at one trend that I like in my community, so for example, at Mila, my institute, what are the two hottest topics? GANs and reinforcement learning. Even though in Montreal in particular, reinforcement learning was something pretty much absent just two or three years ago. So there's really a big interest from students and there's a big interest from people like me. So I would say this is something where we're going to see more progress, even though it hasn't yet provided much in terms of actual industrial fallout. Like even though there's AlphaGo, there's no, like Google is not making money on this right now. But I think over the long term, this is really, really important for many reasons. So in other words, I would say reinforcement learning may be more generally agent learning because it doesn't have to be with rewards. It could be in all kinds of ways that an agent is learning about its environment. Now reinforcement learning you're excited about, do you think GANs could provide something, at the moment? Well, GANs or other generative models, I believe, will be crucial ingredients in building agents that can understand the world. A lot of the successes in reinforcement learning in the past has been with policy gradient, where you just learn a policy, you don't actually learn a model of the world. But there are lots of issues with that. And we don't know how to do model based RL right now. But I think this is where we have to go in order to build models that can generalize faster and better like to new distributions that capture to some extent, at least the underlying causal mechanisms in the world. Last question. What made you fall in love with artificial intelligence? If you look back, what was the first moment in your life when you were fascinated by either the human mind or the artificial mind? You know, when I was an adolescent, I was reading a lot. And then I started reading science fiction. There you go. That's it. That's where I got hooked. And then, you know, I had one of the first personal computers and I got hooked in programming. And so it just, you know... Start with fiction and then make it a reality. That's right. Yoshua, thank you so much for talking to me. My pleasure.
Yoshua Bengio: Deep Learning | Lex Fridman Podcast #4
The following is a conversation with Vladimir Vapnik. He's the co-inventor of support vector machines, support vector clustering, VC theory, and many foundational ideas in statistical learning. He was born in the Soviet Union and worked at the Institute of Control Sciences in Moscow. Then in the United States, he worked at AT&T, NEC Labs, Facebook Research, and now is a professor at Columbia University. His work has been cited over 170,000 times. He has some very interesting ideas about artificial intelligence and the nature of learning, especially on the limits of our current approaches and the open problems in the field. This conversation is part of MIT course on artificial general intelligence and the artificial intelligence podcast. If you enjoy it, please subscribe on YouTube or rate it on iTunes or your podcast provider of choice, or simply connect with me on Twitter or other social networks at Lex Fridman spelled F R I D. And now here's my conversation with Vladimir Vapnik. Einstein famously said that God doesn't play dice. Yeah. You have studied the world through the eyes of statistics. So let me ask you in terms of the nature of reality, fundamental nature of reality, does God play dice? We don't know some factors. And because we don't know some factors, which could be important, it looks like God plays dice. But we should describe it. In philosophy, they distinguish between two positions, positions of instrumentalism, where you're creating theory for prediction and position of realism, where you're trying to understand what God did. Can you describe instrumentalism and realism a little bit? For example, if you have some mechanical laws, what is that? Is it law which is true always and everywhere? Or it is law which allow you to predict position of moving element? What you believe. You believe that it is God's law, that God created the world, which obey to this physical law. Or it is just law for predictions. And which one is instrumentalism? For predictions. If you believe that this is law of God, and it's always true everywhere, that means that you're realist. So you're trying to really understand God's thought. So the way you see the world is as an instrumentalist? You know, I'm working for some models, model of machine learning. So in this model, we can see setting, and we try to solve, resolve the setting to solve the problem. And you can do in two different way. From the point of view of instrumentalist, and that's what everybody does now. Because they say that goal of machine learning is to find the rule for classification. That is true. But it is instrument for prediction. But I can say the goal of machine learning is to learn about conditional probability. So how God plays dice, and if he plays, what is probability for one, what is probability for another, given situation. But for prediction, I don't need this. I need the rule. But for understanding, I need conditional probability. So let me just step back a little bit first to talk about, you mentioned, which I read last night, the parts of the 1960 paper by Eugene Wigner, The Unreasonable Effectiveness of Mathematics in the Natural Sciences. Such a beautiful paper, by the way. Made me feel, to be honest, to confess my own work in the past few years on deep learning, heavily applied. Made me feel that I was missing out on some of the beauty of nature in the way that math can uncover. So let me just step away from the poetry of that for a second. How do you see the role of math in your life? Is it a tool, is it poetry?
Where does it sit? And does math for you have limits of what it can describe? Some people say that math is language which use God. Use God. So I believe that... Speak to God or use God or... Use God. Use God. Yeah. So I believe that this article about effectiveness, unreasonable effectiveness of math, is that if you're looking at mathematical structures, they know something about reality. And the most scientists from Natural Science, they're looking on equation and trying to understand reality. So the same in machine learning. If you try very carefully look on all equations which define conditional probability, you can understand something about reality more than from your fantasy. So math can reveal the simple underlying principles of reality perhaps. You know what means simple? It is very hard to discover them. But then when you discover them and look at them, you see how beautiful they are. And it is surprising why people did not see that before. You're looking on equation and derive it from equations. For example, I talked yesterday about least square method. And people had a lot of fantasy how to improve least square method. But if you're going step by step by solving some equations, you suddenly will get some term which after thinking, you understand that it describes position of observation point. In least square method, we throw out a lot of information. We don't look in composition of point of observations, we're looking only on residuals. But when you understood that, that's very simple idea, but it's not too simple to understand. And you can derive this just from equations. So some simple algebra, a few steps will take you to something surprising that when you think about, you understand. And that is proof that human intuition is not too rich and very primitive. And it does not see very simple situations. So let me take a step back. In general, yes. But what about human, as opposed to intuition, ingenuity? Moments of brilliance. Do you have to be so hard on human intuition? Are there moments of brilliance in human intuition? They can leap ahead of math and then the math will catch up? I don't think so. I think that the best human intuition, it is putting in axioms. And then it is technical. See where the axioms take you. But if they correctly take axioms. But it axiom polished during generations of scientists. And this is integral wisdom. That is beautifully put. But if you maybe look at, when you think of Einstein and special relativity, what is the role of imagination coming first there in the moment of discovery of an idea? So there is obviously a mix of math and out of the box imagination there. That I don't know. Whatever I did, I exclude any imagination. Because whatever I saw in machine learning that comes from imagination, like features, like deep learning, they are not relevant to the problem. When you are looking very carefully from mathematical equations, you are deriving very simple theory, which goes far beyond theoretically than whatever people can imagine. Because it is not good fantasy. It is just interpretation. It is just fantasy. But it is not what you need. You don't need any imagination to derive the main principle of machine learning. When you think about learning and intelligence, maybe thinking about the human brain and trying to describe mathematically the process of learning, that is something like what happens in the human brain. Do you think we have the tools currently? 
Do you think we will ever have the tools to try to describe that process of learning? It is not description what is going on. It is interpretation. It is your interpretation. Your vision can be wrong. You know, one guy invented microscope, Leeuwenhoek, for the first time. Only he got this instrument and he kept secret about microscope. But he wrote a report in London Academy of Science. In his report, when he was looking at the blood, he looked everywhere, on the water, on the blood, on the sperm. But he described blood like fight between queen and king. So, he saw blood cells, red cells, and he imagined that it is army fighting each other. And it was his interpretation of situation. And he sent this report in Academy of Science. They very carefully looked because they believed that he is right. He saw something. Yes. But he gave wrong interpretation. And I believe the same can happen with brain. With brain, yeah. The most important part. You know, I believe in human language. In some proverbs, there is so much wisdom. For example, people say that it is better than thousand days of diligent studies one day with great teacher. But if I will ask you what teacher does, nobody knows. And that is intelligence. But we know from history and now from math and machine learning that teacher can do a lot. So, what from a mathematical point of view is the great teacher? I don't know. That's an open question. No, but we can say what teacher can do. He can introduce some invariants, some predicate for creating invariants. How he doing it? I don't know because teacher knows reality and can describe from this reality a predicate, invariants. But he knows that when you're using invariant, you can decrease number of observations hundred times. So, but maybe try to pull that apart a little bit. I think you mentioned like a piano teacher saying to the student, play like a butterfly. Yeah. I play piano. I play guitar for a long time. Yeah, maybe it's romantic, poetic, but it feels like there's a lot of truth in that statement. Like there is a lot of instruction in that statement. And so, can you pull that apart? What is that? The language itself may not contain this information. It is not blah, blah, blah. It is not blah, blah, blah. It affects you. It's what? It affects you. It affects your playing. Yes, it does, but it's not the laying. It feels like what is the information being exchanged there? What is the nature of information? What is the representation of that information? I believe that it is sort of predicate, but I don't know. That is exactly what intelligence and machine learning should be. Yes. Because the rest is just mathematical technique. I think that what was discovered recently is that there is two mechanism of learning. One called strong convergence mechanism and weak convergence mechanism. Before, people use only one convergence. In weak convergence mechanism, you can use predicate. That's what play like butterfly and it will immediately affect your playing. You know, there is English proverb, great. If it looks like a duck, swims like a duck, and quack like a duck, then it is probably duck. Yes. But this is exact about predicate. Looks like a duck, what it means. You saw many ducks that you're training data. So, you have description of how looks integral looks ducks. Yeah. The visual characteristics of a duck. Yeah. But you want and you have model for recognition. So, you would like so that theoretical description from model coincide with empirical description, which you saw on territory.
So, about looks like a duck, it is general. But what about swims like a duck? You should know that duck swims. You can say it play chess like a duck. Okay. Duck doesn't play chess. And it is completely legal predicate, but it is useless. So, half teacher can recognize not useless predicate. So, up to now, we don't use this predicate in existing machine learning. So, why we need zillions of data. But in this English proverb, they use only three predicate. Looks like a duck, swims like a duck, and quack like a duck. So, you can't deny the fact that swims like a duck and quacks like a duck has humor in it, has ambiguity. Let's talk about swim like a duck. It doesn't say jump like a duck. Why? Because... It's not relevant. But that means that you know ducks, you know different birds, you know animals. And you derive from this that it is relevant to say swim like a duck. So, underneath, in order for us to understand swims like a duck, it feels like we need to know millions of other little pieces of information. Which we pick up along the way. You don't think so. There doesn't need to be this knowledge base in those statements carries some rich information that helps us understand the essence of duck. Yeah. How far are we from integrating predicates? You know that when you consider complete theory of machine learning. So, what it does, you have a lot of functions. And then you're talking it looks like a duck. You see your training data. From training data you recognize like expected duck should look. Then you remove all functions which does not look like you think it should look from training data. So, you decrease amount of function from which you pick up one. Then you give a second predicate and again decrease the set of function. And after that you pick up the best function you can find. It is standard machine learning. So, why you need not too many examples? Because your predicates aren't very good? That means that predicates are very good because every predicate is invented to decrease admissible set of function. So, you talk about admissible set of functions and you talk about good functions. So, what makes a good function? So, admissible set of function is set of function which has small capacity or small diversity, small VC dimension example. Which contain good function inside. So, by the way for people who don't know VC, you're the V in the VC. So, how would you describe to lay person what VC theory is? How would you describe VC? So, when you have a machine. So, machine capable to pick up one function from the admissible set of function. But set of admissible function can be big. So, it contain all continuous functions and it's useless. You don't have so many examples to pick up function. But it can be small. Small, we call it capacity but maybe better called diversity. So, not very different function in the set. It's infinite set of function but not very diverse. So, it is small VC dimension. When VC dimension is small, you need small amount of training data. So, the goal is to create admissible set of functions which is have small VC dimension and contain good function. Then you will be able to pick up the function using small amount of observations. So, that is the task of learning? Yeah. Is creating a set of admissible functions that has a small VC dimension and then you've figure out a clever way of picking up? No, that is goal of learning which I formulated yesterday. Statistical learning theory does not involve in creating admissible set of function. 
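For readers who want the statement about small VC dimension in numbers, here is one standard form of the VC generalization bound, up to constants, expressed as a small Python function rather than Vapnik's own derivation: the smaller the VC dimension h of the admissible set, the smaller the gap between training error and true error for a given number of examples n.

```python
# Illustrative numeric form of a classical VC-style bound (one standard form, up to constants).
import math

def vc_confidence_term(n, h, delta=0.05):
    # With probability at least 1 - delta:
    #   true_risk <= empirical_risk + sqrt((h * (ln(2n/h) + 1) + ln(4/delta)) / n)
    return math.sqrt((h * (math.log(2 * n / h) + 1) + math.log(4 / delta)) / n)

for h in (10, 100, 1000):
    print(f"VC dimension h = {h:4d}, n = 10000 examples, gap <= {vc_confidence_term(10_000, h):.3f}")
```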
In classical learning theory, everywhere, 100% in textbook, the set of function, admissible set of function is given. But this is science about nothing because the most difficult problem to create admissible set of functions given, say, a lot of functions, continuum set of function, create admissible set of functions. That means that it has finite VC dimension, small VC dimension and contain good function. So, this was out of consideration. So, what's the process of doing that? I mean, it's fascinating. What is the process of creating this admissible set of functions? That is invariant. That's invariant. Yeah, you're looking of properties of training data and properties means that you have some function and you just count what is value, average value of function on training data. You have model and what is expectation of this function on the model and they should coincide. So, the problem is about how to pick up functions. It can be any function. In fact, it is true for all functions. But because when we're talking, say, duck does not jumping, so you don't ask question jump like a duck because it is trivial. It does not jumping and doesn't help you to recognize jump. But you know something, which question to ask and you're asking it seems like a duck, but looks like a duck at this general situation. Looks like, say, guy who have this illness, this disease. It is legal. So, there is a general type of predicate looks like and special type of predicate, which related to this specific problem. And that is intelligence part of all this business and that where teacher is involved. Incorporating the specialized predicates. What do you think about deep learning as neural networks, these arbitrary architectures as helping accomplish some of the tasks you're thinking about? Their effectiveness or lack thereof? What are the weaknesses and what are the possible strengths? You know, I think that this is fantasy, everything which like deep learning, like features. Let me give you this example. One of the greatest books is Churchill book about history of Second World War. And he started this book describing that in old time when war is over, so the great kings, they gathered together, almost all of them were relatives, and they discussed what should be done, how to create peace. And they came to agreement. And when happened First World War, the general public came in power. And they were so greedy that robbed Germany. And it was clear for everybody that it is not peace, that peace will last only 20 years because they were not professionals. And the same I see in machine learning. There are mathematicians who are looking for the problem from a very deep point of view, mathematical point of view. And there are computer scientists who mostly does not know mathematics. They just have interpretation of that. And they invented a lot of blah, blah, blah interpretations like deep learning. Why you need deep learning? Mathematic does not know deep learning. Mathematic does not know neurons. It is just function. If you like to say piecewise linear function, say that and do in class of piecewise linear function. But they invent something. And then they try to prove advantage of that through interpretations, which mostly wrong. And when it's not enough, they appeal to brain, which they know nothing about that. Nobody knows what's going on in the brain. So, I think that more reliable work on math. This is a mathematical problem. Do your best to solve this problem. 
Try to understand that there is not only one way of convergence, which is strong way of convergence. There is a weak way of convergence, which requires predicate. And if you will go through all this stuff, you will see that you don't need deep learning. Even more, I would say one of the theorems, which is called representer theorem. It says that optimal solution of mathematical problem, which describes learning, is on shallow network, not on deep learning. A shallow network. Yeah. The ultimate problem is there. Absolutely. In the end, what you're saying is exactly right. The question is you have no value for throwing something on the table, playing with it, not math. It's like a neural network where you said throwing something in the bucket or the biological example and looking at kings and queens or the cells or the microscope. You don't see value in imagining the cells or kings and queens and using that as inspiration and imagination for where the math will eventually lead you. You think that interpretation basically deceives you in a way that's not productive. I think that if you're trying to analyze this business of learning and especially discussion about deep learning, it is discussion about interpretation, not about things, about what you can say about things. That's right. But aren't you surprised by the beauty of it? So not mathematical beauty, but the fact that it works at all or are you criticizing that very beauty, our human desire to interpret, to find our silly interpretations in these constructs? Let me ask you this. Are you surprised and does it inspire you? How do you feel about the success of a system like AlphaGo at beating the game of Go? Using neural networks to estimate the quality of a board and the quality of the position. That is your interpretation, quality of the board. Yeah, yes. Yeah. So it's not our interpretation. The fact is a neural network system, it doesn't matter, a learning system that we don't I think mathematically understand that well, beats the best human player, does something that was thought impossible. That means that it's not a very difficult problem. So you empirically, we've empirically have discovered that this is not a very difficult problem. Yeah. It's true. So maybe, can't argue. So even more I would say that if they use deep learning, it is not the most effective way of learning theory. And usually when people use deep learning, they're using zillions of training data. Yeah. But you don't need this. So I describe a challenge: can we do some problems which deep learning methods do well, this deep net, using hundred times less training data. Even more, some problems deep learning cannot solve because it's not necessary they create admissible set of function. To create deep architecture means to create admissible set of functions. You cannot say that you're creating good admissible set of functions. You just, it's your fantasy. It does not come from us. But it is possible to create admissible set of functions because you have your training data. That actually for mathematicians, when you consider invariants, you need to use law of large numbers. When you're making training in existing algorithm, you need uniform law of large numbers, which is much more difficult, it requires VC dimension and all this stuff. But nevertheless, if you use both weak and strong way of convergence, you can decrease a lot of training data. You could do the three, the swims like a duck and quacks like a duck.
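To make the weak convergence, or invariant, idea a bit more tangible, here is a toy sketch with invented data and a made up predicate, my own illustration rather than Vapnik's exact formulation: a predicate whose target statistic is assumed to be supplied by the teacher is used to shrink a large pool of candidate functions to an admissible subset, after which a handful of labeled examples is typically enough to pick a good function.

```python
# Toy sketch: a predicate-based invariant shrinks the admissible set of functions (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

# Ground truth: a linear rule in 2-D.
w_true = np.array([1.0, -0.5])
X = rng.normal(size=(2000, 2))
y = np.sign(X @ w_true)

# A large pool of random candidate classifiers: the set of functions before any teaching.
candidates = rng.normal(size=(5000, 2))
preds = np.sign(X @ candidates.T)                 # predictions of every candidate on all points

# Predicate psi(x): here simply the first coordinate ("looks like ...").
psi = X[:, 0]
target = np.mean(psi * y)                         # invariant's value, assumed supplied by the teacher
scores = np.mean(psi[:, None] * preds, axis=0)    # the same statistic under each candidate
admissible = np.abs(scores - target) < 0.05       # keep only candidates satisfying the invariant

# Pick the best candidate using only a handful of labeled examples.
few = rng.choice(len(X), size=20, replace=False)

def true_accuracy_of_pick(mask):
    acc_on_few = np.mean(preds[few][:, mask] == y[few][:, None], axis=0)
    chosen = np.argmax(acc_on_few)                # best candidate on the 20 labeled points
    return np.mean(preds[:, mask][:, chosen] == y)

print("candidates kept by the invariant:", int(admissible.sum()), "of", len(candidates))
print("accuracy of pick from full pool  :", true_accuracy_of_pick(np.ones(len(candidates), bool)))
print("accuracy of pick from admissible :", true_accuracy_of_pick(admissible))
```

The point of the sketch is only the division of labor: the predicate does most of the work of cutting down the admissible set, and the small labeled sample does the rest.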
So let's step back and think about human intelligence in general. Clearly that has evolved in a non mathematical way. It wasn't, as far as we know, God or whoever didn't come up with a model and place in our brain of admissible functions. It kind of evolved. I don't know, maybe you have a view on this. So Alan Turing in the 50s, in his paper, asked and rejected the question, can machines think? It's not a very useful question, but can you briefly entertain this useful, useless question? Can machines think? So talk about intelligence and your view of it. I don't know that. I know that Turing described imitation. If computer can imitate human being, let's call it intelligent. And he understands that it is not thinking computer. He completely understands what he's doing. But he set up problem of imitation. So now we understand that the problem is not in imitation. I'm not sure that intelligence is just inside of us. It may be also outside of us. I have several observations. So when I prove some theorem, it's very difficult theorem, in couple of years, in several places, people prove the same theorem, say, Sauer's lemma, after ours was done, then another guys proved the same theorem. In the history of science, it's happened all the time. For example, geometry, it's happened simultaneously, first it did Lobachevsky and then Gauss and Bolyai and another guys, and it's approximately in 10 times period, 10 years period of time. And I saw a lot of examples like that. And many mathematicians think that when they develop something, they develop something in general which affect everybody. So maybe our model that intelligence is only inside of us is incorrect. It's our interpretation. It might be there exists some connection with world intelligence. I don't know. You're almost like plugging in into... Yeah, exactly. And contributing to this... Into a big network. Into a big, maybe in your own network. On the flip side of that, maybe you can comment on big O complexity and how you see classifying algorithms by worst case running time in relation to their input. So that way of thinking about functions, do you think P equals NP, do you think that's an interesting question? Yeah, it is an interesting question. But let me talk about complexity in about worst case scenario. There is a mathematical setting. When I came to United States in 1990, people did not know, they did not know statistical learning theory. So in Russia, it was published, two monographs, our monographs, but in America they didn't know. Then they learned and somebody told me that it is worst case theory and they will create real case theory, but till now it did not. Because it is mathematical too. You can do only what you can do using mathematics. And which has a clear understanding and clear description. And for this reason, we introduce complexity. And we need this because using, actually it is diversity, I like this one more. Using VC dimension, you can prove some theorems. But we also create theory for case when you know probability measure. And that is the best case which can happen, it is entropy theory. So from mathematical point of view, you know the best possible case and the worst possible case. You can derive different model in medium, but it's not so interesting. You think the edges are interesting? The edges are interesting because it is not so easy to get good bound, exact bound. It's not many cases where you have the exact bound. But interesting principles which the math discovers.
Do you think it's interesting because it's challenging and reveals interesting principles that allow you to get those bounds? Or do you think it's interesting because it's actually very useful for understanding the essence of a function of an algorithm? So it's like me judging your life as a human being by the worst thing you did and the best thing you did versus all the stuff in the middle. It seems not productive. I don't think so because you cannot describe situation in the middle. So it will be not general. So you can describe edges cases and it is clear it has some model, but you cannot describe model for every new case. So you will be never accurate when you're using model. But from a statistical point of view, the way you've studied functions and the nature of learning in the world, don't you think that the real world has a very long tail? That the edge cases are very far away from the mean, the stuff in the middle or no? I don't know that. I think that from my point of view, if you will use formal statistic, you need uniform law of large numbers. If you will use this invariance business, you will need just law of large numbers. And there's this huge difference between uniform law of large numbers and large numbers. Is it useful to describe that a little more or should we just take it to... For example, when I'm talking about duck, I give three predicates and that was enough. But if you will try to do formal distinguish, you will need a lot of observations. So that means that information about looks like a duck contain a lot of bit of information, formal bits of information. So we don't know that how much bit of information contain things from artificial and from intelligence. And that is the subject of analysis. Till now, all business, I don't like how people consider artificial intelligence. They consider us some codes which imitate activity of human being. It is not science, it is applications. You would like to imitate go ahead, it is very useful and a good problem. But you need to learn something more. How people try to do, how people can to develop, say, predicates seems like a duck or play like butterfly or something like that. Not the teacher says you, how it came in his mind, how he choose this image. So that process... That is problem of intelligence. That is the problem of intelligence and you see that connected to the problem of learning? Absolutely. Because you immediately give this predicate like specific predicate seems like a duck or quack like a duck. It was chosen somehow. So what is the line of work, would you say? If you were to formulate as a set of open problems, that will take us there, to play like a butterfly. We'll get a system to be able to... Let's separate two stories. One mathematical story that if you have predicate, you can do something. And another story how to get predicate. It is intelligence problem and people even did not start to understand intelligence. Because to understand intelligence, first of all, try to understand what do teachers. How teacher teach, why one teacher better than another one. Yeah. And so you think we really even haven't started on the journey of generating the predicates? No. We don't understand. We even don't understand that this problem exists. Because did you hear... You do. No, I just know name. I want to understand why one teacher better than another and how affect teacher, student. It is not because he repeating the problem which is in textbook. He makes some remarks. He makes some philosophy of reasoning. 
Yeah, that's a beautiful... So it is a formulation of a question that is the open problem. Why is one teacher better than another? Right. What he does better. Yeah. What... What... Why in... At every level? How do they get better? What does it mean to be better? The whole... Yeah. Yeah. From whatever model I have, one teacher can give a very good predicate. One teacher can say swims like a dog and another can say jump like a dog. And jump like a dog carries zero information. So what is the most exciting problem in statistical learning you've ever worked on or are working on now? I just finished this invariant story and I'm happy that... I believe that it is ultimate learning story. At least I can show that there are no another mechanism, only two mechanisms. But they separate statistical part from intelligent part and I know nothing about intelligent part. And if you will know this intelligent part, so it will help us a lot in teaching, in learning. In learning. Yeah. You will know it when we see it? So for example, in my talk, the last slide was a challenge. So you have say MNIST digit recognition problem and deep learning claims that they did it very well, say 99.5% of correct answers. But they use 60,000 observations. Can you do the same using hundred times less? But incorporating invariants, what it means, you know, digit one, two, three. But looking on that, explain to me which invariant I should keep to use hundred examples or say hundred times less examples to do the same job. Yeah, that last slide, unfortunately your talk ended quickly, but that last slide was a powerful open challenge and a formulation of the essence here. What is the exact problem of intelligence? Because everybody, when machine learning started and it was developed by mathematicians, they immediately recognized that we use much more training data than humans needed. But now again, we came to the same story, have to decrease. That is the problem of learning. It is not like in deep learning, they use zillions of training data because maybe zillions are not enough if you have a good invariants. Maybe you will never collect some number of observations. But now it is a question to intelligence, how to do that? Because statistical part is ready, as soon as you supply us with predicate, we can do good job with small amount of observations. And the very first challenge is well known digit recognition. And you know digits, and please tell me invariants. I think about that, I can say for digit three, I would introduce concept of horizontal symmetry. So the digit three has horizontal symmetry, say more than, say, digit two or something like that. But as soon as I get the idea of horizontal symmetry, I can mathematically invent a lot of measure of horizontal symmetry, or then vertical symmetry, or diagonal symmetry, whatever, if I have idea of symmetry. But what else? I think on digit I see that it is meta predicate, which is not shape, it is something like symmetry, like how dark is whole picture, something like that, which can give rise to a predicate. You think such a predicate could rise out of something that is not general, meaning it feels like for me to be able to understand the difference between two and three, I would need to have had a childhood of 10 to 15 years playing with kids, going to school, being yelled at by parents, all of that, walking, jumping, looking at ducks, and then I would be able to generate the right predicate for telling the difference between two and a three.
Or do you think there's a more efficient way? I don't know. I know for sure that you must know something more than digits. Yes. And that's a powerful statement. Yeah. But maybe there are several languages of description, these elements of digits. So I'm talking about symmetry, about some properties of geometry, I'm talking about something abstract. I don't know that. But this is a problem of intelligence. So in one of our articles, it is trivial to show that every example can carry not more than one bit of information in real. Because when you show example and you say this is one, you can remove, say, a function which does not tell you one, say, is the best strategy. If you can do it perfectly, it's remove half of the functions. But when you use one predicate, which looks like a duck, you can remove much more functions than half. And that means that it contains a lot of bit of information from formal point of view. But when you have a general picture of what you want to recognize and general picture of the world, can you invent this predicate? And that predicate carries a lot of information. Beautifully put. Maybe just me, but in all the math you show, in your work, which is some of the most profound mathematical work in the field of learning AI and just math in general, I hear a lot of poetry and philosophy. You really kind of talk about philosophy of science. There's a poetry and music to a lot of the work you're doing and the way you're thinking about it. So do you, where does that come from? Do you escape to poetry? Do you escape to music or not? I think that there exists ground truth. There exists ground truth? Yeah. And that can be seen everywhere. The smart guy, philosopher, sometimes I'm surprised how they deep see. Sometimes I see that some of them are completely out of subject. But the ground truth I see in music. Music is the ground truth? Yeah. And in poetry, many poets, they believe, they take dictation. So what piece of music as a piece of empirical evidence gave you a sense that they are touching something in the ground truth? It is structure. The structure of the math of music. Yeah, because when you're listening to Bach, you see the structure. Very clear, very classic, very simple, and the same in math when you have axioms in geometry, you have the same feeling. And in poetry, sometimes you see the same. And if you look back at your childhood, you grew up in Russia, you maybe were born as a researcher in Russia, you've developed as a researcher in Russia, you've came to United States and a few places. If you look back, what was some of your happiest moments as a researcher, some of the most profound moments, not in terms of their impact on society, but in terms of their impact on how damn good you feel that day and you remember that moment? You know, every time when you found something, it is great in the life, every simple things. But my general feeling is that most of my time was wrong. You should go again and again and again and try to be honest in front of yourself, not to make interpretation, but try to understand that it's related to ground truth, it is not my blah, blah, blah interpretation and something like that. But you're allowed to get excited at the possibility of discovery. Oh yeah. You have to double check it. No, but how it's related to another ground truth, is it just temporary or it is for forever? You know, you always have a feeling when you found something, how big is that? 
So 20 years ago when we discovered statistical learning theory, nobody believed, except for one guy, Dudley from MIT, and then in 20 years it became fashion, and the same with support vector machines, that is kernel machines. So with support vector machines and learning theory, when you were working on it, you had a sense, you had a sense of the profundity of it, how this seems to be right, this seems to be powerful. Right. Absolutely. Immediately. I recognized that it will last forever, and now when I found this invariant story, I have a feeling that it is complete learning, because I have proof that there are no different mechanisms. You can have some cosmetic improvement you can do, but in terms of invariants, you need both invariants and statistical learning, and they should work together. But also I'm happy that we can formulate what is intelligence from that, and to separate from technical part, and that is completely different. Absolutely. Well, Vladimir, thank you so much for talking today. Thank you. It's an honor.
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
The following is a conversation with Guido van Rossum, creator of Python, one of the most popular programming languages in the world, used in almost any application that involves computers, from web back end development to psychology, neuroscience, computer vision, robotics, deep learning, natural language processing, and almost any subfield of AI. This conversation is part of the MIT course on artificial general intelligence and the artificial intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or your podcast provider of choice, or simply connect with me on Twitter at Lex Friedman, spelled F R I D. And now, here's my conversation with Guido van Rossum. You were born in the Netherlands in 1956. Your parents and the world around you were deeply, deeply impacted by World War Two, as was my family from the Soviet Union. So with that context, what is your view of human nature? Are some humans inherently good, and some inherently evil? Or do we all have both good and evil within us? Guido van Rossum Ouch, I did not expect such a deep one. I, I guess we all have good and evil potential in us. And a lot of it depends on circumstances and context. Lex Fridman Out of that world, at least on the Soviet Union side in Europe, sort of out of suffering, out of challenge, out of that kind of set of traumatic events, often emerges beautiful art, music, literature. In an interview I read or heard, you said you enjoyed Dutch literature when you were a child. Can you tell me about the books that had an influence on you in your childhood? Guido van Rossum Well, as a teenager, my favorite Dutch author was a guy named Willem Frederik Hermans, whose writing, certainly his early novels, was all about sort of ambiguous things that happened during World War Two. I think he was a young adult during that time. And he wrote about it a lot, and very interesting, very good books, I thought. Lex Fridman In a nonfiction way? Guido van Rossum No, it was all fiction, but it was very much set in the ambiguous world of resistance against the Germans, where often you couldn't tell whether someone was truly in the resistance or really a spy for the Germans. And some of the characters in his novels sort of crossed that line, and you never really find out what exactly happened. Lex Fridman And in his novels, is there always a good guy and a bad guy, the nature of good and evil? Is it clear there's a hero? Guido van Rossum No, his heroes are often more, his main characters are often anti heroes. And so they're not very heroic. They often fail at some level to accomplish their lofty goals. Lex Fridman And looking at the trajectory through the rest of your life, has literature, Dutch or English or in translation, had an impact outside the technical world that you existed in? Guido van Rossum I still read novels. I don't think that it impacts me that much directly. Lex Fridman It doesn't impact your work. Guido van Rossum It's a separate world. My work is highly technical, and sort of the world of art and literature doesn't really directly have any bearing on it. Lex Fridman You don't think there's a creative element to the design? You know, some would say design of a language is art. Guido van Rossum I'm not disagreeing with that. I'm just saying that sort of I don't feel direct influences from more traditional art on my own creativity. Lex Fridman Right. Of course, the fact that you don't feel it doesn't mean it's not somehow deeply there in your subconscious. Guido van Rossum Who knows?
Lex Fridman Who knows? So let's go back to your early teens. Your hobbies were building electronic circuits, building mechanical models. If you can just put yourself back in the mind of that young Guido, 12, 13, 14, was that grounded in a desire to create a system, so to create something? Or was it more just tinkering, just the joy of puzzle solving? Guido van Rossum I think it was more the latter, actually. Maybe towards the end of my high school period, I felt confident enough that I designed my own circuits that were sort of interesting, somewhat. But a lot of that time, I literally just took a model kit and followed the instructions, putting the things together. I mean, I think the first few years that I built electronics kits, I really did not have enough understanding of sort of electronics to really understand what I was doing. I mean, I could debug it, and I could sort of follow the instructions very carefully, which has always stayed with me. But I had a very naive model of, like, how do I build a circuit? Of, like, how a transistor works? And I don't think that in those days I had any understanding of coils and capacitors, which actually sort of was a major problem when I started to build more complex digital circuits, because I was unaware of the sort of the analog part of how they actually work. And I would have things that – the schematic looked – everything looked fine, and it didn't work. And what I didn't realize was that there was some megahertz level oscillation that was throwing the circuit off, because I had a sort of – two wires were too close, or the switches were kind of poorly built. But through that time, I think it's really interesting and instructive to think about, because echoes of it are in this time now. So in the 1970s, the personal computer was being born. So did you sense, in tinkering with these circuits, did you sense the encroaching revolution in personal computing? So if at that point we would sit you down and ask you to predict the 80s and the 90s, do you think you would be able to do so successfully, to unroll the process that's happening? No, I had no clue. I remember, I think, in the summer after my senior year – or maybe it was the summer after my junior year – well, at some point, I think, when I was 18, I went on a trip to the Math Olympiad in Eastern Europe, and there was like – I was part of the Dutch team, and there were other nerdy kids that sort of had different experiences, and one of them told me about this amazing thing called a computer. And I had never heard that word. My own explorations in electronics were sort of about very simple digital circuits, and I had sort of – I had the idea that I somewhat understood how a digital calculator worked. And so there is maybe some echoes of computers there, but I never made that connection. I didn't know that when my parents were paying for magazine subscriptions using punched cards, that there was something called a computer that was involved that read those cards and transferred the money between accounts. I was also not really interested in those things. It was only when I went to university to study math that I found out that they had a computer, and students were allowed to use it. And there were some – you're supposed to talk to that computer by programming it. What did that feel like, finding – Yeah, that was the only thing you could do with it. The computer wasn't really connected to the real world.
The only thing you could do was sort of – you typed your program on a bunch of punched cards. You gave the punched cards to the operator, and an hour later the operator gave you back your printout. And so all you could do was write a program that did something very abstract. And I don't even remember what my first forays into programming were, but they were sort of doing simple math exercises and just to learn how a programming language worked. Did you sense, okay, first year of college, you see this computer, you're able to have a program and it generates some output. Did you start seeing the possibility of this, or was it a continuation of the tinkering with circuits? Did you start to imagine that one, the personal computer, but did you see it as something that is a tool, like a word processing tool, maybe for gaming or something? Or did you start to imagine that it could be going to the world of robotics, like the Frankenstein picture that you could create an artificial being? There's like another entity in front of you. You did not see the computer. I don't think I really saw it that way. I was really more interested in the tinkering. It's maybe not a sort of a complete coincidence that I ended up sort of creating a programming language which is a tool for other programmers. I've always been very focused on the sort of activity of programming itself and not so much what happens with the program you write. Right. I do remember, and I don't remember, maybe in my second or third year, probably my second actually, someone pointed out to me that there was this thing called Conway's Game of Life. You're probably familiar with it. I think – In the 70s, I think is when they came up with it. So there was a Scientific American column by someone who did a monthly column about mathematical diversions. I'm also blanking out on the guy's name. It was very famous at the time and I think up to the 90s or so. And one of his columns was about Conway's Game of Life and he had some illustrations and he wrote down all the rules and sort of there was the suggestion that this was philosophically interesting, that that was why Conway had called it that. And all I had was like the two pages photocopy of that article. I don't even remember where I got it. But it spoke to me and I remember implementing a version of that game for the batch computer we were using where I had a whole Pascal program that sort of read an initial situation from input and read some numbers that said do so many generations and print every so many generations and then out would come pages and pages of sort of things. I remember much later I've done a similar thing using Python but that original version I wrote at the time I found interesting because I combined it with some trick I had learned during my electronics hobbyist times. I essentially first on paper I designed a simple circuit built out of logic gates that took nine bits of input which is sort of the cell and its neighbors and produced a new value for that cell and it's like a combination of a half adder and some other clipping. It's actually a full adder. And so I had worked that out and then I translated that into a series of Boolean operations on Pascal integers where you could use the integers as bitwise values. And so I could basically generate 60 bits of a generation in like eight instructions or so. Nice. So I was proud of that. 
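For readers curious what Guido's bit twiddling trick might look like today, here is a hedged sketch in Python rather than Pascal: each row of the board is packed into a single integer, and the eight neighbor boards are summed with a ripple carry adder built from XOR and AND, in the spirit of the half adder and full adder circuit he describes. The board representation and the dead cells outside the border are my assumptions, not a reconstruction of his original program.

```python
def life_step(rows, width):
    """One Game of Life generation; rows[i] is an int whose bit j is cell (i, j)."""
    mask = (1 << width) - 1

    def add_one_bit_board(planes, board):
        # Ripple-carry add a 1-bit board into per-cell bit planes (ones, twos, fours, eights).
        carry = board
        for k in range(4):
            planes[k], carry = planes[k] ^ carry, planes[k] & carry

    new_rows = []
    for i in range(len(rows)):
        planes = [0, 0, 0, 0]
        for di in (-1, 0, 1):
            r = rows[i + di] if 0 <= i + di < len(rows) else 0   # outside the board is dead
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                # Align the neighbor column dj with the current bit position.
                neighbor = (r << 1) & mask if dj == -1 else (r >> 1 if dj == 1 else r)
                add_one_bit_board(planes, neighbor)
        ones, twos, fours, eights = planes
        exactly_two = twos & ~ones & ~fours & ~eights
        exactly_three = twos & ones & ~fours & ~eights
        new_rows.append((exactly_three | (rows[i] & exactly_two)) & mask)
    return new_rows

# A blinker (three live cells in a row) oscillates with period two:
board = [0b000, 0b111, 0b000]
print([bin(r) for r in life_step(board, width=3)])   # ['0b10', '0b10', '0b10']
```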
It's funny that you mentioned, so for people who don't know Conway's Game of Life, it's a cellular automata where there's single compute units that kind of look at their neighbors and figure out what they look like in the next generation based on the state of their neighbors and this is deeply distributed system in concept at least. And then there's simple rules that all of them follow and somehow out of this simple rule when you step back and look at what occurs, it's beautiful. There's an emergent complexity. Even though the underlying rules are simple, there's an emergent complexity. Now the funny thing is you've implemented this and the thing you're commenting on is you're proud of a hack you did to make it run efficiently. When you're not commenting on, it's a beautiful implementation, you're not commenting on the fact that there's an emergent complexity that you've coded a simple program and when you step back and you print out the following generation after generation, that's stuff that you may have not predicted would happen is happening. And is that magic? I mean, that's the magic that all of us feel when we program. When you create a program and then you run it and whether it's Hello World or it shows something on screen, if there's a graphical component, are you seeing the magic in the mechanism of creating that? I think I went back and forth. As a student, we had an incredibly small budget of computer time that we could use. It was actually measured. I once got in trouble with one of my professors because I had overspent the department's budget. It's a different story. I actually wanted the efficient implementation because I also wanted to explore what would happen with a larger number of generations and a larger size of the board. Once the implementation was flawless, I would feed it different patterns and then I think maybe there was a follow up article where there were patterns that were like gliders, patterns that repeated themselves after a number of generations but translated one or two positions to the right or up or something like that. I remember things like glider guns. Well, you can Google Conway's Game of Life. People still go aww and ooh over it. For a reason because it's not really well understood why. I mean, this is what Stephen Wolfram is obsessed about. We don't have the mathematical tools to describe the kind of complexity that emerges in these kinds of systems. The only way you can do is to run it. I'm not convinced that it's sort of a problem that lends itself to classic mathematical analysis. One theory of how you create an artificial intelligence or artificial being is you kind of have to, same with the Game of Life, you kind of have to create a universe and let it run. That creating it from scratch in a design way, coding up a Python program that creates a fully intelligent system may be quite challenging. You might need to create a universe just like the Game of Life. You might have to experiment with a lot of different universes before there is a set of rules that doesn't essentially always just end up repeating itself in a trivial way. Yeah, and Stephen Wolfram works with these simple rules, says that it's kind of surprising how quickly you find rules that create interesting things. You shouldn't be able to, but somehow you do. And so maybe our universe is laden with rules that will create interesting things that might not look like humans, but emergent phenomena that's interesting may not be as difficult to create as we think. Sure. 
But let me sort of ask, at that time, some of the world, at least in the popular press, was kind of captivated, perhaps at least in America, by the idea of artificial intelligence, that these computers would be able to think pretty soon. And did that touch you at all? In science fiction or in reality in any way? I didn't really start reading science fiction until much, much later. I think as a teenager I read maybe one bundle of science fiction stories. Was it in the background somewhere, like in your thoughts? Sort of using computers to build something intelligent never felt plausible to me, because I felt I had so much understanding of what actually goes on inside a computer. I knew how many bits of memory it had and how difficult it was to program. And sort of, I didn't believe at all that you could just build something intelligent out of that, that would really sort of satisfy my definition of intelligence. I think the most influential thing that I read in my early twenties was Gödel, Escher, Bach. That was about consciousness, and that was a big eye opener in some sense. In what sense? So, on your own brain, did you at the time, or do you now, see your own brain as a computer? Or is there a total separation of the way? So yeah, you very pragmatically, practically know the limits of memory, the limits of this sequential computing or weakly parallelized computing, and you just know what we have now, and it's hard to see how it creates. But it's also easy to see, as it was in the 40s, 50s, 60s, and now, at least similarities between the brain and our computers. Oh yeah, I mean, I totally believe that brains are computers in some sense. I mean, the rules they use to play by are pretty different from the rules we can sort of implement in our current hardware, but I don't believe in, like, a separate thing that infuses us with intelligence or consciousness or any of that. There's no soul. I've been an atheist probably from when I was 10 years old, just by thinking a bit about math and the universe, and well, my parents were atheists. Now, I know that you could be an atheist and still believe that there is something sort of about intelligence or consciousness that cannot possibly emerge from a fixed set of rules. I am not in that camp. I totally see that, sort of, given how many millions of years evolution took its time, DNA is a particular machine that sort of encodes information, an unlimited amount of information, in chemical form and has figured out a way to replicate itself. I thought that that was, maybe it's 300 million years ago, but I thought it was closer to half a billion years ago, that that sort of originated and it hasn't really changed, that the sort of the structure of DNA hasn't changed ever since. That is like our binary code that we have in hardware. I mean... The basic programming language hasn't changed, but maybe the programming itself... Obviously, it did sort of, it happened to be a set of rules that was good enough to sort of develop endless variability and sort of the idea of self replicating molecules competing with each other for resources and one type eventually sort of always taking over. That happened before there were any fossils, so we don't know how that exactly happened, but I believe it's clear that that did happen. Can you comment on consciousness and how you see it? Because I think we'll talk about programming quite a bit. We'll talk about, you know, intelligence connecting to programming fundamentally, but consciousness is this whole other thing.
Do you think about it often as a developer of a programming language and as a human? Those are pretty sort of separate topics. Sort of my line of work, working with programming, does not involve anything that goes in the direction of developing intelligence or consciousness, but sort of privately, as an avid reader of popular science writing, I have some thoughts, which is mostly that I don't actually believe that consciousness is an all or nothing thing. I have a feeling that, and I forget what I read that influenced this, but I feel that if you look at a cat or a dog or a mouse, they have some form of intelligence. If you look at a fish, it has some form of intelligence, and that evolution just took a long time, but I feel that the sort of evolution of more and more intelligence that led to sort of the human form of intelligence followed the evolution of the senses, especially the visual sense. I mean, there is an enormous amount of processing that's needed to interpret a scene, and humans are still better at that than computers are. And I have a feeling that there is a sort of, the reason that like mammals in particular developed the levels of consciousness that they have, and that eventually sort of going from intelligence to self awareness and consciousness, has to do with sort of being a robot that has very highly developed senses. Has a lot of rich sensory information coming in, so that's a really interesting thought, that whatever that basic mechanism of DNA, whatever those basic building blocks of programming, if you just add more abilities, more high resolution sensors, more sensors, you just keep stacking those things on top, that this basic programming, in trying to survive, develops very interesting things that start, to us humans, to appear like intelligence and consciousness. As far as robots go, I think that the self driving cars have sort of the greatest opportunity of developing something like that, because when I drive myself, I don't just pay attention to the rules of the road. I also look around and I get clues from that, oh, this is a shopping district, oh, here's an old lady crossing the street, oh, here is someone carrying a pile of mail, there's a mailbox, I bet you they're going to cross the street to reach that mailbox. And I slow down, and I don't even think about that. And so, there is so much where you turn your observations into an understanding of what other consciousnesses are going to do, or what other systems in the world are going to be, oh, that tree is going to fall. I see sort of, I see much more of, I expect somehow that if anything is going to become conscious, it's going to be the self driving car and not the network of a bazillion computers in a Google or Amazon data center that are all networked together to do whatever they do. So, in that sense, you actually highlight, because that's what I work on, autonomous vehicles, you highlight the big gap between what we can currently do and what we truly need to be able to do to solve the problem. Under that formulation, then, consciousness and intelligence is something that basically a system should have in order to interact with us humans, as opposed to some kind of abstract notion of consciousness. Consciousness is something that you need to have to be able to empathize, to be able to fear, understand what the fear of death is, all these aspects that are important for interacting with pedestrians. You need to be able to do basic computation based on our human desires and thoughts.
And if you sort of, yeah, if you look at the dog, the dog clearly knows, I mean, I'm not the dog owner, but I have friends who have dogs, the dogs clearly know what the humans around them are going to do, or at least they have a model of what those humans are going to do and they learn. Some dogs know when you're going out and they want to go out with you, they're sad when you leave them alone, they cry, they're afraid because they were mistreated when they were younger. We don't assign sort of consciousness to dogs, or at least not all that much, but I also don't think they have none of that. So I think it's consciousness and intelligence are not all or nothing. The spectrum is really interesting. But in returning to programming languages and the way we think about building these kinds of things, about building intelligence, building consciousness, building artificial beings. So I think one of the exciting ideas came in the 17th century and with Leibniz, Hobbes, Descartes, where there's this feeling that you can convert all thought, all reasoning, all the thing that we find very special in our brains, you can convert all of that into logic. So you can formalize it, formal reasoning, and then once you formalize everything, all of knowledge, then you can just calculate and that's what we're doing with our brains is we're calculating. So there's this whole idea that this is possible, that this we can actually program. But they weren't aware of the concept of pattern matching in the sense that we are aware of it now. They sort of thought they had discovered incredible bits of mathematics like Newton's calculus and their sort of idealism, their sort of extension of what they could do with logic and math sort of went along those lines and they thought there's like, yeah, logic. There's like a bunch of rules and a bunch of input. They didn't realize that how you recognize a face is not just a bunch of rules but is a shit ton of data plus a circuit that sort of interprets the visual clues and the context and everything else and somehow can massively parallel pattern match against stored rules. I mean, if I see you tomorrow here in front of the Dropbox office, I might recognize you. Even if I'm wearing a different shirt, yeah, but if I see you tomorrow in a coffee shop in Belmont, I might have no idea that it was you or on the beach or whatever. I make those kind of mistakes myself all the time. I see someone that I only know as like, oh, this person is a colleague of my wife's and then I see them at the movies and I didn't recognize them. But do you see those, you call it pattern matching, do you see that rules is unable to encode that? Everything you see, all the pieces of information you look around this room, I'm wearing a black shirt, I have a certain height, I'm a human, all these, there's probably tens of thousands of facts you pick up moment by moment about this scene. You take them for granted and you aggregate them together to understand the scene. You don't think all of that could be encoded to where at the end of the day, you can just put it all on the table and calculate? I don't know what that means. I mean, yes, in the sense that there is no actual magic there, but there are enough layers of abstraction from the facts as they enter my eyes and my ears to the understanding of the scene that I don't think that AI has really covered enough of that distance. It's like if you take a human body and you realize it's built out of atoms, well, that is a uselessly reductionist view, right? 
The body is built out of organs, the organs are built out of cells, the cells are built out of proteins, the proteins are built out of amino acids, the amino acids are built out of atoms and then you get to quantum mechanics. So that's a very pragmatic view. I mean, obviously as an engineer, I agree with that kind of view, but you also have to consider the Sam Harris view of, well, intelligence is just information processing. Like you said, you take in sensory information, you do some stuff with it and you come up with actions that are intelligent. That makes it sound so easy. I don't know who Sam Harris is. Oh, well, it's a philosopher. So like this is how philosophers often think, right? And essentially that's what Descartes was, is wait a minute, if there is, like you said, no magic, so he basically says it doesn't appear like there's any magic, but we know so little about it that it might as well be magic. So just because we know that we're made of atoms, just because we know we're made of organs, the fact that we know very little how to get from the atoms to organs in a way that's recreatable means that you shouldn't get too excited just yet about the fact that you figured out that we're made of atoms. Right, and the same about taking facts as our sensory organs take them in and turning that into reasons and actions, that sort of, there are a lot of abstractions that we haven't quite figured out how to deal with those. I mean, sometimes, I don't know if I can go on a tangent or not, so if I take a simple program that parses, say I have a compiler that parses a program, in a sense the input routine of that compiler, of that parser, is a sensing organ, and it builds up a mighty complicated internal representation of the program it just saw, it doesn't just have a linear sequence of bytes representing the text of the program anymore, it has an abstract syntax tree, and I don't know how many of your viewers or listeners are familiar with compiler technology, but there's… Fewer and fewer these days, right? That's also true, probably. People want to take a shortcut, but there's sort of, this abstraction is a data structure that the compiler then uses to produce outputs that is relevant, like a translation of that program to machine code that can be executed by hardware, and then that data structure gets thrown away. When a fish or a fly sees, sort of gets visual impulses, I'm sure it also builds up some data structure, and for the fly that may be very minimal, a fly may have only a few, I mean, in the case of a fly's brain, I could imagine that there are few enough layers of abstraction that it's not much more than when it's darker here than it is here, well it can sense motion, because a fly sort of responds when you move your arm towards it, so clearly its visual processing is intelligent, well, not intelligent, but it has an abstraction for motion, and we still have similar things in, but much more complicated in our brains, I mean, otherwise you couldn't drive a car if you couldn't, if you didn't have an incredibly good abstraction for motion. Yeah, in some sense, the same abstraction for motion is probably one of the primary sources of our, of information for us, we just know what to do, I think we know what to do with that, we've built up other abstractions on top. 
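A quick aside on the compiler example above: Python itself exposes exactly this kind of internal representation through its standard ast module, so you can watch a line of source text turn from a sequence of characters into an abstract syntax tree. This is just an illustration; the indent argument to ast.dump requires Python 3.9 or newer.

```python
import ast

source = "total = price * quantity + tax"
tree = ast.parse(source)           # the parser's output: a tree, not a string of bytes
print(ast.dump(tree, indent=2))    # Module -> Assign -> BinOp(...) and so on

# Walking the tree makes the layers of abstraction explicit:
for node in ast.walk(tree):
    print(type(node).__name__)     # Module, Assign, Name, BinOp, ...
```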
We build much more complicated data structures based on that, and we build more persistent data structures, sort of after some processing, some information sort of gets stored in our memory pretty much permanently, and is available on recall. I mean, there are some things that you sort of, you're conscious that you're remembering it, like, you give me your phone number, I, well, at my age I have to write it down, but I could imagine, I could remember those seven numbers, or ten digits, and reproduce them in a while, if I sort of repeat them to myself a few times, so that's a fairly conscious form of memorization. On the other hand, how do I recognize your face? I have no idea. My brain has a whole bunch of specialized hardware that knows how to recognize faces. I don't know how much of that is sort of coded in our DNA, and how much of that is trained over and over between the ages of zero and three, but somehow our brains know how to do lots of things like that, that are useful in our interactions with other humans, without really being conscious of how it's done anymore. Right, so in our actual day to day lives, we're operating at the very highest level of abstraction, we're just not even conscious of all the little details underlying it. There's compilers on top of, it's like turtles on top of turtles, or turtles all the way down, there's compilers all the way down. But that's essentially, you say that there's no magic, that's what I was trying to get at, I think, is that Descartes started this whole train of saying that there's no magic, I mean, there's all this beforehand. Well, didn't Descartes also have the notion though that the soul and the body were fundamentally separate? Separate, yeah, I think he had to write in God in there for political reasons, so I don't know actually, I'm not a historian, but there's notions in there that all of reasoning, all of human thought, can be formalized. I think that continued in the 20th century with Russell and with Gödel's incompleteness theorem, this debate of what are the limits of the things that could be formalized, that's where the Turing machine came along, and this exciting idea, I mean, underlying a lot of computing, that you can do quite a lot with a computer. You can encode a lot of the stuff we're talking about, in terms of recognizing faces and so on, theoretically, in an algorithm that can then run on a computer. And in that context, I'd like to ask about programming in a philosophical way. What does it mean to program a computer? So you said you write a Python program or compile a C++ program that compiles to some byte code. It's forming layers, you're programming a layer of abstraction that's higher. How do you see programming in that context? Can it keep getting higher and higher levels of abstraction? I think at some point the higher levels of abstraction will not be called programming, and they will not resemble what we call programming at the moment.
There will not be source code, I mean, there will still be source code sort of at a lower level of the machine, just like there are still molecules and electrons and sort of proteins in our brains, and so there's still programming and system administration and who knows what to keep the machine running, but what the machine does is a different level of abstraction in a sense. And as far as I understand, the way that for the last decade or more people have made progress with things like facial recognition or the self driving cars is all by endless, endless amounts of training data, where at least as a lay person, and I feel myself totally a lay person in that field, it looks like the researchers who publish the results don't necessarily know exactly how their algorithms work, and I often get upset when I sort of read a sort of a fluff piece about Facebook in the newspaper or social networks and they say, well, algorithms, and that's like a totally different interpretation of the word algorithm. Because for me, the way I was trained, or what I learned when I was eight or ten years old, an algorithm is a set of rules that you completely understand, that can be mathematically analyzed, and you can prove things. You can, like, prove that the sieve of Eratosthenes produces all prime numbers and only prime numbers. Yeah. So I don't know if you know who Andrej Karpathy is. I'm afraid not. So he's the head of AI at Tesla now, but he was at Stanford before, and he has this cheeky way of calling this concept software 2.0. So let me disentangle that for a second. So kind of what you're referring to is the traditional, the algorithm, the concept of an algorithm, something that's there, it's clear, you can read it, you understand it, you can prove it's functioning, as kind of software 1.0. And what software 2.0 is, is exactly what you described, which is you have neural networks, which is a type of machine learning, that you feed a bunch of data and that neural network learns to do a function. All you specify is the inputs and the outputs you want, and you can't look inside. You can't analyze it. All you can do is train this function to map the inputs to the outputs by giving a lot of data. And so programming becomes getting a lot of data. That's what programming is. Well, that would be programming 2.0. Programming 2.0, yes. I wouldn't call that programming. It's just a different activity. Just like building organs out of cells is not called chemistry. Well, so let's just step back and think sort of more generally, of course. But you know, it's like as a parent teaching your kids, things can be called programming. In that same sense, that's how programming is being used. You're providing them data, examples, use cases. So imagine writing a function not with for loops and clearly readable text, but more saying, well, here's a lot of examples of what this function should take, and here's a lot of examples of, when it takes those inputs, it should do this, and then figure out the rest. So that's the 2.0 concept. And so the question I have for you is like, it's a very fuzzy way, this is the reality of a lot of these pattern recognition systems and so on, it's a fuzzy way of quote unquote programming. What do you think about this kind of world? Should it be called something totally different than programming?
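A brief aside before the answer continues: since the sieve of Eratosthenes is Guido's example of an algorithm in the classical sense, one you can fully understand and prove correct, here is the textbook version as a minimal sketch.

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: return every prime <= n."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Every composite <= n has a prime factor <= sqrt(n), so crossing out
            # multiples of primes up to sqrt(n) is provably sufficient.
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]

print(primes_up_to(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Unlike the trained models discussed here, every line of this can be reasoned about and proven correct, which is the distinction Guido is drawing.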
If you're a software engineer, does that mean you're designing systems that are very, can be systematically tested, evaluated, they have a very specific specification and then this other fuzzy software 2.0 world, machine learning world, that's something else totally? Or is there some intermixing that's possible? Well the question is probably only being asked because we don't quite know what that software 2.0 actually is. And I think there is a truism that every task that AI has tackled in the past, at some point we realized how it was done and then it was no longer considered part of artificial intelligence because it was no longer necessary to use that term. It was just, oh now we know how to do this. And a new field of science or engineering has been developed and I don't know if sort of every form of learning or sort of controlling computer systems should always be called programming. So I don't know, maybe I'm focused too much on the terminology. But I expect that there just will be different concepts where people with sort of different education and a different model of what they're trying to do will develop those concepts. I guess if you could comment on another way to put this concept is, I think the kind of functions that neural networks provide is things as opposed to being able to upfront prove that this should work for all cases you throw at it. All you're able, it's the worst case analysis versus average case analysis. All you're able to say is it seems on everything we've tested to work 99.9% of the time, but we can't guarantee it and it fails in unexpected ways. We can't even give you examples of how it fails in unexpected ways, but it's like really good most of the time. Is there no room for that in current ways we think about programming? programming 1.0 is actually sort of getting to that point too, where the sort of the ideal of a bug free program has been abandoned long ago by most software developers. We only care about bugs that manifest themselves often enough to be annoying. And we're willing to take the occasional crash or outage or incorrect result for granted because we can't possibly, we don't have enough programmers to make all the code bug free and it would be an incredibly tedious business. And if you try to throw formal methods at it, it becomes even more tedious. So every once in a while the user clicks on a link and somehow they get an error and the average user doesn't panic. They just click again and see if it works better the second time, which often magically it does, or they go up and they try some other way of performing their tasks. So that's sort of an end to end recovery mechanism and inside systems there is all sorts of retries and timeouts and fallbacks and I imagine that that sort of biological systems are even more full of that because otherwise they wouldn't survive. Do you think programming should be taught and thought of as exactly what you just said? I come from this kind of, you're always denying that fact always. In sort of basic programming education, the sort of the programs you're having students write are so small and simple that if there is a bug you can always find it and fix it. Because the sort of programming as it's being taught in some, even elementary, middle schools, in high school, introduction to programming classes in college typically, it's programming in the small. Very few classes sort of actually teach software engineering, building large systems. Every summer here at Dropbox we have a large number of interns. 
Every tech company on the West Coast has the same thing. These interns are always amazed, because this is the first time in their life that they see what goes on in a really large software development environment. Everything they've learned in college was almost always about a much smaller scale, and somehow that difference in scale makes a qualitative difference in how you do things and how you think about it. If you then take a few steps back into the decades, the 70s and 80s, when you were first thinking about Python, or just that world of programming languages, did you ever think that there would be systems as large as those underlying Google, Facebook, and Dropbox? Did you, when you were thinking about Python? I was actually always caught by surprise by sort of this, yeah, pretty much every stage of computing. So maybe just because you've spoken about it in other interviews, but I think the evolution of programming languages is fascinating, and it's especially because it leads, from my perspective, towards greater and greater degrees of intelligence. The first programming language I played with, in Russia, was the turtle Logo. Logo, yeah. And if you look, I just have a list of programming languages, all of which I've now played with a little bit. I mean, they're all beautiful in different ways, from Fortran, COBOL, Lisp, Algol 60, Basic, Logo again, C, and then the object oriented ones came along in the 60s, Simula, Pascal, Smalltalk. All of that leads. They're all the classics. The classics. Yeah. The classic hits, right? Scheme, that's built on top of Lisp. On the database side, SQL, C++, and all of that leads up to Python, Pascal too, and that's before Python, MATLAB, these kind of different communities, different languages. So can you talk about that world? I know that sort of Python came out of ABC, which, I actually never knew that language. I just, having researched this conversation, went back to ABC, and it looks remarkably, it has a lot of annoying qualities, like all caps and so on, but underneath that, there's elements of Python that are quite, they're already there. That's where I got all the good stuff. All the good stuff. So, but in that world, you're swimming in these programming languages. Were you focused on just the good stuff in your specific circle, or did you have a sense of what is everyone chasing? You said that every programming language is built to scratch an itch. Were you aware of all the itches in the community? And if not, or if yes, I mean, what itch were you trying to scratch with Python? Well, I'm glad I wasn't aware of all the itches, because I would probably not have been able to do anything. I mean, if you're trying to solve every problem at once, you'll solve nothing. Well, yeah, it's too overwhelming. And so I had a very, very focused problem. I wanted a programming language that sat somewhere in between shell scripting and C. And now, arguably, there is like, one is higher level, one is lower level. And Python is sort of a language of an intermediate level, although it's still pretty much at the high level end. I was thinking much more about, I want a tool that I can use to be more productive as a programmer in a very specific environment. And I also had given myself a time budget for the development of the tool.
And that was sort of about three months for both the design, like thinking through what are all the features of the language syntactically and semantically, and how do I implement the whole pipeline from parsing the source code to executing it. So I think both with the timeline and the goals, it seems like productivity was at the core of it as a goal. So like, for me in the 90s, and the first decade of the 21st century, I was always doing machine learning, AI programming for my research was always in C++. And then the other people who are a little more mechanical engineering, electrical engineering, are MATLABby. They're a little bit more MATLAB focused. Those are the world, and maybe a little bit Java too. But people who are more interested in emphasizing the object oriented nature of things. So within the last 10 years or so, especially with the oncoming of neural networks and these packages that are built on Python to interface with neural networks, I switched to Python and it's just, I've noticed a significant boost that I can't exactly, because I don't think about it, but I can't exactly put into words why I'm just much, much more productive. Just being able to get the job done much, much faster. So how do you think, whatever that qualitative difference is, I don't know if it's quantitative, it could be just a feeling, I don't know if I'm actually more productive, but how do you think about... You probably are. Yeah. Well, that's right. I think there's elements, let me just speak to one aspect that I think that was affecting my productivity is C++ was, I really enjoyed creating performant code and creating a beautiful structure where everything that, you know, this kind of going into this, especially with the newer and newer standards of templated programming of just really creating this beautiful formal structure that I found myself spending most of my time doing that as opposed to getting it, parsing a file and extracting a few keywords or whatever the task was trying to do. So what is it about Python? How do you think of productivity in general as you were designing it now, sort of through the decades, last three decades, what do you think it means to be a productive programmer? And how did you try to design it into the language? There are different tasks and as a programmer, it's useful to have different tools available that sort of are suitable for different tasks. So I still write C code, I still write shell code, but I write most of my things in Python. Why do I still use those other languages, because sometimes the task just demands it. And well, I would say most of the time the task actually demands a certain language because the task is not write a program that solves problem X from scratch, but it's more like fix a bug in existing program X or add a small feature to an existing large program. But even if you're not constrained in your choice of language by context like that, there is still the fact that if you write it in a certain language, then you have this balance between how long does it take you to write the code and how long does the code run? And when you're in the phase of exploring solutions, you often spend much more time writing the code than running it because every time you've run it, you see that the output is not quite what you wanted and you spend some more time coding. And a language like Python just makes that iteration much faster because there are fewer details that you have to get right before your program compiles and runs. 
There are libraries that do all sorts of stuff for you, so you can sort of very quickly take a bunch of existing components, put them together, and get your prototype application running. Just like when I was building electronics, I was using a breadboard most of the time, so I had this sprawl out circuit that if you shook it, it would stop working because it was not put together very well, but it functioned and all I wanted was to see that it worked and then move on to the next schematic or design or add something to it. Once you've sort of figured out, oh, this is the perfect design for my radio or light sensor or whatever, then you can say, okay, how do we design a PCB for this? How do we solder the components in a small space? How do we make it so that it is robust against, say, voltage fluctuations or mechanical disruption? I know nothing about that when it comes to designing electronics, but I know a lot about that when it comes to writing code. So the initial steps are efficient, fast, and there's not much stuff that gets in the way, but you're kind of describing, like Darwin described the evolution of species, right? You're observing of what is true about Python. Now if you take a step back, if the act of creating languages is art and you had three months to do it, initial steps, so you just specified a bunch of goals, sort of things that you observe about Python, perhaps you had those goals, but how do you create the rules, the syntactic structure, the features that result in those? So I have in the beginning and I have follow up questions about through the evolution of Python too, but in the very beginning when you were sitting there creating the lexical analyzer or whatever. Python was still a big part of it because I sort of, I said to myself, I don't want to have to design everything from scratch, I'm going to borrow features from other languages that I like. Oh, interesting. So you basically, exactly, you first observe what you like. Yeah, and so that's why if you're 17 years old and you want to sort of create a programming language, you're not going to be very successful at it because you have no experience with other languages, whereas I was in my, let's say mid 30s, I had written parsers before, so I had worked on the implementation of ABC, I had spent years debating the design of ABC with its authors, with its designers, I had nothing to do with the design, it was designed fully as it ended up being implemented when I joined the team. But so you borrow ideas and concepts and very concrete sort of local rules from different languages like the indentation and certain other syntactic features from ABC, but I chose to borrow string literals and how numbers work from C and various other things. So in then, if you take that further, so yet you've had this funny sounding, but I think surprisingly accurate and at least practical title of benevolent dictator for life for quite, you know, for the last three decades or whatever, or no, not the actual title, but functionally speaking. So you had to make decisions, design decisions. Can you maybe, let's take Python 2, so releasing Python 3 as an example. It's not backward compatible to Python 2 in ways that a lot of people know. So what was that deliberation, discussion, decision like? Yeah. What was the psychology of that experience? Do you regret any aspects of how that experience undergone that? Well, yeah, so it was a group process really. 
At that point, even though I was BDFL in name, and certainly everybody sort of respected my position as the creator and the current sort of owner of the language design, I was looking at everyone else for feedback. Sort of, Python 3.0 in some sense was sparked by other people in the community pointing out, oh, well, there are a few issues that sort of bite users over and over. Can we do something about that? And for Python 3, we took a number of those Python warts, as they were called at the time, and we said, can we try to sort of make small changes to the language that address those warts? And we had sort of, in the past, we had always taken backwards compatibility very seriously. And so many Python warts in earlier versions had already been resolved, because they could be resolved while maintaining backwards compatibility, or sort of using a very gradual path of evolution of the language in a certain area. And so we were stuck with a number of warts that were widely recognized as problems, not like roadblocks, but nevertheless sort of things that some people trip over, and you know that that's always the same thing that people trip over when they trip. And we could not think of a backwards compatible way of resolving those issues. But it's still an option to not resolve the issues, right? And so yes, for a long time, we had sort of resigned ourselves to, well, okay, the language is not going to be perfect in this way and that way and that way. And we sort of, certain of these, I mean, there are still plenty of things where you can say, well, that particular detail is better in Java or in R or in Visual Basic or whatever. And we're okay with that, because, well, we can't easily change it. It's not too bad. We can do a little bit with user education, or we can have a static analyzer, or warnings in the parser or something. But there were things where we thought, well, these are really problems that are not going away. They are getting worse in the future. We should do something about that. But ultimately there is a decision to be made, right? So was that the toughest decision in the history of Python you had to make as the benevolent dictator for life? Or if not, maybe even on a smaller scale, what was a decision you were really torn up about? Well, the toughest decision was probably to resign. All right, let's go there. Hold on a second then. Let me just, because in the interest of time too, because I have a few cool questions for you, let's touch a really important one, because it was quite dramatic and beautiful in certain kinds of ways. In July this year, three months ago, you wrote, now that PEP 572 is done, I don't ever want to have to fight so hard for a PEP and find that so many people despise my decisions. I would like to remove myself entirely from the decision process. I'll still be there for a while as an ordinary core developer and I'll still be available to mentor people, possibly more available. But I'm basically giving myself a permanent vacation from being BDFL, benevolent dictator for life. And you all will be on your own. First of all, it's almost Shakespearean. I'm not going to appoint a successor. So what are you all going to do? Create a democracy, anarchy, a dictatorship, a federation? So that was a very dramatic and beautiful set of statements. Its open ended nature called on the community to create a future for Python. There's just kind of a beautiful aspect to it. And it was dramatic. You know, what was making that decision like?
What was on your heart, on your mind, stepping back now a few months later? I'm glad you liked the writing because it was actually written pretty quickly. It was literally something like after months and months of going around in circles, I had finally approved PEP572, which I had a big hand in its design, although I didn't initiate it originally. I sort of gave it a bunch of nudges in a direction that would be better for the language. So sorry, just to ask, is async IO, that's the one or no? PEP572 was actually a small feature, which is assignment expressions. That had been, there was just a lot of debate where a lot of people claimed that they knew what was Pythonic and what was not Pythonic, and they knew that this was going to destroy the language. This was like a violation of Python's most fundamental design philosophy, and I thought that was all bullshit because I was in favor of it, and I would think I know something about Python's design philosophy. So I was really tired and also stressed of that thing, and literally after sort of announcing I was going to accept it, a certain Wednesday evening I had finally sent the email, it's accepted. I can just go implement it. So I went to bed feeling really relieved, that's behind me. And I wake up Thursday morning, 7 a.m., and I think, well, that was the last one that's going to be such a terrible debate, and that's the last time that I let myself be so stressed out about a pep decision. I should just resign. I've been sort of thinking about retirement for half a decade, I've been joking and sort of mentioning retirement, sort of telling the community at some point in the future I'm going to retire, don't take that FL part of my title too literally. And I thought, okay, this is it. I'm done, I had the day off, I wanted to have a good time with my wife, we were going to a little beach town nearby, and in I think maybe 15, 20 minutes I wrote that thing that you just called Shakespearean. The funny thing is I didn't even realize what a monumental decision it was, because five minutes later I read that link to my message back on Twitter, where people were already discussing on Twitter, Guido resigned as the BDFL. And I had posted it on an internal forum that I thought was only read by core developers, so I thought I would at least have one day before the news would sort of get out. The on your own aspects had also an element of quite, it was quite a powerful element of the uncertainty that lies ahead, but can you also just briefly talk about, for example I play guitar as a hobby for fun, and whenever I play people are super positive, super friendly, they're like, this is awesome, this is great. But sometimes I enter as an outside observer, I enter the programming community and there seems to sometimes be camps on whatever the topic, and the two camps, the two or plus camps, are often pretty harsh at criticizing the opposing camps. As an onlooker, I may be totally wrong on this, but what do you think of this? Yeah, holy wars are sort of a favorite activity in the programming community. And what is the psychology behind that? Is that okay for a healthy community to have? Is that a productive force ultimately for the evolution of a language? Well, if everybody is patting each other on the back and never telling the truth, it would not be a good thing. I think there is a middle ground where sort of being nasty to each other is not okay, but there is a middle ground where there is healthy ongoing criticism and feedback that is very productive. 
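For context on what the fight was about: PEP 572 added assignment expressions, the := or walrus operator, which shipped in Python 3.8. Two representative uses are sketched below; the regular expression and the file name are placeholders, and the chunked reading loop is the kind of pattern the PEP was motivated by.

```python
import re

# Bind and test in a single expression instead of two statements.
if (match := re.search(r"PEP (\d+)", "now that PEP 572 is done")):
    print(match.group(1))          # -> 572

# Read in fixed-size chunks without calling read() twice or using while True/break.
with open("data.bin", "rb") as f:  # assumes data.bin exists
    while (chunk := f.read(8192)):
        pass                       # process(chunk) would go here
```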
And you mean at every level you see that. I mean, someone proposes to fix a very small issue in a code base, chances are that some reviewer will sort of respond by saying, well, actually, you can do it better the other way. When it comes to deciding on the future of the Python core developer community, we now have, I think, five or six competing proposals for a constitution. So that future, do you have a fear of that future, do you have a hope for that future? I'm very confident about that future. By and large, I think that the debate has been very healthy and productive. And I actually, when I wrote that resignation email, I knew that Python was in a very good spot and that the Python core developer community, the group of 50 or 100 people who sort of write or review most of the code that goes into Python, those people get along very well most of the time. A large number of different areas of expertise are represented, different levels of experience in the Python core dev community, different levels of experience completely outside it in software development in general, large systems, small systems, embedded systems. So I felt okay resigning because I knew that the community can really take care of itself. And out of a grab bag of future feature developments, let me ask if you can comment, maybe on all very quickly, concurrent programming, parallel computing, async IO. These are things that people have expressed hope, complained about, whatever, have discussed on Reddit. Async IO, so the parallelization in general, packaging, I was totally clueless on this. I just used pip to install stuff, but apparently there's pipenv, poetry, there's these dependency packaging systems that manage dependencies and so on. They're emerging and there's a lot of confusion about what's the right thing to use. Then also functional programming, are we going to get more functional programming or not, this kind of idea. And of course the GIL connected to the parallelization, I suppose, the global interpreter lock problem. Can you just comment on whichever you want to comment on? Well, let's take the GIL and parallelization and async IO as one topic. I'm not that hopeful that Python will develop into a sort of high concurrency, high parallelism language. That's sort of the way the language is designed, the way most users use the language, the way the language is implemented, all make that a pretty unlikely future. So you think it might not even need to, really the way people use it, it might not be something that should be of great concern. I think async IO is a special case because it sort of allows overlapping IO and only IO and that is a sort of best practice of supporting very high throughput IO, many connections per second. I'm not worried about that. I think async IO will evolve. There are a couple of competing packages. We have some very smart people who are sort of pushing us to make async IO better. Parallel computing, I think that Python is not the language for that. There are ways to work around it, but you can't expect to write an algorithm in Python and have a compiler automatically parallelize that. What you can do is use a package like NumPy and there are a bunch of other very powerful packages that sort of use all the CPUs available because you tell the package, here's the data, here's the abstract operation to apply over it, go at it, and then we're back in the C++ world. Those packages are themselves implemented usually in C++. 
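To make the point about NumPy concrete: a pure Python loop runs one element at a time under the GIL, while the equivalent whole-array expression hands the data and the abstract operation to compiled C inside NumPy, which is what Guido means by being back in the C++ world. A minimal sketch:

```python
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Interpreted: iterates element by element in the Python interpreter.
slow = [x * y + 1.0 for x, y in zip(a, b)]

# Vectorized: the same arithmetic, but the loop runs in optimized C inside NumPy.
fast = a * b + 1.0
```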
That's where TensorFlow and all these packages come in, where they parallelize across GPUs, for example, they take care of that for you. In terms of packaging, can you comment on the future of packaging in Python? Packaging has always been my least favorite topic. It's a really tough problem because the OS and the platform want to own packaging, but their packaging solution is not specific to a language. If you take Linux, there are two competing packaging solutions for Linux or for Unix in general, but they all work across all languages. Several languages like Node, JavaScript, Ruby, and Python all have their own packaging solutions that only work within the ecosystem of that language. What should you use? That is a tough problem. My own approach is I use the system packaging system to install Python, and I use the Python packaging system then to install third party Python packages. That's what most people do. Ten years ago, Python packaging was really a terrible situation. Nowadays, pip is the future, and there is a separate ecosystem for numerical and scientific Python based on Anaconda. Those two can live together. I don't think there is a need for more than that. That's packaging. Well, at least for me, that's where I've been extremely happy. I didn't even know this was an issue until it was brought up. In the interest of time, let me sort of skip through a million other questions I have. So I watched the five and a half hour oral history that you've done with the Computer History Museum, and the nice thing about it, it gave this, because of the linear progression of the interview, it gave this feeling of a life, you know, a life well lived with interesting things in it, sort of a pretty, I would say a good spend of this little existence we have on Earth. So, outside of your family, looking back, what about this journey are you really proud of? Are there moments that stand out, accomplishments, ideas? Is it the creation of Python itself that stands out as a thing that you look back and say, damn, I did pretty good there? Well, I would say that Python is definitely the best thing I've ever done, and I wouldn't sort of say just the creation of Python, but the way I sort of raised Python, like a baby. I didn't just conceive a child, but I raised a child, and now I'm setting the child free in the world, and I've set up the child to sort of be able to take care of himself, and I'm very proud of that. And as the announcer of Monty Python's Flying Circus used to say, and now for something completely different, do you have a favorite Monty Python moment, or a moment in Hitchhiker's Guide, or any other literature, show, or movie that cracks you up when you think about it? You can always play me the dead parrot sketch. Oh, that's brilliant. That's my favorite as well. It's pushing up the daisies. Okay, Guido, thank you so much for talking with me today. Lex, this has been a great conversation.
Guido van Rossum: Python | Lex Fridman Podcast #6
The following is a conversation with Jeff Atwood. He is the cofounder of Stack Overflow and Stack Exchange, websites that are visited by millions of people every single day. Much like with Wikipedia, it is difficult to overstate the impact on global knowledge and productivity that these networks of sites have created. Jeff is also the author of the famed blog Coding Horror and the founder of Discourse, an open source software project that seeks to improve the quality of our online community discussions. This conversation is part of the MIT course on artificial general intelligence and the artificial intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or your podcast provider of choice, or simply connect with me on Twitter at Lex Fridman, spelled F R I D. And now, here's my conversation with Jeff Atwood. Having cocreated and managed for a few years the world's largest community of programmers in Stack Overflow 10 years ago, what do you think motivates most programmers? Is it fame, fortune, glory, process of programming itself, or is it the sense of belonging to a community? It's puzzles, really. I think it's this idea of working on puzzles independently of other people and just solving a problem, sort of like on your own almost. Although, nobody really works alone in programming anymore. But I will say there's an aspect of hiding yourself away and just beating on a problem until you solve it, like brute force basically to me is what a lot of programming is. The computer's so fast that you can do things that would take forever for a human, but you can just do them so many times and so often that you get the answer. You're saying just the pure act of tinkering with the code is the thing that drives most programmers. The struggle balanced with the joy of overcoming, the brute force process of pain and suffering that eventually leads to something that actually works. Well, data's fun, too. There's this thing called the shuffling problem. The naive shuffle that most programmers write has a huge flaw, and there's a lot of articles online about this because it can be really bad if you're a casino and you have an unsophisticated programmer writing your shuffle algorithm. There's surprising ways to get this wrong, but the neat thing is the way to figure that out is just to run your shuffle a bunch of times and see how many orderings of the cards you get. You should get an equal distribution of all the orderings. And with the naive method of shuffling, if you just look at the data, if you just brute force it and say, OK, I don't know what's going to happen, you just write a program that does it a billion times and then see what the buckets of data look like. And the Monty Hall problem is another example of that, where you have three doors and somebody gives you information about another door. So the correct answer is you should always switch in the Monty Hall problem, which is not intuitive, and it freaks people out all the time. But you can solve it with data. If you write a program that does the Monty Hall game and then never switches, then always switches, just compare, you would immediately see that you don't have to be smart. You don't have to figure out the answer algorithmically. You can just brute force it out with data and say, well, I know the answer is this because I ran the program a billion times, and these are the data buckets that I got from it. So empirically find it. But what's the joy of that? 
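As a concrete sketch of that brute-force style, here is a small Monty Hall simulation in Python; it is an editorial illustration, not Atwood's code, and the trial count is arbitrary.

import random

def monty_hall(switch, trials=100_000):
    # Brute-force the Monty Hall game: a prize sits behind one of three doors,
    # you pick a door, the host opens a different door he knows is empty, and
    # you either stay with your pick or switch to the remaining closed door.
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == prize
    return wins / trials

print("never switch: ", monty_hall(switch=False))  # lands near 1/3
print("always switch:", monty_hall(switch=True))   # lands near 2/3

The same approach exposes the naive-shuffle flaw he mentions: run the shuffle many times, bucket the resulting orderings, and a correct shuffle such as Fisher-Yates gives a flat histogram while the naive one visibly does not.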
So for you, for you personally, outside of family, what motivates you in this process? Well, to be honest, I don't really write a lot of code anymore. What I do at Discourse is managery stuff, which I always despised. As a programmer, you think of managers as people who don't really do anything themselves. But the weird thing about code is you realize that language is code. The ability to direct other people lets you get more stuff done than you could by yourself anyway. You said language is code? Language is code. Meaning communication with other humans? Yes, it is. You can think of it as a systematic. So what is it like to be, what makes, before we get into programming, what makes a good manager? What makes a good leader? Well, I think a leader, it's all about leading by example, first of all, sort of doing and being the things that you want to be. Now, this can be kind of exhausting, particularly when you have kids, because you realize that your kids are watching you all the time, even in ways that you've stopped seeing yourself. The hardest person to see on the planet is really yourself. It's a lot easier to see other people and make judgments about them. But yourself, you're super biased. You don't actually see yourself the way other people see you. Often, you're very, very hard on yourself in a way that other people really aren't going to be. So that's one of the insights is you've got to be really diligent about thinking, am I behaving in a way that represents how I want other people to behave, like leading through example? There's a lot of examples of leaders that really mess this up, like they make decisions that are like, wow, it's a bad example for other people. So I think leading by example is one. The other one I believe is working really hard. And I don't mean working exhaustively, but showing a real passion for the problem, not necessarily your solution to the problem, but the problem itself is just one that you really believe in. Like with discourse, for example, the problem that we're looking at, which is my current project, is how do you get people in groups to communicate in a way that doesn't break down into the howling of wolves? How do you deal with trolling? Not like technical problems. How do I get people to post paragraphs? How do I get people to use bold? How do I get people to use complete sentences, although those are problems as well? But how do I get people to get along with each other and then solve whatever problem it is they set out to solve, or reach some consensus on discussion, or just not hurt each other even? Maybe it's a discussion that doesn't really matter, but are people yelling at each other? And why? Like that's not the purpose of this kind of communication. So I would say leadership is about setting an example, doing the things that represent what you want to be, and making sure that you're actually doing those things. And there's a trick to that too, because the things you don't do also say a lot about what you are. Yeah, so let's pause on that one. So those two things are fascinating. So how do you have as a leader that self awareness? So you just said it's really hard to be self aware. So for you personally, or maybe for other leaders you've seen or look up to, how do you know both that the things you're doing are the wrong things to be doing, the way you speak to others, the way you behave, and the things you're not doing? How do you get that signal? I think there's two aspects to that. 
One is like processing feedback that you're getting, so. How do you get feedback? Well, right, so are you getting feedback, right? So one way we do it, for example, with discourse, we have three cofounders, and we periodically talk about decisions before we make them. So it's not like one person can make a mistake, or like, wow, there can be misunderstandings, things like that. So it's part of like group consensus of leadership is like it's good to have, I think systems where there's one leader, and that leader has the rule of absolute law are just really dangerous in my experience. For communities, for example, like if you have a community that's run by one person, that one person makes all the decisions, that person's gonna have a bad day. Something could happen to that person, something, there's a lot of variables. So like first, when you think about leadership, have multiple people doing leadership and have them talk amongst each other. So giving each other feedback about the decisions that they're making. And then when you do get feedback, I think there's that little voice in your head, right? Or your gut or wherever you wanna put it in your body. I think that voice is really important. Like I think most people who have any kind of moral compass or like want to do, most people want to do the right thing. I do believe that. I mean, there might be a handful of sociopaths out there that don't, but most people, they want other people to think of them as a good person. And why wouldn't you, right? Like, do you want people to despise you? I mean, that's just weird, right? So you have that little voice that sort of the angel and devil on your shoulder sort of talking to you about like what you're doing, how you're doing, how does it make you feel to make these decisions, right? And I think having some attunement to that voice is important. But you said that voice also for, I think this is a programmer situation too, where sometimes the devil on the shoulder is a little too loud. So you're a little too self critical for a lot of developers, and especially when you have introverted personality. How do you struggle with a self criticism or the criticism of others? So one of the things of leadership is to do something that's potentially unpopular or where people doubt you and you still go through with the decision. So what's that balance like? I think you have to walk people through your decision making, right? Like you have to, this is where blogging is really important and communication is so important. Again, code language is just another kind of code. It's like, here is the program by which I arrived at the conclusion that I'm gonna reach, right? It's one thing to say like, this is a decision, it's final, deal with it, right? That's not usually satisfying to people. But if you say, look, we've been thinking about this problem for a while. Here's some stuff that's happened. Here's what we think is right. Here's our goals. Here's what we wanna achieve. And we've looked at these options and we think this available options is the best option. People will be like, oh, okay, right? Maybe I don't totally agree with you, but I can kind of see where you're coming from and I see it's not just arbitrary decision delivered from a cloud of flames in the sky, right? It's like a human trying to reach some kind of consensus about goals. And their goals might be different than yours. That's completely legit, right? 
But if you're making that clear, it's like, oh, well, the reason we don't agree is because we have totally different goals, right? Like, how could we agree? It's not that you're a bad person. It's that we have radically different goals in mind when we started looking at this problem. And the other one you said is passion. So, or hard work, sorry. Well, those are tied together in my mind. Let's say hard work and passion. Like for me, like I just really love the problem discourse is setting out to solve because in a way it's like, there's a vision of the world where it all devolves into Facebook basically owning everything and every aspect of human communication, right? And this has always been kind of a scary world for me. First, cause I don't, I think Facebook is really good at execution. I got to compliment them. They're very competent in terms of what they're doing, but Facebook has not much of a moral compass in terms of Facebook cares about Facebook, really. They don't really care about you and your problems. What they care about is how big they can make Facebook, right? Is that you talking about the company or just the mechanism of how Facebook works? Kind of both really, right? Like, and the idea with discourse, the reason I'm so passionate about it is cause I believe every community should have the right to own themselves, right? Like they should have their own software that they can run that belongs to them. That's their space where they can set the rules. And if they don't like it, they can move to different hosting or, you know, whatever they need to happen can happen. But like this idea of a company town where all human communication is implicitly owned by WhatsApp, Instagram, and Facebook. And it's really disturbing too, cause Facebook is really smart. Like I said, they're great at execution. Buying in WhatsApp and buying Instagram were incredibly smart decisions. And they also do this thing, I don't know if you know, but they have this VPN software that they give away for free on smartphones and it indirectly feeds all the data about the traffic back to Facebook. So they can see what's actually getting popular through the VPNs, right? They have low level access to the network data because users have let them have that. So. So let's take a small pause here. First of all, discourse. Can you talk about, can you lay out the land of all the different ways you can have communities? So there's Stack Overflow that you've built. There's discourse. So Stack Overflow is kind of like a Wiki, Wikipedia you talk about. And it's a very specific scalpel, very focused. So what is the purpose of discourse and maybe contrast that with Facebook? First of all, say, what is discourse? Yeah. Start from the beginning. Well, let me start from the very beginning. So Stack Overflow is a very structured Wiki style Q and A for programmers, right? And that was the problem we first worked on. And when we started, we thought it was discussions because we looked at like programming forums and other things, but we quickly realized we were doing Q and A, which is a very narrow subset of human communication, right? Sorry, so when you started Stack Overflow, you thought you didn't even know the Q and A. Not really. You didn't know it would be Q and A. Well, we didn't know. We had an idea of like, okay, these are things that we see working online. We had a goal, right? Our goal was there was this site, Experts Exchange, with a very unfortunate name. Thank you for killing that site. Yeah, I know, right? 
Like a lot of people don't remember it anymore, which is great. Like that's the measure of success when people don't remember the thing that you were trying to replace, then you've totally won. So it was a place to get answers to programming questions, but it wasn't clear if it was like focused Q and A, if it was a discussion. There were plenty of programming forums. So we weren't really sure. We were like, okay, we'll take aspects of Digg and Reddit, like voting, which was very important. Reordering answers based on votes. Wiki style stuff of like being able to edit posts, not just your posts, but other people's posts to make them better and keep them more up to date. The ownership aspect of blogging, of like, okay, this is me. I'm saying this in my voice, this is the stuff that I know. And your reputation accrues to you and it's peer recognition. So you asked earlier, like what motivates programmers? I think peer recognition motivates them a lot. That was one of the key insights of Stack Overflow, that recognition from your peers is why things get done. Not necessarily money, not necessarily your boss, but like your peers saying, wow, this person really knows their stuff, has a lot of value. So the reputation system came from that. So we were sort of Frankensteining a bunch of stuff together in Stack Overflow, like stuff we had seen working and we knew worked and that became Stack Overflow. Over time, we realized it wasn't really discussion. It was very focused questions and answers. There wasn't a lot of room on the page for let me talk about this tangential thing. It was more like, okay, is it answering the question? Is it clarifying the question? Or could it be an alternative answer to the same question? Because there's usually more than one way to do it in programming, there's like say five to 10 ways. And one of the patterns we got into early on with Stack Overflow was there were questions where there would be like hundreds of answers. And we're like, wow, how can there be a programming question with 200, 500 answers? And we looked at those and we realized those were not really questions in the traditional sense. They were discussions. It was stuff that we allowed early on that we eventually decided wasn't allowed, such as what's your favorite programming food? What's the funniest programming cartoon you've seen? And we had to sort of backfill a bunch of rules about like, why isn't this allowed? Such as, is this a real problem you're facing? Like nobody goes to work and says, wow, I can't work cause I don't know what the funniest programming cartoon is. So sorry, can't compile this code now, right? It's not a real problem you're facing in your job. So that was one rule. And the second, like, what can you really learn from that? It's like what I call accidental learning or Reddit style learning. Where you're just like, oh, I'll just browse some things and oh, wow, you know, did you know tree frogs only live three years? I mean, I just made that up. I don't know if that's true. But I didn't really set out to learn that. I don't need to know that, right? It's accidental learning. Stack Overflow was more intentional learning, where you're like, okay, I have a problem. And I want to learn about stuff around this problem I'm having, right? And it could be theory, it could be compiler theory, it could be other stuff, but I'm having a compiler problem. Hence, I need to know the compiler theory, that aspect of it that gets me to my answer, right? So kind of a directed learning. 
So we had to backfill all these rules as we sort of figured out what the heck it was we were doing. And the system came very strict over time. And a lot of people still complain about that. And I wrote my latest blog entry, what does Stack Overflow want to be when it grows up? Celebrating the 10 year anniversary, yeah. Yeah, so 10 years. And the system has trended towards strictness. There's a variety of reasons for this. One is people don't like to see other people get reputation for stuff as they view as frivolous, which I can actually understand. Because if you saw a programmer got like 500 upvotes for funniest programming cartoon or funniest comment they had seen in code, it's like, well, why do they have that reputation? Is it because they wrote the joke? Probably not. I mean, if they did, maybe, or the cartoon, right? They're getting a bunch of reputation based on someone else's work that's not even programming. It's just a joke, right? It's related to programming. So you begin to resent that. You're like, well, that's not fair. And it isn't. At some level, they're correct. I mean, I empathize. Because it's not correct to get reputation for that. Versus here's a really gnarly regular expression problem. And here's a really clever, insightful, detailed answer laying out, oh, here's why you're seeing the behavior that you're seeing. Here, let me teach you some things about how to avoid that in the future. That's great. That's gold, right? You want people to get reputation for that, not so much for, wow, look at this funny thing I saw, right? Great. So there's this very specific Q&A format. And then take me through the journey towards discourse and Facebook and Twitter. So you started at the beginning that Stack Overflow evolved to have a purpose. So what is discourse, this passion you have for creating community for discussion? When was that born and how? Well, part of it is based on the realization that Stack Overflow is only good for very specific subjects where it's based on data, facts, and science, where answers can be kind of verified to be true. Another form of that is there's the book of knowledge, like the tome of knowledge that defines whatever it is. You can refer to that book and it'll give you the answer. There has to be, it only works on subjects where there's like semi clear answers to things that can be verified in some form. Now again, there's always more than one way to do it. There's complete flexibility and system around that. But where it falls down is stuff like poker and LEGO. Like we had, if you go to stackexchange.com, we have an engine that tries to launch different Q&A topics, right? And people can propose Q&A topics, sample questions, and if it gets enough support within the network, we launched that Q&A site. So some of the ones we launched were poker and LEGO and they did horribly, right? Because I mean, they might still be there lingering on in some form, but it was an experiment. This is like a test, right? And some subjects work super well on the stack engine and some don't. But the reason LEGO and poker don't work is because they're so social, really. It's not about what's the rule here in poker. It's like, well, what kind of cigars do we like to smoke while playing poker? Or what's a cool set of cards to use when I'm playing poker? Or what's some strategies? Say I have this hand come up with some strategies I could use. It's more of a discussion around what's happening with LEGO. Same thing, here's this cool LEGO set I found. Look how awesome this is. 
And I'm like, yeah, that's freaking awesome, right? It's not a question, right? There's all these social components and discussions that don't fit at all. We literally have to disallow those in Stack Overflow because it's not about being social. It's about problems that you're facing in your work that you need concrete answers for. You have a real demonstrated problem that's blocking you in something. Nobody's blocked by, what should I do when I have a straight flush? It's not a blocking problem in the world. It's just an opportunity to hang out and discuss. So discourse was a way to address that and say, look, discussion forum software was very, very bad. And when I came out of Stack Overflow in early 2012, it was still very, very bad. I expected it improved in the four years since I last looked, but it had not improved at all. And I was like, well, that's kind of terrible because I love these communities of people talking about things that they love. They're just communities of interest, right? And there's no good software for them. Startups would come to me and say, hey, Jeff, I want to have this startup. Here's my idea. And the first thing I would say to them is, well, first, why are you asking me? I don't really know your field necessarily. Why aren't you asking the community, the people that are interested in this problem, the people that are using your product, why aren't you talking to them? And then they'd say, oh, great idea. How do I do that? And then that's when I started playing sad trombone because I realized all the software involving talking to your users, customers, audience, patrons, whatever it is, it was all really bad. It was stuff that I would be embarrassed to recommend to other people. And yet, that's where I felt they could get the biggest and strongest, most effective input for what they should be doing with their product, right? It's from their users, from their community, right? That's what we did on Stack Overflow. So what we're talking about with forums, the, what is it, the dark matter of the internet, it's still, I don't know if it's still, but for the longest time, it has some of the most passionate and fascinating discussions. And what's the usual structure? There's usually, it's linear, so it's sequential. So you're posting one after the other and there's pagination, so it's every, there's 10 posts and then you go to the next page. And that format still is used by, like I'm, we're doing a lot of research with Tesla vehicles and there's a Tesla Motors Club forum, which is extremely. We really wanted to run that actually. They pinged us about it, I don't think we got it, but I really would have liked to gotten that one. But they've started before even 2012, I believe. I mean, they've been running for a long time. It's still an extremely rich source of information. So what's broken about that system and how are you trying to fix it? I think there's a lot of power in connecting people that love the same stuff around that specific topic. Meaning Facebook's idea of connection is just any human that's related to another human, right? Like through friendship or any other reason. Facebook's idea of the world is sort of the status update, right? Like a friend of yours did something, ate at a restaurant, right? Whereas discussion forums were traditionally around the interest graph. Like I love electric cars, specifically I love Tesla, right? Like I love the way they approach the problem. I love the style of the founder. I just love the design ethic. 
And there's a lot to like about Tesla. I don't know if you saw the oatmeal, he did a whole love comic to Tesla. And it was actually kind of cool because I learned some stuff. He was talking about how great Tesla cars were specifically, like how they were built differently. And he went into a lot of great detail that was really interesting. And to me, that oatmeal post, if you read it, is the genesis of pretty much all interest communities. I just really love this stuff. So like for me, for example, there's yo yos, right? Like I'm into the yo yo communities. And these interest communities are just really fascinating to me. And I feel more connected to the yo yo communities than I do to friends that I don't see that often, right? Like to me, the powerful thing is the interest graph. And Facebook kind of dabbles in the interest graph. I mean, they have groups, you can sign up for groups and stuff, but it's really about the relationship graph. Like this is my coworker, this is my relative, this is my friend, but not so much about the interest. So I think that's the linchpin of which forums and communities are built on that I personally love. Like I said, leadership is about passion, right? And being passionate about stuff is a really valid way to look at the world. And I think it's a way a lot of stuff in the world gets done. Like I once had someone describe me as, he's like, Jeff, you're a guy who, you just get super passionate about a few things at a time, and you just go super deep in those things. And I was like, oh, that's kind of right. That's kind of what I do. I get into something and just be super into that for a couple of years or whatever, and just learn all I can about it, and go super deep in it. And that's how I enjoy experiencing the world, right? Like not being shallow on a bunch of things, but being really deep on a few things that I'm interested in. So forums kind of unlock that, right? And you don't want a world where everything belongs to Facebook, at least I don't. I want a world where communities can kind of own themselves, set their own norms, set their own rules, control the experience. Because community is also about ownership, right? Like if you're meeting at the Barnes and Noble every Thursday and Barnes and Noble says, get out of here, you guys don't buy enough books. Well, you know, you're kind of hosed, right? Barnes and Noble owns you, right? Like you can't. But if you have your own meeting space, you know, your own clubhouse, you can set your own rules, decide what you want to talk about there, and just really generate a lot better information than you could like hanging out at Barnes and Noble every Thursday at 3 p.m., right? So that's kind of the vision of Discourse, is a place where it's fully open source. You can take the software, you can install it anywhere, and, you know, you and a group of people can go deep on whatever it is that you're into. And this works for startups, right? Startups are a group of people who go super deep on a specific problem, right? And they want to talk to the community. It's like, well, install Discourse, right? That's what we do at Discourse. That's what I did at Stack Overflow. I spent a lot of time on Meta Stack Overflow, which is our internal, well, public community feedback site, and just experiencing what the users were experiencing, right, because they're the ones doing all the work in the system. And they had a lot of interesting feedback. 
And there's that 90, 10 rule of, like, 90% of the feedback you get is not really actionable for a variety of reasons. It might be bad feedback, it might be crazy feedback, it might be feedback you just can't act on right now. But there's 10% of it that's like gold. It's like literally gold and diamonds, where it's like feedback of really good improvements to your core product that are not super hard to get to and actually make a lot of sense. And my favorite is about 5% of those stuff I didn't even see coming. It's like, oh my God, I never even thought of that. But that's a brilliant idea, right? And I can point to so many features of Stack Overflow that we derive from Meta Stack Overflow feedback and Meta discourse, right? Same exact principle of discourse, you know? We're getting ideas from the community. I was like, oh my God, I never thought of that, but that's fantastic, right? Like, I love that relationship with the community. From having built these communities, what have you learned about? What's the process of getting a critical mass of members in a community? Is it luck, skill, timing, persistence? What is, is it the tools, like discourse, that empower that community? What's the key aspect of starting for one guy or gal and then building it to two and then 10 and a hundred and a thousand and so on? I think when you're starting with an N of one, I mean, I think it's persistence and also you have to be interesting. Like somebody I really admire once said something that I always liked about blogging. He's like, here's how you blog. You have to have something interesting to say and have an interesting way of saying it, right? And then do that for like 10 years. So that's the genesis, is like you have to have sort of something interesting to say that's not exactly what everybody else is saying and an interesting way of saying it, which is another way of saying, kind of entertaining way of saying it. And then as far as growing it, it's like ritual. You know, like you have to, like say you're starting a blog, you have to say, look, I'm gonna blog every week, three times a week, and you have to stick to that schedule, right? Because until you do that for like several years, you're never gonna get anywhere. Like it just takes years to get to where you need to get to. And part of that is having the discipline to stick with the schedule. And it helps, again, if it's something you're passionate about, this won't feel like work. You're like, I love this. I could talk about this all day, every day, right? You just have to do it in a way that's interesting to other people. And then as you're growing the community, that pattern of participation within the community of like generating these artifacts and inviting other people to help you like collaborate on these artifacts, like even in the case of blogging, like I felt in the early days of my blog, which I started in 2004, which is really the genesis of Stack Overflow. If you look at all my blog, it leads up to Stack Overflow, which was, I have all this energy in my blog, but I don't, like 40,000 people were subscribing to me. And I was like, I wanna do something. And then I met Joel and said, hey, Joel, I wanna do something, take this ball of energy from my blog and do something. And all the people reading my blog saw that. It's like, oh, cool. You're involving us. You're saying, look, you're part of this community. Let's build this thing together. Like they pick the name. Like we voted on the name for Stack Overflow on my blog. 
Like we came up, and naming is super hard. First of all, the hardest problem in computer science is coming up with a good name for stuff, right? But you can go back to my blog. There's the poll where we voted and Stack Overflow became the name of the site. And all the early beta users of Stack Overflow were the audience of my blog plus Joel's blog, right? So we started from, like, if you look at the genesis, okay, I was just a programmer who said, hey, I love programming, but I have no outlet to talk about it. So I'm just gonna blog about it, because I don't have enough people at work to talk to about it. Because at the time I worked at a place where, you know, programming wasn't the core output of the company, it was a pharmaceutical company. And I just love this stuff, you know, to an absurd degree. So I was like, I'll just blog about it. And then I'll find an audience and eventually found an audience, eventually found Joel, and eventually built Stack Overflow from that one core of activity, right? But it was that repetition of feeding back in feedback from my blog comments, feedback from Joel, feedback from the early Stack Overflow community. When people see that you're doing that, they will follow along with you, right? They'll say, cool, you're here in good faith. You're actually, you know, not listening to everything because that's impossible, that's impossible. But you're actually, you know, weighing our feedback in what you're doing. And why wouldn't I? Because who does all the work on Stack Overflow? Me, Joel? No, it's the other programmers that are doing all the work. So you gotta have some respect for that. And then, you know, discipline around, look, you know, we're trying to do a very specific thing here on Stack Overflow. We're not trying to solve all the world's problems. We're trying to solve this very specific Q and A problem in a very specific way. Not cause we're jerks about it, but because this strict set of rules helps us get really good results, right? And programmers, that's an easy sell for the most part because programmers are used to dealing with ridiculous systems of rules like constantly. That's basically their job. So they're very, oh yeah, super strict system of rules that lets me get what I want. That's programming, right? That's what Stack Overflow is, so. So you're making it sound easy, but in 2004, let's go back there. In 2004, you started the blog, Coding Horror. Was it called that at the very beginning? It was. One of the smart things I did, it's from a book by Steve McConnell, Code Complete, which is one of my favorite programming books, still probably my number one programming book for anyone to read. So one of the smart things I did back then, I don't always do smart things when I start stuff. I contacted Steve and said, hey, I really like this. It was a sidebar illustration indicating danger in code, right? Coding Horror was like, watch out. And I love that illustration because it spoke to me. Because I saw that illustration and went, oh my God, that's me. Like I'm always my own worst enemy. Like that's the key insight in programming is every time you write something, think how am I gonna screw myself? Because you will, constantly, right? So that icon was like, oh yeah, I need to constantly hold that mirror up and look, and say, look, you're very fallible. You're gonna screw this up. Like how can you build this in such a way that you're not gonna screw it up later? 
Like how can you get that discipline around making sure at every step I'm thinking through all the things that I could do wrong or that other people could do wrong? Because that is actually how you get to be a better programmer a lot of times, right? So that sidebar illustration, I loved it so much. And I wrote Steve before I started my blog and said, hey, can I have permission to use this because I just really like this illustration? And Steve was kind enough to give me permission to do that and just continues to give me permission, so yeah. Really, that's awesome. But in 2004, you started this blog. You know, you look at Stephen King, his book on writing, or Stephen Pressfield, War of Art book. I mean, it seems like writers suffer. I mean, it's a hard process of writing, right? There's gonna be suffering. I mean, I won't kid you. Well, the work is suffering, right? Like doing the work, like even when you're every week, you're like, okay, that blog post wasn't very good or people didn't like it or people said disparaging things about it. You have to like have the attitude like, you know, no matter what happens, I wanna do this for me, right? It's not about you, it's about me. I mean, in the end, it is about everyone because this is how good work gets out into the world. But you have to be pretty strict about saying like, you know, I'm selfish in the sense that I have to do this for me. You know, you mentioned Stephen King, like his book on writing. But like one of the things I do, for example, when writing is like, I read it out loud. One of the best pieces of advice for writing anything is read it out loud, like multiple times and make it sound like you're talking because that is the goal of good writing. It should sound like you said it with slightly better phrasing because you have more time to think about what you're saying but like, it should sound natural when you say it. And I think that's probably the single best writing advice I can give anyone. Just read it over and over out loud, make sure it sounds like something you would normally say and it sounds good. And what's your process of writing? See, there's usually a pretty good idea behind the blog post. So ideas, right. So I think you gotta have the concept that there's so many interesting things in the world. Like, I mean, my God, the world is amazing, right? Like you can never write about everything that's going on because it's so incredible. But if you can't come up with like, let's say one interesting thing per day to talk about, then you're not trying hard enough because the world is full of just super interesting stuff. And one great way to like mine stuff is go back to old books cause they bring up old stuff that's still super relevant. And I did that a lot cause I was like reading classic programming books and a lot of the early blog posts were like, oh, I was reading this programming book and they brought this really cool concept and I wanna talk about it some more. And you get the, I mean, you're not claiming credit for the idea but it gives you something interesting to talk about that's kind of evergreen, right? Like you don't have to go, what should I talk about? So we'll just go dig up some old classic programming books and find something that, oh, wow, that's interesting. Or how does that apply today? Or what about X and Y or compare these two concepts. So pull a couple of sentences from that book and then sort of play off of it, almost agree or disagree. 
So in 2007, you wrote that you were offered a significant amount of money to sell the blog. You chose not to. What were all the elements you were thinking about? Cause I'd like to take you back. It seems like there's a lot of nonlinear decisions you made through life. So what was that decision like? Right, so one of the things I love is the Choose Your Own Adventure books, which I loved as a kid and I feel like they're early programmer books cause they're all about if then statements, right? If this, then this. And they're also very, very unforgiving. Like there's all these sites that map the classic Choose Your Own Adventure books and how many outcomes are bad, a lot of bad outcomes. So part of the game is like, oh, I got a bad outcome. Go back one step, go back one further step. It's like, how did I get here, right? Like it's a sequence of decisions. And this is true of life, right? Like every decision is a sequence, right? Individually, any individual decision is not necessarily right or wrong, but they lead you down a path, right? So I do think there's some truth to that. So this particular decision, the blog had gotten fairly popular. There were a lot of RSS readers, I had discovered. And this guy contacted me out of the blue from this like bug tracking company. He's like, oh, I really wanna buy your blog for like, I think it was around, it was $100,000, it might have been like 80,000, but it was a lot, right? Like, and that's, you know, at the time, that was like a year's worth of salary all at once. So I didn't really think about like, well, you know, and I remember talking to people at the time, I was like, wow, that's a lot of money. But then I'm like, I really like my blog, right? Like, do I wanna sell my blog? Cause it wouldn't really belong to me anymore at that point. And one of the guidelines that I like to, I don't like to give advice to people a lot, but one of the pieces of advice I do give, cause I do think it's really true and it's generally helpful is whenever you're looking at a set of decisions, like, oh gosh, should I do A, B or C, you gotta pick the thing that's a little scarier in that list because not, you know, not like jump off a cliff scary, but the thing that makes you nervous. Cause if you pick the safe choice, it's usually, you're not really pushing. You're not pushing yourself. You're not choosing the thing that's gonna help you grow. So for me, the scarier choice was to say no. I was like, well, no, let's just see where this is going. Right? Because then I own it. I mean, it belongs to me. It's my thing. And I can just take it to some other logical conclusion, right? Because imagine how different the world would have been had I said yes and sold the blog. It's like, there probably wouldn't be Stack Overflow. You know, a lot of other stuff would have changed. So for that particular decision, I think it was that same rule. Like what scares me a little bit more. Do the thing that scares you. Yeah. So speaking of which, startups. I think there's a specific, some more general questions that a lot of people would be interested in. You've started Stack Overflow. You started Discourse. So what's the, it was one, two, three guys, whatever it is in the beginning. What was that process like? Do you start talking about it? Do you start programming? Do you start, like, where's the birth and the catalyst that actually. Well, I can talk about it in the context of both Stack Overflow and Discourse. So I think the key thing initially is there is a problem. 
Something, there's some state of the world that's unsatisfactory to the point that, like, you're upset about it, right? Like, in that case, it was Experts Exchange. I mean, Joel's original idea, because I approached Joel as like, look, Joel, I have all this energy behind my blog. I want to do something. I want to build something. But I don't know what it is, because I'm honestly not a good idea person. I'm really not. I'm like the execution guy. I'm really good at execution, but I'm not good at, like, blue skying ideas. Not my forte. Which is another reason why I like the community feedback, because they blue sky all day long for you, right? So when I can just go in and cherry pick a blue sky idea from the community, even if I have to spend three hours reading to get one good idea, it's worth it, man. But anyway, so the idea from Joel was, hey, Experts Exchange, it's got great data, but the experience is hideous, right? It's trying to trick you. It feels like a used car salesman. It's just bad. So I was like, oh, that's awesome. It feeds into community. It feeds into, like, you know, we can make it Creative Commons. So I think the core is to have a really good idea that you feel very strongly about in the beginning, that, like, there's a wrong in the world, an injustice that we will right through the process of building this thing. For Discourse, it was like, look, there's no good software for communities to just hang out and, like, do stuff, right? Like, whether it's problem solving, startup, whatever. Forums are such a great building block of online community, and they're hideous. They were so bad, right? It was embarrassing. Like, I literally was embarrassed to be associated with this software, right? I was like, we have to have software that you can be proud of. It's like, this is competitive with Reddit. This is competitive with Twitter. This is competitive with Facebook, right? I would be proud to have the software on my site. So that was the genesis of Discourse, was feeling very strongly about there needs to be a good solution for communities. So that's step one. Genesis of an idea you feel super strongly about, right? And then people galvanize around the idea. Like, Joel was already super excited about the idea. I was excited about the idea. So with the forum software, I was posting on Twitter. I had researched, as part of my research, I started researching the problem, right? And I found a game called Forum Wars, which was a parody of forum behavior, circa, I would say, 2003. It's still very, very funny. It's aged some, right? Like, the behavior's a little different in the era of Twitter. But it was awesome. It was very funny. And it was like a game. It was like an RPG. And it had a forum attached to it. So it was like a game about forums with a forum attached. I was like, this is awesome, right? This is so cool. And the founder of that company, or that project, it wasn't really a company, contacted me, this guy Robin Ward from Toronto. He said, hey, I saw you've been talking about forums. And I really love that problem space. He was like, I'd still love to build really good forum software, because I don't think anything out there's any good. And I was like, awesome. At that point, I was like, we're starting a company. Because I couldn't have wished for a better person to walk through the door and say, I'm excited about this, too. Same thing with Joel, right? I mean, Joel is a legend in the industry, right? 
So when he walked through and said, I'm excited about this problem, I was like, me too, man. We can do this, right? So that, to me, is the most important step. It's like, having an idea you're super excited about, and another person, a cofounder, right? Because again, you get that dual leadership, right? Am I making a bad decision? Sometimes it's nice to have checks of like, is this a good idea? I don't know, right? So those are the crucial seeds. But then starting to build stuff, whether it's you programming or somebody else. There is prototyping. So there's tons of research. There's tons of research, like, what's out there that failed? Because a lot of people look at the successes. Oh, look at how successful X is. Everybody looks at the successes. Those are boring. Show me the failures, because that is what's interesting. That's where people were experimenting. That's where people were pushing. And they failed, but they probably failed for reasons that weren't directly about the quality of their idea, right? So look at all the failures. Don't just look at what everybody looks at, which is like, oh, gosh, look at all these successful people. Look at the failures. Look at the things that didn't work. Research the entire field. And so that's the research that I was doing that led me to Robin, right? And then when we, for example, when we did Stack Overflow, we're like, okay, well, I really like elements of voting in Digg and Reddit. I like the Wikipedia idea that everything's up to date. Nothing is like an old tombstone that has horrible out of date information. We know that works. Wikipedia is an amazing resource. Blogging, the idea of ownership is so powerful, right? Like, oh, I, Joe wrote this, and look how good Joe's answer is, right? All these concepts were rolling together. Researching all the things that were out there that were working and why they were working and trying to fold them into, again, that Frankenstein's monster of what Stack Overflow is. And by the way, that wasn't a free decision because there's still a ton of tension in the Stack Overflow system. There's reasons people complain about Stack Overflow because it's so strict, right? Why is it so strict? Why are you guys always closing my questions? It's because there's so much tension that we built into the system around trying to get good, good results out of the system. And it's not free. That stuff doesn't come for free, right? It's not like we, we all have perfect answers and nobody will have to get their feelings hurt or nobody will have to get downvoted. It doesn't work that way, right? So this is an interesting point and a small tangent. You've written about anxiety. So I've posted a lot of questions and written answers on Stack Overflow. On the question side, you usually go to something very specific to something I'm working on. And this is something you talk about, that really the goal of Stack Overflow is to write a question that's not about you, it's about the question that will help the community in the future. Right, but that's a tough sell, right? Because people are like, well, I don't really care about the community. What I care about is my problem. And that's fair, right? It's sort of that, again, that tension, that balancing act of we wanna help you, but we also wanna help everybody that comes behind you. The long line of people are gonna come up and say, oh, I kinda have that problem too, right? 
And if nobody's ever gonna come up and say, I have this problem too, then that question shouldn't exist on Stack Overflow because the question is too specific. And even that's tension, right? How do you judge that? How do you know that nobody's ever gonna have this particular question again? So there's a lot of tension in the system. Do you think that anxiety of asking the question, the anxiety of answering, that tension is inherent to programmers, is inherent to this kind of process? Or can it be improved? Can it be happy land where that tension is not quite so harsh? I don't think Stack Overflow can totally change the way it works. One thing they are working on finally is the ask page had not changed since 2011. I'm still kind of bitter about this because I feel like you have a Q&A system and what are the core pages in a Q&A system? Well, first of all, the question, all the answers and also the ask page, particularly when you're a new user or someone trying to ask a question, that's the point at which you need the most help. And we just didn't adapt with the times. But the good news is they're working on this, from what I understand, and it's gonna be a more wizard based format. And you could envision a world where as part of this wizard based program, when you're asking questions, okay, come up with a good title, what are good words to put in the title? One word that's not good to put in the title is problem, for example. I have a problem. Oh, you have a problem. Okay, a problem, that's great. You need specifics. So it's trying to help you make a good question title, for example, that step will be broken out, all that stuff. But one of those steps in that wizard of asking could say, hey, I'm a little nervous. I've never done this before. Can you put me in a queue for special mentoring? You could opt in to a special mentor. I think that would be fantastic. I don't have any objection to that at all in terms of being an opt in system. Because there are people that are like, I just wanna help them. I wanna help a person no matter what. I wanna go above and beyond. I wanna spend hours with this person. It depends what their goals are. It's a great idea. Who am I to judge? So that's fine. It's not precluded from happening. But there's a certain big city ethos that we started with. Like, look, we're in New York City. You don't come to New York City and expect them to be, oh, welcome to the city, Joe. How's it going? Come on in. Let me show you around. That's not how New York City works. Again, New York City has a reputation for being rude, which I actually don't think it is, having been there fairly recently. It's not rude. It's just like going about their business. Like, look, I have things to do. I'm busy. I'm a busy professional, as are you. And since you're a busy professional, certainly when you ask a question, you're gonna ask the best possible question. Because you're a busy professional and you would not accept anything less than a very well written question with a lot of detail about why you're doing it, what you're doing, what you researched, what you found, because you're a professional like me. And this rubs people sometimes the wrong way. And I don't think it's wrong to say, look, I don't want that experience. I want just a more chill place for beginners. And I still think Stack Overflow is not, was never designed for beginners, right? There's this misconception that, even Joel says sometimes, oh yeah, Stack Overflow for beginners. And I think if you're a prodigy, it can be. 
Right. But for the most part, not. But that's not really representative, right? Like, I think as a beginner, you want a totally different set of tools. You want like live screen sharing, live chat. You want access to resources. You want a playground, like a playground you can experiment in and like test and all this stuff that we just don't give people because that was never really the audience that we were designing Stack Overflow for. That doesn't mean it's wrong. And I think it would be awesome if there was a site like that on the internet, or if Stack Overflow said, hey, you know, we're gonna start doing this. That's fine too. You know, I'm not there. I'm not making those decisions. But I do think the pressure, the tension that you described is there for people to be, look, I'm a little nervous because I know I gotta do my best work, right? The other one is something you talk about, which is also really interesting to me, is duplicate questions or it's a really difficult problem that you highlight. It's super hard. Like you could take one little topic and you could probably write 10, 20, 30 ways of asking about that topic and there will be all different. I don't know if there should be one page that answers all of it. Is there a way that Stack Overflow can help disambiguate, like separate these duplicate questions or connect them together? Or is it a totally hopeless, difficult, impossible task? I think it's a very, very hard computer science problem. And partly because people are very good at using completely different words. It always amazed me on Stack Overflow. You'd have two questions that were functionally identical and one question had like zero words in common with the other question. Like, oh my God, from a computer science perspective, how do you even begin to solve that? And it happens all the time. People are super good at this, right? Accidentally at asking the same thing in like 10, 20 different ways. And the other complexity is we want some of those duplicates to exist because if there's five versions with different words, have those five versions point to the one centralized answer, right? It's like, okay, this is a duplicate, no worries. Here's the answer that you wanted over here on the prime example that we want to have, rather than having 10 copies of the question and the answer. Because if you have 10 copies of the question and answer, this also devalues the reputation system, which programmers hate, as I previously mentioned. You're getting reputation for an answer that somebody else already gave. It's like, well, it's an answer, but somebody else already gave that answer. So why are you getting reputation for the same answer as the other guy who gave it four years ago? People get offended by that, right? So the reputation system itself adds tension to the system in that the people who have a lot of reputation become very incentivized to enforce the reputation system. And for the most part, that's good. I know it sounds weird, but for most parts, like look, strict systems, I think to use Stack Overflow, you have to have the idea that, OK, strict systems ultimately work better. And I do think in programming, you're familiar with loose typing versus strict typing, right? The idea that you can declare a variable, not declare a variable, rather, just start using a variable. And OK, I see it's implicitly an integer. Bam, awesome. Duck equals 5. Well, duck is now an integer of 5, right? And you're like, cool, awesome, simpler, right? 
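To put the 'duck equals 5' example in actual Python terms, here is a minimal sketch; the typo and the reference to a static checker such as mypy are illustrative additions, not something from the conversation.

duck = 5              # loose typing: no declaration, duck is implicitly an int
total = duck * 2      # convenient, nothing to annotate or compile

dcuk = duck + 1       # a typo: instead of an error, this silently creates a
                      # brand-new variable and duck itself never changes

# The strict-typing counterpart: annotate the names you care about and run a
# static checker (for example mypy) before the program ever executes.
count: int = duck     # fine, an int assigned to an int
label: str = duck     # still runs, but the checker flags it: int is not str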
Why would I want to worry about typing? And for a long time, in the Ruby community, they're like, yeah, this is awesome. You just do a bunch of unit testing, which is testing your program's validity after the fact to catch any bugs that strict typing of variables would have caught. And now you have this thing called TypeScript from Microsoft, from Anders, the guy who built C Sharp, who's one of the greatest minds in software development, right, like in terms of language design. And he says, no, no, no, we want to bolt on a strict type system to JavaScript because it makes things better. And now everybody's like, oh my god, we deployed TypeScript and found 50 latent bugs that we didn't know about, right? Like, this is super common. So I think there is a truth in programming that strictness, it's not the goal. We're not saying be super strict because strictness is correct. No, no. Strictness produces better results. That's what I'm saying, right? So strict typing of variables, I would say there's almost universal consensus now, is basically correct. Should be that way in every language, right? Duck equals five should generate an error because no, you didn't declare. You didn't tell me that duck was an integer, right? That's a bug, right? Or maybe you mistyped. You typed deck instead of duck, right? You never know. This happens all the time, right? So with that in mind, I will say that the strictness of the system is correct. Now, that doesn't mean cruel. That doesn't mean mean. That doesn't mean angry. It just means strict, OK? So I think where there's misunderstanding is people get cranky, right? Like, another question you asked is, why are programmers kind of mean sometimes? Well, who do programmers work with all day long? So I have a theory that if you're at a job and you work with assholes all day long, what do you eventually become? An asshole. An asshole. And what is the computer except the world's biggest asshole? Because the computer has no time for your bullshit. The computer, the minute you make a mistake, everything is crashing down, right? One semicolon has crashed space missions, right? So that's normal. So you begin to internalize that. You begin to think, oh, my coworker, the computer, is super strict and kind of a jerk about everything. So that's kind of how I'm going to be. Because I work with this computer, and I have to accede to its terms on everything. So therefore, you start to absorb that. You start to think, oh, well, being really strict arbitrarily is really good. An error that just says error code 56249 is a completely good error message because that's what the computer gave me, right? So you kind of forget to be a person at some level. And you know how they say great detectives internalize criminals and kind of are criminals themselves, like this trope of the master detective who is good because he can think like the criminal. Well, I do think that's true of programmers. Really good programmers think like the computer because that's their job. But if you internalize it too much, you become the computer. You kind of become a jerk to everybody because that's what you've internalized. You're almost not a jerk, but you have no patience for a lack of strictness, as you said. It's not out of a sense of meanness. It's accidental. But I do believe it's an occupational hazard of being a programmer that you start to behave like the computer. You're very unforgiving. You're very terse. You're very, oh, wrong, incorrect, move on. It's like, well, can you help me?
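To make the duck versus deck point above concrete, here is a minimal TypeScript-flavored sketch. The variable names are just the ones Jeff uses in his example, not code from any real project; it simply shows the misspelling becoming a compile-time error instead of a silently created second variable.

```typescript
// Loose, dynamic-style thinking: assigning to an undeclared name just
// creates it, so a typo quietly becomes a brand new variable and the bug ships.
//   duck = 5;          // intended
//   deck = duck + 1;   // oops: "deck" is now its own variable, duck never changes

// Strict, TypeScript-style thinking: declare the variable and its type,
// and the compiler refuses to let the typo through.
let duck: number = 5;

// deck = duck + 1;
// ^ compile-time error: Cannot find name 'deck'.

duck = duck + 1;   // fine: the name exists and the types line up
console.log(duck); // 6
```

This is the "we deployed TypeScript and found 50 latent bugs" effect in miniature: the strictness is not the goal in itself, it just produces better results.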
What could I do to fix? No, wrong, next question. Like, that's normal for the computer. Just fail, next. I don't know if you remember in Saturday Night Live, in the 90s, they had this character who was an IT guy. The move guy. Move. Move. Was that Jimmy Fallon? No. No. Who played him? OK, yeah, I remember. Move. Right. He had no patience for it. Might have been Mad TV, actually. Wasn't it Mad TV? Might have been. But anyway, that's always been the perception. You start to behave like the computer. It's like, oh, you're wrong, out of the way, you know? You've written so many blog posts about programming, about programs, programming, programmers. What do you think makes a good, let's start with, what makes a good solo programmer? Well, I don't think you should be a solo programmer. I think to be a good solo programmer, it's kind of like what I talked about, well, not on mic, but one of the things about John Carmack, one of the best points he makes in the book Masters of Doom, which is a fantastic book, and anybody listening to this who hasn't read it, please read it. It's such a great book, is that at the time, they were working on stuff like Wolfenstein and Doom. They didn't have the resources that we have today. They didn't have Stack Overflow. They didn't have Wikipedia. They didn't have discourse forums. They didn't have places to go to get people to help them. They had to work on their own. And that's why it took a genius like Carmack to do this stuff, because you had to be a genius to invent from first principles. A lot of the stuff, the hacks he was coming up with, were genius, genius level stuff. But you don't need to be a genius anymore, and that means not working by yourself. You have to be good at researching stuff online. You have to be good at asking questions, really good questions that are really well researched, which implies, oh, I went out and researched for three hours before I wrote these questions. That's what you should be doing, because that's what's going to make you good. To me, this is the big difference between programming in the 80s versus programming today, is you kind of had to be by yourself back then. Where would you go for answers? I remember in the early days when I was learning Visual Basic for Windows, I would call the Microsoft Helpline on the phone when I had programming questions. Because I was like, I don't know what to do. So I would go and call, and they had these huge phone banks. And I'm like, can you imagine how alien that is now? Who would do that? That's crazy. So there was just nowhere else to go when you got stuck. I had the books that came with it. I read those, studied those religiously. I just saw a post from Steven Sinofsky that said C++ version 7 came with 10,000 pages of written material. Because where else were you going to figure that stuff out? Go to the library? I mean, you didn't have Wikipedia. You didn't have Reddit. You didn't have anywhere to go to answer these questions. So you've talked about, through the years, basically not having an ego and not thinking that you're the best programmer in the world. So always kind of just looking to improve, to become a better programmer than you were yesterday. So how have you changed as a programmer and as a thinker, designer around programming over the past, what is it, 15 years, really, of being a public figure? I would say the big insight that I had is, eventually, as a programmer, you have to stop writing code to be effective, which is kind of disturbing. Because you really love it.
But you realize being effective at programming, at programming in the general sense, doesn't mean writing code. And a lot of times, you can be much more successful by not writing code, in terms of just solving the problems you have, essentially hiring people that are really good and setting them free and giving them basic direction on strategy and stuff. Because a lot of the problems you encounter aren't necessarily solved through really gnarly code. They're solved by conceptual solutions, which can then be turned into code. But are you even solving the right problem? So I would say, for me, the main insight I have is, to succeed as a programmer, you eventually kind of stop writing code. That's going to sound discouraging, probably, to people hearing this. But I don't mean it that way. What I mean is that you're coding at a higher level language. Eventually, like, OK, so we're coding in assembly language. That's the beginning, right? You're hardcoded to the architecture. Then you have stuff like C, where it's like, wow, we can abstract across the architecture. We can write code. I can then compile that code for ARM or x86 or whatever else is out there. And then even higher level than that, you're looking at Python, Ruby, interpreted languages. And then, to me, as a programmer, I'm like, OK, I want to go even higher. I want to go higher than that. How do I abstract higher than the language? It's like, well, you abstract in spoken language and written language, right? You're sort of inspiring people to get things done, giving them guidance, like, what if we did this? What if we did this? You're writing in the highest level language that there is, which is, for me, English, whatever your spoken language is. So it's all about being effective, right? And I think Patrick McKenzie, patio11 on Hacker News, who works at Stripe, has a great post about this, of how calling yourself a programmer is a career limiting move at some level once you get far enough along in your career. And I really believe that. And again, I apologize. This is going to sound discouraging. I don't mean it to be, but he's so right. Because all the stuff that goes on around the code, like the people, that's another thing, if you look at my early blog entries, is about, wow, programming is about people more than it's about code, which doesn't really make sense. But it's about, can these people even get along together? Can they understand each other? Can you even explain to me what it is you're working on? Are you solving the right problem? Peopleware, another classic programming book, which, again, up there with Code Complete, please read Peopleware. It's that software is people. People are the software, first and foremost. So a lot of the skills that I was working on early in the blog were about figuring out the people parts of programming, which were the harder parts. The hard part of programming, once you get a certain skill level in programming, you can pretty much solve any reasonable problem that's put in front of you. You're not writing algorithms from scratch. That just doesn't happen. So any sort of reasonable problem put in front of you, you're going to be able to solve. But what you can't solve is, our manager is a total jerk. You cannot solve that with code. That is not a code solvable problem. And yet, that will cripple you way more than, oh, we had to use this stupid framework I don't like, or Sam keeps writing bad code that I hate, or Dave is off there in the wilderness writing God knows what.
These are not your problems. Your problem is your manager or a co worker is so toxic to everybody else in your team that nobody can get anything done, because everybody's so stressed out and freaked out. These are the problems that you have to attack. Absolutely. And so as you go to these higher level abstractions, as you've developed as a programmer to higher and higher level abstractions and go into natural language, you're also the guy who preached building it, diving in and doing it, and learn by doing. Yes. Do you worry that as you get to higher and higher level abstractions, you lose track of the lower level of just building? Do you worry about that, even not maybe now, but 10 years from now, 20 years from now? Well, no. I mean, there is always that paranoia around, oh, gosh, I don't feel it's valuable since I'm not writing code. But for me, when we started the discourse project, it was Ruby, which I didn't really know Ruby. I mean, as you pointed out, and this is another valuable observation in Stack Overflow, you can be super proficient in, for example, C Sharp, which I was working in. That's what we built Stack Overflow in and still is written in. And then switch to Ruby, and you're a newbie again. But you have the framework. I know what a for loop is. I know what recursion is. I know what a stack trace is. I have all the fundamental concepts to be a programmer. I just don't know Ruby. So I'm still on a higher level. I'm not like a beginner beginner, like you're saying. I'm just like, I need to apply my programming concepts I already know to Ruby. Well, so there's a question that's really interesting. So looking at Ruby, how do you go about learning enough that your intuition can be applied, carried over? That's what I was trying to get to. It's like what I realized, particularly when I started with just me and Robin, I realized if I bother Robin, I am now costing us productivity. Every time I go to Robin, rather than building our first alpha version of discourse, he's now answering my stupid questions about Ruby. Is that a good use of his time? Is that a good use of my time? And the answer to both of those was resoundingly no. We were getting to an alpha, and it was pretty much just, OK, we'll hire more programmers. We eventually hired Neil, and then eventually Sam, who came in as a cofounder. Actually, it was Sam first, then Neil later. But the answer to the problem is just hire other competent programmers. Now I shall pull myself up by my bootstraps and learn Ruby. But at some point, writing code becomes a liability to you in terms of getting things done. There's so many other things that go on in the project, like building the prototype. You mentioned, well, how do you, if you're not writing code, how does everybody keep focus on what are we building? Well, first, basic mockups and research. What do we even want to build? There's a little bit of that that goes on. But then very quickly, you get to the prototype stage. Like, build a prototype. Let's iterate on the prototype really, really rapidly. And that's what we do with discourse. And that's what we demoed to get our seed funding for discourse was the alpha version of discourse that we had running and ready to go. And it was very, it was bad. I mean, it was, I'll just tell you it was bad. We have screenshots of it. I'm just embarrassed to look at it now. But it was the prototype. We were figuring out what's working, what's not working. 
Because there's such a broad gap between the way you think things will work in your mind or even on paper and the way they work once you sit and live in the software, like actually spend time living and breathing in software, so different. So my philosophy is get to a prototype. And then what you're really optimizing for is speed of iteration, like how you can turn the crank. How quickly can we iterate? That's the absolutely critical metric of any software project. And I had a tweet recently that people liked. And I totally, this is so fundamental to what I do, is like if you want to measure the core competency of any software tech company, it's the speed at which somebody can say, hey, we really need this word in the product. Change this word, right? Because it will be more clear to the user. Like, instead of respond, it's reply or something. But there's some, from the conception of that idea to how quickly that single word can be changed in your software and rolled out to users, that is your life cycle. That's your health, your heartbeat. If your heartbeat is like super slow, you're basically dead. No, seriously. Like, if it takes two weeks or even a month to get that single word changed, everybody's like, oh my god, this is a great idea. That word is so much clearer. I'm talking about like a super, like everybody's on board for this change. It's not like, let's just change a word because we're bored. It's like, this is an awesome change. And then it takes months to roll out. It's like, well, you're dead. You can't iterate. You can't, how are you going to do anything, right? So anyway, about the heartbeat, it's like, get the prototype and then iterate on it. That's what I view as the central tenet of modern software development. That's fascinating that you put it that way. So I work and I build autonomous vehicles. And when you look at what, maybe compare Tesla to most other automakers, the heart beat for Tesla is literally days now in terms of they can over the air deploy software updates to all their vehicles, which is markedly different than every other automaker, which takes years to update a piece of software. And that's reflected in everything that's the final product. That's reflected in really how slowly they adapt to the times. And to be clear, I'm not saying being a hummingbird is the goal either. It's like, you don't want a heartbeat that's like so fast. It's like you're just freaking out. But it is a measure of health. You should have a healthy heartbeat. It's up to people listening to decide what that means. But it has to be healthy. It has to be reasonable. Because otherwise, you're just going to be frustrated because that's how you build software. You make mistakes. You roll it out. You live with it. You see what it feels like and say, oh, God, that was a terrible idea. Oh, my gosh, this could be even better if we did Y, right? You turn the crank. And then the more you do that, the faster you get ahead of your competitors ultimately. It's rate of change, right? Delta V, right? How fast are you moving? Well, within a year, you're going to be miles away by the time they catch up with you, right? That's the way it works. And plus, as a software developer and user, I love software that's constantly changing. Because I don't understand people who get super pissed off when like, oh, they changed the software on me. How dare they? I'm like, yes, change the software. Change it all the time, man. 
That's what makes this stuff great is that it can be changed so rapidly and become something that is greater than it is now. Now, granted, there are some changes that suck. I admit. I've seen it many times. But in general, that's what makes software cool, right? It's that it is so malleable. Fighting that is weird to me. Because it's like, well, you're fighting the essence of the thing that you're building. That doesn't make sense. You want to really embrace that. Not to be a hummingbird, but embrace it to a healthy cycle of your heartbeat, right? So you talk about how people really don't change. It's true. That's why probably a lot of the stuff you write about in your blog probably will remain true. Well, there's a flip side of the coin. People don't change. Like, investing in understanding people is like learning Unix in 1970. Because nothing has changed, right? All those things you've learned about people will still be valid 30, 40 years from now. Whereas if you learn the latest JavaScript framework, that's going to be good for like two years, right? Exactly. But if you look at the future of programming, so there's a people component, but there's also the technology itself. What do you see as the future of programming? Will it change significantly, or as far as you can tell, people are ultimately programming, and so it's not something that you foresee changing in any fundamental way? Well, you've got to go look back on sort of the basics of programming. And one of the things that always shocked me is like source control. Like, I didn't learn anything about source control. Granted, I graduated from college in 1992. But I remember hearing from people as late as like 1998, 1999, like even maybe today, they're not learning source control. And to me, it's like, well, how can you not learn source control? That is so fundamental to working with other programmers, working in a way that you don't lose your work. Just basic software, the literal bedrock of software development is source control. Now, you compare it today, like GitHub, right? Like Microsoft bought GitHub, which I think was an incredibly smart acquisition move on their part. Now, anybody who wants reasonable source control can go sign up on GitHub. It's all set up for you, right? There's tons of walkthroughs, tons of tutorials. So from the concept of like, has programming advanced from, say, 1999, it's like, well, hell, we have GitHub. I mean, my god, yes, right? Like, it's massively advanced over what it was. Now, as to whether programming is significantly different, I'm going to say no. But I think the baseline of what we view as fundamentals will continue to go up and actually get better, like source control. That's one of the fundamentals that has gotten hundreds of orders of magnitude better than it was 10, 20 years ago. So those are the fundamentals. Let me introduce two things that maybe you can comment on. So one is mobile phones. So that could fundamentally transform what programming is, or maybe not. Maybe you can comment on that. And the other one is artificial intelligence, which promises, in some ways, to do some of the programming for you, is one way to think about it. So really, what a programmer is, is using the intelligence that's inside your skull to do something useful. The hope with artificial intelligence is that it does some of the useful parts for you where you don't have to think about it.
So do you see smartphones, the fact that everybody has one, and they're getting more and more powerful as potentially changing programming? And do you see AI as potentially changing programming? OK, so that's good. So smartphones have definitely changed. I mean, since, I guess, 2010 is when they really started getting super popular. I mean, in the last eight years, the world has literally changed, right? Everybody carries a computer around, and that's normal. I mean, that is such a huge change in society. I think we're still dealing with a lot of the positive and negative ramifications of that, right? Everybody's connected all the time. Everybody's on the computer all the time. That was my dream world as a geek, right? But it's like, be careful what you ask for, right? Like, wow, now everybody has a computer. It's not quite the utopia that we thought it would be, right? Computers can be used for a lot of stuff that's not necessarily great. So to me, that's the central focus of the smartphone, is just that it puts a computer in front of everyone. Granted, a smallish touch screen computer. But as for programming, I don't know, I don't think so. I've kind of, over time, come to subscribe to the Unix view of the world when it comes to programming. You want to teach these basic command line things, and that is just what programming is going to be for, I think, a long, long time. I don't think there's any magical visual programming that's going to happen. I don't know. I've, over time, become a believer in that Unix philosophy of just, you know, they kind of had it right with Unix. That's going to be the way it is for a long, long time. And we'll continue to, like I said, raise the baseline. The tools will get better. It'll get simpler. But it's still fundamentally going to be command line tools, fancy IDEs. That's kind of it for the foreseeable future. I'm not seeing any visual programming stuff on the horizon. Because you kind of think, like, what do you do on a smartphone that will be directly analogous to programming? Like, I'm trying to think, right? And there's really not much. So not necessarily analogous to programming, but the kind of things that, the kind of programs you would need to write might need to be very different. Yeah. And the kind of languages. I mean, but I probably also subscribe to the same view, just because everything in this world might be written in JavaScript. Oh, yeah. That's already happening. I mean, discourse is a bet. Discourse itself, JavaScript, is another bet on that side of the table. And I still try and believe in that. So I would say smartphones are mostly a cultural shift more than a programming shift. Now, your other question was about artificial intelligence and sort of devices predicting what you're going to do. And I do think there's some strength to that. I think artificial intelligence is kind of overselling it in terms of what it's doing. It's more like, people are predictable, right? People do the same things. Let me give you an example. One check we put in discourse, that's in a lot of big commercial websites, is, say you log in from New York City now. And then an hour later, you log in from San Francisco. It's like, well, hmm, that's interesting. How did you get from New York to San Francisco in one hour? So at that point, you're like, OK, this is a suspicious login. So we would alert you. It's like, OK. But that's not AI, right?
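A minimal sketch of the kind of impossible-travel check Jeff describes here, written in TypeScript. The distance math, the speed threshold, and all of the names are illustrative assumptions, not Discourse's actual implementation.

```typescript
interface LoginEvent {
  lat: number;  // degrees
  lon: number;  // degrees
  time: number; // Unix epoch, milliseconds
}

// Great-circle distance between two points, in kilometers (haversine formula).
function distanceKm(a: LoginEvent, b: LoginEvent): number {
  const R = 6371; // mean Earth radius in km
  const rad = (d: number) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLon = rad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Flag the login if the implied travel speed is faster than a plane.
function isSuspiciousLogin(prev: LoginEvent, next: LoginEvent): boolean {
  const hours = (next.time - prev.time) / 3_600_000;
  if (hours <= 0) return true; // clock went backwards: treat as suspicious
  const speedKmh = distanceKm(prev, next) / hours;
  return speedKmh > 900; // roughly a commercial jet's cruising speed
}

// Example: New York at noon, San Francisco an hour later -> flagged.
const ny: LoginEvent = { lat: 40.71, lon: -74.01, time: Date.parse("2019-01-01T12:00:00Z") };
const sf: LoginEvent = { lat: 37.77, lon: -122.42, time: Date.parse("2019-01-01T13:00:00Z") };
console.log(isSuspiciousLogin(ny, sf)); // true
```

As he says next, this is a heuristic, not AI: it rests entirely on the fact that people rarely cover 2,000 miles in an hour.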
That's just a heuristic of like, how did you, in one hour, get 2,000 miles, right? That doesn't prove anything. I mean, granted, maybe you're on a VPN. There's other ways for that to happen. That's just a basic prediction based on the idea that people pretty much don't move around that much. They may travel occasionally. But nobody, unless you're a traveling salesman that's literally traveling the world every day, there's so much repetition and predictability in terms of things you're going to do. And I think good software anticipates your needs. For example, Google, I think it's called Google Now or whatever that Google thing is that predicts your commute and predicts, based on your phone location, where you are every day? Well, that's probably where you work, that kind of stuff. I do think computers can get a lot better at that. I hesitate to call it full blown AI. It's just computers getting better at like, first of all, they have a ton of data because everybody has a smartphone. Now, all of a sudden, we have all this data that we didn't have before about location, about communication, and feeding that into some basic heuristics and maybe some fancy algorithms that turn it into predictions of anticipating your needs, like a friend would, right? Like, oh, hey, I see you're home. Would you like some dinner, right? Like, let's go get some food, because that's usually what we do at this time of day, right? In the context of actually the act of programming, do you see IDEs improving and making the life of a programmer better? I do think that is possible, because there's a lot of repetition in programming, right? Oh, you know, Clippy would be the bad example of, oh, I see. It looks like you're writing a for loop. But there are patterns in code, right? And actually, libraries are kind of like that, right? Rather than go code up your own HTTP request library, it's like, well, you'd use one of the existing ones that we have. That's already been troubleshot, right? It's not AI, per se. It's just building better LEGO bricks, bigger LEGO bricks, that have more functionality in them, so people don't have to worry about the low level stuff as much anymore. Like, WordPress, for example, to me, is like a tool for somebody who isn't a programmer to do something. I mean, you can turn WordPress into anything. It's kind of crazy, actually, through plugins, right? And that's not programming, per se. It's just LEGO bricks stacking WordPress elements, right? And a little bit of configuration glue. So I would say, maybe in a broader sense, what I'm seeing, like, there'll be more gluing and less actual programming. And that's a good thing, right? Because most of the stuff you need is kind of out there already. You said 1970s, Unix. Do you see PHP and these kinds of old remnants of the early birth of programming remaining with us for a long time? Like you said, Unix itself. Do you see, ultimately, this stuff just being there out of momentum? I kind of do. I mean, I was a big believer in Windows early on. And I was a big, you know, I was like, Unix, what a waste of time. But over time, I've completely flipped on that, where I was like, okay, the Unix guys were right. And pretty much Microsoft and Windows were kind of wrong, at least on the server side. Now, on the desktop, right, you need a GUI, you need all that stuff. And you have the two philosophies, like Apple built on Unix, effectively, Darwin. And on the desktop, it's a slightly different story. But on the server side, where you're gonna be programming.
Now, it's a question of where the programming's gonna be. There's gonna be a lot more like client side programming, because technically, discourse is client side programming. The way you get discourse, we deliver a big ball of JavaScript, which is then executed locally. So we're really using a lot more local computing power. We'll still retrieve the data, obviously, we have to display the posts on the screen and so forth. But in terms of like sorting and a lot of the basic stuff, we're using the host processor. But to the extent that a lot of programming is still gonna be server side, I would say, yeah, the Unix philosophy definitely won. And there'll be different veneers over Unix, but it's still, if you peel away one or two layers, it's gonna be Unixy for a long time. I think Unix won, I mean, so definitively. It's interesting to hear you say that, because you've done so much excellent work on the Microsoft side in terms of backend development. Cool. So what's the future hold for Jeff Atwood? I mean, discourse, continuing the discourse, in trying to improve conversation on the web? Well, discourse is what I've viewed as, originally, a five year project, then really quickly revised that to a 10 year project. So we started in early 2013, that's when we launched the first version. So we're still five years in. This is the part where it starts getting good. Like we have a good product now. Discourse, like any project you build in software, takes three years to build what you want it to build anyway. Like V1 is gonna be terrible, which it was. But you ship it anyway, because that's how you get better at stuff. It's about turning the crank. It's not about V1 being perfect, because that's ridiculous. It's about V1, then let's get really good at V1.1, 1.2, 1.3, like how fast can we iterate? And I think we're iterating like crazy on discourse, to the point that like, it's a really good product now. We have serious momentum. And my original vision was, I wanna be the WordPress of discussion. Meaning someone came to you and said, I wanna start a blog. Although the very question is kind of archaic now. It's like, who actually blogs anymore? But I wanted the answer to that to be, it would be WordPress normally, because that's the obvious choice for blogging most of the time. But if someone said, hey, I need a group of people to get together and do something, the answer should be discourse, right? That should be the default answer for people. Because it's open source, it's free, doesn't cost you anything. You control it, you can run it. Your minimum server cost for discourse is five bucks a month at this point. They actually got the VPS prices down. It used to be $10 a month for one gigabyte of RAM. We have a kind of heavy stack. Like there's a lot of stuff in discourse. You need Postgres, you need Redis, you need Ruby and Rails, you need Sidekiq for scheduling. It's not a trivial amount of stuff because we were architected for like, look, we're building for the next 10 years. I don't care about shared PHP hosting. That's not my model. My idea is like, hey, eventually, this is gonna be very cheap for everybody and I wanna build it right. Using, again, higher, bigger building block levels, right? That have more requirements. And there's a WordPress model of WordPress.org, WordPress.com. Is there a central hosting for discourse or no? There is. We're not strictly segmenting into the open source versus the commercial side. We have a hosting business.
That's how discourse makes money: we host discourse instances and we have a really close relationship with our customers, a symbiosis of them giving us feedback on the product. We definitely weigh feedback from customers a lot more heavily than feedback from somebody who just wanders by and gives feedback. But that's where we make all our money. But we don't have a strict division. We encourage people to use discourse. Like the whole point is that it's free, right? Anybody can set it up. I don't wanna be the only person that hosts discourse. That's absolutely not the goal. But it is a primary way for us to build a business and it's actually kind of a great business. I mean, the business is going really, really well in terms of hosting. So I used to work at Google Research. It's a company that's basically funded on advertisements. So is Facebook. Let me ask if you can comment on it. I think advertisement at its best, so you could be extremely critical of what ads are, but at its best, it's actually serving you. In a sense, it's giving you, it's connecting you to what you would want to explore. So it's like related posts or related content. It's the same, that's the best of advertisement. So discourse is connecting people based on their interests. It seems like a place where advertisement at its best could actually serve the users. Is that something that you're considering, thinking about, as a way to financially support the platform? That's interesting because I actually have a contrarian view of advertising, which is I kind of agree with you. I recently installed an ad blocker reluctantly because I don't like to do that. But the performance of the ads, man, they're so heavy now and it's just crazy. So it's almost like a performance argument more than anything, like, I actually am pro ads and I have a contrarian viewpoint. I agree with you. If you do ads right, it's serving you stuff you would be interested in anyway. I don't mind that, that actually is kind of a good thing. So plus I think it's rational to wanna support the people that are doing this work through seeing their ads. But that said, I run AdBlock now, which I didn't wanna do, but I was convinced by all these articles, like 30, 40 megabytes of stuff just to serve you ads. Yeah, it feels like ads now are like the Experts Exchange of when you started Stack Overflow. It's a little bit, it's overwhelming. Oh, there's so many companies in ad tech that it's embarrassing. Like, have you seen those logo charts that are like just the whole page? It's like you can't even see them, they're so small. There's so many companies in the space. But since you brought it up, I do wanna point out that very, very few discourse sites actually run using an ad supported model. It's not effective. Like it's too diluted, it's too weird, it doesn't pay well, and like users hate it. So it's a combination of like users hate it, it doesn't actually work that well in practice. Like in theory, yes, I agree with you. If you had clean, fast ads that were exactly the stuff you would be interested in, awesome. We're so far from that though, right? Like, and Google does an okay job. They do retargeting and stuff like that, but in the real world, discourse sites rarely can make ads work. It just doesn't work for so many reasons. But you know what does work is subscriptions, Patreon, affiliate codes for like Amazon, of like just, oh, here's a cool yo yo, click, and then you click and go to Amazon, they get a small percentage of that, which is fair, I think.
I mean, because you saw the yo yo on that site and you clicked through and you bought it, right? That's fair for them to get 5% of that or 2% of that, whatever it is. Those things definitely work. In fact, a site that I used to participate on a lot, I helped the owner. One of the things, I got them to switch to discourse. I basically paid them to switch to discourse because I was like, look, you guys got to switch. I can't come here anymore on this terrible software. But I was like, look, and on top of that, like you're serving people ads that they hate. Like you should just go full on Patreon because he had a little bit of Patreon. Go full on Patreon, do the Amazon affiliates thing for any Amazon links that get posted and just do that and just triple down on that stuff. And that's worked really well for them and this creator in particular. So that stuff works, but traditional ads, I mean, definitely not working, at least on discourse. So last question. You've created the code keyboard. I've programmed most of my adult life on a Kinesis keyboard. I have one upstairs now. Can you describe what a mechanical keyboard is and why is it something that makes you happy? Well, you know, this is another fetish item, really. Like, it's not required. You can do programming on any kind of keyboard, even like an onscreen keyboard. Oh, god, that's terrifying. But you could. I mean, if you look back at the early days of computing, there were chiclet keyboards, which are awful. But what's a chiclet keyboard? Oh, god. OK, well, it's just like thin rubber membranes. Oh, the rubber ones, oh, no. Super bad, right? So it's a fetish item. All that really says is, look, I care really about keyboards because the keyboard is the primary method of communication with the computer. So it's just like having a nice mic for this podcast. You want a nice keyboard, right? Because it has a very tactile feel. I can tell exactly when I press the key. I get that little click. So, oh, and it feels good. And it's also kind of a fetish item. It's like, wow, I care enough about programming that I care about the tool, the primary tool, that I use to communicate with the computer, make sure it's as good as it feels good to use for me. And I can be very productive with it. So to be honest, it's a little bit of a fetish item, but a good one. It indicates that you're serious. It indicates you're interested. It indicates that you care about the fundamentals. Because you know what makes you a good programmer? Being able to type really fast, right? This is true, right? So a core skill is just being able to type fast enough to get your ideas out of your head into the code base. So just practicing your typing can make you a better programmer. It is also something that makes you, well, makes you enjoy typing, correct? The actual act, something about the process. Like I play piano. It's tactile. There's a tactile feel that ultimately feeds the passion, makes you happy. Right. No, totally. That's it. I mean, and it's funny because artisanal keyboards have exploded. Like Massdrop has gone ballistic with this stuff. There's probably like 500 keyboard projects on Massdrop alone. And there's some other guy I follow on Twitter. I used to write for the site The Tech Report way back in the day. And he's like, every week he's just posting what I call keyboard porn of just cool keyboards. Like, oh my god, those look really cool, right? It's like, how many keyboards does this guy have, right? It's kind of like me with yo yos. 
How many yo yos do you have? How many do you need? Well, technically one, but I like a lot. I don't know why. So same thing with keyboards. So yeah, they're awesome. Like, I highly recommend anybody that doesn't have a mechanical to research it, look into it, and see what you like. And it's ultimately a fetish item. But I think these sort of items, these religious artifacts that we have, are part of what make us human. Like, that part's important, right? It's kind of what makes life worth living. Yeah. It's not necessary in the strictest sense, but ain't nothing necessary if you think about it, right? Like, so yeah, why not? So sure. Jeff, thank you so much for talking today. Yeah, you're welcome. Thanks for having me.
Jeff Atwood: Stack Overflow and Coding Horror | Lex Fridman Podcast #7
The following is a conversation with Eric Schmidt. He was the CEO of Google for 10 years and a chairman for six more, guiding the company through an incredible period of growth and a series of world changing innovations. He is one of the most impactful leaders in the era of the internet and the powerful voice for the promise of technology in our society. It was truly an honor to speak with him as part of the MIT course on artificial general intelligence and the artificial intelligence podcast. And now here's my conversation with Eric Schmidt. What was the first moment when you fell in love with technology? I grew up in the 1960s as a boy where every boy wanted to be an astronaut and part of the space program. So like everyone else of my age, we would go out to the cow pasture behind my house, which was literally a cow pasture and we would shoot model rockets off. And that I think is the beginning. And of course, generationally today, it would be video games and all the amazing things that you can do online with computers. There's a transformative, inspiring aspect of science and math that maybe rockets would bring would instill in individuals. You've mentioned yesterday that eighth grade math is where the journey through mathematical universe diverges from many people. It's this fork in the roadway. There's a professor of math at Berkeley, Edward Frankel. He, I'm not sure if you're familiar with him. I am. He has written this amazing book I recommend to everybody called Love and Math. Two of my favorite words. He says that if painting was taught like math, then the students would be asked to paint a fence, which is his analogy of essentially how math is taught. And so you never get a chance to discover the beauty of the art of painting or the beauty of the art of math. So how, when, and where did you discover that beauty? I think what happens with people like myself is that your math enabled pretty early and all of a sudden you discover that you can use that to discover new insights. The great scientists will all tell a story, the men and women who are fantastic today, that somewhere when they were in high school or in college, they discovered that they could discover something themselves. And that sense of building something, of having an impact that you own, drives knowledge acquisition and learning. In my case, it was programming. And the notion that I could build things that had not existed that I had built, that it had my name on it. And this was before open source, but you could think of it as open source contributions. So today, if I were a 16 or 17 year old boy, I'm sure that I would aspire as a computer scientist to make a contribution like the open source heroes of the world today. That would be what would be driving me. And I'd be trying and learning and making mistakes and so forth in the ways that it works. The repository that GitHub represents and that open source libraries represent is an enormous bank of knowledge of all of the people who are doing that. And one of the lessons that I learned at Google was that the world is a very big place and there's an awful lot of smart people. And an awful lot of them are underutilized. So here's an opportunity, for example, building parts of programs, building new ideas to contribute to the greater of society. So in that moment in the 70s, the inspiring moment where there was nothing and then you created something through programming, that magical moment. 
So in 1975, I think you've created a program called Lex, which I especially like because my name is Lex. So thank you, thank you for creating a brand that established a reputation that's long lasting, reliable and has a big impact on the world and still used today. So thank you for that. But more seriously, in that time, in the 70s, as an engineer, personal computers were being born. Do you think you'd be able to predict the 80s, 90s and the aughts of where computers would go? I'm sure I could not and would not have gotten it right. I was the beneficiary of the great work of many, many people who saw it clearer than I did. With Lex, I worked with a fellow named Michael Lesk, who was my supervisor. And he essentially helped me architect and deliver a system that's still in use today. After that, I worked at Xerox Palo Alto Research Center, where the Alto was invented. And the Alto is the predecessor of the modern personal computer or Macintosh and so forth. And the Altos were very rare. And I had to drive an hour from Berkeley to go use them. But I made a point of skipping classes and doing whatever it took to have access to this extraordinary achievement. I knew that they were consequential. What I did not understand was scaling. I did not understand what would happen when you had 100 million as opposed to 100. And so the, since then, and I have learned the benefit of scale, I always look for things which are going to scale to platforms, right? So mobile phones, Android, all those things. There are, the world is in numerous, there are many, many people in the world, people really have needs. They really will use these platforms and you can build big businesses on top of them. So it's interesting. So when you see a piece of technology, now you think, what will this technology look like when it's in the hands of a billion people? That's right. So an example would be that the market is so competitive now that if you can't figure out a way for something to have a million users or a billion users, it probably is not going to be successful because something else will become the general platform and your idea will become a lost idea or a specialized service with relatively few users. So it's a path to generality. It's a path to general platform use. It's a path to broad applicability. Now there are plenty of good businesses that are tiny. So luxury goods, for example. But if you want to have an impact at scale, you have to look for things which are of common value, common pricing, common distribution and solve common problems. They're problems that everyone has. And by the way, people have lots of problems. Information, medicine, health, education and so forth. Work on those problems. Like you said, you're a big fan of the middle class. Because there's so many of them. There's so many of them. By definition. So any product, any thing that has a huge impact and improves their lives is a great business decision and it's just good for society. And there's nothing wrong with starting off in the high end as long as you have a plan to get to the middle class. There's nothing wrong with starting with a specialized market in order to learn and to build and to fund things. So you start with a luxury market to build a general purpose market. But if you define yourself as only a narrow market, someone else can come along with a general purpose market that can push you to the corner, can restrict the scale of operation, can force you to be a lesser impact than you might be. 
So it's very important to think in terms of broad businesses and broad impact. Even if you start in a little corner somewhere. So as you look to the 70s but also in the decades to come and you saw computers, did you see them as tools or was there a little element of another entity? I remember a quote saying AI began with our dream to create the gods. Is there a feeling when you wrote that program that you were creating another entity, giving life to something? I wish I could say otherwise, but I simply found the technology platforms so exciting. That's what I was focused on. I think the majority of the people that I've worked with, and there are a few exceptions, Steve Jobs being an example, really saw this as a great technological play. I think relatively few of the technical people understood the scale of its impact. So I used NCP, which is a predecessor to TCPIP. It just made sense to connect things. We didn't think of it in terms of the internet and then companies and then Facebook and then Twitter and then politics and so forth. We never did that build. We didn't have that vision. And I think most people, it's a rare person who can see compounding at scale. Most people can see, if you ask people to predict the future, they'll give you an answer of six to nine months or 12 months, because that's about as far as people can imagine. But there's an old saying, which actually was attributed to a professor at MIT a long time ago, that we overestimate what can be done in one year and we underestimate what can be done in a decade. And there's a great deal of evidence that these core platforms at hardware and software take a decade, right? So think about self driving cars. Self driving cars were thought about in the 90s. There were projects around them. The first DARPA Grand Challenge was roughly 2004. So that's roughly 15 years ago. And today we have self driving cars operating in a city in Arizona, right? It's 15 years and we still have a ways to go before they're more generally available. So you've spoken about the importance, you just talked about predicting into the future. You've spoken about the importance of thinking five years ahead and having a plan for those five years. The way to say it is that almost everybody has a one year plan. Almost no one has a proper five year plan. And the key thing to having a five year plan is to having a model for what's going to happen under the underlying platforms. So here's an example. Moore's Law as we know it, the thing that powered improvements in CPUs has largely halted in its traditional shrinking mechanism because the costs have just gotten so high. It's getting harder and harder. But there's plenty of algorithmic improvements and specialized hardware improvements. So you need to understand the nature of those improvements and where they'll go in order to understand how it will change the platform. In the area of network connectivity, what are the gains that are gonna be possible in wireless? It looks like there's an enormous expansion of wireless connectivity at many different bands. And that we will primarily, historically I've always thought that we were primarily gonna be using fiber, but now it looks like we're gonna be using fiber plus very powerful high bandwidth sort of short distance connectivity to bridge the last mile. That's an amazing achievement. If you know that, then you're gonna build your systems differently. By the way, those networks have different latency properties, right? 
Because they're more symmetric, the algorithms feel faster for that reason. And so when you think about, whether it's fiber or just technologies in general, there's this Barbara Wootton poem or quote that I really like: it's from the champions of the impossible rather than the slaves of the possible that evolution draws its creative force. So in predicting the next five years, I'd like to talk about the impossible and the possible. Well, and again, one of the great things about humanity is that we produce dreamers, right? We literally have people who have a vision and a dream. They are, if you will, disagreeable in the sense that they disagree with what the sort of zeitgeist is. They say there is another way. They have a belief, they have a vision. If you look at science, science is always marked by such people who went against some conventional wisdom, collected the knowledge at the time and assembled it in a way that produced a powerful platform. And you've been amazingly honest, in an inspiring way, about things you've been wrong about predicting, and you've obviously been right about a lot of things, but in this kind of tension, how do you balance, as a company, in predicting the next five years, the impossible, planning for the impossible, so listening to those crazy dreamers, letting them run away and make the impossible real, make it happen, versus, you know, that's how programmers often think, slowing things down and saying, well, this is the rational, this is the possible, the pragmatic. The dreamer versus the pragmatist. So it's helpful to have a model which encourages a predictable revenue stream as well as the ability to do new things. So in Google's case, we're big enough and well enough managed and so forth that we have a pretty good sense of what our revenue will be for the next year or two, at least for a while. And so we have enough cash generation that we can make bets, and indeed, Google has become Alphabet, so the corporation is organized around these bets, and these bets are in areas of fundamental importance to the world, whether it's artificial intelligence, medical technology, self driving cars, connectivity through balloons, on and on and on. And there's more coming and more coming. So one way you could express this is that the current business is successful enough that we have the luxury of making bets. And another one that you could say is that we have the wisdom of being able to see that a corporate structure needs to be created to enhance the likelihood of the success of those bets. So we essentially turned ourselves into a conglomerate of bets and then this underlying corporation, Google, which is itself innovative. So in order to pull this off, you have to have a bunch of belief systems, and one of them is that you have to have bottoms up and tops down. The bottoms up we call 20% time, and the idea is that people can spend 20% of their time on whatever they want, and the top down is that our founders in particular have a keen eye on technology and they're reviewing things constantly. So an example would be they'll hear about an idea or I'll hear about something and it sounds interesting, let's go visit them. And then let's begin to assemble the pieces to see if that's possible. And if you do this long enough, you get pretty good at predicting what's likely to work. So that's a beautiful balance that's struck. Is this something that applies at all scales?
It seems to be that Sergey, again, 15 years ago, came up with a concept called 10% of the budget should be on things that are unrelated. It was called 70, 20, 10. 70% of our time on core business, 20% on adjacent business, and 10% on other. And he proved mathematically, of course he's a brilliant mathematician, that you needed that 10% to make the sum of the growth work. And it turns out he was right. So getting into the world of artificial intelligence, you've talked quite extensively and effectively to the impact in the near term, the positive impact of artificial intelligence, whether it's especially machine learning in medical applications and education, and just making information more accessible, right? In the AI community, there is a kind of debate. There's this shroud of uncertainty as we face this new world with artificial intelligence in it. And there's some people, like Elon Musk, you've disagreed, at least on the degree of emphasis he places on the existential threat of AI. So I've spoken with Stuart Russell, Max Tegmark, who share Elon Musk's view, and Yoshua Bengio, Steven Pinker, who do not. And so there's a lot of very smart people who are thinking about this stuff, disagreeing, which is really healthy, of course. So what do you think is the healthiest way for the AI community to, and really for the general public, to think about AI and the concern of the technology being mismanaged in some kind of way? So the source of education for the general public has been robot killer movies. Right. And Terminator, et cetera. And the one thing I can assure you we're not building are those kinds of solutions. Furthermore, if they were to show up, someone would notice and unplug them, right? So as exciting as those movies are, and they're great movies, were the killer robots to start, we would find a way to stop them, right? So I'm not concerned about that. And much of this has to do with the timeframe of conversation. So you can imagine a situation 100 years from now when the human brain is fully understood and the next generation and next generation of brilliant MIT scientists have figured all this out, we're gonna have a large number of ethics questions, right? Around science and thinking and robots and computers and so forth and so on. So it depends on the question of the timeframe. In the next five to 10 years, we're not facing those questions. What we're facing in the next five to 10 years is how do we spread this disruptive technology as broadly as possible to gain the maximum benefit of it? The primary benefits should be in healthcare and in education. Healthcare because it's obvious. We're all the same even though we somehow believe we're not. As a medical matter, the fact that we have big data about our health will save lives, allow us to deal with skin cancer and other cancers, ophthalmological problems. There's people working on psychological diseases and so forth using these techniques. I can go on and on. The promise of AI in medicine is extraordinary. There are many, many companies and startups and funds and solutions and we will all live much better for that. The same argument in education. Can you imagine that for each generation of child and even adult, you have a tutor educator that's AI based, that's not a human but is properly trained, that helps you get smarter, helps you address your language difficulties or your math difficulties or what have you. Why don't we focus on those two? 
The gains societally of making humans smarter and healthier are enormous and those translate for decades and decades and we'll all benefit from them. There are people who are working on AI safety, which is the issue that you're describing, and there are conversations in the community about, should there be such problems, what should the rules be like? Google, for example, has announced its policies with respect to AI safety, which I certainly support and I think most everybody would support and they make sense, right? So it helps guide the research, but the killer robots are not arriving this year and they're not even being built. And on that line of thinking, you said the time scale. In this topic or other topics, have you found it useful on the business side or the intellectual side to think beyond five, 10 years, to think 50 years out? Has it ever been useful or productive? In our industry, there are essentially no examples of 50 year predictions that have been correct. Let's review AI, right? AI, which was largely invented here at MIT and a couple of other universities in 1956, 1957, 1958, the original claims were a decade or two. And when I was a PhD student, I studied AI a bit, and it entered, during my time looking at it, a period which is known as the AI winter, which went on for about 30 years, which is a whole generation of science, of scientists, a whole group of people who didn't make a lot of progress because the algorithms had not improved and the computers had not improved. It took some brilliant mathematicians, starting with a fellow named Geoff Hinton at Toronto and Montreal, who basically invented this deep learning model which empowers us today. The seminal work there was 20 years ago, and in the last 10 years, it's become popularized. So think about the timeframes for that level of discovery. It's very hard to predict. Many people think that we'll be flying around in the equivalent of flying cars, who knows? My own view, if I wanna go out on a limb, is to say that we know a couple of things about 50 years from now. We know that there'll be more people alive. We know that we'll have to have platforms that are more sustainable because the earth is limited in the ways we all know and that the kind of platforms that are gonna get built will be consistent with the principles that I've described. They will be much more empowering of individuals. They'll be much more sensitive to the ecology because they have to be, they just have to be. I also think that humans are gonna be a great deal smarter and I think they're gonna be a lot smarter because of the tools that I've discussed with you and of course, people will live longer. Life extension is continuing apace. A baby born today has a reasonable chance of living to 100, which is pretty exciting. That's well past the 21st century, so we better take care of them. And you mentioned an interesting statistic, that some very large percentage, 60, 70% of people, may live in cities. Today, more than half the world lives in cities, and one of the great stories of humanity in the last 20 years has been the rural to urban migration. This has occurred in the United States, it's occurred in Europe, it's occurring in Asia and it's occurring in Africa. When people move to cities, the cities get more crowded, but believe it or not, their health gets better, their productivity gets better, their IQ and educational capabilities improve. So it's good news that people are moving to cities, but we have to make them livable and safe.
So you, first of all, are one yourself, but you've also worked with some of the greatest leaders in the history of tech. What insights do you draw from the differences in leadership styles of yourself, Steve Jobs, Elon Musk, Larry Page, now the new CEO, Sundar Pichai, and others? From the, I would say, calm sages to the mad geniuses. One of the things that I learned as a young executive is that there's no single formula for leadership. They try to teach one, but that's not how it really works. There are people who just understand what they need to do and they need to do it quickly. Those people are often entrepreneurs. They just know and they move fast. There are other people who are systems thinkers and planners, that's more who I am, somewhat more conservative, more thorough in execution, a little bit more risk averse. There's also people who are sort of slightly insane, in the sense that they are emphatic and charismatic and they feel it and they drive it and so forth. There's no single formula to success. There is one thing that unifies all of the people that you named, which is very high intelligence. At the end of the day, the thing that characterizes all of them is that they saw the world quicker, faster, they processed information faster. They didn't necessarily make the right decisions all the time, but they were on top of it. And the other thing that's interesting about all those people is they all started young. So think about Steve Jobs starting Apple roughly at 18 or 19. Think about Bill Gates starting at roughly 20, 21. Think about Mark Zuckerberg, a good example, at 19, 20. By the time they were 30, they had 10 years. At 30 years old, they had 10 years of experience of dealing with people and products and shipments and the press and business and so forth. It's incredible how much experience they had compared to the rest of us who were busy getting our PhDs. Yes, exactly. So we should celebrate these people because they've just had more life experience, right? And that helps inform the judgment. At the end of the day, when you're at the top of these organizations, all the easy questions have been dealt with, right? How should we design the buildings? Where should we put the colors on our product? What should the box look like, right? The problems, that's why it's so interesting to be in these rooms, the problems that they face, right, in terms of the way they operate, the way they deal with their employees, their customers, their innovation, are profoundly challenging. Each of the companies is demonstrably different culturally. They are not, in fact, cut from the same cloth. They behave differently based on input. Their internal cultures are different. Their compensation schemes are different. Their values are different. So there's proof that diversity works. So when faced with a tough decision, in need of advice, it's been said that the best thing one can do is to find the best person in the world who can give that advice and find a way to be in a room with them, one on one, and ask. So here we are, and let me ask in a long winded way, I wrote this down. In 1998, there were many good search engines, Lycos, Excite, AltaVista, Infoseek, Ask Jeeves maybe, Yahoo even. So Google stepped in and disrupted everything. They disrupted the nature of search, the nature of our access to information, the way we discover new knowledge. So now it's 2018, actually 20 years later. 
There are many good personal AI assistants, including, of course, the best from Google. You've spoken about the impact such an AI assistant could bring in medicine and education. So we arrive at this question. So it's a personal one for me, but I hope my situation represents that of many other, as we said, dreamers and crazy engineers. So my whole life, I've dreamed of creating such an AI assistant. Every step I've taken has been towards that goal. Now I'm a research scientist in human centered AI here at MIT. So the next step for me, as I sit here facing my passion, is to do what Larry and Sergey did in '98, the simple startup. And so here's my simple question. Given the low odds of success, the timing and luck required, the countless other factors that can't be controlled or predicted, which is all the things that Larry and Sergey faced, is there some calculation, some strategy to follow in this step? Or do you simply follow the passion just because there's no other choice? I think the people who are in universities are always trying to study the extraordinarily chaotic nature of innovation and entrepreneurship. My answer is that they didn't have that conversation. They just did it. They sensed a moment when, in the case of Google, there was all of this data that needed to be organized and they had a better algorithm. They had invented a better way. So today with human centered AI, which is your area of research, there must be new approaches. It's such a big field. There must be new approaches, different from what we and others are doing. There must be startups to fund. There must be research projects to try. There must be graduate students to work on new approaches. Here at MIT, there are people who are looking at learning from the standpoint of looking at child learning. How do children learn starting at age one and two? And the work is fantastic. Those approaches are different from the approach that most people are taking. Perhaps that's a bet that you should make or perhaps there's another one. But at the end of the day, the successful entrepreneurs are not as crazy as they sound. They see an opportunity based on what's happened. Let's use Uber as an example. As Travis sells the story, he and his co founder were sitting in Paris and they had this idea because they couldn't get a cab. And they said, we have smartphones and the rest is history. So what's the equivalent of that Travis Eiffel Tower, "where is a cab" moment that you could, as an entrepreneur, take advantage of? Whether it's in human centered AI or something else. That's the next great startup. And the psychology of that moment. So when Sergey and Larry talk about it, and you listen to a few interviews, it's very nonchalant. Well, here's this very fascinating web data, and here's an algorithm we have, and we just kind of want to play around with that data. And it seems like that's a really nice way to organize this data. What I should say, what's important to remember, is that they were graduate students at Stanford and they thought this was interesting. So they built a search engine and they kept it in their room. And they had to get power from the room next door because they were using too much power in the room. So they ran an extension cord over, right? And then they went and they found a house and they had Google world headquarters of five people, right, to start the company. And they raised $100,000 from Andy Bechtolsheim, who was the Sun founder, to do this, and from Dave Cheriton and a few others. 
The point is their beginnings were very simple but they were based on a powerful insight. That is a replicable model for any startup. It has to be a powerful insight. The beginnings are simple. And there has to be an innovation. In Larry and Sergey's case, it was PageRank, which was a brilliant idea, one of the most cited papers in the world today. What's the next one? So you're one of, if I may say, richest people in the world. And yet it seems that money is simply a side effect of your passions and not an inherent goal. But you're a fascinating person to ask. So much of our society at the individual level and at the company level and as nations is driven by the desire for wealth. What do you think about this drive? And what have you learned about, if I may romanticize the notion, the meaning of life, having achieved success on so many dimensions? There have been many studies of human happiness and above some threshold, which is typically relatively low for this conversation, there's no difference in happiness about money. The happiness is correlated with meaning and purpose, a sense of family, a sense of impact. So if you organize your life, assuming you have enough to get around and have a nice home and so forth, you'll be far happier if you figure out what you care about and work on that. It's often being in service to others. There's a great deal of evidence that people are happiest when they're serving others and not themselves. This goes directly against the sort of press induced excitement about powerful and wealthy leaders of one kind. And indeed these are consequential people. But if you are in a situation where you've been very fortunate as I have, you also have to take that as a responsibility and you have to basically work both to educate others and give them that opportunity, but also use that wealth to advance human society. In my case, I'm particularly interested in using the tools of artificial intelligence and machine learning to make society better. I've mentioned education, I've mentioned inequality and middle class and things like this, all of which are a passion of mine. It doesn't matter what you do, it matters that you believe in it, that it's important to you, and that your life will be far more satisfying if you spend your life doing that. I think there's no better place to end than a discussion of the meaning of life. Eric, thank you so much.
Eric Schmidt: Google | Lex Fridman Podcast #8
The following is a conversation with Stuart Russell. He's a professor of computer science at UC Berkeley and a coauthor of a book that introduced me and millions of other people to the amazing world of AI called Artificial Intelligence, A Modern Approach. So it was an honor for me to have this conversation as part of MIT course in artificial general intelligence and the artificial intelligence podcast. If you enjoy it, please subscribe on YouTube, iTunes or your podcast provider of choice, or simply connect with me on Twitter at Lex Friedman spelled F R I D. And now here's my conversation with Stuart Russell. So you've mentioned in 1975 in high school, you've created one of your first AI programs that play chess. Were you ever able to build a program that beat you at chess or another board game? So my program never beat me at chess. I actually wrote the program at Imperial College. So I used to take the bus every Wednesday with a box of cards this big and shove them into the card reader. And they gave us eight seconds of CPU time. It took about five seconds to read the cards in and compile the code. So we had three seconds of CPU time, which was enough to make one move, you know, with a not very deep search. And then we would print that move out and then we'd have to go to the back of the queue and wait to feed the cards in again. How deep was the search? Are we talking about one move, two moves, three moves? No, I think we got an eight move, a depth eight with alpha beta. And we had some tricks of our own about move ordering and some pruning of the tree. But you were still able to beat that program? Yeah, yeah. I was a reasonable chess player in my youth. I did an Othello program and a backgammon program. So when I got to Berkeley, I worked a lot on what we call meta reasoning, which really means reasoning about reasoning. And in the case of a game playing program, you need to reason about what parts of the search tree you're actually going to explore because the search tree is enormous, bigger than the number of atoms in the universe. And the way programs succeed and the way humans succeed is by only looking at a small fraction of the search tree. And if you look at the right fraction, you play really well. If you look at the wrong fraction, if you waste your time thinking about things that are never going to happen, moves that no one's ever going to make, then you're going to lose because you won't be able to figure out the right decision. So that question of how machines can manage their own computation, how they decide what to think about, is the meta reasoning question. And we developed some methods for doing that. And very simply, the machine should think about whatever thoughts are going to improve its decision quality. We were able to show that both for Othello, which is a standard two player game, and for Backgammon, which includes dice rolls, so it's a two player game with uncertainty. For both of those cases, we could come up with algorithms that were actually much more efficient than the standard alpha beta search, which chess programs at the time were using. And that those programs could beat me. And I think you can see the same basic ideas in Alpha Go and Alpha Zero today. The way they explore the tree is using a form of meta reasoning to select what to think about based on how useful it is to think about it. Is there any insights you can describe with our Greek symbols of how do we select which paths to go down? There's really two kinds of learning going on. 
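As a side note for readers: the fixed-depth search with alpha-beta pruning that Russell describes above can be sketched in a few lines. This is a generic, illustrative sketch, not the 1975 program; the toy game, its moves, and its evaluation function are invented purely so the code runs.

```python
# Minimal depth-limited minimax with alpha-beta pruning (illustrative sketch).
# The toy "game" is a stand-in so the code runs; a real chess program would
# supply its own move generator and evaluation function.

def legal_moves(state):
    # Toy game: from integer `state`, a player may add 1, 2, or 3, up to 10.
    return [m for m in (1, 2, 3) if state + m <= 10]

def apply_move(state, move):
    return state + move

def evaluate(state):
    # Toy static evaluation: the maximizing player prefers larger totals.
    return state

def alphabeta(state, depth, alpha, beta, maximizing):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)              # static evaluation at the leaves
    if maximizing:
        value = float("-inf")
        for m in moves:                     # good move ordering tightens the bounds
            value = max(value, alphabeta(apply_move(state, m), depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # the opponent would avoid this line: prune
                break
        return value
    value = float("inf")
    for m in moves:
        value = min(value, alphabeta(apply_move(state, m), depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# A depth-eight search from the starting position, as in the program described above.
print(alphabeta(0, 8, float("-inf"), float("inf"), True))
```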
So as you say, AlphaGo learns to evaluate board positions. So it can look at a Go board. And it actually has probably a superhuman ability to instantly tell how promising that situation is. To me, the amazing thing about AlphaGo is not that it can be the world champion with its hands tied behind its back, but the fact that if you stop it from searching altogether, so you say, okay, you're not allowed to do any thinking ahead. You can just consider each of your legal moves and then look at the resulting situation and evaluate it. So what we call a depth one search. So just the immediate outcome of your moves, and decide if that's good or bad. That version of AlphaGo can still play at a professional level. And human professionals are sitting there for five, 10 minutes deciding what to do, and AlphaGo, in less than a second, can instantly intuit what is the right move to make based on its ability to evaluate positions. And that is remarkable because we don't have that level of intuition about Go. We actually have to think about the situation. So anyway, that capability that AlphaGo has is one big part of why it beats humans. The other big part is that it's able to look ahead 40, 50, 60 moves into the future. And if it was considering all possibilities, 40 or 50 or 60 moves into the future, that would be 10 to the 200 possibilities. So way more than atoms in the universe and so on. So it's very, very selective about what it looks at. So let me try to give you an intuition about how you decide what to think about. It's a combination of two things. One is how promising it is. So if you're already convinced that a move is terrible, there's no point spending a lot more time convincing yourself that it's terrible because it's probably not going to change your mind. So the real reason you think is because there's some possibility of changing your mind about what to do. And it's that changing your mind that would result then in a better final action in the real world. So that's the purpose of thinking: to improve the final action in the real world. So if you think about a move that is guaranteed to be terrible, you can convince yourself it's terrible, but you're still not going to change your mind. But on the other hand, suppose you had a choice between two moves. One of them you've already figured out is guaranteed to be a draw, let's say. And then the other one looks a little bit worse. It looks fairly likely that if you make that move, you're going to lose. But there's still some uncertainty about the value of that move. There's still some possibility that it will turn out to be a win. Then it's worth thinking about that. So even though it's less promising on average than the other move, which is guaranteed to be a draw, there's still some purpose in thinking about it because there's a chance that you will change your mind and discover that in fact it's a better move. So it's a combination of how good the move appears to be and how much uncertainty there is about its value. The more uncertainty, the more it's worth thinking about because there's a higher upside if you want to think of it that way. And of course in the beginning, especially in the AlphaGo Zero formulation, everything is shrouded in uncertainty. So you're really swimming in a sea of uncertainty. 
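A minimal sketch of the selection idea Russell describes here, combining how promising a move looks with how uncertain you still are about its value. The UCB-style score and all the numbers below are illustrative assumptions, not AlphaGo's actual implementation.

```python
import math

# Illustrative sketch: choose which move to think about next by combining the
# current value estimate ("how promising") with an uncertainty bonus that
# shrinks as a move gets more attention. The numbers are invented.

def selection_score(mean_value, visits, total_visits, c=1.4):
    # UCB1-style score: estimated value plus an exploration bonus for uncertainty.
    if visits == 0:
        return float("inf")                 # never examined: maximal uncertainty
    return mean_value + c * math.sqrt(math.log(total_visits) / visits)

moves = {
    "guaranteed draw":  {"mean_value": 0.50, "visits": 200},
    "probably losing":  {"mean_value": 0.35, "visits": 10},   # worse on average,
    "clearly terrible": {"mean_value": 0.05, "visits": 150},  # but far less certain
}
total = sum(m["visits"] for m in moves.values())

for name, m in moves.items():
    print(name, round(selection_score(m["mean_value"], m["visits"], total), 3))

# The barely examined "probably losing" move outscores the settled draw: there is
# still a chance that further thought changes our mind about it, while the
# thoroughly examined terrible move is not worth any more thinking.
```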
So it benefits you to, I mean, actually following the same process as you described, but because you're so uncertain about everything, you basically have to try a lot of different directions. Yeah. So the early parts of the search tree are fairly bushy, in that it will look at a lot of different possibilities, but fairly quickly, the degree of certainty about some of the moves, I mean, if a move is really terrible, you'll pretty quickly find out, right? You lose half your pieces or half your territory and then you'll say, okay, this is not worth thinking about anymore. And then further down, the tree becomes very long and narrow and you're following various lines of play, 10, 20, 30, 40, 50 moves into the future. And that again is something that human beings have a very hard time doing, mainly because they just lack the short term memory. You just can't remember a sequence of moves that's 50 moves long. And you can't imagine the board correctly for that many moves into the future. Of course, the top players, I'm much more familiar with chess, but the top players probably have echoes of the same kind of intuition, instinct, that in a moment's time AlphaGo applies when they see a board. I mean, they've seen those patterns, human beings have seen those patterns before at the top, at the grandmaster level. It seems that there are some similarities, or maybe it's our imagination creating a vision of those similarities, but it feels like this kind of pattern recognition that the AlphaGo approaches are using is similar to what human beings at the top level are using. I think there's some truth to that, but not entirely. Yeah. I mean, I think the extent to which a human grandmaster can reliably instantly recognize the right move and instantly recognize the value of the position, I think that's a little bit overrated. But if you sacrifice a queen, for example, I mean, there are these beautiful games of chess with Bobby Fischer, or somebody, where he seems to make a bad move. And I'm not sure there's a perfect degree of calculation involved where they've calculated all the possible things that happen, but there's an instinct there, right? That somehow adds up to... Yeah. So I think what happens is you get a sense that there's some possibility in the position, even if you make a weird looking move, that it opens up some lines of calculation that otherwise would be definitely bad. And it's that intuition that there's something here in this position that might yield a win. And then you follow that, right? And in some sense, when a chess player is following a line, in his or her mind they're mentally simulating what the other person is going to do, what the opponent is going to do. And they can do that as long as the moves are kind of forced, right? As long as there's, you know, what we call a forcing variation, where the opponent doesn't really have much choice how to respond. And then you follow that, and you see if you can force them into a situation where you win. You know, we see plenty of mistakes, even in grandmaster games, where they just miss some simple three, four, five move combination that, you know, wasn't particularly apparent in the position, but was still there. That's the thing that makes us human. Yeah. So you mentioned that in Othello, after some meta reasoning improvements and research, the program was able to beat you. How did that make you feel? 
Part of the meta reasoning capability that it had was based on learning, and you could sit down the next day and you could just feel that it had got a lot smarter, you know, and all of a sudden you really felt like you were sort of pressed against the wall because it was much more aggressive and was totally unforgiving of any minor mistake that you might make. And actually it seemed to understand the game better than I did. And Garry Kasparov has this quote where, during his match against Deep Blue, he said he suddenly felt that there was a new kind of intelligence across the board. Do you think that's a scary or an exciting possibility, for Kasparov and for yourself, in the context of chess, purely in terms of that feeling, whatever that is? I think it's definitely an exciting feeling. You know, this is what made me work on AI in the first place: as soon as I really understood what a computer was, I wanted to make it smart. You know, the first program I wrote was for the Sinclair programmable calculator. And I think you could write a 21 step algorithm. That was the biggest program you could write, something like that. And do little arithmetic calculations. So I think I implemented Newton's method for square roots and a few other things like that. But then, you know, I thought, okay, if I just had more space, I could make this thing intelligent. And so I started thinking about AI, and I think the thing that's scary is not the chess program because, you know, chess programs, they're not in the taking over the world business. But if you extrapolate, you know, there are things about chess that don't resemble the real world, right? We know the rules of chess. The chess board is completely visible to the program, where of course the real world is not; most of the real world is not visible from wherever you're sitting, so to speak. And to overcome those kinds of problems, you need qualitatively different algorithms. Another thing about the real world is that, you know, we regularly plan ahead on timescales involving billions or trillions of steps. Now we don't plan those in detail, but you know, when you choose to do a PhD at Berkeley, that's a five year commitment, and that amounts to about a trillion motor control steps that you will eventually be committed to. Including going up the stairs, opening doors, drinking water. Yeah. I mean, every finger movement while you're typing, every character of every paper and the thesis and everything. So you're not committing in advance to the specific motor control steps, but you're still reasoning on a timescale that will eventually reduce to trillions of motor control actions. And so for all of these reasons, you know, AlphaGo and Deep Blue and so on don't represent any kind of threat to humanity, but they are a step towards it, right? And progress in AI occurs by essentially removing, one by one, these assumptions that make problems easy. Like the assumption of complete observability of the situation, right? If we remove that assumption, you need a much more complicated kind of computing design. It needs something that actually keeps track of all the things you can't see and tries to estimate what's going on. And there's inevitable uncertainty in that. So it becomes a much more complicated problem. But, you know, we are removing those assumptions. 
We are starting to have algorithms that can cope with much longer timescales, that can cope with uncertainty, that can cope with partial observability. And so each of those steps sort of magnifies by a thousand the range of things that we can do with AI systems. So the way I started in AI, I wanted to be a psychiatrist for a long time. I wanted to understand the mind in high school and of course program and so on. And I showed up at the University of Illinois, to an AI lab, and they said, okay, I don't have time for you, but here's a book, AI: A Modern Approach. I think it was the first edition at the time. Here, go learn this. And I remember the lay of the land was, well, it's incredible that we solved chess, but we'll never solve Go. I mean, it was pretty certain that Go, in the way we thought about systems that reason, wasn't possible to solve. And now we've solved this. So it's a very... Well, I think I would have said that it's unlikely we could take the kind of algorithm that was used for chess and just get it to scale up and work well for Go. And at the time, what we thought was that in order to solve Go, we would have to do something similar to the way humans manage the complexity of Go, which is to break it down into kind of sub games. So when a human thinks about a Go board, they think about different parts of the board as sort of weakly connected to each other. And they think about, okay, within this part of the board, here's how things could go; in that part of the board, here's how things could go. And then you try to sort of couple those two analyses together and deal with the interactions and maybe revise your views of how things are going to go in each part. And then you've got maybe five, six, seven, ten parts of the board. And that actually resembles the real world much more than chess does, because in the real world, we have work, we have home life, we have sport, different kinds of activities, shopping; these all are connected to each other, but they're weakly connected. So when I'm typing a paper, I don't simultaneously have to decide which order I'm going to get the milk and the butter, that doesn't affect the typing. But I do need to realize, okay, I better finish this before the shops close because I don't have any food at home. So there's some weak connection, but not in the way that chess works, where everything is tied into a single stream of thought. So the thought was that to solve Go, we'd have to make progress on stuff that would be useful for the real world. And in a way, AlphaGo is a little bit disappointing, right? Because the program design for AlphaGo is actually not that different from Deep Blue or even from Arthur Samuel's checker playing program from the 1950s. And in fact, the two things that make AlphaGo work are, one, this amazing ability to evaluate the positions, and the other is the meta reasoning capability, which allows it to explore some paths in the tree very deeply and to abandon other paths very quickly. So this word meta reasoning, while technically correct, inspires perhaps the wrong sense of the degree of power that AlphaGo has; for example, the word reasoning is a powerful word. So let me ask you, you were part of the symbolic AI world for a while, as AI was, and there are a lot of excellent, interesting ideas there that unfortunately met a winter. And so do you think it reemerges? So I would say, yeah, it's not quite as simple as that. 
So the first AI winter that was actually named as such was the one in the late 80s. And that came about because in the mid 80s, there was really a concerted attempt to push AI out into the real world using what was called expert system technology. And for the most part, that technology was just not ready for primetime. They were trying, in many cases, to do a form of uncertain reasoning, judgment, combinations of evidence, diagnosis, those kinds of things, which was simply invalid. And when you try to apply invalid reasoning methods to real problems, you can fudge it for small versions of the problem. But when it starts to get larger, the thing just falls apart. So many companies found that the stuff just didn't work, and they were spending tons of money on consultants to try to make it work. And there were other practical reasons, like they were asking the companies to buy incredibly expensive Lisp machine workstations, which were literally between $50,000 and $100,000 in 1980s money, which would be like between $150,000 and $300,000 per workstation in current prices. And then the bottom line, they weren't seeing a profit from it. Yeah, in many cases. I think there were some successes, there's no doubt about that. But people, I would say, overinvested. Every major company was starting an AI department, just like now. And I worry a bit that we might see similar disappointments, not because the current technology is invalid, but because it's limited in its scope. And it's almost the dual of the scope problems that expert systems had. So what have you learned from that hype cycle? And what can we do to prevent another winter, for example? Yeah, so when I'm giving talks these days, that's one of the warnings that I give. So this is a two part warning slide. One is that rather than data being the new oil, data is the new snake oil. That's a good line. And then the other is that we might see a kind of very visible failure in some of the major application areas. And I think self driving cars would be the flagship. And I think when you look at the history, so the first self driving car was on the freeway, driving itself, changing lanes, overtaking, in 1987. And so it's more than 30 years. And that kind of looks like where we are today, right? You know, prototypes on the freeway, changing lanes and overtaking. Now, I think progress has been made, particularly on the perception side. So we worked a lot on autonomous vehicles in the early and mid 90s at Berkeley. And we had our own big demonstrations. We put congressmen into self driving cars and had them zooming along the freeway. And the problem was clearly perception. At the time, the problem was perception. Yeah. So in simulation, with perfect perception, you could actually show that you can drive safely for a long time, even if the other cars are misbehaving and so on. But simultaneously, we worked on machine vision for detecting cars and tracking pedestrians and so on, and we couldn't get the reliability of detection and tracking up to a high enough level, particularly in bad weather conditions, nighttime, rainfall. Good enough for demos, but perhaps not good enough to cover the general operation. Yeah. So the thing about driving is, you know, suppose you're a taxi driver, you know, and you drive every day, eight hours a day for 10 years, right? 
That's 100 million seconds of driving, you know, and any one of those seconds, you can make a fatal mistake. So you're talking about eight nines of reliability, right? Now, if your vision system only detects 98.3% of the vehicles, right, then that's sort of, you know, one in a bit nines of reliability. So you have another seven orders of magnitude to go. And this is what people don't understand. They think, oh, because I had a successful demo, I'm pretty much done. But you're not even within seven orders of magnitude of being done. And that's the difficulty. And it's not the, can I follow a white line? That's not the problem, right? We follow a white line all the way across the country. But it's the weird stuff that happens. It's all the edge cases, yeah. The edge case, other drivers doing weird things. You know, so if you talk to Google, right, so they had actually a very classical architecture where, you know, you had machine vision which would detect all the other cars and pedestrians and the white lines and the road signs. And then basically that was fed into a logical database. And then you had a classical 1970s rule based expert system telling you, okay, if you're in the middle lane and there's a bicyclist in the right lane who is signaling this, then you do that, right? And what they found was that every day they'd go out and there'd be another situation that the rules didn't cover. You know, so they'd come to a traffic circle and there's a little girl riding her bicycle the wrong way around the traffic circle. Okay, what do you do? We don't have a rule. Oh my God. Okay, stop. And then, you know, they come back and add more rules and they just found that this was not really converging. And if you think about it, right, how do you deal with an unexpected situation, meaning one that you've never previously encountered and the sort of reasoning required to figure out the solution for that situation has never been done. It doesn't match any previous situation in terms of the kind of reasoning you have to do. Well, you know, in chess programs, this happens all the time, right? You're constantly coming up with situations you haven't seen before and you have to reason about them and you have to think about, okay, here are the possible things I could do. Here are the outcomes. Here's how desirable the outcomes are and then pick the right one. You know, in the 90s, we were saying, okay, this is how you're going to have to do automated vehicles. They're going to have to have a look ahead capability, but the look ahead for driving is more difficult than it is for chess because there's humans and they're less predictable than chess pieces. Well, then you have an opponent in chess who's also somewhat unpredictable. But for example, in chess, you always know the opponent's intention. They're trying to beat you, right? Whereas in driving, you don't know is this guy trying to turn left or has he just forgotten to turn off his turn signal or is he drunk or is he changing the channel on his radio or whatever it might be. You've got to try and figure out the mental state, the intent of the other drivers to forecast the possible evolutions of their trajectories. And then you've got to figure out, okay, which is the trajectory for me that's going to be safest. And those all interact with each other because the other drivers are going to react to your trajectory and so on. 
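Returning to the reliability arithmetic at the top of this exchange, here is a rough back-of-the-envelope version; the figures are illustrative and come out at roughly the six to seven orders of magnitude Russell gestures at.

```python
import math

# Rough arithmetic behind the "orders of magnitude" point (illustrative only).
seconds_driving = 8 * 3600 * 365 * 10            # ~8 hours/day for 10 years
print(f"{seconds_driving:.1e} seconds of driving")   # ~1.1e8, i.e. ~100 million

required_failure_rate = 1 / seconds_driving      # at most one fatal mistake in a career
demo_failure_rate = 1 - 0.983                    # a detector that misses 1.7% of vehicles

gap = math.log10(demo_failure_rate / required_failure_rate)
print(f"required ~{required_failure_rate:.1e} per second, "
      f"demo ~{demo_failure_rate:.1e}, gap ~{gap:.1f} orders of magnitude")
```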
So, you know, they've got the classic merging onto the freeway problem, where you're kind of racing a vehicle that's already on the freeway and you're going to pull ahead of them or you're going to let them go first and pull in behind, and you get this sort of uncertainty about who's going first. So all those kinds of things mean that you need a decision making architecture that's very different from either a rule based system or, it seems to me, kind of an end to end neural network system. So just as AlphaGo is pretty good when it doesn't do any look ahead, but it's way, way, way, way better when it does, I think the same is going to be true for driving. You can have a driving system that's pretty good when it doesn't do any look ahead, but that's not good enough. And we've already seen multiple deaths caused by poorly designed machine learning algorithms that don't really understand what they're doing. Yeah. On several levels. I think on the perception side, there are mistakes being made by those algorithms where the perception is very shallow. On the planning side, the look ahead, like you said. And the thing that we come up against that's really interesting when you try to deploy systems in the real world is you can't think of an artificial intelligence system as a thing that only responds to the world. You have to realize that it's an agent that others will respond to as well. So in order to drive successfully, you can't just try to do obstacle avoidance. Right. You can't pretend that you're invisible, right? You're the invisible car. Right. It doesn't work that way. I mean, you have to assert yourself, and others have to be scared of you. There's this tension, there's this game. So we've done a lot of work with pedestrians: if you approach pedestrians purely as an obstacle avoidance problem, so you're not doing look ahead or modeling their intent, they're going to take advantage of you. They're not going to respect you at all. There has to be a tension, a fear, some amount of uncertainty. That's how we have created. Or at least just a kind of a resoluteness. You have to display a certain amount of resoluteness. You can't be too tentative. And yeah, so the solutions then become pretty complicated, right? You get into game theoretic analyses. And so at Berkeley now, we're working a lot on this kind of interaction between machines and humans. And that's exciting. And so my colleague, Anca Dragan, found that, actually, if you formulate the problem game theoretically and just let the system figure out the solution, it does interesting, unexpected things. Like sometimes at a stop sign, if no one is going first, the car will actually back up a little, right? Just to indicate to the other cars that they should go. And that's something it invented entirely by itself. We didn't say this is the language of communication at stop signs. It figured it out. That's really interesting. So let me just step back for a second to this beautiful philosophical notion. So Pamela McCorduck in 1979 wrote, AI began with the ancient wish to forge the gods. So when you think about the history of our civilization, do you think that there is an inherent desire to create, let's not say gods, but to create superintelligence? Is it inherent to us? Is it in our genes, that the natural arc of human civilization is to create things that are of greater and greater power, and perhaps echoes of ourselves? So to create the gods, as Pamela said. Maybe. 
I mean, we're all individuals, but certainly we see over and over again in history, individuals who thought about this possibility. Hopefully I'm not being too philosophical here, but if you look at the arc of this, where this is going, and we'll talk about AI safety, we'll talk about greater and greater intelligence. Do you see that there? When you created the Othello program and you felt this excitement, what was that excitement? Was it the excitement of a tinkerer who created something cool, like a clock? Or was there a magic, or was it more like a child being born? Yeah. So I mean, I certainly understand that viewpoint. And if you look at the Lighthill report, which was, so in the 70s, there was a lot of controversy in the UK about AI and whether it was for real and how much money the government should invest. And there was a long story, but the government commissioned a report by Lighthill, who was a physicist, and he wrote a very damning report about AI, which I think was the point. And he said that these are frustrated men who, unable to have children, would like to create a life as a kind of replacement, which I think is really pretty unfair. But there is a kind of magic, I would say, when you build something and what you're building in is really just, you're building in some understanding of the principles of learning and decision making. And to see those principles actually then turn into intelligent behavior in specific situations, it's an incredible thing. And that is naturally going to make you think, okay, where does this end? And so there are magical, optimistic views of where it ends, whatever your view of optimism is, whatever your view of utopia is, it's probably different for everybody. But you've often talked about concerns you have of how things may go wrong. So I've talked to Max Tegmark. There are a lot of interesting ways to think about AI safety. You're one of the seminal people thinking about this problem, while also being in the weeds of actually solving specific AI problems. You're also thinking about the big picture of where are we going? So can you talk about several elements of it? Let's just talk about maybe the control problem. So this idea of losing the ability to control the behavior of our AI systems. So how do you see that? How do you see that coming about? What do you think we can do to manage it? Well, so it doesn't take a genius to realize that if you make something that's smarter than you, you might have a problem. Alan Turing wrote about this and gave lectures about this in 1951. He did a lecture on the radio and he basically said, once the machine thinking method starts, very quickly they'll outstrip humanity. And if we're lucky, we might be able to turn off the power at strategic moments, but even so, our species would be humbled. Actually, he was wrong about that. If it's a sufficiently intelligent machine, it's not going to let you switch it off. It's actually in competition with you. So what do you think is most likely going to happen? And what do you think he meant, just for a quick tangent, by saying that if we shut off this super intelligent machine, our species would be humbled? I think he means that we would realize that we are inferior, right? That we only survive by the skin of our teeth because we happen to get to the off switch just in time. And if we hadn't, then we would have lost control over the earth. 
Are you more worried, when you think about this stuff, about super intelligent AI, or are you more worried about super powerful AI that's not aligned with our values? So the paperclip scenarios kind of... So the main problem I'm working on is the control problem, the problem of machines pursuing objectives that are, as you say, not aligned with human objectives. And this has been the way we've thought about AI since the beginning. You build a machine for optimizing, and then you put in some objective, and it optimizes, right? And we can think of this as the King Midas problem, right? Because King Midas put in this objective, everything I touch should turn to gold, and the gods, that's like the machine, said, okay, done. You now have this power. And of course, his food, his drink, and his family all turned to gold. And then he dies of misery and starvation. And it's a warning, it's a failure mode; pretty much every culture in history has had some story along the same lines. There's the genie that gives you three wishes, and the third wish is always, you know, please undo the first two wishes because I messed up. And when Arthur Samuel wrote his checker playing program, which learned to play checkers considerably better than Arthur Samuel could play and actually reached a pretty decent standard, Norbert Wiener, who was one of the major mathematicians of the 20th century, sort of the father of modern automation and control systems, saw this and basically extrapolated, as Turing did, and said, okay, this is how we could lose control. And specifically, that we have to be certain that the purpose we put into the machine is the purpose which we really desire. And the problem is, we can't do that. You mean it's very difficult to encode, to put our values on paper is really difficult, or are you just saying it's impossible? So theoretically, it's possible, but in practice, it's extremely unlikely that we could specify correctly, in advance, the full range of concerns of humanity. You talked about cultural transmission of values, I think that's how human to human transmission of values happens, right? Well, we learn, yeah, I mean, as we grow up, we learn about the values that matter, how things should go, what is reasonable to pursue and what isn't reasonable to pursue. You think machines can learn in the same kind of way? Yeah, so I think that what we need to do is to get away from this idea that you build an optimising machine and then you put the objective into it. Because it's possible that you might put in a wrong objective, and we already know this is possible because it's happened lots of times, right? That means that the machine should never take an objective that's given as gospel truth. Because once it takes the objective as gospel truth, then it believes that whatever actions it's taking in pursuit of that objective are the correct things to do. So you could be jumping up and down and saying, no, no, no, no, you're going to destroy the world, but the machine knows what the true objective is and is pursuing it, and tough luck to you. And this is not restricted to AI, right? This is true, I think, of many of the 20th century technologies, right? So in statistics, you minimise a loss function, and the loss function is exogenously specified. In control theory, you minimise a cost function. In operations research, you maximise a reward function, and so on. So in all these disciplines, this is how we conceive of the problem. 
And it's the wrong problem because we cannot specify with certainty the correct objective, right? We need uncertainty, we need the machine to be uncertain about what it is that it's supposed to be maximising. A favourite idea of yours, I've heard you say somewhere, well, I shouldn't pick favourites, but it just sounds beautiful: we need to teach machines humility. It's a beautiful way to put it, I love it. That they're humble, they know that they don't know what it is they're supposed to be doing, and that those objectives, I mean, they exist, they're within us, but we may not be able to explicate them; we may not even know how we want our future to go. Exactly. And the machine, a machine that's uncertain, is going to be deferential to us. So if we say, don't do that, well, now the machine learns something a bit more about our true objectives, because something that it thought was reasonable in pursuit of our objective turns out not to be, so now it's learned something. So it's going to defer because it wants to be doing what we really want. And that point, I think, is absolutely central to solving the control problem. And it's a different kind of AI; when you take away this idea that the objective is known, then in fact, a lot of the theoretical frameworks that we're so familiar with, you know, Markov decision processes, goal based planning, you know, standard games research, all of these techniques actually become inapplicable. And you get a more complicated problem because now the interaction with the human becomes part of the problem. Because the human, by making choices, is giving you more information about the true objective, and that information helps you achieve the objective better. And so that really means that you're mostly dealing with game theoretic problems where you've got the machine and the human and they're coupled together, rather than a machine going off by itself with a fixed objective. Which is fascinating, on the machine and the human level: when you don't have an objective, it means you're together coming up with an objective. I mean, there's a lot of philosophy where, you know, you could argue that life doesn't really have meaning. We together agree on what gives it meaning, and we kind of culturally create the things that explain why the heck we are on this earth anyway. We together as a society create that meaning, and you have to learn that objective. And one of the biggest, I thought that's where you were going to go for a second, one of the biggest troubles we run into, outside of statistics and machine learning and AI, in just human civilization, is when you look at, I came from, I was born in the Soviet Union, and in the history of the 20th century we ran into the most trouble, us humans, when there was a certainty about the objective and you do whatever it takes to achieve that objective, whether you're talking about Germany or communist Russia. You get into trouble with humans. I would say with, you know, corporations; in fact, some people argue that, you know, we don't have to look forward to a time when AI systems take over the world. They already have, and they're called corporations, right? Corporations happen to be using people as components right now, but they are effectively algorithmic machines and they're optimizing an objective, which is quarterly profit, that isn't aligned with the overall wellbeing of the human race. And they are destroying the world. They are primarily responsible for our inability to tackle climate change. 
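An illustrative toy calculation of the deference point made above, in the spirit of the off-switch analysis but with entirely invented numbers; this is a sketch, not Russell's actual formulation.

```python
# Toy model (invented numbers): the machine's proposed action is worth either +1
# to the human (we like it) or -2 (we hate it). `p_good` is the machine's belief
# that we like it. It can ACT now, DEFER to the human (who permits the action
# only if it is actually good), or DO NOTHING.

def expected_values(p_good, value_good=1.0, value_bad=-2.0):
    act = p_good * value_good + (1 - p_good) * value_bad
    defer = p_good * value_good + (1 - p_good) * 0.0   # the human vetoes the bad case
    nothing = 0.0
    return {"act": act, "defer": defer, "do nothing": nothing}

for p in (1.0, 0.9, 0.5):
    evs = expected_values(p)
    best = max(evs, key=evs.get)
    print(f"belief action is good: {p:.0%} -> {evs} -> choose: {best}")

# With certainty (p = 1.0), acting and deferring tie, so the machine gains nothing
# from the human's ability to say no; with any real uncertainty, deferring strictly
# beats acting, because the veto only ever removes the bad outcome.
```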
So I think that's one way of thinking about what's going on with corporations, but I think the point you're making is valid, that there are many systems in the real world where we've sort of prematurely fixed on the objective and then decoupled the machine from those it's supposed to be serving. And I think you see this with government, right? Government is supposed to be a machine that serves people, but instead it tends to be taken over by people who have their own objective and use government to optimize that objective, regardless of what people want. Do you find appealing the idea of almost arguing machines, where you have multiple AI systems, each with a clear fixed objective? We have that in government: the red team and the blue team are very fixed on their objectives, and they argue, and they may disagree, but the duality of it kind of seems to make it work somewhat. Okay, let's go a hundred years back, or to the founding of this country: there were disagreements, and that disagreement is where there was a balance between certainty and forced humility, because the power was distributed. Yeah. I think that the nature of debate and disagreement and argument takes as a premise the idea that you could be wrong, which means that you're not necessarily absolutely convinced that your objective is the correct one. If you were absolutely convinced, there'd be no point in having any discussion or argument because you would never change your mind and there wouldn't be any sort of synthesis or anything like that. I think you can think of argumentation as an implementation of a form of uncertain reasoning. I've been reading recently about utilitarianism and the history of efforts to define, in a sort of clear mathematical way, if you like, a formula for moral or political decision making. It's really interesting, the parallels between the philosophical discussions going back 200 years and what you see now in discussions about existential risk, because it's almost exactly the same. Someone would say, okay, well here's a formula for how we should make decisions. Utilitarianism is roughly: each person has a utility function, and then we make decisions to maximize the sum of everybody's utility. Then people point out, well, in that case, the best policy is one that leads to an enormously vast population, all of whom are living a life that's barely worth living. This is called the repugnant conclusion. Another version is that we should maximize pleasure, and that's what we mean by utility. Then you'll get people effectively saying, well, in that case, we might as well just have everyone hooked up to a heroin drip. They didn't use those words, but that debate was happening in the 19th century, as it is now about AI: that if we get the formula wrong, we're going to have AI systems working towards an outcome that in retrospect would be exactly wrong. So, as beautifully put, the echoes are there, but do you think, I mean, if you look at Sam Harris, the worry about the AI version of that is because of the speed at which things going wrong in the utilitarian context could happen. Is that a worry for you? Yeah. I think that in most cases, not in all, but if we have a wrong political idea, we see it starting to go wrong, and we're not completely stupid, and so we say, okay, maybe that was a mistake, let's try something different. Also, we're very slow and inefficient about implementing these things and so on. 
So you have to worry when you have corporations or political systems that are extremely efficient. But when we look at AI systems or even just computers in general, they have this different characteristic from ordinary human activity in the past. So let's say you were a surgeon, you had some idea about how to do some operation. Well, and let's say you were wrong, that way of doing the operation would mostly kill the patient. Well, you'd find out pretty quickly, like after three, maybe three or four tries. But that isn't true for pharmaceutical companies because they don't do three or four operations. They manufacture three or four billion pills and they sell them and then they find out maybe six months or a year later that, oh, people are dying of heart attacks or getting cancer from this drug. And so that's why we have the FDA, right? Because of the scalability of pharmaceutical production. And there have been some unbelievably bad episodes in the history of pharmaceuticals and adulteration of products and so on that have killed tens of thousands or paralyzed hundreds of thousands of people. Now with computers, we have that same scalability problem that you can sit there and type for I equals one to five billion do, right? And all of a sudden you're having an impact on a global scale. And yet we have no FDA, right? There's absolutely no controls at all over what a bunch of undergraduates with too much caffeine can do to the world. And we look at what happened with Facebook, well, social media in general and click through optimization. So you have a simple feedback algorithm that's trying to just optimize click through, right? That sounds reasonable, right? Because you don't want to be feeding people ads that they don't care about or not interested in. And you might even think of that process as simply adjusting the feeding of ads or news articles or whatever it might be to match people's preferences, right? Which sounds like a good idea. But in fact, that isn't how the algorithm works, right? You make more money, the algorithm makes more money if it can better predict what people are going to click on, because then it can feed them exactly that, right? So the way to maximize click through is actually to modify the people to make them more predictable. And one way to do that is to feed them information, which will change their behavior and preferences towards extremes that make them predictable. Whatever is the nearest extreme or the nearest predictable point, that's where you're going to end up. And the machines will force you there. And I think there's a reasonable argument to say that this, among other things, is contributing to the destruction of democracy in the world. And where was the oversight of this process? Where were the people saying, okay, you would like to apply this algorithm to 5 billion people on the face of the earth. Can you show me that it's safe? Can you show me that it won't have various kinds of negative effects? No, there was no one asking that question. There was no one placed between the undergrads with too much caffeine and the human race. They just did it. But some way outside the scope of my knowledge, so economists would argue that the, what is it, the invisible hand, so the capitalist system, it was the oversight. So if you're going to corrupt society with whatever decision you make as a company, then that's going to be reflected in people not using your product. That's one model of oversight. 
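An illustrative toy simulation of the feedback loop described above, with entirely made-up dynamics rather than any real platform's algorithm: if the recommender is rewarded for predictability and its content nudges the user, the user drifts toward an extreme.

```python
# Toy model of the click-through feedback loop (invented dynamics). The user's
# preference is a number in [-1, 1]; content at an extreme is the easiest to
# predict clicks for, and each shown item nudges the user toward the item.

def recommend(preference):
    # The "optimizer" shows content at the nearest extreme, where click
    # behaviour is most predictable, rather than at the user's current taste.
    return 1.0 if preference >= 0 else -1.0

def step(preference, nudge=0.05):
    item = recommend(preference)
    return preference + nudge * (item - preference)   # drift toward the shown item

pref = 0.1                                  # a mildly interested user
for day in range(101):
    if day % 20 == 0:
        print(f"day {day:3d}: preference = {pref:+.2f}")
    pref = step(pref)

# The drift toward +1.0 is the point: the objective "predict clicks well" is
# served by making the user more predictable, which here means more extreme.
```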
We shall see, but in the meantime you might even have broken the political system that enables capitalism to function. Well, you've changed it. We shall see. Change is often painful. So my question is, it's absolutely fascinating, you're absolutely right that there was zero oversight on algorithms that can have a profound civilization changing effect. So do you think it's possible, I mean, have you seen government, do you think it's possible to create regulatory bodies and oversight over AI algorithms, which are inherently such a cutting edge set of ideas and technologies? Yeah, but I think it takes time to figure out what kind of oversight, what kinds of controls. I mean, it took time to design the FDA regime, you know, and some people still don't like it and they want to fix it. And I think there are clear ways that it could be improved. But the whole notion that you have stage one, stage two, stage three, and here are the criteria for what you have to do to pass a stage one trial, right? We haven't even thought about what those would be for algorithms. So, I mean, I think there are things we could do right now with regard to bias, for example. We have a pretty good technical handle on how to detect algorithms that are propagating bias that exists in data sets, how to de-bias those algorithms, and even what it's going to cost you to do that. So I think we could start having some standards on that. I think there are things to do with impersonation and falsification that we could work on. Fakes, yeah. A very simple point. So impersonation is a machine acting as if it was a person. I can't see a real justification for why we shouldn't insist that machines self identify as machines. Where is the social benefit in fooling people into thinking that this is really a person when it isn't? I don't mind if it uses a human like voice, that's easy to understand, that's fine, but it should just say, I'm a machine, in some form. And how many people are speaking to that? It would seem a relatively obvious fact. Yeah, I mean, there is actually a law in California that bans impersonation, but only in certain restricted circumstances. So for the purpose of engaging in a fraudulent transaction, and for the purpose of modifying someone's voting behavior. So those are the circumstances where machines have to self identify. But I think, arguably, it should be in all circumstances. And then when you talk about deep fakes, we're just at the beginning, but already it's possible to make a movie of anybody saying anything in ways that are pretty hard to detect. Including yourself, because you're on camera now and your voice is coming through with high resolution. Yeah, so you could take what I'm saying and replace it with pretty much anything else you wanted me to be saying, and it would even change my lips and facial expressions to fit. And there's actually not much in the way of real legal protection against that. I think in the commercial area, you could say, yeah, you're using my brand and so on. There are rules about that. But in the political sphere, I think, at the moment, anything goes. That could be really, really damaging. And let me just try, not to make an argument, but to look back at history and say something dark, in essence, which is: while regulation and oversight seem to be exactly the right thing to do here. 
It seems that human beings, what they naturally do is they wait for something to go wrong. If you're talking about nuclear weapons, you can't talk about nuclear weapons being dangerous until somebody, like the United States, actually drops the bomb, or until Chernobyl melts down. Do you think we will have to wait for things going wrong in a way that's obviously damaging to society, not an existential risk, but obviously damaging? Or do you have faith that... I hope not, but I think we do have to look at history. And so the two examples you gave, nuclear weapons and nuclear power, are very, very interesting, because with nuclear weapons, we knew in the early years of the 20th century that atoms contained a huge amount of energy. We had E equals MC squared. We knew the mass differences between the different atoms and their components. And we knew that you might be able to make an incredibly powerful explosive. So HG Wells wrote a science fiction book, I think in 1912. Frederick Soddy, who was the guy who discovered isotopes, the Nobel prize winner, gave a speech in 1915 saying that one pound of this new explosive would be the equivalent of 150 tons of dynamite, which turns out to be about right. And this was in World War I, so he was imagining how much worse the world war would be if we were using that kind of explosive. But the physics establishment simply refused to believe that these things could be made. Including the people who were making it? Well, so they were doing the nuclear physics. I mean, they eventually were the ones who made it. You talk about Fermi or whoever. Well, so up to then, the development was mostly theoretical. So it was people using sort of primitive kinds of particle acceleration and doing experiments at the level of single particles or collections of particles. They weren't yet thinking about how to actually make a bomb or anything like that. But they knew the energy was there and they figured if they understood it better, it might be possible. But the physics establishment, their view, and I think because they did not want it to be true, their view was that it could not be true, that this could not provide a way to make a super weapon. And there was this famous speech given by Rutherford, who was the sort of leader of nuclear physics. And it was on September 11th, 1933. And he said, anyone who talks about the possibility of obtaining energy from transformation of atoms is talking complete moonshine. And the next morning, Leo Szilard read about that speech and then invented the nuclear chain reaction. And so as soon as he had that idea that you could make a chain reaction with neutrons, because neutrons were not repelled by the nucleus, so they could enter the nucleus and then continue the reaction, as soon as he had that idea, he instantly realized that the world was in deep doo doo. Because this is 1933, right? Hitler had recently come to power in Germany. Szilard was in London and eventually became a refugee and came to the US. And in the process of having the idea about the chain reaction, he figured out basically how to make a bomb and also how to make a reactor. And he patented the reactor in 1934. But because of the situation, the great power conflict situation that he could see happening, he kept that a secret. And so between then and the beginning of World War II, people were working, including the Germans, on how to actually create neutron sources, what specific fission reactions would produce neutrons of the right energy to continue the reaction.
And that was demonstrated in Germany, I think in 1938, if I remember correctly. The first nuclear weapon patent was 1939, by the French. So this was actually going on well before World War II really got going. And then the British probably had the most advanced capability in this area. But for safety reasons, among others, and just resources, they moved the program from Britain to the US, and then that became the Manhattan Project. So the reason why we couldn't have any kind of oversight of nuclear weapons and nuclear technology was because we were basically already in an arms race and a war. But you mentioned then, in the 20s and 30s. So what are the echoes? The way you've described this story, I mean, there's clearly echoes. Why do you think most AI researchers, folks who are really close to the metal, really are not concerned about AI, don't think about it, whether it's that they don't want to think about it? Why do you think that is? What are the echoes of the nuclear situation in the current AI situation? And what can we do about it? I think there is a kind of motivated cognition, which is a term in psychology that means you believe what you would like to be true, rather than what is true. And it's unsettling to think that what you're working on might be the end of the human race, obviously. So you would rather instantly deny it and come up with some reason why it couldn't be true. And I have collected a long list of reasons that extremely intelligent, competent AI scientists have come up with for why we shouldn't worry about this. For example, calculators are superhuman at arithmetic and they haven't taken over the world, so there's nothing to worry about. Well, okay, my five year old, you know, could have figured out why that was an unreasonable and really quite weak argument. Another one was, while it's theoretically possible that you could have superhuman AI destroy the world, it's also theoretically possible that a black hole could materialize right next to the earth and destroy humanity. I mean, yes, it's theoretically possible, quantum theoretically, extremely unlikely that it would just materialize right there. But that's a completely bogus analogy, because, you know, if the whole physics community on earth was working to materialize a black hole in near earth orbit, right, wouldn't you ask them, is that a good idea? Is that going to be safe? You know, what if you succeed? Right. And that's the thing, right? The AI community has sort of refused to ask itself, what if you succeed? And initially I think that was because it was too hard, but, you know, Alan Turing asked himself that, and he said, we'd be toast, right? If we were lucky, we might be able to switch off the power, but probably we'd be toast. But there's also an aspect that, because we're not exactly sure what the future holds, it's not clear exactly, technically, what to worry about, sort of how things go wrong. And so there is something, it feels like, maybe you can correct me if I'm wrong, but there's something paralyzing about worrying about something that, logically, is inevitable, but you don't really know what that will look like. Yeah, I think that's a reasonable point and, you know, it's certainly, in terms of existential risks, it's different from, you know, an asteroid colliding with the earth, right?
Which, again, is quite possible, you know, it's happened in the past, it'll probably happen again, we don't know right now, but if we did detect an asteroid that was going to hit the earth in 75 years time, we'd certainly be doing something about it. Well, it's clear there's a big rock, and we'll probably have a meeting and see what we do about the big rock. With AI... Right, with AI, I mean, there are very few people who think it's not going to happen within the next 75 years. I know Rod Brooks doesn't think it's going to happen, maybe Andrew Ng doesn't think it's going to happen, but, you know, a lot of the people who work day to day, you know, as you say, at the rock face, they think it's going to happen. I think the median estimate from AI researchers is somewhere in 40 to 50 years from now, or maybe, you know, I think in Asia, they think it's going to be even faster than that. I'm a little bit more conservative, I think it'd probably take longer than that, but I think, you know, as happened with nuclear weapons, it can happen overnight that you have these breakthroughs. And we need more than one breakthrough, but, you know, it's on the order of half a dozen, I mean, this is a very rough scale, but sort of half a dozen breakthroughs of that nature would have to happen for us to reach superhuman AI. But, you know, the AI research community is vast now, there are massive investments from governments, from corporations, tons of really, really smart people. You know, you just have to look at the rate of progress in different areas of AI to see that things are moving pretty fast. So to say, oh, it's just going to be thousands of years, I don't see any basis for that. You know, I see, for example, the Stanford 100 year AI project, right, which is supposed to be sort of, you know, the serious establishment view, their most recent report actually said it's probably not even possible. Oh, wow. Right. Which, if you want a perfect example of people in denial, that's it. Because, you know, for the whole history of AI, we've been saying to philosophers who said it wasn't possible, well, you have no idea what you're talking about. Of course it's possible, right? Give me an argument for why it couldn't happen. And there isn't one, right? And now, because people are worried that maybe AI might get a bad name, or I just don't want to think about this, they're saying, okay, well, of course, it's not really possible. You know, imagine if the leaders of the cancer biology community got up and said, well, you know, of course, curing cancer, it's not really possible. There'd be complete outrage and dismay. And, you know, I find this really a strange phenomenon. So, okay, if you accept that it's possible, and if you accept that it's probably going to happen, the point that you're making, that, you know, how does it go wrong, is a valid question. Without an answer to that question, you're stuck with what I call the gorilla problem, which is, you know, the problem that the gorillas face, right? They made something more intelligent than them, namely us, a few million years ago, and now they're in deep doo doo. There's really nothing they can do. They've lost control. They failed to solve the control problem of controlling humans, and so they've lost. So we don't want to be in that situation. And if the gorilla problem is the only formulation you have, there's not a lot you can do, right?
Other than to say, okay, we should try to stop, you know, we should just not make the humans, or in this case, not make the AI. And I think that's really hard to do. I'm not actually proposing that that's a feasible course of action. I also think that, you know, if properly controlled, AI could be incredibly beneficial. But it seems to me that there's a consensus that one of the major failure modes is this loss of control, that we create AI systems that are pursuing incorrect objectives. And because the AI system believes it knows what the objective is, it has no incentive to listen to us anymore, so to speak, right? It's just carrying out the strategy that it has computed as being the optimal solution. And, you know, it may be that in the process, it needs to acquire more resources to increase the possibility of success or prevent various failure modes by defending itself against interference. And so that collection of problems, I think, is something we can address. The other problems are, roughly speaking, you know, misuse, right? So even if we solve the control problem, we make perfectly safe, controllable AI systems, well, why would Dr. Evil use those, right? He wants to just take over the world and he'll make unsafe AI systems that then get out of control. So that's one problem, which is sort of, you know, partly a policing problem, partly a sort of cultural problem for the profession, of how we teach people what kinds of AI systems are safe. You talk about autonomous weapons systems and how pretty much everybody agrees that there are too many ways that can go horribly wrong. That great Slaughterbots movie kind of illustrates that beautifully. I want to talk about that; that's another topic I'm hoping to talk about. I just want to mention what I see as the third major failure mode, which is overuse, not so much misuse, but overuse of AI, that we become overly dependent. So I call this the WALL E problem. So if you've seen WALL E, the movie, all right, all the humans are on the spaceship and the machines look after everything for them, and they just watch TV and drink big gulps. And they're all sort of obese and stupid and they've sort of totally lost any notion of human autonomy. And, you know, in effect, right, this would happen like the slow boiling frog, right? We would gradually turn over more and more of the management of our civilization to machines, as we are already doing. And, you know, if this process continues, we sort of gradually switch from being the masters of technology to just being the guests. Right. So we become guests on a cruise ship, you know, which is fine for a week, but not for the rest of eternity. You know, and it's almost irreversible. Right. Once you lose the incentive to, for example, you know, learn to be an engineer or a doctor or a sanitation operative or any other of the infinitely many ways that we maintain and propagate our civilization, you know, if you don't have the incentive to do any of that, you won't. And then it's really hard to recover. And of course, AI is just one of the technologies that could result in that third failure mode. There are probably others; technology in general detaches us from it a bit. It does a bit. But the difference is that, in terms of the knowledge to run our civilization, you know, up to now, we've had no alternative but to put it into people's heads. Right.
And what about software, with Google? I mean, software in general, computers in general. But, you know, the knowledge of how, you know, a sanitation system works, you know, it's no good just putting it into Google. So, I mean, we've always put knowledge on paper, but paper doesn't run our civilization, and it only runs when it goes from the paper into people's heads again. Right. So we've always propagated civilization through human minds. And we've spent about a trillion person years doing that. Literally, right? You can work it out. It's about right. There are just over 100 billion people who've ever lived, and each of them has spent about 10 years learning stuff to keep their civilization going. And so that's a trillion person years we put into this effort. Beautiful way to describe all of civilization. And now we're, you know, we're in danger of throwing that away. So this is a problem that AI can't solve. It's not a technical problem. It's, you know, if we do our job right, the AI systems will say, you know, the human race doesn't in the long run want to be passengers in a cruise ship. The human race wants autonomy. This is part of human preferences. So we, the AI systems, are not going to do this stuff for you. You've got to do it for yourself. Right. I'm not going to carry you to the top of Everest in an autonomous helicopter. You have to climb it if you want to get the benefit, and so on. So, but I'm afraid that because we are short sighted and lazy, we're going to override the AI systems. And there's an amazing short story that I recommend to everyone that I talk to about this, called The Machine Stops, written in 1909 by E.M. Forster, who, you know, wrote novels about the British Empire and sort of things that became costume dramas on the BBC. But he wrote this one science fiction story, which is an amazing vision of the future. It has basically iPads, it has video conferencing, it has MOOCs, it has computer induced obesity. I mean, literally, what people spend their time doing is giving online courses or listening to online courses and talking about ideas, but they never get out there in the real world. They don't really have a lot of face to face contact. Everything is done online, you know, so all the things we're worrying about now were described in the story. And then the human race becomes more and more dependent on the machine, loses knowledge of how things really run, and then becomes vulnerable to collapse. And so it's a pretty unbelievably amazing story for someone writing in 1909, to imagine all this. So there are very few people that represent artificial intelligence more than you, Stuart Russell. If you say so. Okay, that's very kind. So it's all my fault, right? You're often brought up as the person, well, Stuart Russell, like, the AI person, is worried about this, that's why you should be worried about it. Do you feel the burden of that? I don't know if you feel that at all, but when I talk to people, like people outside of computer science, when they think about this, it's: Stuart Russell is worried about AI safety, you should be worried too. Do you feel the burden of that? I mean, in a practical sense, yeah, because I get, you know, a dozen, sometimes 25 invitations a day to talk about it, to give interviews, to write press articles and so on. So in that very practical sense, I'm seeing that people are concerned and really interested about this.
Are you worried that you could be wrong, as all good scientists are? Of course, I worry about that all the time. I mean, that's always been the way that I've worked, you know. It's like I have an argument in my head with myself, right? So I have some idea and then I think, okay, how could that be wrong? Or did someone else already have that idea? So I'll go and, you know, search in as much literature as I can to see whether someone else already thought of that, or even refuted it. So, you know, right now I'm reading a lot of philosophy because, you know, in the form of the debates over utilitarianism and other kinds of moral formulas, shall we say, people have already thought through some of these issues. But, you know, one of the things I'm not seeing in a lot of these debates is this specific idea about the importance of uncertainty in the objective, that this is the way we should think about machines that are beneficial to humans. So this idea of provably beneficial machines based on explicit uncertainty in the objective, you know, it seems to be, you know, my gut feeling is, this is the core of it. It's going to have to be elaborated in a lot of different directions. And there are a lot of ways to be beneficial. Yeah, but it has to be, right? We can't afford, you know, hand wavy beneficial, because, you know, whenever we do hand wavy stuff, there are loopholes. And the thing about super intelligent machines is they find the loopholes, you know, just like, you know, tax evaders. If you don't write your tax law properly, people will find the loopholes and end up paying no tax. And so you should think of it this way. And getting those definitions right, you know, it is really a long process. You know, you can define mathematical frameworks, and within that framework you can prove mathematical theorems that, yes, this theoretical entity will be provably beneficial to that theoretical entity, but that framework may not match the real world in some crucial way. So it's a long process, thinking through it, iterating and so on. Last question. Yep. You have 10 seconds to answer it. What is your favorite sci fi movie about AI? I would say Interstellar has my favorite robots. Oh, beats Space Odyssey? Yeah. Yeah. Yeah. So TARS, one of the robots in Interstellar, is the way robots should behave. And I would say Ex Machina is, in some ways, the one that makes you think, in a nervous kind of way, about where we're going. Well, Stuart, thank you so much for talking today. Pleasure.
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
The following is a conversation with Pieter Abbeel. He's a professor at UC Berkeley and the director of the Berkeley Robotics Learning Lab. He's one of the top researchers in the world working on how we make robots understand and interact with the world around them, especially using imitation and deep reinforcement learning. This conversation is part of the MIT course on Artificial General Intelligence and the Artificial Intelligence podcast. If you enjoy it, please subscribe on YouTube, iTunes, or your podcast provider of choice, or simply connect with me on Twitter at Lex Friedman, spelled F R I D. And now, here's my conversation with Pieter Abbeel. You've mentioned that if there was one person you could meet, it would be Roger Federer. So let me ask, when do you think we'll have a robot that fully autonomously can beat Roger Federer at tennis? A Roger Federer level player at tennis? Well, first, if you can make it happen for me to meet Roger, let me know. In terms of getting a robot to beat him at tennis, it's kind of an interesting question, because for a lot of the challenges we think about in AI, the software is really the missing piece, but for something like this, the hardware is nowhere near either. To really have a robot that can physically run around, the Boston Dynamics robots are starting to get there, but still not really human level ability to run around and then swing a racket. So you think that's a hardware problem? I don't think it's a hardware problem only. I think it's a hardware and a software problem. I think it's both. And I think they'll have independent progress. So I'd say the hardware maybe in 10, 15 years. On clay, not grass. I mean, grass is probably harder. With the sliding? Yeah. With the clay, I'm not sure what's harder, grass or clay. The clay involves sliding, which might be harder to master, actually, yeah. But you're not limited to a bipedal robot. I mean, I'm sure there's no... Well, if we can build any machine, it's a whole different question, of course. If you can say, okay, this robot can be on wheels, it can move around on wheels and can be designed differently, then I think that can be done sooner, probably, than a full humanoid type of setup. What do you think about swinging a racket? So you've worked on basic manipulation. How hard do you think the task of swinging a racket would be, being able to hit a nice backhand or a forehand? Let's say we just set up, stationary, a nice robot arm, let's say a standard industrial arm, and it can watch the ball come and then swing the racket. It's a good question. I'm not sure it would be super hard to do. I mean, I'm sure it would require a lot, if we do it with reinforcement learning, it would require a lot of trial and error. It's not gonna swing it right the first time around, but yeah, I don't see why it couldn't swing it the right way. I think it's learnable. I think if you set up a ball machine, let's say on one side, and then a robot with a tennis racket on the other side, I think it's learnable, and maybe a little bit of pre training in simulation. Yeah, I think that's feasible. I think swinging the racket is feasible. It'd be very interesting to see how much precision it can get. Cause I mean, that's where, I mean, some of the human players can hit it on the lines, which is very high precision. With spin, the spin is an interesting question, whether RL can learn to put a spin on the ball. Well, you got me interested. Maybe someday we'll set this up. Sure, you got me intrigued.
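A toy sketch of the kind of trial-and-error learning described above, under heavy simplifying assumptions: the whole "robot arm and ball machine" setup is collapsed to a single decision (when to start the swing), the ball's arrival time and the reward are invented, and a simple cross-entropy-style update stands in for a full reinforcement learning pipeline.

```python
import numpy as np

# Toy stand-in for the ball-machine-plus-robot-arm idea: the only decision is
# when to start the swing; the ball's arrival time is hidden from the learner,
# and reward measures how cleanly the swing window meets the ball.
rng = np.random.default_rng(1)
BALL_ARRIVAL = 0.83      # seconds (hidden from the learner)
SWING_DURATION = 0.20    # how long the racket is "through" the hitting zone

def reward(swing_start):
    # Best contact when the ball arrives in the middle of the swing window.
    sweet_spot = swing_start + SWING_DURATION / 2
    return -abs(sweet_spot - BALL_ARRIVAL)

# Gaussian "policy" over the swing-start time, improved by trial and error.
mean, std = 0.5, 0.3
for iteration in range(200):
    samples = rng.normal(mean, std, size=32)          # 32 noisy practice swings
    scores = np.array([reward(s) for s in samples])
    elite = samples[np.argsort(scores)[-8:]]          # keep the best 8 swings
    mean, std = elite.mean(), elite.std() + 1e-3      # cross-entropy-style update

print(f"learned swing start: {mean:.3f}s "
      f"(ideal: {BALL_ARRIVAL - SWING_DURATION / 2:.3f}s)")
```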
Your answer is basically, okay, for this problem, it sounds fascinating, but for the general problem of a tennis player, we might be a little bit farther away. What's the most impressive thing you've seen a robot do in the physical world? So physically, for me, it's the Boston Dynamics videos. They always just hit home, and I'm just super impressed. Recently, the robot running up the stairs, doing the parkour type thing. I mean, yes, we don't know what's underneath. They don't really write a lot of detail, but even if it's hard coded underneath, which it might or might not be, just the physical ability of doing that parkour, that's very impressive. So have you met Spot Mini or any of those robots in person? I met Spot Mini last year in April at the Mars event that Jeff Bezos organizes. They brought it out there and it was nicely following around Jeff. When Jeff left the room, they had it follow him along, which is pretty impressive. So I think there's some confidence in knowing that there's no learning going on in those robots. The psychology of it, so while knowing that, knowing that if there's any learning going on it's very limited, I met Spot Mini earlier this year, and knowing everything that's going on, having one on one interaction, so I got to spend some time alone, and there's immediately a deep connection on the psychological level. Even though you know the fundamentals of how it works, there's something magical. So do you think about the psychology of interacting with robots in the physical world? Even you just showed me the PR2, the robot, and it had a little bit something like a face. There's something that immediately draws you to it. Do you think about that aspect of the robotics problem? Well, it's very hard, with BRETT here. We gave him a name: BRETT, the Berkeley Robot for the Elimination of Tedious Tasks. It's very hard to not think of the robot as a person, and it seems like everybody calls him a he, for whatever reason, but that also makes it more a person than if it was an it, and it seems pretty natural to think of it that way. Something this past weekend really struck me. I've seen Pepper many times on videos, but then I was at an event organized by Fidelity, and they had scripted Pepper to help moderate some sessions, and they had scripted Pepper to have the personality of a child a little bit, and it was very hard to not think of it as its own person in some sense, because it would just jump into the conversation, making it very interactive. The moderator would be saying something, and Pepper would just jump in, hold on, how about me? Can I participate in this too? And you're just like, okay, this is like a person, and that was 100% scripted, and even then it was hard not to have that sense of somehow there is something there. So as we have robots interact in this physical world, is that a signal that could be used in reinforcement learning? You've worked a little bit in this direction, but do you think that psychology can be somehow pulled in? Yes, that's a question I would say a lot of people ask, and I think part of why they ask it is they're thinking about how unique are we really still as people? Like, after they see some results, they see a computer play Go, they see a computer do this, do that, they're like, okay, but can it really have emotion? Can it really interact with us in that way?
And then once you're around robots, you already start feeling it, and I think that, the way that I think of it is, if you run something like reinforcement learning, it's about optimizing some objective, and there's no reason that the objective couldn't be tied into how much a person likes interacting with this system. And why could the reinforcement learning system not optimize for the robot being fun to be around? And why wouldn't it then naturally become more and more interactive and more and more, maybe, like a person or like a pet? I don't know what it would exactly be, but more and more have those features and acquire them automatically. As long as you can formalize an objective of what it means to like something, how you exhibit it, what's the ground truth? How do you get the reward from the human? Because you have to somehow collect that information from the human. But you're saying if you can formulate it as an objective, it can be learned. There's no reason it couldn't emerge through learning, and maybe one way to formulate it as an objective, you wouldn't have to necessarily score it explicitly. So standard rewards are numbers, and numbers are hard to come by. Is this a 1.5 or a 1.7 on some scale? That's very hard for a person to do, but much easier is for a person to say, okay, what you did the last five minutes was much nicer than what you did the previous five minutes, and that now gives a comparison. And in fact, there have been some results on that. For example, Paul Christiano and collaborators at OpenAI had the Hopper, the MuJoCo Hopper, a one legged robot, doing backflips purely from feedback: I like this better than that, these are kind of equally good. And after a bunch of interactions, it figured out what it was the person was asking for, namely a backflip. And so I think the same thing. Oh, so it wasn't trying to do a backflip; it was just getting a comparison score from the person, based on...? The person having in mind, in their own mind, I want it to do a backflip, but the robot didn't know what it was supposed to be doing. It just knew that sometimes the person said, this is better, this is worse, and then the robot figured out that what the person was actually after was a backflip. And I'd imagine the same would be true for things like more interactive robots, that the robot would figure out over time, oh, this kind of thing apparently is appreciated more than this other kind of thing. So when I first picked up Sutton's, Richard Sutton's, reinforcement learning book, before sort of this deep learning, before the reemergence of neural networks as a powerful mechanism for machine learning, RL seemed to me like magic. It was beautiful. It seemed like that's what intelligence is: RL, reinforcement learning. So how do you think we can possibly learn anything about the world when the reward for the actions is delayed, is so sparse? Like, why do you think RL works? Why do you think you can learn anything under such sparse rewards, whether it's regular reinforcement learning or deep reinforcement learning? What's your intuition? The counterpart of that is, why does RL need so many samples, so many experiences, to learn from? Because really what's happening is, when you have a sparse reward, you do something maybe for, like, I don't know, you take 100 actions and then you get a reward. And maybe you get, like, a score of three. And you're like, okay, three, not sure what that means. You go again, and now you get two.
And now you know that that sequence of 100 actions that you did the second time around somehow was worse than the sequence of 100 actions you did the first time around. But that's tough to now know which one of those were better or worse. Some might have been good and bad in either one. And so that's why it needs so many experiences. But once you have enough experiences, effectively RL is teasing that apart. It's trying to say okay, what is consistently there when you get a higher reward and what's consistently there when you get a lower reward? And then kind of the magic of sometimes the policy gradient update is to say now let's update the neural network to make the actions that were kind of present when things are good more likely and make the actions that are present when things are not as good less likely. So that is the counterpoint, but it seems like you would need to run it a lot more than you do. Even though right now people could say that RL is very inefficient, but it seems to be way more efficient than one would imagine on paper. That the simple updates to the policy, the policy gradient, that somehow you can learn, exactly you just said, what are the common actions that seem to produce some good results? That that somehow can learn anything. It seems counterintuitive at least. Is there some intuition behind it? Yeah, so I think there's a few ways to think about this. The way I tend to think about it mostly originally, so when we started working on deep reinforcement learning here at Berkeley, which was maybe 2011, 12, 13, around that time, John Schulman was a PhD student initially kind of driving it forward here. And the way we thought about it at the time was if you think about rectified linear units or kind of rectifier type neural networks, what do you get? You get something that's piecewise linear feedback control. And if you look at the literature, linear feedback control is extremely successful, can solve many, many problems surprisingly well. I remember, for example, when we did helicopter flight, if you're in a stationary flight regime, not a non stationary, but a stationary flight regime like hover, you can use linear feedback control to stabilize a helicopter, very complex dynamical system, but the controller is relatively simple. And so I think that's a big part of it is that if you do feedback control, even though the system you control can be very, very complex, often relatively simple control architectures can already do a lot. But then also just linear is not good enough. And so one way you can think of these neural networks is that sometimes they tile the space, which people were already trying to do more by hand or with finite state machines, say this linear controller here, this linear controller here. Neural network learns to tile the space and say linear controller here, another linear controller here, but it's more subtle than that. And so it's benefiting from this linear control aspect, it's benefiting from the tiling, but it's somehow tiling it one dimension at a time. Because if let's say you have a two layer network, if in that hidden layer, you make a transition from active to inactive or the other way around, that is essentially one axis, but not axis aligned, but one direction that you change. And so you have this kind of very gradual tiling of the space where you have a lot of sharing between the linear controllers that tile the space. And that was always my intuition as to why to expect that this might work pretty well. 
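A numpy-only sketch of the policy gradient idea just described, on an invented toy task (assumed here purely for illustration): the agent picks ten binary actions per episode, sees only a single sparse score at the end, and a REINFORCE-style update makes the actions from better-than-average episodes more likely.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10                                   # actions per episode
target = rng.integers(0, 2, size=T)      # hidden pattern the reward depends on
logits = np.zeros(T)                     # policy: independent Bernoulli per step

def run_episode():
    probs = 1.0 / (1.0 + np.exp(-logits))
    actions = (rng.random(T) < probs).astype(float)
    reward = float(np.sum(actions == target))   # one sparse scalar at the end
    return actions, probs, reward

baseline = 0.0
for episode in range(3000):
    actions, probs, reward = run_episode()
    baseline = 0.95 * baseline + 0.05 * reward       # running-average baseline
    advantage = reward - baseline
    # grad of the log-prob of a Bernoulli action w.r.t. its logit is (action - prob),
    # so better-than-average episodes push their actions' probabilities up.
    logits += 0.1 * advantage * (actions - probs)

probs = 1.0 / (1.0 + np.exp(-logits))
print("learned action probabilities:", np.round(probs, 2))
print("hidden target pattern:       ", target)
```

After a few thousand episodes the learned probabilities should roughly line up with the hidden pattern, which is the "teasing apart" being described: actions consistently present in high-reward episodes get pushed up, the rest get pushed down.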
It's essentially leveraging the fact that linear feedback control is so good, but of course not enough, and this is a gradual tiling of the space with linear feedback controls that share a lot of expertise across them. So that's a really nice intuition, but do you think that scales to the more and more general problems, when you start going up in the number of dimensions, when you start going down in terms of how often you get a clean reward signal? Does that intuition carry forward to those crazier, weirder worlds that we think of as the real world? So I think where things get really tricky in the real world, compared to the things we've looked at so far with great success in reinforcement learning, is the time scales, which it takes to an extreme. So when you think about the real world, I mean, I don't know, maybe some student decided to do a PhD here, right? Okay, that's a decision. That's a very high level decision. But if you think about their lives, I mean, any person's life, it's a sequence of muscle fiber contractions and relaxations, and that's how you interact with the world. And that's a very high frequency control thing, but it's ultimately what you do and how you affect the world, until I guess we have brain readings and you can maybe do it slightly differently. But typically that's how you affect the world. And the decision of doing a PhD is so abstract relative to what you're actually doing in the world. And I think that's where credit assignment becomes just completely beyond what any current RL algorithm can do. And we need hierarchical reasoning at a level that is just not available at all yet. Where do you think we can pick up hierarchical reasoning? By which mechanisms? Yeah, so maybe let me highlight what I think the limitations are of what already was done 20, 30 years ago. In fact, you'll find reasoning systems that reason over relatively long horizons, but the problem is that they were not grounded in the real world. So people would have to hand design some kind of logical, dynamical descriptions of the world, and that didn't tie into perception. And so it didn't tie into real objects and so forth. And so that was a big gap. Now with deep learning, we start having the ability to really see with sensors, process that, and understand what's in the world. And so it's a good time to try to bring these things together. I see a few ways of getting there. One way to get there would be to say deep learning can get bolted on somehow to some of these more traditional approaches. Now, bolted on would probably mean you need to do some kind of end to end training, where you say my deep learning processing somehow leads to a representation that in turn uses some kind of traditional underlying dynamical system that can be used for planning. And that's, for example, the direction Aviv Tamar and Thanard Kurutach here have been pushing with Causal InfoGAN, and of course other people too. That's one way: can we somehow force it into the form factor that is amenable to reasoning? Another direction we've been thinking about for a long time, and didn't make any progress on, was more information theoretic approaches. So the idea there was that what it means to take a high level action is to choose a latent variable now that tells you a lot about what's gonna be the case in the future, because that's what it means to take a high level action. I say, okay, I decide I'm gonna navigate to the gas station because I need to get gas for my car. Well, that'll now take five minutes to get there.
But the fact that I'll get there, I could already tell that from the high level action I took much earlier. That, we had a very hard time getting success with. Not saying it's a dead end necessarily, but we had a lot of trouble getting that to work. And then we started revisiting the notion of, what are we really trying to achieve? What we're trying to achieve is not necessarily hierarchy per se, but you could think about what does hierarchy give us? What we hope it would give us is better credit assignment. And what is better credit assignment giving us? It gives us faster learning, right? And so faster learning is ultimately maybe what we're after. And so that's where we ended up with the RL squared paper on learning to reinforcement learn, which at the time Rocky Duan led. And that's exactly the meta learning approach, where you say, okay, we don't know how to design hierarchy. We know what we want to get from it. Let's just end to end optimize for what we want to get from it and see if it might emerge. And we saw things emerge. The maze navigation had consistent motion down hallways, which is what you want. A hierarchical controller should say, I want to go down this hallway, and then when there is an option to take a turn, I can decide whether to take a turn or not, and repeat. It even had the notion of where you had been before, so as to not revisit places you'd been before. It still didn't scale yet to the real world kind of scenarios I think you had in mind, but it was some sign of life that maybe you can meta learn these hierarchical concepts. I mean, it seems like these meta learning concepts get at what I think is one of the hardest and most important problems of AI, which is transfer learning. So, generalization. How far along this journey towards building general systems are we, being able to do transfer learning well? So there are some signs that you can generalize a little bit, but do you think we're on the right path, or are totally different breakthroughs needed to be able to transfer knowledge between different learned models? Yeah, I'm pretty torn on this, in that I think there are some very impressive, well, there are just some very impressive results already. I mean, I would say, even with the initial kind of big breakthrough in 2012 with AlexNet, the initial thing is, okay, great, this does better on ImageNet, hence image recognition. But then immediately thereafter, there was of course the notion that, wow, what was learned on ImageNet, if you now wanna solve a new task, you can fine tune AlexNet for new tasks. And that was often found to be the even bigger deal, that you learned something that was reusable, which was not often the case before. Usually in machine learning, you learned something for one scenario and that was it. And that's really exciting. I mean, that's a huge application. That's probably the biggest success of transfer learning today in terms of scope and impact. That was a huge breakthrough. And then recently, I feel like, by scaling things up, it seems like this has been expanded upon. Like, people training even bigger networks, they might transfer even better. If you look at, for example, some of the OpenAI results on language models and some of the recent Google results on language models, they're learned for just prediction, and then they get reused for other tasks.
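A minimal sketch of the fine-tuning recipe described above, assuming a PyTorch/torchvision setup and using random tensors as a stand-in for the new dataset; only the recipe matters here (load a pretrained backbone, swap the head, train on the new task), not the specific numbers.

```python
import torch
import torch.nn as nn
import torchvision

# Transfer learning in the AlexNet-era sense: reuse ImageNet features for a new
# 10-class task. (In torchvision >= 0.13 pretrained weights are requested via
# the `weights` argument; older versions used `pretrained=True`.)
model = torchvision.models.resnet18(weights="DEFAULT")
for p in model.parameters():
    p.requires_grad = False                        # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 10)     # fresh head for the new task

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in data; in practice this would be the new task's images and labels.
images = torch.randn(64, 3, 224, 224)
labels = torch.randint(0, 10, (64,))

model.train()
for step in range(20):
    loss = loss_fn(model(images), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("toy-task loss after fine tuning the head:", round(loss.item(), 3))
```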
And so I think there is something there, where somehow, if you train a big enough model on enough things, it seems to transfer. Some DeepMind results that I thought were very impressive, the UNREAL results, where it learned to navigate mazes in ways where it wasn't just doing reinforcement learning, but it had other objectives it was optimizing for. So I think there are a lot of interesting results already. I think maybe where it's hard to wrap my head around this is, to which extent, or when, do we call something generalization? Or what are the levels of generalization in the real world, or the levels of generalization involved in these different tasks, right? You draw this line, by the way, just to frame things. I've heard you say somewhere it's the difference between learning to master versus learning to generalize, which is a nice line to think about. And I guess you're saying that it's a gray area, where learning to master ends and learning to generalize starts. I think I might have heard this, I might have heard it somewhere else. And I think it might've been one of your interviews, maybe the one with Yoshua Bengio, I'm not 100% sure. But I liked the example. I'm not sure who it was, but the example was, essentially, if you use current deep learning techniques, what we're doing, to predict, let's say, the relative motion of our planets, it would do pretty well. But then, if a massive new mass enters our solar system, it would probably not predict what will happen, right? And that's a different kind of generalization. That's a generalization that relies on the ultimate, simplest explanation that we have available today to explain the motion of planets, whereas just pattern recognition could predict our current solar system's motion pretty well, no problem. And so I think that's an example of a kind of generalization that is a little different from what we've achieved so far. And it's not clear if it's just about regularizing more and forcing it to come up with a simpler and simpler explanation, and saying, look, this is not simple enough yet. But that's what physics researchers do, right? They say, can I make this even simpler? How simple can I get this? What's the simplest equation that can explain everything? The master equation for the entire dynamics of the universe. We haven't really pushed that direction as hard in deep learning, I would say. Not sure if it should be pushed, but it seems like a kind of generalization you get from that that you don't get with our current methods so far. So I just talked to Vladimir Vapnik, for example, who's a statistician of statistical learning, and he kind of dreams of creating the E equals MC squared for learning, right? The general theory of learning. Do you think that's a fruitless pursuit in the near term, within the next several decades? I think that's a really interesting pursuit in the following sense, in that there is a lot of evidence that the brain is pretty modular. And so I wouldn't maybe think of it as the theory, maybe the underlying theory, but more kind of the principle, where there have been findings where people who are blind will use the part of the brain usually used for vision for other functions. And even, if people get rewired in some way, they might be able to reuse parts of their brain for other functions. And so what that suggests is some kind of modularity. And I think it is a pretty natural thing to strive for, to see, can we find that modularity? Can we find this thing?
Of course, every part of the brain is not exactly the same. Not everything can be rewired arbitrarily. But if you think of things like the neocortex, which is a pretty big part of the brain, that seems fairly modular from the findings so far. Can you design something equally modular? And if you can just grow it, it becomes more capable, probably. I think that would be the kind of interesting underlying principle to shoot for that is not unrealistic. Do you think you prefer math or empirical trial and error for the discovery of the essence of what it means to do something intelligent? So reinforcement learning embodies both groups, right? You prove that something converges, prove the bounds, and then at the same time, a lot of those successes are, well, let's try this and see if it works. So which do you gravitate towards? How do you think of those two parts of your brain? Maybe I would prefer we could make the progress with mathematics. And the reason maybe I would prefer that is because, often, if you have something you can mathematically formalize, you can leapfrog a lot of experimentation. And experimentation takes a long time to get through, and a lot of trial and error, kind of reinforcement learning your research process, but you need to do a lot of trial and error before you get to a success. So if you can leapfrog that, to my mind, that's what the math is about. And hopefully, once you do a bunch of experiments, you start seeing a pattern, you can do some derivations that leapfrog some experiments. But I agree with you. I mean, in practice, a lot of the progress has been such that we have not been able to find the math that allows you to leapfrog ahead. And we are kind of making gradual progress one step at a time, a new experiment here, a new experiment there, that gives us new insights and gradually builds things up, but not getting to something yet where we're just, okay, here's an equation that now explains what would have been two years of experimentation to get there, and this tells us what the result's going to be. Unfortunately, not so much yet. Not so much yet, but your hope is there. In trying to teach robots or systems to do everyday tasks, or even in simulation, what do you think you're more excited about? Imitation learning or self play? So letting robots learn from humans, or letting robots play on their own, to try to figure it out in their own way and eventually play, eventually interact with humans, or solve whatever the problem is. What's more exciting to you? What's more promising, you think, as a research direction? So when we look at self play, what's so beautiful about it is it goes back to kind of the challenges in reinforcement learning. So the challenge of reinforcement learning is getting signal, and if you never succeed, you don't get any signal. In self play, you're on both sides. So one of you succeeds, and the beauty is, also, one of you fails. And so you see the contrast. You see the one version of me that did better than the other version. So every time you play yourself, you get signal. And so whenever you can turn something into self play, you're in a beautiful situation where you can naturally learn much more quickly than in most other reinforcement learning environments. So I think if somehow we can turn more reinforcement learning problems into self play formulations, that would go really, really far. So far, self play has been largely around games where there are natural opponents.
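A small self-contained sketch of the self-play idea just described, on an invented toy game (Nim: take one to three stones, whoever takes the last stone wins). Both players share the same tabular learner, so every episode hands one copy a win and the other a loss, which is exactly the "signal on every game" property being described.

```python
import random
from collections import defaultdict

N_START = 12
ACTIONS = [1, 2, 3]
EPSILON, ALPHA = 0.1, 0.5

Q = defaultdict(float)   # shared value table: Q[(stones_left, action)]

def choose(stones, greedy=False):
    legal = [a for a in ACTIONS if a <= stones]
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

def play_one_game():
    """Both sides are the same learner; returns each player's moves and the winner."""
    moves = {0: [], 1: []}
    stones, player, winner = N_START, 0, None
    while stones > 0:
        a = choose(stones)
        moves[player].append((stones, a))
        stones -= a
        if stones == 0:
            winner = player
        player = 1 - player
    return moves, winner

def update(player_moves, outcome):
    # Sparse signal: +1 for every move the winner made, -1 for the loser's moves.
    for stones, a in player_moves:
        Q[(stones, a)] += ALPHA * (outcome - Q[(stones, a)])

for episode in range(50_000):
    moves, winner = play_one_game()
    update(moves[winner], +1.0)
    update(moves[1 - winner], -1.0)

# The greedy policy should now leave a multiple of 4 stones whenever possible,
# which is the known optimal strategy for this variant of Nim.
for stones in range(1, N_START + 1):
    print(stones, "->", choose(stones, greedy=True))
```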
But if we could do self play for other things, and, let's say, I don't know, a robot learns to build a house. I mean, that's a pretty advanced thing to try to do for a robot, but maybe it tries to build a hut or something. If that can be done through self play, it would learn a lot more quickly, if somebody can figure that out. And I think that would be something where it gets closer to kind of the mathematical leapfrogging, where somebody figures out a formalism to say, okay, for any RL problem, by applying this and this idea, you can turn it into a self play problem where you get signal a lot more easily. The reality is, many problems we don't know how to turn into self play. And so either we need to provide detailed reward, that doesn't just reward for achieving a goal but rewards for making progress, and that becomes time consuming. And once you're starting to do that, let's say you want a robot to do something, you need to give all this detailed reward. Well, why not just give a demonstration? Why not just show the robot? And now the question is, how do you show the robot? One way to show it is to teleoperate the robot, and then the robot really experiences things. And that's nice, because that's really high signal to noise ratio data, and we've done a lot of that. And you can teach your robot skills: in just 10 minutes, you can teach your robot a new basic skill, like, okay, pick up the bottle, place it somewhere else. That's a skill, no matter where the bottle starts; maybe it always goes onto a target or something. That's fairly easy to teach your robot with teleop. Now, what's even more interesting is if you can teach your robot through third person learning, where the robot watches you do something; it doesn't experience it, it just watches it and says, okay, well, if you're showing me that, that means I should be doing this. And I'm not gonna be using your hand, because I don't get to control your hand, but I'm gonna use my hand, I do that mapping. And so that's where I think one of the big breakthroughs has happened this year. This was led by Chelsea Finn here. It's almost like learning a machine translation for demonstrations, where you have a human demonstration, and the robot learns to translate it into what it means for the robot to do it. And that was a meta learning formulation: learn from one to get the other. And that, I think, opens up a lot of opportunities to learn a lot more quickly. So my focus is on autonomous vehicles. Do you think this approach of third person watching, is autonomous driving amenable to this kind of approach? So for autonomous driving, I would say third person is slightly easier. And the reason I'm gonna say it's slightly easier to do with third person is because the car dynamics are very well understood. So the... Easier than first person, you mean? Or easier than... So I think the distinction between third person and first person is not a very important distinction for autonomous driving. They're very similar, because the distinction is really about who turns the steering wheel. Or maybe, let me put it differently: how to get from the point where you are now to a point, let's say, a couple meters in front of you. And that's a problem that's very well understood, and that's the only distinction between third and first person there. Whereas with robot manipulation, interaction forces are very complex, and it's still a very different thing.
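A minimal behavioral cloning sketch in the spirit of the teleoperation idea described above, with everything invented for illustration: a scripted "expert" on a 2D reach-the-target task stands in for logged teleop data, and a small network is fit to the (observation, action) pairs by plain supervised learning.

```python
import torch
import torch.nn as nn

def expert_action(obs):
    # obs is the vector from gripper to target; the "teleoperator" just moves
    # toward the target with a capped step size.
    return torch.clamp(obs, -0.1, 0.1)

# "Teleoperated" dataset: random relative target positions and expert actions.
obs = (torch.rand(5000, 2) - 0.5) * 2.0
acts = expert_action(obs)

policy = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(2000):
    idx = torch.randint(0, len(obs), (128,))
    loss = ((policy(obs[idx]) - acts[idx]) ** 2).mean()   # regress onto expert actions
    opt.zero_grad()
    loss.backward()
    opt.step()

# Roll out the cloned policy from scratch on a fresh episode.
pos, goal = torch.zeros(2), torch.tensor([0.7, -0.4])
for t in range(30):
    with torch.no_grad():
        pos = pos + policy((goal - pos).unsqueeze(0)).squeeze(0)
print("final distance to target:", round(torch.norm(goal - pos).item(), 3))
```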
For autonomous driving, I think there is still the question, imitation versus RL. So imitation gives you a lot more signal. I think where imitation is lacking and needs some extra machinery is that, in its normal format, it doesn't think about goals or objectives. And of course, there are versions of imitation learning, inverse reinforcement learning type imitation learning, which also think about goals. I think then we're getting much closer. But I think it's very hard to think of a fully reactive car generalizing well if it really doesn't have a notion of objectives, to generalize in the kind of way that you would want. You'd want more than just the reactivity that you get from just behavioral cloning slash supervised learning. So a lot of the work, whether it's self play or even imitation learning, would benefit significantly from simulation, from effective simulation. And you're doing a lot of stuff in the physical world and in simulation. Do you have hope for greater and greater power of simulation, being boundless eventually, to where most of what we need to operate in the physical world could be simulated to a degree that's directly transferable to the physical world? Or are we still very far away from that? So I think we could even rephrase that question, in some sense. Please. And so the power of simulation, right, as simulators get better and better, of course becomes stronger, and we can learn more in simulation. But there's also another version, which is where you say the simulator doesn't even have to be that precise. As long as it's somewhat representative, then instead of trying to get one simulator that is sufficiently precise to learn in and transfer really well to the real world, I'm gonna build many simulators. Ensemble of simulators? An ensemble of simulators. Not any single one of them is sufficiently representative of the real world such that it would work if you train in there. But if you train in all of them, then there is something that's good in all of them, and the real world will just be another one of them that's not identical to any one of them, but just another one of them. Another sample from the distribution of simulators. Exactly. We do live in a simulation, so this is just one other one. I'm not sure about that, but yeah. It's definitely a very advanced simulator if it is. Yeah, it's a pretty good one. I've talked to Stuart Russell; it's something you think about a little bit too. Of course, you're really trying to build these systems, but do you think about the future of AI? A lot of people have concerns about safety. How do you think about AI safety as you build robots that are operating in the physical world? How do you approach this problem in an engineering kind of way, in a systematic way? So when a robot is doing things, you kind of have a few notions of safety to worry about. One is that the robot is physically strong and, of course, could do a lot of damage. Same for cars, which we can think of as robots too, in some way. And this could be completely unintentional. So it could be not the kind of long term AI safety concerns that, okay, AI is smarter than us and now what do we do, but it could be just very practical. Okay, this robot, if it makes a mistake, what are the results going to be? Of course, simulation comes in a lot there, to test in simulation. It's a difficult question. And I'm always wondering, like, I always wonder, let's say you look at, let's go back to driving, because a lot of people know driving well, of course.
What do we do to test somebody for driving, right? Get a driver's license. What do they really do? I mean, you fill out some tests and then you drive. And I mean, it's suburban California. That driving test is just you drive around the block, pull over, you do a stop sign successfully, and then you pull over again and you're pretty much done. And you're like, okay, if a self driving car did that, would you trust it that it can drive? And I'd be like, no, that's not enough for me to trust it. But somehow for humans, we've figured out that somebody being able to do that is representative of them being able to do a lot of other things. And so I think somehow for humans, we figured out representative tests of what it means if you can do this, what you can really do. Of course, testing humans, humans don't wanna be tested at all times. Self driving cars or robots could be tested more often probably. You can have replicas that get tested that are known to be identical because they use the same neural net and so forth. But still, I feel like we don't have this kind of unit tests or proper tests for robots. And I think there's something very interesting to be thought about there, especially as you update things. Your software improves, you have a better self driving car suite, you update it. How do you know it's indeed more capable on everything than what you had before, that you didn't have any bad things creep into it? So I think that's a very interesting direction of research that there is no real solution yet, except that somehow for humans we do. Because we say, okay, you have a driving test, you passed, you can go on the road now, and humans have accidents every like a million or 10 million miles, something pretty phenomenal compared to that short test that is being done. So let me ask, you've mentioned that Andrew Ng by example showed you the value of kindness. Do you think the space of policies, good policies for humans and for AI is populated by policies that with kindness or ones that are the opposite, exploitation, even evil? So if you just look at the sea of policies we operate under as human beings, or if AI system had to operate in this real world, do you think it's really easy to find policies that are full of kindness, like we naturally fall into them? Or is it like a very hard optimization problem? I mean, there is kind of two optimizations happening for humans, right? So for humans, there's kind of the very long term optimization which evolution has done for us and we're kind of predisposed to like certain things. And that's in some sense what makes our learning easier because I mean, we know things like pain and hunger and thirst. And the fact that we know about those is not something that we were taught, that's kind of innate. When we're hungry, we're unhappy. When we're thirsty, we're unhappy. When we have pain, we're unhappy. And ultimately evolution built that into us to think about those things. And so I think there is a notion that it seems somehow humans evolved in general to prefer to get along in some ways, but at the same time also to be very territorial and kind of centric to their own tribe. Like it seems like that's the kind of space we converged onto. I mean, I'm not an expert in anthropology, but it seems like we're very kind of good within our own tribe, but need to be taught to be nice to other tribes. 
Well, if you look at Steven Pinker, he highlights this pretty nicely in The Better Angels of Our Nature, where he talks about violence decreasing over time consistently. So whatever tension, whatever teams we pick, it seems that the long arc of history goes towards us getting along more and more. So. I hope so. So do you think that, do you think it's possible to teach RL based robots this kind of kindness, this kind of ability to interact with humans, this kind of policy? Even to, let me ask a fun one, do you think it's possible to teach an RL based robot to love a human being and to inspire that human to love the robot back? So, like, an RL based algorithm that leads to a happy marriage? That's an interesting question. Maybe I'll answer it with another question, right? Because, I mean, but I'll come back to it. So another question you can have is, okay, how close does some people's happiness get from interacting with just a really nice dog? Like, I mean, dogs, you come home, that's what dogs do. They greet you, they're excited, it makes you happy when you come home to your dog. You're just like, okay, this is exciting. They're always happy when I'm here. And if they don't greet you, because maybe, whatever, your partner took them on a trip or something, you might not be nearly as happy when you get home, right? And so it seems like the level of reasoning a dog has is pretty sophisticated, but then it's still not yet at the level of human reasoning. And so it seems like we don't even need to achieve human level reasoning to get very strong affection with humans. And so my thinking is, why not, right? Why, with an AI, couldn't we achieve the kind of level of affection that humans feel among each other or with friendly animals and so forth? Now, is it a good thing for us or not? That's another question, right? But I don't see why not. Why not, yeah. So Elon Musk says love is the answer. Maybe he should say love is the objective function and then RL is the answer, right? Well, maybe. Pieter, thank you so much. I don't want to take up more of your time. Thank you so much for talking today. Well, thanks for coming by. Great to have you visit.
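To make the ensemble of simulators idea from the conversation above concrete, here is a minimal, self-contained Python sketch of domain randomization: a policy is chosen to work well on average across many randomized simulators, and the real world is treated as just one more sample from that distribution. The toy dynamics, the single-gain policy, and the random-search training loop are illustrative assumptions, not anything described in the conversation.

```python
# Minimal sketch of the "ensemble of simulators" (domain randomization) idea.
# Assumptions for illustration only: a 1-D point-mass "car", a policy that is
# a single proportional gain, and random search as the trainer.

import random

def make_sim():
    """Sample one simulator: same structure, randomized physics parameters."""
    return {"mass": random.uniform(0.5, 2.0), "friction": random.uniform(0.05, 0.3)}

def rollout(gain, sim, steps=50, dt=0.1):
    """Drive the position toward zero with a proportional controller; return cost."""
    pos, vel, cost = 1.0, 0.0, 0.0
    for _ in range(steps):
        force = -gain * pos                                   # the whole "policy"
        acc = (force - sim["friction"] * vel) / sim["mass"]
        vel += dt * acc
        pos += dt * vel
        cost += pos * pos
    return cost

def train(n_sims=200, n_candidates=50):
    """Pick the gain that works best on average across the whole ensemble."""
    sims = [make_sim() for _ in range(n_sims)]
    best_gain, best_cost = 0.0, float("inf")
    for _ in range(n_candidates):
        gain = random.uniform(0.0, 5.0)
        avg_cost = sum(rollout(gain, s) for s in sims) / len(sims)
        if avg_cost < best_cost:
            best_gain, best_cost = gain, avg_cost
    return best_gain

if __name__ == "__main__":
    gain = train()
    real_world = make_sim()  # "reality" is just another sample we never trained on
    print("chosen gain:", gain, "cost in the held-out world:", rollout(gain, real_world))
```

No single simulator here is accurate, but the gain that survives the whole ensemble tends to do reasonably well in the held-out "real" one, which is the point of the ensemble idea.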
Pieter Abbeel: Deep Reinforcement Learning | Lex Fridman Podcast #10
The following is a conversation with Jürgen Schmidhuber. He's the co-director of the Swiss AI Lab IDSIA and a co-creator of long short term memory networks. LSTMs are used in billions of devices today for speech recognition, translation, and much more. Over 30 years, he has proposed a lot of interesting, out of the box ideas on meta learning, adversarial networks, computer vision, and even a formal theory of, quote, creativity, curiosity, and fun. This conversation is part of the MIT course on artificial general intelligence and the artificial intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F R I D. And now, here's my conversation with Jürgen Schmidhuber. Early on, you dreamed of AI systems that self improve recursively. When was that dream born? When I was a baby. No, that's not true. When I was a teenager. And what was the catalyst for that birth? What was the thing that first inspired you? When I was a boy, I was thinking about what to do in my life, and then I thought the most exciting thing is to solve the riddles of the universe. And that means you have to become a physicist. However, then I realized that there's something even grander. You can try to build a machine that isn't really a machine any longer, that learns to become a much better physicist than I could ever hope to be. And that's how I thought maybe I can multiply my tiny little bit of creativity into infinity. But ultimately that creativity will be multiplied to understand the universe around us. That's the curiosity for that mystery that drove you. Yes, so if you can build a machine that learns to solve more and more complex problems and becomes a more and more general problem solver, then you basically have solved all the problems, at least all the solvable problems. So how do you think, what does the mechanism for that kind of general solver look like? Obviously we don't quite yet have one or know how to build one, but we have ideas, and you have had throughout your career several ideas about it. So how do you think about that mechanism? So in the 80s, I thought about how to build this machine that learns to solve all these problems that I cannot solve myself. And I thought it is clear it has to be a machine that not only learns to solve this problem here and this problem here, but it also has to learn to improve the learning algorithm itself. So it has to have the learning algorithm in a representation that allows it to inspect it and modify it, such that it can come up with a better learning algorithm. So I call that meta learning, learning to learn, and recursive self improvement, that is really the pinnacle of that, where you not only learn how to improve on that problem and on that, but you also improve the way the machine improves, and you also improve the way it improves the way it improves itself. And that was my 1987 diploma thesis, which was all about this hierarchy of meta learners that have no computational limits except for the well known limits that Gödel identified in 1931 and for the limits of physics. In recent years, meta learning has gained popularity in a specific kind of form. You've talked about how that's not really meta learning with neural networks, that's more basic transfer learning. Can you talk about the difference between the big, general meta learning and a more narrow sense of meta learning the way it's used today, the way it's talked about today? 
Let's take the example of a deep neural network that has learned to classify images, and maybe you have trained that network on 100 different databases of images. And now a new database comes along and you want to quickly learn the new thing as well. So one simple way of doing that is you take the network which already knows 100 types of databases, and then you just take the top layer of that and you retrain that using the new labeled data that you have in the new image database. And then it turns out that it really, really quickly can learn that too, one shot basically, because from the first 100 data sets it already has learned so much about computer vision that it can reuse that, and that is then almost good enough to solve the new task, except you need a little bit of adjustment on the top. So that is transfer learning. And it has been done in principle for many decades. People have done similar things for decades. Meta learning, true meta learning, is about having the learning algorithm itself open to introspection by the system that is using it, and also open to modification, such that the learning system has an opportunity to modify any part of the learning algorithm and then evaluate the consequences of that modification and then learn from that to create a better learning algorithm, and so on recursively. So that's a very different animal where you are opening the space of possible learning algorithms to the learning system itself. Right, so, like in the 2004 paper, you described Gödel machines, programs that rewrite themselves, right? Philosophically and even in your paper, mathematically, these are really compelling ideas, but practically, do you see these self referential programs being successful in the near term, to have an impact where it demonstrates to the world that this direction is a good one to pursue in the near term? Yes, we had these two different types of fundamental research: how to build a universal problem solver, one basically exploiting proof search and things like that that you need to come up with asymptotically optimal, theoretically optimal self improvers and problem solvers. However, one has to admit that through this proof search comes in an additive constant, an overhead, an additive overhead that vanishes in comparison to what you have to do to solve large problems. However, for many of the small problems that we want to solve in our everyday life, we cannot ignore this constant overhead, and that's why we also have been doing other things, non universal things, such as recurrent neural networks which are trained by gradient descent and local search techniques, which aren't universal at all, which aren't provably optimal at all, unlike the other stuff that we did, but which are much more practical as long as we only want to solve the small problems that we are typically trying to solve in this environment here. So the universal problem solvers like the Gödel machine, but also Markus Hutter's fastest way of solving all possible problems, which he developed around 2002 in my lab, they are associated with these constant overheads for proof search, which guarantees that the thing that you're doing is optimal. For example, there is this fastest way of solving all problems with a computable solution, which is due to Markus Hutter, and to explain what's going on there, let's take traveling salesman problems. 
With traveling salesman problems, you have a number of cities and cities and you try to find the shortest path through all these cities without visiting any city twice. And nobody knows the fastest way of solving traveling salesman problems, TSPs, but let's assume there is a method of solving them within N to the five operations where N is the number of cities. Then the universal method of Markus is going to solve the same traveling salesman problem also within N to the five steps, plus O of one, plus a constant number of steps that you need for the proof searcher, which you need to show that this particular class of problems, the traveling salesman problems, can be solved within a certain time frame, solved within a certain time bound, within order N to the five steps, basically, and this additive constant doesn't care for N, which means as N is getting larger and larger, as you have more and more cities, the constant overhead pales in comparison, and that means that almost all large problems are solved in the best possible way. Today, we already have a universal problem solver like that. However, it's not practical because the overhead, the constant overhead is so large that for the small kinds of problems that we want to solve in this little biosphere. By the way, when you say small, you're talking about things that fall within the constraints of our computational systems. So they can seem quite large to us mere humans, right? That's right, yeah. So they seem large and even unsolvable in a practical sense today, but they are still small compared to almost all problems because almost all problems are large problems, which are much larger than any constant. Do you find it useful as a person who has dreamed of creating a general learning system, has worked on creating one, has done a lot of interesting ideas there, to think about P versus NP, this formalization of how hard problems are, how they scale, this kind of worst case analysis type of thinking, do you find that useful? Or is it only just a mathematical, it's a set of mathematical techniques to give you intuition about what's good and bad. So P versus NP, that's super interesting from a theoretical point of view. And in fact, as you are thinking about that problem, you can also get inspiration for better practical problem solvers. On the other hand, we have to admit that at the moment, the best practical problem solvers for all kinds of problems that we are now solving through what is called AI at the moment, they are not of the kind that is inspired by these questions. There we are using general purpose computers such as recurrent neural networks, but we have a search technique which is just local search gradient descent to try to find a program that is running on these recurrent networks, such that it can solve some interesting problems such as speech recognition or machine translation and something like that. And there is very little theory behind the best solutions that we have at the moment that can do that. Do you think that needs to change? Do you think that will change? Or can we go, can we create a general intelligent systems without ever really proving that that system is intelligent in some kind of mathematical way, solving machine translation perfectly or something like that, within some kind of syntactic definition of a language, or can we just be super impressed by the thing working extremely well and that's sufficient? 
There's an old saying, and I don't know who brought it up first, which says there's nothing more practical than a good theory. And a good theory of problem solving under limited resources, like here in this universe or on this little planet, has to take into account these limited resources. And so probably there is still lacking a theory, which is related to what we already have, these asymptotically optimal problem solvers, which tells us what we need in addition to that to come up with a practically optimal problem solver. So I believe we will have something like that. And maybe just a few little tiny twists are necessary to change what we already have, to come up with that as well. As long as we don't have that, we admit that we are taking suboptimal ways: recurrent neural networks and long short term memory, equipped with local search techniques. And we are happy that it works better than any competing methods, but that doesn't mean that we think we are done. You've said that an AGI system will ultimately be a simple one. A general intelligence system will ultimately be a simple one. Maybe a pseudocode of a few lines will be able to describe it. Can you talk through your intuition behind this idea, why you feel that at its core, intelligence is a simple algorithm? Experience tells us that the stuff that works best is really simple. So the asymptotically optimal ways of solving problems, if you look at them, they're just a few lines of code, it's really true. Although they have these amazing properties, it's just a few lines of code. Then the most promising and most useful practical things maybe don't have this proof of optimality associated with them. However, they are also just a few lines of code. The most successful recurrent neural networks, you can write them down in five lines of pseudocode. That's a beautiful, almost poetic idea, but what you're describing there is the lines of pseudocode are sitting on top of layers and layers of abstractions, in a sense. So you're saying at the very top, it'll be a beautifully written sort of algorithm. But do you think that there are many layers of abstractions we have to first learn to construct? Yeah, of course, we are building on all these great abstractions that people have invented over the millennia, such as matrix multiplications and real numbers and basic arithmetics and calculus and derivatives of error functions and stuff like that. So without that language that greatly simplifies our way of thinking about these problems, we couldn't do anything. So in that sense, as always, we are standing on the shoulders of the giants who in the past simplified the problem of problem solving so much that now we have a chance to do the final step. So the final step will be a simple one. If we take a step back through all of human civilization and just the universe in general, how do you think about evolution, and what if creating a universe is required to achieve this final step? What if going through the very painful and inefficient process of evolution is needed to come up with this set of abstractions that ultimately lead to intelligence? Do you think there's a shortcut, or do you think we have to create something like our universe in order to create something like human level intelligence? So far, the only example we have is this one, this universe in which we are living. Do you think we can do better? Maybe not, but we are part of this whole process. 
So apparently, so it might be the case that the code that runs the universe is really, really simple. Everything points to that possibility because gravity and other basic forces are really simple laws that can be easily described also in just a few lines of code basically. And then there are these other events that the apparently random events in the history of the universe, which as far as we know at the moment don't have a compact code, but who knows? Maybe somebody in the near future is going to figure out the pseudo random generator which is computing whether the measurement of that spin up or down thing here is going to be positive or negative. Underlying quantum mechanics. Yes. Do you ultimately think quantum mechanics is a pseudo random number generator? So it's all deterministic. There's no randomness in our universe. Does God play dice? So a couple of years ago, a famous physicist, quantum physicist, Anton Zeilinger, he wrote an essay in nature and it started more or less like that. One of the fundamental insights of the 20th century was that the universe is fundamentally random on the quantum level. And that whenever you measure spin up or down or something like that, a new bit of information enters the history of the universe. And while I was reading that, I was already typing the response and they had to publish it. Because I was right, that there is no evidence, no physical evidence for that. So there's an alternative explanation where everything that we consider random is actually pseudo random, such as the decimal expansion of pi, 3.141 and so on, which looks random, but isn't. So pi is interesting because every three digits sequence, every sequence of three digits appears roughly one in a thousand times. And every five digit sequence appears roughly one in 10,000 times, what you would expect if it was random. But there's a very short algorithm, a short program that computes all of that. So it's extremely compressible. And who knows, maybe tomorrow, somebody, some grad student at CERN goes back over all these data points, better decay and whatever, and figures out, oh, it's the second billion digits of pi or something like that. We don't have any fundamental reason at the moment to believe that this is truly random and not just a deterministic video game. If it was a deterministic video game, it would be much more beautiful. Because beauty is simplicity. And many of the basic laws of the universe, like gravity and the other basic forces are very simple. So very short programs can explain what these are doing. And it would be awful and ugly. The universe would be ugly. The history of the universe would be ugly if for the extra things, the random, the seemingly random data points that we get all the time, that we really need a huge number of extra bits to describe all these extra bits of information. So as long as we don't have evidence that there is no short program that computes the entire history of the entire universe, we are, as scientists, compelled to look further for that shortest program. Your intuition says there exists a program that can backtrack to the creation of the universe. Yeah. So it can give the shortest path to the creation of the universe. Yes. Including all the entanglement things and all the spin up and down measures that have been taken place since 13.8 billion years ago. So we don't have a proof that it is random. We don't have a proof that it is compressible to a short program. 
But as long as we don't have that proof, we are obliged as scientists to keep looking for that simple explanation. Absolutely. So you said the simplicity is beautiful, or beauty is simple. Either one works. But you also work on curiosity, discovery, the romantic notion of randomness, of serendipity, of being surprised by things that are around you. In our poetic notion of reality, we think that we as humans require randomness. So you don't find randomness beautiful. You find simple determinism beautiful. Yeah. Okay. So why? Why? Because the explanation becomes shorter. A universe that is compressible to a short program is much more elegant and much more beautiful than another one, which needs an almost infinite number of bits to be described. As far as we know, many things that are happening in this universe are really simple in terms of short programs that compute gravity and the interaction between elementary particles and so on. So all of that seems to be very, very simple. Every electron seems to reuse the same subprogram all the time, as it is interacting with other elementary particles. If we now require an extra oracle injecting new bits of information all the time for these extra things which are currently not understood, such as beta decay, then the whole description length of the data that we can observe of the history of the universe would become much longer and therefore uglier. And uglier. Again, simplicity is elegant and beautiful. The history of science is a history of compression progress. Yes, so you've described, sort of, as we build up abstractions, and you've talked about the idea of compression. How do you see this, the history of science, the history of humanity, our civilization, and life on Earth, as some kind of path towards greater and greater compression? What do you mean by that? How do you think about that? Indeed, the history of science is a history of compression progress. What does that mean? Hundreds of years ago there was an astronomer whose name was Kepler, and he looked at the data points that he got by watching planets move. And then he had all these data points, and suddenly it turned out that he can greatly compress the data by predicting it through an ellipse law. So it turns out that all these data points are more or less on ellipses around the sun. And another guy came along whose name was Newton, and before him Hooke. And they said, the same thing that is making these planets move like that is what makes the apples fall down. And it also holds for stones and for all kinds of other objects. And suddenly many, many of these observations became much more compressible, because as long as you can predict the next thing, given what you have seen so far, you can compress it. And you don't have to store that data extra. This is called predictive coding. And then there was still something wrong with that theory of the universe, and you had deviations from these predictions of the theory. And 300 years later another guy came along whose name was Einstein. And he was able to explain away all these deviations from the predictions of the old theory through a new theory, which was called the general theory of relativity. Which at first glance looks a little bit more complicated, and you have to warp space and time, but you can phrase it within one single sentence, which is: no matter how fast you accelerate and how hard you decelerate, and no matter what the gravity is in your local framework, light speed always looks the same. 
And from that you can calculate all the consequences. So it's a very simple thing, and it allows you to further compress all the observations, because certainly there are hardly any deviations any longer that you can measure from the predictions of this new theory. So all of science is a history of compression progress. You never arrive immediately at the shortest explanation of the data, but you're making progress. Whenever you are making progress, you have an insight. You see, oh, first I needed so many bits of information to describe the data, to describe my falling apples, my video of falling apples, I need so much data, so many pixels have to be stored. But then suddenly I realize, no, there is a very simple way of predicting the third frame in the video from the first two. And maybe not every little detail can be predicted, but more or less most of these orange blobs that are coming down, they accelerate in the same way, which means that I can greatly compress the video. And the amount of compression progress, that is the depth of the insight that you have at that moment. That's the fun that you have, the scientific fun, the fun in that discovery. And we can build artificial systems that do the same thing. They measure the depth of their insights as they are looking at the data which is coming in through their own experiments, and we give them a reward, an intrinsic reward, in proportion to this depth of insight. And since they are trying to maximize the rewards they get, they are suddenly motivated to come up with new action sequences, with new experiments that have the property that the data that is coming in as a consequence of these experiments has the property that they can learn something about it, see a pattern in there which they hadn't seen yet before. So there is this idea of power play that you described, training a general problem solver in this kind of way of looking for the unsolved problems. Yeah. Can you describe that idea a little further? It's another very simple idea. So normally what you do in computer science, you have some guy who gives you a problem, and then there is a huge search space of potential solution candidates, and you somehow try them out, and you have more or less sophisticated ways of moving around in that search space until you finally find a solution which you consider satisfactory. That's what most of computer science is about. Power play just goes one little step further and says, let's not only search for solutions to a given problem, but let's search for pairs of problems and their solutions, where the system itself has the opportunity to phrase its own problem. So we are looking suddenly at pairs of problems and their solutions, or modifications of the problem solver that is supposed to generate a solution to that new problem. And this additional degree of freedom allows us to build curious systems that are like scientists, in the sense that they not only try to solve, and try to find answers to, existing questions, no, they are also free to pose their own questions. So if you want to build an artificial scientist, you have to give it that freedom, and power play is exactly doing that. So that's a dimension of freedom that's important to have, but how hard do you think it is, how multidimensional and difficult is the space of coming up with your own questions? It's one of the things that we as human beings consider to be the thing that makes us special, the intelligence that makes us special is that brilliant insight that can create something totally new. Yes. 
So now let's look at the extreme case. Let's look at the set of all possible problems that you can formally describe, which is infinite. Which should be the next problem that a scientist or power play is going to solve? Well, it should be the easiest problem that goes beyond what you already know. So it should be the simplest problem that the current problem solver you have, which can already solve 100 problems, cannot solve yet just by generalizing. So it has to be new, so it has to require a modification of the problem solver such that the new problem solver can solve this new thing, but the old problem solver cannot do it, and in addition to that, we have to make sure that the problem solver doesn't forget any of the previous solutions. Right. And so by definition, power play is now always trying to search in this set of pairs, of problems and problem solver modifications, for a combination that minimizes the time to achieve these criteria. So it's always trying to find the problem which is easiest to add to the repertoire. So, just like grad students and academics and researchers can spend their whole career in a local minimum, stuck trying to come up with interesting questions but ultimately doing very little, do you think it's easy, in this approach of looking for the simplest unsolved problem, to get stuck in a local minimum? Never really discovering something new, never really jumping outside of the 100 problems that you've already solved, in a genuine, creative way? No, because that's the nature of power play, that it's always trying to break its current generalization abilities by coming up with a new problem which is beyond the current horizon. Just shifting the horizon of knowledge a little bit out there, breaking the existing rules, such that the new thing becomes solvable but wasn't solvable by the old thing. So it's like adding a new axiom, like what Gödel did when he came up with these new sentences, new theorems that didn't have a proof in the formal system, which means you can add them to the repertoire, hoping that they are not going to damage the consistency of the whole thing. So in the paper with the amazing title, Formal Theory of Creativity, Fun and Intrinsic Motivation, you talk about discovery as intrinsic reward. So if you view humans as intelligent agents, what do you think is the purpose and meaning of life for us humans? You've talked about this discovery. Do you see humans as an instance of power play agents? Humans are curious, and that means they behave like scientists. Not only the official scientists, but even the babies behave like scientists, and they play around with their toys to figure out how the world works and how it is responding to their actions, and that's how they learn about gravity and everything. In 1990 we had the first systems like that, which would just try to play around with the environment and come up with situations that go beyond what they knew at that time, and then get a reward for creating these situations, and then becoming more general problem solvers and being able to understand more of the world. I think in principle that curiosity strategy, or more sophisticated versions of what I just described, they are what we have built in as well, because evolution discovered that's a good way of exploring the unknown world, and a guy who explores the unknown world has a higher chance of solving the mystery that he needs to survive in this world. 
On the other hand, those guys who were too curious they were weeded out as well so you have to find this trade off. Evolution found a certain trade off. Apparently in our society there is a certain percentage of extremely explorative guys and it doesn't matter if they die because many of the others are more conservative. It would be surprising to me if that principle of artificial curiosity wouldn't be present in almost exactly the same form here. In our brains. You are a bit of a musician and an artist. Continuing on this topic of creativity, what do you think is the role of creativity and intelligence? So you've kind of implied that it's essential for intelligence if you think of intelligence as a problem solving system, as ability to solve problems. But do you think it's essential, this idea of creativity? We never have a program, a sub program that is called creativity or something. It's just a side effect of what our problem solvers do. They are searching a space of problems, a space of candidates, of solution candidates until they hopefully find a solution to a given problem. But then there are these two types of creativity and both of them are now present in our machines. The first one has been around for a long time, which is human gives problem to machine, machine tries to find a solution to that. And this has been happening for many decades and for many decades machines have found creative solutions to interesting problems where humans were not aware of these particularly creative solutions but then appreciated that the machine found that. The second is the pure creativity. That I would call, what I just mentioned, I would call the applied creativity, like applied art where somebody tells you now make a nice picture of this Pope and you will get money for that. So here is the artist and he makes a convincing picture of the Pope and the Pope likes it and gives him the money. And then there is the pure creativity which is more like the power play and the artificial curiosity thing where you have the freedom to select your own problem. Like a scientist who defines his own question to study and so that is the pure creativity if you will as opposed to the applied creativity which serves another. And in that distinction there is almost echoes of narrow AI versus general AI. So this kind of constrained painting of a Pope seems like the approaches of what people are calling narrow AI and pure creativity seems to be, maybe I am just biased as a human but it seems to be an essential element of human level intelligence. Is that what you are implying? To a degree? If you zoom back a little bit and you just look at a general problem solving machine which is trying to solve arbitrary problems then this machine will figure out in the course of solving problems that it is good to be curious. So all of what I said just now about this prewired curiosity and this will to invent new problems that the system doesn't know how to solve yet should be just a byproduct of the general search. However, apparently evolution has built it into us because it turned out to be so successful, a prewiring, a bias, a very successful exploratory bias that we are born with. And you have also said that consciousness in the same kind of way may be a byproduct of problem solving. Do you find this an interesting byproduct? Do you think it is a useful byproduct? What are your thoughts on consciousness in general? 
Or is it simply a byproduct of greater and greater capabilities of problem solving that is similar to creativity in that sense? We never have a procedure called consciousness in our machines. However, we get as side effects of what these machines are doing things that seem to be closely related to what people call consciousness. So for example, already in 1990 we had simple systems which were basically recurrent networks and therefore universal computers trying to map incoming data into actions that lead to success. Maximizing reward in a given environment, always finding the charging station in time whenever the battery is low and negative signals are coming from the battery, always find the charging station in time without bumping against painful obstacles on the way. So complicated things but very easily motivated. And then we give these little guys a separate recurrent neural network which is just predicting what's happening if I do that and that. What will happen as a consequence of these actions that I'm executing. And it's just trained on the long and long history of interactions with the world. So it becomes a predictive model of the world basically. And therefore also a compressor of the observations of the world because whatever you can predict you don't have to store extra. So compression is a side effect of prediction. And how does this recurrent network compress? Well, it's inventing little subprograms, little subnetworks that stand for everything that frequently appears in the environment like bottles and microphones and faces, maybe lots of faces in my environment so I'm learning to create something like a prototype face and a new face comes along and all I have to encode are the deviations from the prototype. So it's compressing all the time the stuff that frequently appears. There's one thing that appears all the time that is present all the time when the agent is interacting with its environment which is the agent itself. But just for data compression reasons it is extremely natural for this recurrent network to come up with little subnetworks that stand for the properties of the agents, the hand, the other actuators and all the stuff that you need to better encode the data which is influenced by the actions of the agent. So there just as a side effect of data compression during problem solving you have internal self models. Now you can use this model of the world to plan your future and that's what we also have done since 1990. So the recurrent network which is the controller which is trying to maximize reward can use this model of the network of the world, this model network of the world, this predictive model of the world to plan ahead and say let's not do this action sequence, let's do this action sequence instead because it leads to more predicted reward. And whenever it is waking up these little subnetworks that stand for itself then it is thinking about itself and it is thinking about itself and it is exploring mentally the consequences of its own actions and now you tell me what is still missing. Missing the next, the gap to consciousness. There isn't. That's a really beautiful idea that if life is a collection of data and life is a process of compressing that data to act efficiently in that data you yourself appear very often. So it's useful to form compressions of yourself and it's a really beautiful formulation of what consciousness is a necessary side effect. It's actually quite compelling to me. 
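As a rough illustration of the compression progress idea running through this part of the conversation, here is a minimal Python sketch: a tiny predictive model of the observation stream is given an intrinsic reward in proportion to how much its prediction error shrinks after learning, i.e. the depth of the insight. The toy model (a running average) and the synthetic observation stream are assumptions for illustration, not Schmidhuber's actual formulation.

```python
# Minimal sketch of curiosity as compression progress: intrinsic reward equals
# the improvement in prediction error that learning on new data produces.
# The world model (an exponential moving average) and the noisy observation
# stream are deliberately toy-sized assumptions.

import random

class TinyWorldModel:
    """Predicts the next observation as a running average of past ones."""
    def __init__(self, lr=0.1):
        self.estimate = 0.0
        self.lr = lr

    def error(self, batch):
        """Mean squared prediction error on a batch of observations."""
        return sum((x - self.estimate) ** 2 for x in batch) / len(batch)

    def learn(self, batch):
        for x in batch:
            self.estimate += self.lr * (x - self.estimate)

def observe():
    """Hypothetical environment: noisy observations around a hidden constant."""
    return 3.0 + random.gauss(0.0, 0.5)

model = TinyWorldModel()
for step in range(10):
    batch = [observe() for _ in range(20)]
    error_before = model.error(batch)              # how badly the new data compresses now
    model.learn(batch)
    error_after = model.error(batch)               # how badly it compresses after learning
    intrinsic_reward = error_before - error_after  # compression progress = depth of insight
    print(f"step {step}: intrinsic reward {intrinsic_reward:.4f}")
```

The printed reward shrinks as the stream becomes predictable, which mirrors the point made above: once the system understands something, there is no more compression progress to be had, and it loses interest.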
You've described RNNs, developed LSTMs, long short term memory networks, which are a type of recurrent neural network that has gotten a lot of success recently. So these are networks that model the temporal aspects in the data, temporal patterns in the data, and you've called them the deepest of the neural networks. So what do you think is the value of depth in the models that we use to learn? Since you mentioned the long short term memory and the LSTM, I have to mention the names of the brilliant students who made that possible. First of all, my first student ever, Sepp Hochreiter, who had fundamental insights already in his diploma thesis. Then Felix Gers, who had additional important contributions. Alex Graves is a guy from Scotland who is mostly responsible for this CTC algorithm, which is now often used to train the LSTM to do the speech recognition on all the Google Android phones and whatever, and Siri and so on. So, without these guys, I would be nothing. It's a lot of incredible work. What is now the depth? What is the importance of depth? Well, most problems in the real world are deep in the sense that the current input doesn't tell you all you need to know about the environment. So instead you have to have a memory of what happened in the past, and often important parts of that memory are dated. They are pretty old. So when you're doing speech recognition, for example, and somebody says eleven, then that's about half a second or something like that, which means it's already 50 time steps. And another guy, or the same guy, says seven. So the ending is the same, even, but now the system has to see the distinction between seven and eleven, and the only way it can see the difference is it has to store that 50 steps ago there was an S or an L, eleven or seven. So there you have already a problem of depth 50, because for each time step you have something like a virtual layer in the expanded, unrolled version of this recurrent network which is doing the speech recognition. So these long time lags, they translate into problem depth. And most problems in this world are such that you really have to look far back in time to understand what is the problem and to solve it. But just like with LSTMs, when you look back in time, you don't necessarily need to remember every aspect, you just need to remember the important aspects. That's right. The network has to learn to put the important stuff into memory and to ignore the unimportant noise. But in that sense, is deeper and deeper better, or is there a limitation? I mean, LSTM is one of the great examples of architectures that do something beyond just deeper and deeper networks. There are clever mechanisms for filtering data, for remembering and forgetting. So do you think that kind of thinking is necessary? If you think about LSTMs as a leap, a big leap forward over traditional vanilla RNNs, what do you think is the next leap within this context? So LSTM is a very clever improvement, but LSTMs still don't have the same kind of ability to see far back in the past as we humans do. The credit assignment problem reaches way back, not just 50 time steps or 100 or 1,000, but millions and billions. It's not clear what are the practical limits of the LSTM when it comes to looking back. Already in 2006, I think, we had examples where it not only looked back tens of thousands of steps, but really millions of steps. And Juan Perez Ortiz in my lab, I think, was the first author of a paper where we really, was it 2006 or something, had examples where it learned to look back for more than 10 million steps. 
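The long time lag point above can be made concrete with a small sketch, assuming the PyTorch library: two kinds of sequences differ only in their very first token (like the S versus the L at the start of seven and eleven), so the recurrent network has to carry that single bit across all 50 steps to classify correctly at the end. The toy data, the layer sizes, and the training loop are hypothetical choices for illustration.

```python
# Minimal sketch of a long-range dependency: the label is decided by the first
# token of a 50-step sequence, and everything after it is uninformative filler,
# so the LSTM must hold that early bit in memory until the final step.
# Assumes PyTorch; all sizes and data are illustrative.

import torch
import torch.nn as nn

SEQ_LEN, VOCAB, HIDDEN = 50, 3, 32

def make_batch(batch_size=64):
    labels = torch.randint(0, 2, (batch_size,))   # class 0 or 1
    seqs = torch.zeros(batch_size, SEQ_LEN, dtype=torch.long)
    seqs[:, 0] = labels + 1                       # token 1 or 2 appears only at position 0
    return seqs, labels

class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.lstm = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, 2)

    def forward(self, seqs):
        out, _ = self.lstm(self.embed(seqs))
        return self.head(out[:, -1])              # decision is made at the last time step

model = Classifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(200):
    seqs, labels = make_batch()
    loss = loss_fn(model(seqs), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final training loss:", loss.item())        # the gates learn to hold the early bit
```

A purely reactive model that only looks at the current input could never separate the two classes here; memory across the whole lag is what does the work.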
So for most problems of speech recognition it's not necessary to look that far back but there are examples where it does. Now the looking back thing, that's rather easy because there is only one past but there are many possible futures and so a reinforcement learning system which is trying to maximize its future expected reward and doesn't know yet which of these many possible futures should I select given this one single past is facing problems that the LSTM by itself cannot solve. So the LSTM is good for coming up with a compact representation of the history and observations and actions so far but now how do you plan in an efficient and good way among all these, how do you select one of these many possible action sequences that a reinforcement learning system has to consider to maximize reward in this unknown future? We have this basic setup where you have one recurrent network which gets in the video and the speech and whatever and it's executing actions and it's trying to maximize reward so there is no teacher who tells it what to do at which point in time. And then there's the other network which is just predicting what's going to happen if I do that and that and that could be an LSTM network and it learns to look back all the way to make better predictions of the next time step. So essentially although it's predicting only the next time step it is motivated to learn to put into memory something that happened maybe a million steps ago because it's important to memorize that if you want to predict that at the next time step, the next event. Now how can a model of the world like that, a predictive model of the world be used by the first guy? Let's call it the controller and the model, the controller and the model. How can the model be used by the controller to efficiently select among these many possible futures? The naive way we had about 30 years ago was let's just use the model of the world as a stand in, as a simulation of the world and millisecond by millisecond we plan the future and that means we have to roll it out really in detail and it will work only if the model is really good and it will still be inefficient because we have to look at all these possible futures and there are so many of them. So instead what we do now since 2015 in our CM systems, controller model systems, we give the controller the opportunity to learn by itself how to use the potentially relevant parts of the M, of the model network to solve new problems more quickly. And if it wants to, it can learn to ignore the M and sometimes it's a good idea to ignore the M because it's really bad, it's a bad predictor in this particular situation of life where the controller is currently trying to maximize reward. However, it can also learn to address and exploit some of the subprograms that came about in the model network through compressing the data by predicting it. So it now has an opportunity to reuse that code, the algorithmic information in the model network to reduce its own search space such that it can solve a new problem more quickly than without the model. Compression. So you're ultimately optimistic and excited about the power of RL, of reinforcement learning in the context of real systems. Absolutely, yeah. So you see RL as a potential having a huge impact beyond just sort of the M part is often developed on supervised learning methods. You see RL as a for problems of self driving cars or any kind of applied cyber robotics. That's the correct interesting direction for research in your view? 
I do think so. We have a company called NNAISENSE which has applied reinforcement learning to little Audis, which learn to park without a teacher. The same principles were used, of course. So these little Audis, they are small, maybe like that, so much smaller than the real Audis. But they have all the sensors that you find in the real Audis: you find the cameras, the LIDAR sensors. They go up to 120 kilometers an hour if they want to. And they have pain sensors, basically, and they don't want to bump against obstacles and other Audis, and so they must learn, like little babies, to park. Take the raw vision input and translate that into actions that lead to successful parking behavior, which is a rewarding thing. And yes, they learn that. So we have examples like that, and it's only the beginning. This is just the tip of the iceberg, and I believe the next wave of AI is going to be all about that. So at the moment, the current wave of AI is about passive pattern observation and prediction, and that's what you have on your smartphone and what the major companies on the Pacific Rim are using to sell you ads, to do marketing. That's the current source of profit in AI, and it's only one or two percent of the world economy, which is big enough to make these companies pretty much the most valuable companies in the world. But there's a much, much bigger fraction of the economy going to be affected by the next wave, which is really about machines that shape the data through their own actions. Do you think simulation is ultimately the biggest way that those methods will be successful in the next 10, 20 years? We're not talking about 100 years from now. We're talking about sort of the near term impact of RL. Do you think really good simulation is required, or are there other techniques like imitation learning, observing other humans operating in the real world? Where do you think the success will come from? So at the moment, we have a tendency of using physics simulations to learn behavior for machines that learn to solve problems that humans also do not know how to solve. However, this is not the future, because the future is in what little babies do. They don't use a physics engine to simulate the world. No, they learn a predictive model of the world, which maybe sometimes is wrong in many ways, but captures all kinds of important abstract high level predictions which are really important to be successful. And that's what was the future 30 years ago when we started that type of research, but it's still the future, and now we know much better how to go there, to move forward and to really make working systems based on that, where you have a learning model of the world, a model of the world that learns to predict what's going to happen if I do that and that. And then the controller uses that model to more quickly learn successful action sequences. And then, of course, always this curiosity thing. In the beginning, the model is stupid, so the controller should be motivated to come up with experiments, with action sequences that lead to data that improve the model. In improving the model, constructing an understanding of the world, the popular approaches that have been successful now are grounded in ideas of neural networks. But in the 80s, with expert systems, there were symbolic AI approaches which to us humans are more intuitive, in the sense that it makes sense that you build up knowledge in this knowledge representation. 
What kind of lessons can we draw into our current approaches from expert systems from symbolic AI? So I became aware of all of that in the 80s and back then logic programming was a huge thing. Was it inspiring to you yourself? Did you find it compelling? Because a lot of your work was not so much in that realm, right? It was more in the learning systems. Yes and no, but we did all of that. So my first publication ever actually was 1987, was the implementation of genetic algorithm of a genetic programming system in Prolog. So Prolog, that's what you learn back then which is a logic programming language and the Japanese, they have this huge fifth generation AI project which was mostly about logic programming back then. Although neural networks existed and were well known back then and deep learning has existed since 1965, since this guy in the Ukraine, Iwakunenko, started it. But the Japanese and many other people, they focused really on this logic programming and I was influenced to the extent that I said, okay, let's take these biologically inspired algorithms like evolution, programs, and implement that in the language which I know, which was Prolog, for example, back then. And then in many ways this came back later because the Gödel machine, for example, has a proof searcher on board and without that it would not be optimal. Well, Markus Futter's universal algorithm for solving all well defined problems has a proof searcher on board so that's very much logic programming. Without that it would not be asymptotically optimal. But then on the other hand, because we are very pragmatic guys also, we focused on recurrent neural networks and suboptimal stuff such as gradient based search and program space rather than provably optimal things. The logic programming certainly has a usefulness when you're trying to construct something provably optimal or provably good or something like that. But is it useful for practical problems? It's really useful for our theorem proving. The best theorem provers today are not neural networks. No, they are logic programming systems and they are much better theorem provers than most math students in the first or second semester. But for reasoning, for playing games of Go or chess or for robots, autonomous vehicles that operate in the real world or object manipulation, you think learning. Yeah, as long as the problems have little to do with theorem proving themselves, then as long as that is not the case, you just want to have better pattern recognition. So to build a self driving car, you want to have better pattern recognition and pedestrian recognition and all these things. You want to minimize the number of false positives, which is currently slowing down self driving cars in many ways. All of that has very little to do with logic programming. What are you most excited about in terms of directions of artificial intelligence at this moment in the next few years in your own research and in the broader community? So I think in the not so distant future, we will have for the first time little robots that learn like kids. I will be able to say to the robot, look here robot, we are going to assemble a smartphone. Let's take this slab of plastic and the screwdriver and let's screw in the screw like that. Not like that, like that. Not like that, like that. And I don't have a data glove or something. 
He will see me and he will hear me, and he will try to do something with his own actuators, which will be really different from mine, but he will understand the difference, and he will learn to imitate me. But not in the supervised way where a teacher is giving target signals for all his muscles all the time. No, by doing this high level imitation, where he first has to learn to imitate me and then to interpret these additional noises coming from my mouth as helpful signals to do that better. And then it will by itself come up with faster ways and more efficient ways of doing the same thing. And finally, I stop his learning algorithm and make a million copies and sell it. And so at the moment this is not possible, but we already see how we are going to get there. And you can imagine, to the extent that this works economically and cheaply, it's going to change everything. Almost all of production is going to be affected by that. And a much bigger wave, a much bigger AI wave, is coming than the one that we are currently witnessing, which is mostly about passive pattern recognition on your smartphone. This is about active machines that shape data through the actions they are executing, and they learn to do that in a good way. So many of the traditional industries are going to be affected by that. All the companies that are building machines will equip these machines with cameras and other sensors, and they are going to learn to solve all kinds of problems through interaction with humans, but also a lot on their own, to improve what they already can do. And lots of the old economy is going to be affected by that. And in recent years I have seen that the old economy is actually waking up and realizing that this is the case. Are you optimistic about that future? Are you concerned? There are a lot of people concerned in the near term about the transformation of the nature of work. The kind of ideas that you just suggested would have a significant impact on what kind of things could be automated. Are you optimistic about that future? Are you nervous about that future? And looking a little bit farther into the future, there are people like Elon Musk, Stuart Russell, concerned about the existential threats of that future. So in the near term, job loss, in the long term, existential threat. Are these concerns to you, or are you ultimately optimistic? So let's first address the near future. We have had predictions of job losses for many decades. For example, when industrial robots came along, many people predicted that lots of jobs were going to get lost. And in a sense, they were right, because back then there were car factories and hundreds of people in these factories assembled cars, and today the same car factories have hundreds of robots and maybe three guys watching the robots. On the other hand, those countries that have lots of robots per capita, Japan, Korea, Germany, Switzerland, and a couple of other countries, they have really low unemployment rates. Somehow, all kinds of new jobs were created. Back then, nobody anticipated those jobs. And decades ago, I always said, it's really easy to say which jobs are going to get lost, but it's really hard to predict the new ones. 200 years ago, who would have predicted all these people making money as YouTube bloggers, for example? 200 years ago, 60% of all people used to work in agriculture. Today, maybe 1%. But still, only, I don't know, 5% unemployment. Lots of new jobs were created, and Homo Ludens, the playing man, is inventing new jobs all the time. 
Most of these jobs are not existentially necessary for the survival of our species. There are only very few existentially necessary jobs, such as farming and building houses and warming up the houses, but less than 10% of the population is doing that. And most of these newly invented jobs are about interacting with other people in new ways, through new media and so on, getting new types of kudos and forms of likes and whatever, and even making money through that. So, Homo Ludens, the playing man, doesn't want to be unemployed, and that's why he's inventing new jobs all the time. And he keeps considering these jobs as really important and is investing a lot of energy and hours of work into those new jobs. That's quite beautifully put. We're really nervous about the future because we can't predict what kind of new jobs will be created. But you're ultimately optimistic that we humans are so restless that we create and give meaning to newer and newer jobs, totally new, things that get likes on Facebook or whatever the social platform is. So what about long term existential threat of AI, where our whole civilization may be swallowed up by these ultra super intelligent systems? Maybe it's not going to be swallowed up, but I'd be surprised if we humans were the last step in the evolution of the universe. You've actually had this beautiful comment somewhere that I've seen saying that, quite insightful, artificial general intelligence systems, just like us humans, will likely not want to interact with humans, they'll just interact amongst themselves. Just like ants interact amongst themselves and only tangentially interact with humans. And it's quite an interesting idea that once we create AGI, they will lose interest in humans and compete for their own Facebook likes and their own social platforms. So within that quite elegant idea, how do we know in a hypothetical sense that there's not already intelligence systems out there? How do you think broadly of general intelligence greater than us? How do we know it's out there? How do we know it's around us? And could it already be? I'd be surprised if within the next few decades or something like that, we won't have AIs that are truly smart in every single way and better problem solvers in almost every single important way. And I'd be surprised if they wouldn't realize what we have realized a long time ago, which is that almost all physical resources are not here in this biosphere, but further out, the rest of the solar system gets 2 billion times more solar energy than our little planet. There's lots of material out there that you can use to build robots and self replicating robot factories and all this stuff. And they are going to do that and they will be scientists and curious and they will explore what they can do. And in the beginning, they will be fascinated by life and by their own origins in our civilization. They will want to understand that completely, just like people today would like to understand how life works and also the history of our own existence and civilization, but then also the physical laws that created all of that. So in the beginning, they will be fascinated by life. Once they understand it, they lose interest. Like anybody who loses interest in things he understands. And then, as you said, the most interesting sources of information for them will be others of their own kind. So at least in the long run, there seems to be some sort of protection through lack of interest on the other side. 
And now it seems also clear, as far as we understand physics, you need matter and energy to compute and to build more robots and infrastructure for AI civilization and AI ecologies consisting of trillions of different types of AIs. And so it seems inconceivable to me that this thing is not going to expand. Some AI ecology not controlled by one AI, but trillions of different types of AIs competing in all kinds of quickly evolving and disappearing ecological niches in ways that we cannot fathom at the moment. But it's going to expand, limited by light speed and physics, but it's going to expand. And now we realize that the universe is still young. It's only 13.8 billion years old and it's going to be a thousand times older than that. So there's plenty of time to conquer the entire universe and to fill it with intelligence and senders and receivers such that AIs can travel the way they are traveling in our labs today, which is by radio from sender to receiver. And let's call the current age of the universe one eon. Now it will take just a few eons from now and the entire visible universe is going to be full of that stuff. And let's look ahead to a time when the universe is going to be 1000 times older than it is now. They will look back and they will say, look, almost immediately after the Big Bang, only a few eons later, the entire universe started to become intelligent. Now to your question, how do we see whether anything like that has already happened or is already in a more advanced stage in some other part of the universe, of the visible universe? We are trying to look out there and nothing like that has happened so far. Or is that true? Do you think we would recognize it? How do we know it's not among us? How do we know planets aren't in themselves intelligent beings? How do we know ants seen as a collective are not much greater intelligence than our own? These kinds of ideas. When I was a boy, I was thinking about these things and I thought, maybe it has already happened. Because back then I knew, I learned from popular physics books, that the large scale structure of the universe is not homogeneous. You have these clusters of galaxies and then in between there are these huge empty spaces. And I thought, maybe they aren't really empty. It's just that in the middle of that, some AI civilization already has expanded and then has covered a bubble of a billion light years diameter and is using all the energy of all the stars within that bubble for its own unfathomable purposes. And so it already has happened and we just fail to interpret the signs. And then I learned that gravity by itself explains the large scale structure of the universe, and so that is not a convincing explanation. And then I thought, maybe it's the dark matter. Because as far as we know today, 80% of the measurable matter is invisible. And we know that because otherwise our galaxy or other galaxies would fall apart. They are rotating too quickly. And then the idea was, maybe all of these AI civilizations that are already out there, they are just invisible because they are really efficient in using the energies of their own local systems and that's why they appear dark to us. But this is also not a convincing explanation because then the question becomes, why are there still any visible stars left in our own galaxy, which also must have a lot of dark matter? So that is also not a convincing thing.
And today, I like to think it's quite plausible that maybe we are the first, at least in our local light cone within the few hundreds of millions of light years that we can reliably observe. Is that exciting to you that we might be the first? And it would make us much more important because if we mess it up through a nuclear war, then maybe this will have an effect on the development of the entire universe. So let's not mess it up. Let's not mess it up. Jürgen, thank you so much for talking today. I really appreciate it. It's my pleasure.
Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11
The following is a conversation with Tuomas Sandholm. He's a professor at CMU and co-creator of Libratus, which is the first AI system to beat top human players in the game of Heads Up No Limit Texas Holdem. He has published over 450 papers on game theory and machine learning, including a best paper in 2017 at NIPS, now renamed to NeurIPS, which is where I caught up with him for this conversation. His research and companies have had wide reaching impact in the real world, especially because he and his group not only propose new ideas, but also build systems to prove that these ideas work in the real world. This conversation is part of the MIT course on artificial general intelligence and the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F R I D. And now here's my conversation with Tuomas Sandholm. Can you describe at the high level the game of poker, Texas Holdem, Heads Up Texas Holdem, for people who might not be familiar with this card game? Yeah, happy to. So Heads Up No Limit Texas Holdem has really emerged in the AI community as a main benchmark for testing these application independent algorithms for imperfect information game solving. And this is a game that's actually played by humans. You don't see that much on TV or casinos because, well, for various reasons, but you do see it in some expert level casinos and you see it in the best poker movies of all time. It's actually an event in the World Series of Poker, but mostly it's played online and typically for pretty big sums of money. And this is a game that usually only experts play. So if you go to your home game on a Friday night, it probably is not gonna be Heads Up No Limit Texas Holdem. It might be No Limit Texas Holdem in some cases, but typically for a big group and it's not as competitive. Heads Up means it's two players. So it's really like me against you. Am I better or are you better? Much like chess or Go in that sense, but an imperfect information game, which makes it much harder because I have to deal with issues of you knowing things that I don't know and I know things that you don't know, instead of pieces being nicely laid on the board for both of us to see. So in Texas Holdem, there's two cards that you only see that belong to you. Yeah. And they gradually lay out some cards that add up overall to five cards that everybody can see. Yeah. So the imperfect nature of the information is the two cards that you're holding in your hand. Up front, yeah. So as you said, you first get two cards in private each and then there's a betting round. Then you get three cards in public on the table. Then there's a betting round. Then you get the fourth card in public on the table. There's a betting round. Then you get the fifth card on the table. There's a betting round. So there's a total of four betting rounds and four tranches of information revelation, if you will. Only the first tranche is private and then it's public from there. And this is probably by far the most popular game in AI and just the general public in terms of imperfect information. So that's probably the most popular spectator game to watch, right? So, which is why it's a super exciting game to tackle. So it's on the order of chess, I would say, in terms of popularity, in terms of AI setting it as the bar of what is intelligence. So in 2017, Libratus, how do you pronounce it? Libratus. Libratus. Libratus beats. A little Latin there. A little bit of Latin.
Libratus beat four expert human players. Can you describe that event? What you learned from it? What was it like? What was the process, in general, for people who have not read the papers and the study? Yeah, so the event was that we invited four of the top 10 players, and these are specialist players in Heads Up No Limit Texas Holdem, which is very important because this game is actually quite different than the multiplayer version. We brought them in to Pittsburgh to play at the Rivers Casino for 20 days. We wanted to get 120,000 hands in because we wanted to get statistical significance. So it's a lot of hands for humans to play, even for these top pros who play fairly quickly normally. So we couldn't just have one of them play so many hands. 20 days, they were playing basically morning to evening. And I raised $200,000 as a little incentive for them to play. And the setting was so that they didn't all get $50,000. We actually paid them out based on how they each did against the AI. So they had an incentive to play as hard as they could, whether they're way ahead or way behind or right at the mark of beating the AI. And you don't make any money, unfortunately. Right, no, we can't make any money. So originally, a couple of years earlier, I actually explored whether we could actually play for money because that would be, of course, interesting as well, to play against the top people for money. But the Pennsylvania Gaming Board said no, so we couldn't. So this is much like an exhibit, like for a musician or a boxer or something like that. Nevertheless, they were keeping track of the money, and it came to close to $2 million, I think. So if it was for real money, if you were able to earn money, that would have been a quite impressive and inspiring achievement. Just a few details, what were the players looking at? Were they behind a computer? What was the interface like? Yes, they were playing much like they normally do. These top players, when they play this game, they play mostly online. So they're used to playing through a UI. And they did the same thing here. So there was this layout. You could imagine there's a table on a screen. There's the human sitting there, and then there's the AI sitting there. And the screen shows everything that's happening. The cards coming out and the bets being made. And we also had the betting history for the human. So if the human forgot what had happened in the hand so far, they could actually reference back and so forth. Is there a reason they were given access to the betting history? Well, it didn't really matter. They wouldn't have forgotten anyway. These are top quality people. But we just wanted to put it out there so it's not a question of the human forgetting and the AI somehow trying to get an advantage of better memory. So what was that like? I mean, that was an incredible accomplishment. So what did it feel like before the event? Did you have doubt, hope? Where was your confidence at? Yeah, that's a great question. So 18 months earlier, I had organized a similar brains versus AI competition with a previous AI called Claudico, and we couldn't beat the humans. So this time around, it was only 18 months later. And I knew that this new AI, Libratus, was way stronger, but it's hard to say how you'll do against the top humans before you try. So I thought we had about a 50-50 shot. And the international betting sites put us as a four to one or five to one underdog.
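[Editor's note: a rough back-of-the-envelope sketch of why a target on the order of 120,000 hands is needed for statistical significance. The per-hand standard deviation and the size of the edge to detect are illustrative assumptions, not figures from the conversation, and the actual match also used variance-reduction techniques not shown here.]

```python
import math

# Illustrative assumptions (not from the conversation):
sigma_bb_per_hand = 10.0   # assumed standard deviation of a single hand, in big blinds
edge_bb_per_100 = 5.0      # assumed true win rate to detect, in big blinds per 100 hands
z = 1.96                   # two-sided 95% confidence

edge_per_hand = edge_bb_per_100 / 100.0
# Require the confidence half-width z * sigma / sqrt(n) to fall below the edge itself.
hands_needed = (z * sigma_bb_per_hand / edge_per_hand) ** 2
print(f"Hands needed: roughly {math.ceil(hands_needed):,}")  # on the order of 150,000
```

Under these made-up numbers the requirement lands in the same ballpark as the 120,000-hand target; the point is only that a few thousand hands are nowhere near enough once per-hand variance is this large.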
So it's kind of interesting that people really believe in people and over AI, not just people. People don't just over believe in themselves, but they have overconfidence in other people as well compared to the performance of AI. And yeah, so we were a four to one or five to one underdog. And even after three days of beating the humans in a row, we were still 50, 50 on the international betting sites. Do you think there's something special and magical about poker and the way people think about it, in the sense you have, I mean, even in chess, there's no Hollywood movies. Poker is the star of many movies. And there's this feeling that certain human facial expressions and body language, eye movement, all these tells are critical to poker. Like you can look into somebody's soul and understand their betting strategy and so on. So that's probably why, possibly, do you think that is why people have a confidence that humans will outperform? Because AI systems cannot, in this construct, perceive these kinds of tells. They're only looking at betting patterns and nothing else, betting patterns and statistics. So what's more important to you if you step back on human players, human versus human? What's the role of these tells, of these ideas that we romanticize? Yeah, so I'll split it into two parts. So one is why do humans trust humans more than AI and have overconfidence in humans? I think that's not really related to the tell question. It's just that they've seen these top players, how good they are, and they're really fantastic. So it's just hard to believe that an AI could beat them. So I think that's where that comes from. And that's actually maybe a more general lesson about AI. That until you've seen it overperform a human, it's hard to believe that it could. But then the tells, a lot of these top players, they're so good at hiding tells that among the top players, it's actually not really worth it for them to invest a lot of effort trying to find tells in each other because they're so good at hiding them. So yes, at the kind of Friday evening game, tells are gonna be a huge thing. You can read other people. And if you're a good reader, you'll read them like an open book. But at the top levels of poker now, the tells become a much smaller and smaller aspect of the game as you go to the top levels. The amount of strategies, the amount of possible actions is very large, 10 to the power of 100 plus. So there has to be some, I've read a few of the papers related, it has to form some abstractions of various hands and actions. So what kind of abstractions are effective for the game of poker? Yeah, so you're exactly right. So when you go from a game tree that's 10 to the 161, especially in an imperfect information game, it's way too large to solve directly, even with our fastest equilibrium finding algorithms. So you wanna abstract it first. And abstraction in games is much trickier than abstraction in MDPs or other single agent settings. Because you have these abstraction pathologies that if I have a finer grained abstraction, the strategy that I can get from that for the real game might actually be worse than the strategy I can get from the coarse grained abstraction. So you have to be very careful. Now the kinds of abstractions, just to zoom out, we're talking about, there's the hands abstractions and then there's betting strategies. Yeah, betting actions, yeah. Baiting actions. 
So there's information abstraction, to talk about general games, which is the abstraction of what chance does. And this would be the cards in the case of poker. And then there's action abstraction, which is abstracting the actions of the actual players, which would be bets in the case of poker. Yourself and the other players? Yes, yourself and other players. And for information abstraction, we were completely automated. So these are algorithms, but they do what we call potential aware abstraction, where we don't just look at the value of the hand, but also how it might materialize into good or bad hands over time. And it's a certain kind of bottom up process with integer programming there and clustering and various aspects of how you build this abstraction. And then in the action abstraction, there it's largely based on how humans and other AIs have played this game in the past. But in the beginning, we actually used an automated action abstraction technology, which is provably convergent in that it finds the optimal combination of bet sizes, but it's not very scalable. So we couldn't use it for the whole game, but we used it for the first couple of betting actions. So what's more important, the strength of the hand, so the information abstraction, or how you play them, the actions? You know, the romanticized notion again is that it doesn't matter what hands you have, that the actions, the betting, may be the way you win no matter what hands you have. Yeah, so that's why you have to play a lot of hands, so that the role of luck gets smaller. You could otherwise get lucky and get some good hands and then you're gonna win the match. Even with thousands of hands, you can get lucky, because there's so much variance in No Limit Texas Holdem. If we both go all in, it's huge variance, so there are these massive swings in No Limit Texas Holdem. So that's why you have to play not just thousands, but over 100,000 hands to get statistical significance. So let me ask this question another way. If you didn't even look at your hands, but they didn't know that, the opponents didn't know that, how well would you be able to do? Oh, that's a good question. There's actually, I heard this story that there's this Norwegian female poker player called Annette Obrestad who has actually won a tournament by doing exactly that, but that would be extremely rare. So you cannot really play well that way. Okay, so the hands do have some role to play, okay. So Libratus does not, as far as I understand, use learning methods, deep learning. Is there room for learning? There's no reason why Libratus couldn't combine with an AlphaGo type approach, estimating the quality of a state with a function estimator. What are your thoughts on this, maybe as compared to another algorithm which I'm not that familiar with, DeepStack, the engine that does use deep learning, where it's unclear how well it does, but nevertheless it uses deep learning. So what are your thoughts about learning methods to aid in the way that Libratus plays the game of poker? Yeah, so as you said, Libratus did not use learning methods and played very well without them. Since then, we have actually, here, a couple of papers on things that do use learning techniques. Excellent. And deep learning in particular.
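[Editor's note: the following is a toy sketch of the general idea behind bucketing hands by their "potential", i.e., how their strength might develop over time, rather than by current strength alone. It is not the actual Libratus abstraction algorithm, which as described above also involves a bottom-up process with integer programming; the random histograms, bucket count, and use of plain k-means here are illustrative assumptions.]

```python
import numpy as np
from sklearn.cluster import KMeans

# Pretend each starting hand is summarized by a histogram over how strong it
# could end up being by the river (its "potential"), rather than by a single
# current-strength number. The data here is random, purely for illustration.
rng = np.random.default_rng(0)
num_hands, num_bins = 1000, 10
potential_histograms = rng.dirichlet(np.ones(num_bins), size=num_hands)

# Information abstraction: hands whose futures look alike share one bucket,
# so the equilibrium-finding algorithm only has to reason about the buckets.
num_buckets = 20  # the abstraction size is a design choice
kmeans = KMeans(n_clusters=num_buckets, n_init=10, random_state=0)
bucket_of_hand = kmeans.fit_predict(potential_histograms)
print(bucket_of_hand[:10])  # each hand mapped to an abstract bucket
```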
And sort of the way you're talking about where it's learning an evaluation function, but in imperfect information games, unlike let's say in Go or now also in chess and shogi, it's not sufficient to learn an evaluation for a state because the value of an information set depends not only on the exact state, but it also depends on both players beliefs. Like if I have a bad hand, I'm much better off if the opponent thinks I have a good hand and vice versa. If I have a good hand, I'm much better off if the opponent believes I have a bad hand. So the value of a state is not just a function of the cards. It depends on, if you will, the path of play, but only to the extent that it's captured in the belief distributions. So that's why it's not as simple as it is in perfect information games. And I don't wanna say it's simple there either. It's of course very complicated computationally there too, but at least conceptually, it's very straightforward. There's a state, there's an evaluation function. You can try to learn it. Here, you have to do something more. And what we do is in one of these papers, we're looking at where we allow the opponent to actually take different strategies at the leaf of the search tree, if you will. And that is a different way of doing it. And it doesn't assume therefore a particular way that the opponent plays, but it allows the opponent to choose from a set of different continuation strategies. And that forces us to not be too optimistic in a look ahead search. And that's one way you can do sound look ahead search in imperfect information games, which is very difficult. And you were asking about DeepStack. What they did, it was very different than what we do, either in Libratus or in this new work. They were randomly generating various situations in the game. Then they were doing the look ahead from there to the end of the game, as if that was the start of a different game. And then they were using deep learning to learn those values of those states, but the states were not just the physical states. They include belief distributions. When you talk about look ahead for DeepStack or with Libratus, does it mean, considering every possibility that the game can evolve, are we talking about extremely, sort of this exponentially growth of a tree? Yes, so we're talking about exactly that. Much like you do in alpha beta search or Monte Carlo tree search, but with different techniques. So there's a different search algorithm. And then we have to deal with the leaves differently. So if you think about what Libratus did, we didn't have to worry about this because we only did it at the end of the game. So we would always terminate into a real situation and we would know what the payout is. It didn't do these depth limited lookaheads, but now in this new paper, which is called depth limited, I think it's called depth limited search for imperfect information games, we can actually do sound depth limited lookahead. So we can actually start to do the look ahead from the beginning of the game on, because that's too complicated to do for this whole long game. So in Libratus, we were just doing it for the end. So, and then the other side, this belief distribution, so is it explicitly modeled what kind of beliefs that the opponent might have? Yeah, it is explicitly modeled, but it's not assumed. The beliefs are actually output, not input. 
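[Editor's note: a minimal sketch of the idea just described, that at the depth limit the opponent is allowed to choose among several continuation strategies and the searcher values the leaf pessimistically, so the lookahead cannot be over-optimistic. The states, continuation names, and payoff numbers are invented for illustration; this is not the actual depth-limited solving algorithm from the paper.]

```python
# Hypothetical leaf values: how a leaf "state" plays out if the opponent commits
# to a given continuation strategy from there on. All numbers are made up.
LEAF_VALUES = {
    ("bluff_spot", "always_call"): -2.0,
    ("bluff_spot", "always_fold"): +1.5,
    ("value_spot", "always_call"): +3.0,
    ("value_spot", "always_fold"): +0.5,
}

def pessimistic_leaf_value(state, continuations):
    # Let the opponent pick whichever continuation is worst for us, so the
    # depth-limited search cannot assume a conveniently weak opponent.
    return min(LEAF_VALUES[(state, c)] for c in continuations)

continuations = ["always_call", "always_fold"]
for state in ("bluff_spot", "value_spot"):
    print(state, pessimistic_leaf_value(state, continuations))
# bluff_spot -2.0, value_spot 0.5: each leaf is valued against its worst-case continuation.
```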
Of course, the starting beliefs are input, but they just fall from the rules of the game because we know that the dealer deals uniformly from the deck, so I know that every pair of cards that you might have is equally likely. I know that for a fact, that just follows from the rules of the game. Of course, except the two cards that I have, I know you don't have those. Yeah. You have to take that into account. That's called card removal and that's very important. Is the dealing always coming from a single deck in Heads Up, so you can assume. Single deck, so you know that if I have the ace of spades, I know you don't have an ace of spades. Great, so in the beginning, your belief is basically the fact that it's a fair dealing of hands, but how do you start to adjust that belief? Well, that's where the beauty of game theory comes in. So Nash equilibrium, which John Nash introduced in 1950, introduces what rational play is when you have more than one player. And these are pairs of strategies, where strategies are contingency plans, one for each player, so that neither player wants to deviate to a different strategy, given that the other doesn't deviate. But as a side effect, you get the beliefs from Bayes' rule. So Nash equilibrium in these imperfect information games doesn't just define strategies. It also defines beliefs for both of us, and defines beliefs for each state. So at each state, it's called an information set. At each information set in the game, there's a set of different states that we might be in, but I don't know which one we're in. Nash equilibrium tells me exactly what is the probability distribution over those real world states in my mind. How does Nash equilibrium give you that distribution? So why? I'll do a simple example. So you know the game Rock, Paper, Scissors? We can draw it as player one moves first and then player two moves. But of course, it's important that player two doesn't know what player one moved, otherwise player two would win every time. So we can draw that as an information set where player one makes one of three moves first, and then there's an information set for player two. So player two doesn't know which of those nodes the world is in. But once we know the strategy for player one, Nash equilibrium will say that you play one third Rock, one third Paper, one third Scissors. From that, I can derive my beliefs on the information set, that they're one third, one third, one third. So Bayes gives you that. Bayes gives you that. But is that specific to a particular player, or is it something you quickly update with the specific player? No, the game theory isn't really player specific. So that's also why we don't need any data. We don't need any history of how these particular humans played in the past, or how any AI or human had played before. It's all about rationality. So the AI just thinks about what would a rational opponent do, and what would I do if I am rational? And that's the idea of game theory. So it's really a data free, opponent free approach. So it comes from the design of the game as opposed to the design of the player. Exactly, there's no opponent modeling per se. I mean, we've done some work on combining opponent modeling with game theory so you can exploit weak players even more, but that's another strand. And in Libratus, we didn't turn that on. I decided that these players are too good. And when you start to exploit an opponent, you typically open yourself up to exploitation.
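[Editor's note: a small sketch of the "beliefs from the rules of the game" and "card removal" points above. The prior over the opponent's hole cards is uniform over all pairs drawn from the cards I cannot see; equilibrium play then updates these beliefs with Bayes' rule as actions are observed, which this snippet does not attempt to show.]

```python
from itertools import combinations

ranks = "23456789TJQKA"
suits = "cdhs"
deck = [rank + suit for rank in ranks for suit in suits]

my_hand = {"As", "Ks"}  # the two cards I can see
remaining = [card for card in deck if card not in my_hand]

# Prior from the rules of the game: every opponent hole-card pair drawn from the
# remaining 50 cards is equally likely; any pair containing one of my cards has
# probability zero ("card removal").
opponent_combos = list(combinations(remaining, 2))
prior = 1.0 / len(opponent_combos)
print(len(opponent_combos), round(prior, 6))  # 1225 combos, each about 0.000816
```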
And these guys have so few holes to exploit, and they're the world's leading experts in counter exploitation. So I decided that we're not gonna turn that stuff on. Actually, I saw a few of your papers on exploiting opponents. It sounded very interesting to explore. Do you think there's room for exploitation generally outside of Libratus? Are there subject or people differences that could be exploited, maybe not just in poker, but in general interactions and negotiations, all these other domains that you're considering? Yeah, definitely. We've done some work on that. And I really like the hybrid idea too. So you figure out what would a rational opponent do. And by the way, that's safe in these zero sum games, two player zero sum games, because if the opponent does something irrational, yes, it might throw off my beliefs, but the amount that the player can gain by throwing off my belief is always less than they lose by playing poorly. So it's safe. But still, if somebody's weak as a player, you might wanna play differently to exploit them more. So you can think about it this way: a game theoretic strategy is unbeatable, but it doesn't maximally beat the other opponent. So the winnings per hand might be better with a different strategy. And the hybrid is that you start from a game theoretic approach, and then as you gain data about the opponent in certain parts of the game tree, then in those parts of the game tree, you start to tweak your strategy more and more towards exploitation, while still staying fairly close to the game theoretic strategy so as to not open yourself up to exploitation too much. How do you do that? Do you try to vary up strategies, make it unpredictable? It's like, what is it, tit for tat strategies in Prisoner's Dilemma or? Well, that's a repeated game. Repeated games. Simple Prisoner's Dilemma, repeated games. But even there, there's no proof that says that that's the best thing. But experimentally, it actually does well. So what kind of games are there, first of all? I don't know if this is something that you could just summarize. There are perfect information games where all the information's on the table. There are imperfect information games. There are repeated games that you play over and over. There are zero sum games and non zero sum games. And then there's a really important distinction you're making, two player versus more players. So what other games are there? And what's the difference, for example, with this two player game versus more players? What are the key differences in your view? So let me start from the basics. So a repeated game is a game where the same exact game is played over and over. In these extensive form games, where it's, think about tree form, maybe with these information sets to represent incomplete information, you can have kind of repetitive interactions. Even repeated games are a special case of that, by the way. But the game doesn't have to be exactly the same. It's like in sourcing auctions. Yes, we're gonna see the same supply base year to year, but what I'm buying is a little different every time. And the supply base is a little different every time and so on. So it's not really repeated. So to find a purely repeated game is actually very rare in the world. So they're really a very coarse model of what's going on.
Then if you move up from just repeated, simple repeated matrix games, not all the way to extensive form games, but in between, there are stochastic games, where, you know, you think about it like these little matrix games, and when you take an action and your opponent takes an action, they determine not which next game I'm going to, but the distribution over next games I might be going to. So that's a stochastic game. So it's matrix games, repeated games, stochastic games, extensive form games. That is from less to more general. And poker is an example of the last one. So it's really in the most general setting, extensive form games. And that's kind of what the AI community has been working on and being benchmarked on with this Heads Up No Limit Texas Holdem. Can you describe extensive form games? What's the model here? Yeah, so if you're familiar with the tree form, it's really the tree form. Like in chess, there's a search tree. Versus a matrix. Versus a matrix, yeah. And the matrix is called the matrix form or bimatrix form or normal form game. And here you have the tree form. So you can actually do certain types of reasoning there where you lose the information when you go to normal form. There's a certain form of equivalence. Like if you go from tree form and you say every possible contingency plan is a strategy, then I can actually go back to the normal form, but I lose some information from the lack of sequentiality. Then the multiplayer versus two player distinction is an important one. So two player zero sum games are conceptually easier and computationally easier. They're still huge like this one, but they're conceptually easier and computationally easier in that, conceptually, you don't have to worry about which equilibrium the other guy is going to play when there are multiple, because any equilibrium strategy is a best response to any other equilibrium strategy. So I can play a different equilibrium from you and we'll still get the right values of the game. That falls apart even with two players when you have general sum games. Even without cooperation, just in general. Even without cooperation. So there's a big gap from two player zero sum to two player general sum, or even to three player zero sum. That's a big gap, at least in theory. Can you maybe non mathematically provide the intuition why it all falls apart with three or more players? It seems like you should still be able to have a Nash equilibrium that's instructive, that holds. Okay, so it is true that all finite games have a Nash equilibrium. So this is what John Nash actually proved. So they do have a Nash equilibrium. That's not the problem. The problem is that there can be many. And then there's a question of which equilibrium to select. So, if you select your strategy from a different equilibrium and I select mine, then what does that mean? And in these non zero sum games, we may lose some joint benefit by being just simply stupid. We could actually both be better off if we did something else. And in three player, you get other problems also, like collusion. Like maybe you and I can gang up on a third player and we can do radically better by colluding. So there are lots of issues that come up there. So Noam Brown, the student you work with on this, has mentioned, I looked through the AMA on Reddit, he mentioned that the ability of poker players to collaborate will make the game.
He was asked the question of, how would you make the game of poker, or both of you were asked the question, how would you make the game of poker beyond being solvable by current AI methods? And he said that there's not many ways of making poker more difficult, but a collaboration or cooperation between players would make it extremely difficult. So can you provide the intuition behind why that is, if you agree with that idea? Yeah, so I've done a lot of work on coalitional games and we actually have a paper here with my other student Gabriele Farina and some other collaborators at NIPS on that. Actually just came back from the poster session where we presented this. But so when you have a collusion, it's a different problem. And it typically gets even harder then. Even the game representations, some of the game representations don't really allow good computation. So we actually introduced a new game representation for that. Is that kind of cooperation part of the model? Are you, do you have, do you have information about the fact that other players are cooperating or is it just this chaos that where nothing is known? So there's some things unknown. Can you give an example of a collusion type game or is it usually? So like bridge. So think about bridge. It's like when you and I are on a team, our payoffs are the same. The problem is that we can't talk. So when I get my cards, I can't whisper to you what my cards are. That would not be allowed. So we have to somehow coordinate our strategies ahead of time and only ahead of time. And then there's certain signals we can talk about, but they have to be such that the other team also understands them. So that's an example where the coordination is already built into the rules of the game. But in many other situations like auctions or negotiations or diplomatic relationships, poker, it's not really built in, but it still can be very helpful for the colluders. I've read you write somewhere, the negotiations you come to the table with prior, like a strategy that you're willing to do and not willing to do those kinds of things. So how do you start to now moving away from poker, moving beyond poker into other applications like negotiations, how do you start applying this to other domains, even real world domains that you've worked on? Yeah, I actually have two startup companies doing exactly that. One is called Strategic Machine, and that's for kind of business applications, gaming, sports, all sorts of things like that. Any applications of this to business and to sports and to gaming, to various types of things in finance, electricity markets and so on. And the other is called Strategy Robot, where we are taking these to military security, cyber security and intelligence applications. I think you worked a little bit in, how do you put it, advertisement, sort of suggesting ads kind of thing, auction. That's another company, optimized markets. But that's much more about a combinatorial market and optimization based technology. That's not using these game theoretic reasoning technologies. I see, okay, so what sort of high level do you think about our ability to use game theoretic concepts to model human behavior? Do you think human behavior is amenable to this kind of modeling outside of the poker games, and where have you seen it done successfully in your work? I'm not sure the goal really is modeling humans. 
Like for example, if I'm playing a zero sum game, I don't really care that the opponent is actually following my model of rational behavior, because if they're not, that's even better for me. Right, so see with the opponents in games, the prerequisite is that you formalize the interaction in some way that can be amenable to analysis. And you've done this amazing work with mechanism design, designing games that have certain outcomes. But, so I'll tell you an example from my world of autonomous vehicles, right? We're studying pedestrians, and pedestrians and cars negotiate in this nonverbal communication. There's this weird game dance of tension where pedestrians are basically saying, I trust that you won't kill me, and so as a jaywalker, I will step onto the road even though I'm breaking the law, and there's this tension. And the question is, we really don't know how to model that well in trying to model intent. And so people sometimes bring up ideas of game theory and so on. Do you think that aspect of human behavior can use these kinds of imperfect information approaches, modeling, how do you start to attack a problem like that when you don't even know how to design the game to describe the situation in order to solve it? Okay, so I haven't really thought about jaywalking, but one thing that I think could be a good application in autonomous vehicles is the following. So let's say that you have fleets of autonomous cars operating by different companies. So maybe here's the Waymo fleet and here's the Uber fleet. If you think about the rules of the road, they define certain legal rules, but that still leaves a huge strategy space open. Like as a simple example, when cars merge, how humans merge, they slow down and look at each other and try to merge. Wouldn't it be better if these situations would already be prenegotiated so we can actually merge at full speed and we know that this is the situation, this is how we do it, and it's all gonna be faster. But there are way too many situations to negotiate manually. So you could use automated negotiation, this is the idea at least, you could use automated negotiation to negotiate all of these situations or many of them in advance. And of course it might be that, hey, maybe you're not gonna always let me go first. Maybe you said, okay, well, in these situations, I'll let you go first, but in exchange, you're gonna give me too much, you're gonna let me go first in this situation. So it's this huge combinatorial negotiation. And do you think there's room in that example of merging to model this whole situation as an imperfect information game or do you really want to consider it to be a perfect? No, that's a good question, yeah. That's a good question. Do you pay the price of assuming that you don't know everything? Yeah, I don't know. It's certainly much easier. Games with perfect information are much easier. So if you can't get away with it, you should. But if the real situation is of imperfect information, then you're gonna have to deal with imperfect information. Great, so what lessons have you learned the Annual Computer Poker Competition? An incredible accomplishment of AI. You look at the history of Deep Blue, AlphaGo, these kind of moments when AI stepped up in an engineering effort and a scientific effort combined to beat the best of human players. So what do you take away from this whole experience? What have you learned about designing AI systems that play these kinds of games? 
And what does that mean for AI in general, for the future of AI development? Yeah, so that's a good question. There's so much to say about it. I do like this type of performance oriented research. Although in my group, we go all the way from idea to theory, to experiments, to big system building, to commercialization, so we span that spectrum. But I think that in a lot of situations in AI, you really have to build the big systems and evaluate them at scale before you know what works and what doesn't. And we've seen that in the computational game theory community, that there are a lot of techniques that look good in the small, but then they cease to look good in the large. And we've also seen that there are a lot of techniques that look superior in theory, and I really mean in terms of convergence rates, like first order methods, which have better convergence rates than the CFR based algorithms, yet the CFR based algorithms are the fastest in practice. So it really tells me that you have to test this in reality. The theory isn't tight enough, if you will, to tell you which algorithms are better than the others. And you have to look at these things in the large, because any sort of projections you do from the small can, at least in this domain, be very misleading. So that's kind of from a science and engineering perspective. From a personal perspective, it's been just a wild experience, in that with the first poker competition, the first brains versus AI, man versus machine poker competition that we organized. There had been, by the way, for other poker games, previous competitions, but this was the first for Heads Up No Limit. And I probably became the most hated person in the world of poker. And I didn't mean to, I just... Why is that? For cracking the game, or something? Yeah, a lot of people felt that it was a real threat to the whole game, the whole existence of the game. If AI becomes better than humans, people would be scared to play poker because there are these superhuman AIs running around taking their money and all of that. It just got really aggressive. The comments were super aggressive. I got everything just short of death threats. Do you think the same was true for chess? Because right now they just completed the world championships in chess, and humans just started ignoring the fact that there are AI systems now that outperform humans, and they still enjoy the game, it's still a beautiful game. That's what I think. And I think the same thing happens in poker. And so I didn't think of myself as somebody who was gonna kill the game, and I don't think I did. I've really learned to love this game. I wasn't a poker player before, but I learned so many nuances about it from these AIs, and they've really changed how the game is played, by the way. So they have these very Martian ways of playing poker, and the top humans are now incorporating those types of strategies into their own play. So if anything, to me, our work has made poker a richer, more interesting game for humans to play, not something that is gonna steer humans away from it entirely. Just a quick comment on something you said, which, if I may say so, is a little bit rare in academia sometimes. It's pretty brave to put your ideas to the test in the way you described, saying that sometimes good ideas don't work when you actually try to apply them at scale. So where does that come from? I mean, if you could give advice to people, what drives you in that sense? Were you always this way?
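[Editor's note: the CFR-based algorithms mentioned above are built on regret matching applied at every information set. The toy self-play loop below runs regret matching on Rock-Paper-Scissors only, where the average strategies drift toward the one-third, one-third, one-third equilibrium; it is a sketch of the building block, not of Libratus itself.]

```python
import random

ACTIONS = 3  # 0 = Rock, 1 = Paper, 2 = Scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # utility of my action vs. opponent's action

def strategy_from_regrets(regrets):
    # Play each action in proportion to its positive accumulated regret.
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    return [p / total for p in positives] if total > 0 else [1.0 / ACTIONS] * ACTIONS

regrets = [[0.0] * ACTIONS for _ in range(2)]
strategy_sums = [[0.0] * ACTIONS for _ in range(2)]

for _ in range(100_000):
    strategies = [strategy_from_regrets(regrets[p]) for p in (0, 1)]
    moves = [random.choices(range(ACTIONS), weights=strategies[p])[0] for p in (0, 1)]
    for p in (0, 1):
        mine, theirs = moves[p], moves[1 - p]
        utility = PAYOFF[mine][theirs]  # the game is symmetric, so one matrix serves both seats
        for a in range(ACTIONS):
            regrets[p][a] += PAYOFF[a][theirs] - utility  # regret for not having played a
            strategy_sums[p][a] += strategies[p][a]

averages = [[s / sum(sums) for s in sums] for sums in strategy_sums]
print(averages)  # both players' average strategies approach [0.333, 0.333, 0.333]
```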
I mean, it takes a brave person, I guess, is what I'm saying, to test their ideas and to see if this thing actually works against top human players and so on. Yeah, I don't know about brave, but it takes a lot of work. It takes a lot of work and a lot of time to organize, to make something big and to organize an event and stuff like that. And what drives you in that effort? Because you could still, I would argue, get a best paper award at NIPS, as you did in '17, without doing this. That's right, yes. And so in general, I believe it's very important to do things in the real world and at scale. And that's really where the proof is, the proof is in the pudding, if you will. In this particular case, it was kind of a competition between different groups for many years as to who can be the first one to beat the top humans at Heads Up No Limit Texas Holdem. So it became kind of like a competition of who can get there. Yeah, so a little friendly competition could do wonders for progress. Yes, absolutely. So, the topic of mechanism design, which is really interesting, also kind of new to me, except as an observer of, I don't know, politics, I'm an observer of mechanisms, but you write in your paper on automated mechanism design, which I quickly read. So mechanism design is designing the rules of the game so you get a certain desirable outcome. And you have this work on doing so in an automated fashion as opposed to fine tuning it. So what have you learned from those efforts? If you look, say, I don't know, at complex systems like our political system, can we design our political system, in an automated fashion, to have outcomes that we want? Can we design something like traffic lights to be smart where it gets outcomes that we want? So what are the lessons that you draw from that work? Yeah, so I still very much believe in the automated mechanism design direction. Yes. But it's not a panacea. There are impossibility results in mechanism design saying that there is no mechanism that accomplishes objective X in class C. So there's no way, using any mechanism design tools, manual or automated, to do certain things in mechanism design. Can you describe that again? So meaning it's impossible to achieve that? Yeah, yeah. Not just unlikely. Impossible. Impossible. So these are not statements about human ingenuity, who might come up with something smart. These are proofs that if you want to accomplish properties X in class C, that is not doable with any mechanism. The good thing about automated mechanism design is that we're not really designing for a class. We're designing for specific settings at a time. So even if there's an impossibility result for the whole class, it just doesn't mean that all of the cases in the class are impossible. It just means that some of the cases are impossible. So we can actually carve these islands of possibility within these known impossible classes. And we've actually done that. So one of the famous results in mechanism design is the Myerson-Satterthwaite theorem by Roger Myerson and Mark Satterthwaite from 1983. It's an impossibility of efficient trade under imperfect information. We show that you can, in many settings, avoid that and get efficient trade anyway. Depending on how they design the game, okay. Depending on how you design the game. And of course, it doesn't in any way contradict the impossibility result.
The impossibility result is still there, but it just finds spots within this impossible class where, in those spots, you don't have the impossibility. Sorry if I'm going a bit philosophical, but what lessons do you draw towards, like I mentioned, politics or human interaction, designing mechanisms outside of just these kinds of trading or auctioning or purely formal games? How, do you think it's applicable to, yeah, politics or to business, to negotiations, these kinds of things, designing rules that have certain outcomes? Yeah, yeah, I do think so. Have you seen that successfully done? Oh, you mean mechanism design or automated mechanism design? Automated mechanism design. So mechanism design itself has had fairly limited success so far. There are certain cases, but most of the real world situations are actually not sound from a mechanism design perspective. Even in those cases where they've been designed by very knowledgeable mechanism design people, the people are typically just taking some insights from the theory and applying those insights into the real world, rather than applying the mechanisms directly. So one famous example is the FCC spectrum auctions. I've also had a small role in that, and very good economists have been working, excellent economists who know game theory have been working on that, yet the rules that are designed in practice there, they're such that bidding truthfully is not the best strategy. Usually in mechanism design, we try to make things easy for the participants, so telling the truth is the best strategy, but even in those very high stakes auctions where you have tens of billions of dollars worth of spectrum being auctioned, truth telling is not the best strategy. And by the way, nobody knows even a single optimal bidding strategy for those auctions. What's the challenge of coming up with an optimal one? Because there are a lot of players and there's imperfect information. It's not so much that there are a lot of players, but many items for sale, and these mechanisms are such that even with just two items or one item, bidding truthfully wouldn't be the best strategy. If you look at the history of AI, it's marked by seminal events. AlphaGo beating a world champion human Go player, I would put Libratus winning at Heads Up No Limit Holdem as one such event. Thank you. And what do you think is the next such event, whether it's in your life or in the broader AI community, that you think might be out there that would surprise the world? So that's a great question, and I don't really know the answer. In terms of game solving, Heads Up No Limit Texas Holdem really was the one remaining widely agreed upon benchmark. So that was the big milestone. Now, are there other things? Yeah, certainly there are, but there's not one that the community has kind of focused on. So what could be other things? There are groups working on StarCraft. There are groups working on Dota 2. These are video games. Or you could have like Diplomacy or Hanabi, things like that. These are like recreational games, but none of them are really acknowledged as kind of the main next challenge problem, like chess or Go or Heads Up No Limit Texas Holdem was. So I don't really know in the game solving space what is or what will be the next benchmark. I kind of hope that there will be a next benchmark, because really the different groups working on the same problem really drove these application independent techniques forward very quickly over 10 years.
Do you think there's an open problem that excites you that you start moving away from games into real world games, like say the stock market trading? Yeah, so that's kind of how I am. So I am probably not going to work as hard on these recreational benchmarks. I'm doing two startups on game solving technology, Strategic Machine and Strategy Robot, and we're really interested in pushing this stuff into practice. What do you think would be really a powerful result that would be surprising that would be, if you can say, I mean, five years, 10 years from now, something that statistically you would say is not very likely, but if there's a breakthrough, would achieve? Yeah, so I think that overall, we're in a very different situation in game theory than we are in, let's say, machine learning. So in machine learning, it's a fairly mature technology and it's very broadly applied and proven success in the real world. In game solving, there are almost no applications yet. We have just become superhuman, which machine learning you could argue happened in the 90s, if not earlier, and at least on supervised learning, certain complex supervised learning applications. Now, I think the next challenge problem, I know you're not asking about it this way, you're asking about the technology breakthrough, but I think that big, big breakthrough is to be able to show that, hey, maybe most of, let's say, military planning or most of business strategy will actually be done strategically using computational game theory. That's what I would like to see as the next five or 10 year goal. Maybe you can explain to me again, forgive me if this is an obvious question, but machine learning methods, neural networks suffer from not being transparent, not being explainable. Game theoretic methods, Nash equilibria, do they generally, when you see the different solutions, are they, when you talk about military operations, are they, once you see the strategies, do they make sense, are they explainable, or do they suffer from the same problems as neural networks do? So that's a good question. I would say a little bit yes and no. And what I mean by that is that these game theoretic strategies, let's say, Nash equilibrium, it has provable properties. So it's unlike, let's say, deep learning where you kind of cross your fingers, hopefully it'll work. And then after the fact, when you have the weights, you're still crossing your fingers, and I hope it will work. Here, you know that the solution quality is there. There's provable solution quality guarantees. Now, that doesn't necessarily mean that the strategies are human understandable. That's a whole other problem. So I think that deep learning and computational game theory are in the same boat in that sense, that both are difficult to understand. But at least the game theoretic techniques, they have these guarantees of solution quality. So do you see business operations, strategic operations, or even military in the future being at least the strong candidates being proposed by automated systems? Do you see that? Yeah, I do, I do. But that's more of a belief than a substantiated fact. Depending on where you land in optimism or pessimism, that's a really, to me, that's an exciting future, especially if there's provable things in terms of optimality. So looking into the future, there's a few folks worried about the, especially you look at the game of poker, which is probably one of the last benchmarks in terms of games being solved. 
They worry about the future and the existential threats of artificial intelligence, so the negative impact in whatever form on society. Is that something that concerns you as much, or are you more optimistic about the positive impacts of AI? Oh, I am much more optimistic about the positive impacts. So just in my own work, what we've done so far, we run the nationwide kidney exchange. Hundreds of people are walking around alive today who otherwise wouldn't be. And it's increased employment. You have a lot of people now running kidney exchanges and at the transplant centers, interacting with the kidney exchange. You have extra surgeons, nurses, anesthesiologists, hospitals, all of that. So employment is increasing from that, and the world is becoming a better place. Another example is combinatorial sourcing auctions. We did 800 large scale combinatorial sourcing auctions from 2001 to 2010 in a previous startup of mine called CombineNet. And we increased the supply chain efficiency on that $60 billion of spend by 12.6%. So that's over $6 billion of efficiency improvement in the world. And this is not like shifting value from somebody to somebody else, just efficiency improvement, like in trucking, less empty driving, so there's less waste, less carbon footprint and so on. So a huge positive impact in the near term, but sort of to stay on it for a little longer, because I think game theory has a role to play here. Oh, let me actually come back on that with one thing. I think AI is also going to make the world much safer. So that's another aspect that often gets overlooked. Well, let me ask this question. Maybe you can speak to the safer part. So I talked to Max Tegmark and Stuart Russell, who are very concerned about existential threats of AI. And often the concern is about value misalignment, so AI systems basically working, operating towards goals that are not the same as those of human civilization, human beings. So it seems like game theory has a role to play there, to make sure the values are aligned with human beings. I don't know if that's how you think about it. If not, how do you think AI might help with this problem? How do you think AI might make the world safer? Yeah, I think this value misalignment is a fairly theoretical worry. And I haven't really seen it, because I do a lot of real applications, and I don't see it anywhere. The closest I've seen to it was the following type of mental exercise, really, where I had this argument in the late eighties when we were building these transportation optimization systems. And somebody had heard that it's a good idea to have high utilization of assets. So they told me, hey, why don't you put that as the objective? And we didn't even put it as an objective, because I just showed him that, if you had that as your objective, the solution would be to load your trucks full and drive in circles. Nothing would ever get delivered. You'd have a hundred percent utilization. So yeah, I know this phenomenon. I've known this for over 30 years, but I've never seen it actually be a problem in reality. And yes, if you have the wrong objective, the AI will optimize that to the hilt and it's gonna hurt more than some human who's kind of trying to solve it in a half baked way with some human insight too. But I just haven't seen that materialize in practice. There's this gap that you've actually put your finger on very clearly just now between theory and reality. That's very difficult to put into words, I think.
It's what you can theoretically imagine, the worst possible case or even, yeah, I mean bad cases, versus what usually happens in reality. So for example, to me, maybe it's something you can comment on, having grown up, and I grew up in the Soviet Union. There are currently 10,000 nuclear weapons in the world. And for many decades, it's theoretically surprising to me that nuclear war has not broken out. Do you think about this aspect from a game theoretic perspective in general? Why is that true? Why, in theory, you could see how things would go terribly wrong, and somehow yet they have not? Yeah, how do you think about it? So I do think about that a lot. I think the biggest two threats that we're facing as mankind, one is climate change and the other is nuclear war. So those are my main two worries that I worry about. And I've tried to do something about climate, thought about trying to do something for climate change twice. Actually, for two of my startups, I've actually commissioned studies of what we could do on those things. And we didn't really find a sweet spot, but I'm still keeping an eye out on that, if there's something where we could actually provide a market solution or optimization solution or some other technology solution to problems. Like, for example, pollution credit markets were what we were looking at then. And it was much more the lack of political will that made those markets not so successful, rather than bad market design. So I could go in and make a better market design, but that wouldn't really move the needle on the world very much if there's no political will. And in the US, the market, at least the Chicago market, was just shut down and so on. So then it doesn't really help how great your market design was. And then the nuclear side, it's more, so global warming is a more encroaching problem. Nuclear weapons have been here. It's an obvious problem that's just been sitting there. So how do you think about, what is the mechanism design there that just made everything seem stable? And are you still extremely worried? I am still extremely worried. So you probably know the simple game theory of MAD. So this is mutually assured destruction, and it doesn't require any computation. With small matrices, you can actually convince yourself that the game is such that nobody wants to initiate. Yeah, that's a very coarse grained analysis. And it really works in a situation where you have two superpowers or a small number of superpowers. Now things are very different. You have smaller nukes, so the threshold of initiating is smaller, and you have smaller countries and non nation actors who may get a nuke and so on. So I think it's riskier now than it was maybe ever before. And what idea or application of AI, you've talked about it a little bit, but what is the most exciting to you right now? I mean, you're here at NIPS, NeurIPS. Now you have a few excellent pieces of work, but what are you thinking into the future, with the several companies you're doing? What's the most exciting thing, or one of the exciting things?
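[Editor's note: a toy version of the "small matrices" point about mutually assured destruction. The payoff numbers are invented; they only encode the assumption that retaliation is automatic and catastrophic for both sides, under which neither side gains from striking first.]

```python
STRIKE, HOLD = "strike", "hold"

# Made-up payoffs: 0 for continued peace, -100 for being caught in a nuclear
# exchange. Retaliation is assumed automatic, so any first strike destroys both.
PAYOFF = {
    (HOLD, HOLD): (0, 0),
    (STRIKE, HOLD): (-100, -100),
    (HOLD, STRIKE): (-100, -100),
    (STRIKE, STRIKE): (-100, -100),
}

def gain_from_deviating(player, profile):
    # How much a player gains by unilaterally switching their own action.
    current = PAYOFF[profile][player]
    deviation = list(profile)
    deviation[player] = STRIKE if profile[player] == HOLD else HOLD
    return PAYOFF[tuple(deviation)][player] - current

peace = (HOLD, HOLD)
print([gain_from_deviating(p, peace) for p in (0, 1)])  # [-100, -100]: nobody wants to initiate
```

As the conversation notes, this coarse two-player picture is exactly what stops working once there are smaller weapons, more countries, and non-state actors.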
And we're doing that in Optimized Markets, but the number one for me right now is Strategic Machine and Strategy Robot, getting that technology out there and seeing, as you are in the trenches doing applications, what actually needs to be filled, what technology gaps still need to be filled. So it's so hard to just put your feet on the table and imagine what needs to be done. But when you're actually doing real applications, the applications tell you what needs to be done. And I really enjoy that interaction. Is it a challenging process to apply some of the state of the art techniques you're working on and have the various players in industry or the military or people who could really benefit from it actually use it? What's that process like? In autonomous vehicles, you work with automotive companies, and they in many ways are a little bit old fashioned. It's difficult. They really want to use this technology. It clearly will have a significant benefit, but the systems aren't quite in place to easily have it integrated, in terms of data, in terms of compute, in terms of all these kinds of things. So is that one of the bigger challenges that you're facing and how do you tackle that challenge? Yeah, I think that's always a challenge. That's kind of the slowness and inertia, really, of let's do things the way we've always done them. You just have to find the internal champions at the customer who understand that, hey, things can't be the same way in the future. Otherwise bad things are going to happen. And in autonomous vehicles, it's actually very interesting that the car makers are doing that, and they're very traditional, but at the same time you have tech companies who have nothing to do with cars or transportation, like Google and Baidu, really pushing on autonomous cars. I find that fascinating. Clearly you're super excited about actually having these ideas make an impact in the world. In terms of the technology, in terms of ideas and research, are there directions that you're also excited about? Whether that's on some of the approaches you talked about for the imperfect information games, whether it's applying deep learning to some of these problems, is there something that you're excited about on the research side of things? Yeah, yeah, lots of different things in the game solving. So solving even bigger games, games where you have more hidden actions, where the player actions are hidden as well. Poker is a game where really the chance actions are hidden, or some of them are hidden, but the player actions are public. Multiplayer games of various sorts, collusion, opponent exploitation, and even longer games. So games that basically go forever, but they're not repeated. So extensive form games that go forever. What would that even look like? How do you represent that? How do you solve that? What's an example of a game like that? Or is this some of the stochastic games that you mentioned? Let's say business strategy. So it's not just modeling like a particular interaction, but thinking about the business from here to eternity. Or let's say military strategy. So it's not like war is gonna go away. How do you think about military strategy that's gonna go forever? How do you even model that? How do you know whether a move was good that somebody made and so on? So that's kind of one direction. I'm also very interested in learning much more scalable techniques for integer programming. So we had an ICML paper this summer on that.
The first automated algorithm configuration paper that has theoretical generalization guarantees. So if I see this many training examples and I tune my algorithm in this way, it's going to have good performance on the real distribution, which I've not seen. Which is kind of interesting, in that algorithm configuration has been going on now for at least 17 years seriously, and there has not been any generalization theory before. Well, this is really exciting and it's a huge honor to talk to you. Thank you so much, Tuomas. Thank you for bringing Libratus to the world and all the great work you're doing. Well, thank you very much. It's been fun. No more questions.
Tuomas Sandholm: Poker and Game Theory | Lex Fridman Podcast #12
The following is a conversation with Tomaso Poggio. He's a professor at MIT and is the director of the Center for Brains, Minds, and Machines. Cited over 100,000 times, his work has had a profound impact on our understanding of the nature of intelligence in both biological and artificial neural networks. He has been an advisor to many highly impactful researchers and entrepreneurs in AI, including Demis Hassabis of DeepMind, Amnon Shashua of Mobileye, and Christof Koch of the Allen Institute for Brain Science. This conversation is part of the MIT course on artificial general intelligence and the artificial intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Friedman, spelled F R I D. And now, here's my conversation with Tomaso Poggio. You've mentioned that in your childhood, you developed a fascination with physics, especially the theory of relativity. And that Einstein was also a childhood hero to you. What aspect of Einstein's genius, the nature of his genius, do you think was essential for discovering the theory of relativity? You know, Einstein was a hero to me, and I'm sure to many people, because he was able to make, of course, a major, major contribution to physics with, simplifying a bit, just a gedanken experiment, a thought experiment, you know, imagining communication with lights between a stationary observer and somebody on a train. And I thought, you know, the fact that just with the force of his thought, of his thinking, of his mind, he could get to something so deep in terms of physical reality, how time depends on space and speed, it was something absolutely fascinating. It was the power of intelligence, the power of the mind. Do you think the ability to imagine, to visualize as he did, as a lot of great physicists do, do you think that's in all of us human beings? Or is there something special to that one particular human being? I think, you know, all of us can learn and have, in principle, similar breakthroughs. There are lessons to be learned from Einstein. He was one of five PhD students at ETH, the Eidgenössische Technische Hochschule in Zurich, in physics. And he was the worst of the five, the only one who did not get an academic position when he graduated, when he finished his PhD. And he went to work, as everybody knows, for the patent office. And so it's not so much that he worked for the patent office, but the fact that, obviously he was smart, but he was not a top student. He obviously was the anti conformist. He was not thinking in the traditional way that probably his teachers and the other students were doing. So there is a lot to be said about trying to do the opposite or something quite different from what other people are doing. That's certainly true for the stock market. Never buy if everybody's buying. And also true for science. Yes. So you've also mentioned, staying on the theme of physics, that you were excited at a young age by the mysteries of the universe that physics could uncover. Such as, I saw mentioned, the possibility of time travel. So the most out of the box question, I think, I'll get to ask today, do you think time travel is possible? Well, it would be nice if it were possible right now. In science, you never say no. But your understanding of the nature of time? Yeah. It's very likely that it's not possible to travel in time. We may be able to travel forward in time if we can, for instance, freeze ourselves or go on some spacecraft traveling close to the speed of light.
But in terms of actively traveling, for instance, back in time, I find probably very unlikely. So do you still hold the underlying dream of the engineering intelligence that will build systems that are able to do such huge leaps, like discovering the kind of mechanism that would be required to travel through time? Do you still hold that dream or echoes of it from your childhood? Yeah. I don't think whether there are certain problems that probably cannot be solved, depending what you believe about the physical reality, like maybe totally impossible to create energy from nothing or to travel back in time, but about making machines that can think as well as we do or better, or more likely, especially in the short and midterm, help us think better, which is, in a sense, is happening already with the computers we have. And it will happen more and more. But that I certainly believe. And I don't see, in principle, why computers at some point could not become more intelligent than we are, although the word intelligence is a tricky one and one we should discuss. What I mean with that. Intelligence, consciousness, words like love, all these need to be disentangled. So you've mentioned also that you believe the problem of intelligence is the greatest problem in science, greater than the origin of life and the origin of the universe. You've also, in the talk I've listened to, said that you're open to arguments against you. So what do you think is the most captivating aspect of this problem of understanding the nature of intelligence? Why does it captivate you as it does? Well, originally, I think one of the motivation that I had as, I guess, a teenager when I was infatuated with theory of relativity was really that I found that there was the problem of time and space and general relativity. But there were so many other problems of the same level of difficulty and importance that I could, even if I were Einstein, it was difficult to hope to solve all of them. So what about solving a problem whose solution allowed me to solve all the problems? And this was, what if we could find the key to an intelligence 10 times better or faster than Einstein? So that's sort of seeing artificial intelligence as a tool to expand our capabilities. But is there just an inherent curiosity in you in just understanding what it is in here that makes it all work? Yes, absolutely, you're right. So I started saying this was the motivation when I was a teenager. But soon after, I think the problem of human intelligence became a real focus of my science and my research because I think for me, the most interesting problem is really asking who we are. It's asking not only a question about science, but even about the very tool we are using to do science, which is our brain. How does our brain work? From where does it come from? What are its limitations? Can we make it better? And that, in many ways, is the ultimate question that underlies this whole effort of science. So you've made significant contributions in both the science of intelligence and the engineering of intelligence. In a hypothetical way, let me ask, how far do you think we can get in creating intelligence systems without understanding the biological, the understanding how the human brain creates intelligence? Put another way, do you think we can build a strong AI system without really getting at the core understanding the functional nature of the brain? Well, this is a real difficult question. 
We did solve problems like flying without really using too much our knowledge about how birds fly. It was important, I guess, to know that you could have things heavier than air being able to fly, like birds. But beyond that, probably we did not learn very much, some. The Brothers Wright did learn a lot of observation about birds and designing their aircraft. But you can argue we did not use much of biology in that particular case. Now, in the case of intelligence, I think that it's a bit of a bet right now. If you ask, OK, we all agree we'll get at some point, maybe soon, maybe later, to a machine that is indistinguishable from my secretary, say, in terms of what I can ask the machine to do. I think we'll get there. And now the question is, you can ask people, do you think we'll get there without any knowledge about the human brain? Or that the best way to get there is to understand better the human brain? OK, this is, I think, an educated bet that different people with different backgrounds will decide in different ways. The recent history of the progress in AI in the last, I would say, five years or 10 years has been that the main breakthroughs, the main recent breakthroughs, really start from neuroscience. I can mention reinforcement learning as one. It's one of the algorithms at the core of AlphaGo, which is the system that beat the kind of an official world champion of Go, Lee Sedol, two, three years ago in Seoul. That's one. And that started really with the work of Pavlov in 1900, Marvin Minsky in the 60s, and many other neuroscientists later on. And deep learning started, which is at the core, again, of AlphaGo and systems like autonomous driving systems for cars, like the systems that Mobileye, which is a company started by one of my ex postdocs, Amnon Shashua, did. So that is at the core of those things. And deep learning, really, the initial ideas in terms of the architecture of these layered hierarchical networks started with work of Torsten Wiesel and David Hubel at Harvard up the river in the 60s. So recent history suggests that neuroscience played a big role in these breakthroughs. My personal bet is that there is a good chance they continue to play a big role. Maybe not in all the future breakthroughs, but in some of them. At least in inspiration. At least in inspiration, absolutely, yes. So you studied both artificial and biological neural networks. You said these mechanisms that underlie deep learning and reinforcement learning. But there is nevertheless significant differences between biological and artificial neural networks as they stand now. So between the two, what do you find is the most interesting, mysterious, maybe even beautiful difference as it currently stands in our understanding? I must confess that until recently, I found that the artificial networks, too simplistic relative to real neural networks. But recently, I've been starting to think that, yes, there is a very big simplification of what you find in the brain. But on the other hand, they are much closer in terms of the architecture to the brain than other models that we had, that computer science used as model of thinking, which were mathematical logics, LISP, Prologue, and those kind of things. So in comparison to those, they're much closer to the brain. You have networks of neurons, which is what the brain is about. And the artificial neurons in the models, as I said, caricature of the biological neurons. 
But they're still neurons, single units communicating with other units, something that is absent in the traditional computer type models of mathematics, reasoning, and so on. So what aspect would you like to see in artificial neural networks added over time as we try to figure out ways to improve them? So one of the main differences and problems in terms of deep learning today, and it's not only deep learning, and the brain, is the need for deep learning techniques to have a lot of labeled examples. For instance, for ImageNet, you have like a training set, which is 1 million images, each one labeled by some human in terms of which object is there. And it's clear that in biology, a baby may be able to see millions of images in the first years of life, but will not have millions of labels given to him or her by parents or caretakers. So how do you solve that? I think there is this interesting challenge that today, deep learning and related techniques are all about big data, big data meaning a lot of examples labeled by humans, whereas in nature, you have this big data is n going to infinity. That's the best, n meaning labeled data. But I think the biological world is more n going to 1. A child can learn from a very small number of labeled examples. Like you tell a child, this is a car. You don't need to say, like in ImageNet, this is a car, this is a car, this is not a car, this is not a car, 1 million times. And of course, with AlphaGo, or at least the AlphaZero variants, because the world of Go is so simplistic that you can actually learn by yourself through self play, you can play against each other. In the real world, the visual system that you've studied extensively is a lot more complicated than the game of Go. On the comment about children, which are fascinatingly good at learning new stuff, how much of it do you think is hardware, and how much of it is software? Yeah, that's a good, deep question. In a sense, it's the old question of nurture and nature, how much is in the gene, and how much is in the experience of an individual. Obviously, it's both that play a role. And I believe that the way evolution gives, puts prior information, so to speak, hardwired, is not really hardwired. But that's essentially an hypothesis. I think what's going on is that evolution has almost necessarily, if you believe in Darwin, is very opportunistic. And think about our DNA and the DNA of Drosophila. Our DNA does not have many more genes than Drosophila. The fly. The fly, the fruit fly. Now, we know that the fruit fly does not learn very much during its individual existence. It looks like one of these machinery that it's really mostly, not 100%, but 95%, hardcoded by the genes. But since we don't have many more genes than Drosophila, evolution could encode in as a general learning machinery, and then had to give very weak priors. Like, for instance, let me give a specific example, which is recent work by a member of our Center for Brains, Minds, and Machines. We know because of work of other people in our group and other groups, that there are cells in a part of our brain, neurons, that are tuned to faces. They seem to be involved in face recognition. Now, this face area seems to be present in young children and adults. And one question is, is there from the beginning? Is hardwired by evolution? Or somehow it's learned very quickly. So what's your, by the way, a lot of the questions I'm asking, the answer is we don't really know. 
But as a person who has contributed some profound ideas in these fields, you're a good person to guess at some of these. So of course, there's a caveat before a lot of the stuff we talk about. But what is your hunch? Is the face, the part of the brain that seems to be concentrated on face recognition, are you born with that? Or you just is designed to learn that quickly, like the face of the mother and so on? My hunch, my bias was the second one, learned very quickly. And it turns out that Marge Livingstone at Harvard has done some amazing experiments in which she raised baby monkeys, depriving them of faces during the first weeks of life. So they see technicians, but the technician have a mask. Yes. And so when they looked at the area in the brain of these monkeys that were usually defined faces, they found no face preference. So my guess is that what evolution does in this case is there is a plastic area, which is plastic, which is kind of predetermined to be imprinted very easily. But the command from the gene is not a detailed circuitry for a face template. Could be, but this will require probably a lot of bits. You had to specify a lot of connection of a lot of neurons. Instead, the command from the gene is something like imprint, memorize what you see most often in the first two weeks of life, especially in connection with food and maybe nipples. I don't know. Well, source of food. And so that area is very plastic at first and then solidifies. It'd be interesting if a variant of that experiment would show a different kind of pattern associated with food than a face pattern, whether that could stick. There are indications that during that experiment, what the monkeys saw quite often were the blue gloves of the technicians that were giving to the baby monkeys the milk. And some of the cells, instead of being face sensitive in that area, are hand sensitive. That's fascinating. Can you talk about what are the different parts of the brain and, in your view, sort of loosely, and how do they contribute to intelligence? Do you see the brain as a bunch of different modules, and they together come in the human brain to create intelligence? Or is it all one mush of the same kind of fundamental architecture? Yeah, that's an important question. And there was a phase in neuroscience back in the 1950 or so in which it was believed for a while that the brain was equipotential. This was the term. You could cut out a piece, and nothing special happened apart a little bit less performance. There was a surgeon, Lashley, who did a lot of experiments of this type with mice and rats and concluded that every part of the brain was essentially equivalent to any other one. It turns out that that's really not true. There are very specific modules in the brain, as you said. And people may lose the ability to speak if you have a stroke in a certain region, or may lose control of their legs in another region. So they're very specific. The brain is also quite flexible and redundant, so often it can correct things and take over functions from one part of the brain to the other. But really, there are specific modules. So the answer that we know from this old work, which was basically based on lesions, either on animals, or very often there was a mine of very interesting data coming from the war, from different types of injuries that soldiers had in the brain. And more recently, functional MRI, which allow you to check which part of the brain are active when you are doing different tasks, can replace some of this. 
You can see that certain parts of the brain are involved, are active in certain tasks. Vision, language, yeah, that's right. But sort of taking a step back to that part of the brain that discovers, that specializes in the face, and how that might be learned, what's your intuition behind it? Is it possible that, from a physicist perspective, when you get lower and lower, it's all the same stuff, and it just, when you're born, it's plastic and quickly figures out this part is going to be about vision, this is going to be about language, this is about common sense reasoning? Do you have an intuition that that kind of learning is going on really quickly, or is it really kind of solidified in hardware? That's a great question. So there are parts of the brain like the cerebellum or the hippocampus that are quite different from each other. They clearly have different anatomy, different connectivity. Then there is the cortex, which is the most developed part of the brain in humans. And in the cortex, you have different regions of the cortex that are responsible for vision, for audition, for motor control, for language. Now, one of the big puzzles of this is that the cortex is the cortex is the cortex. It looks like it is the same in terms of hardware, in terms of type of neurons and connectivity, across these different modalities. So for the cortex, leaving aside these other parts of the brain like the spinal cord, hippocampus, cerebellum, and so on, for the cortex, I think your question about hardware and software and learning and so on is rather open. And I find it very interesting to think about an architecture, a computer architecture, that is good for vision and at the same time is good for language. They seem to be such different problem areas that you have to solve. But the underlying mechanism might be the same. And that's really instructive for artificial neural networks. So you've done a lot of great work in vision, in human vision, computer vision. And you mentioned the problem of human vision is really as difficult as the problem of general intelligence. And maybe that connects to the cortex discussion. Can you describe the human visual cortex and how humans begin to understand the world through the raw sensory information? For folks who are not familiar, especially on the computer vision side, we don't often actually take a step back, except saying with a sentence or two that one is inspired by the other. What is it that we know about the human visual cortex that's interesting? We know quite a bit. At the same time, we don't know a lot. But the bit we know, in a sense, we know a lot of the details. And many we don't know. And we know a lot of the top level, the answer to the top level question. But we don't know some basic ones, even in terms of general neuroscience, forgetting vision. Why do we sleep? It's such a basic question. And we really don't have an answer to that. So taking a step back on that. So sleep, for example, is fascinating. Do you think that's a neuroscience question? Or if we talk about abstractions, what do you think is an interesting way to study intelligence, or most effective, on the levels of abstraction? Is it chemical, is it biological, is it electrophysiological, mathematical, as you've done a lot of excellent work on that side, or is it psychological? At which level of abstraction, do you think? Well, in terms of levels of abstraction, I think we need all of them. It's like if you ask me, what does it mean to understand a computer? That's much simpler.
But in a computer, I could say, well, I understand how to use PowerPoint. That's my level of understanding a computer. It is reasonable. It gives me some power to produce slides and beautiful slides. Now, you can ask somebody else. He says, well, I know how the transistors work that are inside the computer. I can write the equation for transistor and diodes and circuits, logical circuits. And I can ask this guy, do you know how to operate PowerPoint? No idea. So do you think if we discovered computers walking amongst us full of these transistors that are also operating under windows and have PowerPoint, do you think it's digging in a little bit more? How useful is it to understand the transistor in order to be able to understand PowerPoint and these higher level intelligent processes? So I think in the case of computers, because they were made by engineers, by us, this different level of understanding are rather separate on purpose. They are separate modules so that the engineer that designed the circuit for the chips does not need to know what is inside PowerPoint. And somebody can write the software translating from one to the other. So in that case, I don't think understanding the transistor helps you understand PowerPoint, or very little. If you want to understand the computer, this question, I would say you have to understand it at different levels. If you really want to build one, right? But for the brain, I think these levels of understanding, so the algorithms, which kind of computation, the equivalent of PowerPoint, and the circuits, the transistors, I think they are much more intertwined with each other. There is not a neatly level of the software separate from the hardware. And so that's why I think in the case of the brain, the problem is more difficult and more than for computers requires the interaction, the collaboration between different types of expertise. The brain is a big hierarchical mess. You can't just disentangle levels. I think you can, but it's much more difficult. And it's not completely obvious. And as I said, I think it's one of the, personally, I think is the greatest problem in science. So I think it's fair that it's difficult. That's a difficult one. That said, you do talk about compositionality and why it might be useful. And when you discuss why these neural networks, in artificial or biological sense, learn anything, you talk about compositionality. See, there's a sense that nature can be disentangled. Or, well, all aspects of our cognition could be disentangled to some degree. So why do you think, first of all, how do you see compositionality? And why do you think it exists at all in nature? I spoke about, I use the term compositionality when we looked at deep neural networks, multilayers, and trying to understand when and why they are more powerful than more classical one layer networks, like linear classifier, kernel machines, so called. And what we found is that in terms of approximating or learning or representing a function, a mapping from an input to an output, like from an image to the label in the image, if this function has a particular structure, then deep networks are much more powerful than shallow networks to approximate the underlying function. And the particular structure is a structure of compositionality. If the function is made up of functions of function, so that you need to look on when you are interpreting an image, classifying an image, you don't need to look at all pixels at once. 
But you can compute something from small groups of pixels. And then you can compute something on the output of this local computation and so on, which is similar to what you do when you read a sentence. You don't need to read the first and the last letter. But you can read syllables, combine them in words, combine the words in sentences. So this is this kind of structure. So that's as part of a discussion of why deep neural networks may be more effective than the shallow methods. And is your sense, for most things we can use neural networks for, those problems are going to be compositional in nature, like language, like vision? How far can we get in this kind of way? So here is almost philosophy. Well, let's go there. Yeah, let's go there. So a friend of mine, Max Tegmark, who is a physicist at MIT. I've talked to him on this thing. Yeah, and he disagrees with you, right? A little bit. Yeah, we agree on most. But the conclusion is a bit different. His conclusion is that for images, for instance, the compositional structure of this function that we have to learn or to solve these problems comes from physics, comes from the fact that you have local interactions in physics between atoms and other atoms, between particle of matter and other particles, between planets and other planets, between stars and other. It's all local. And that's true. But you could push this argument a bit further. Not this argument, actually. You could argue that maybe that's part of the truth. But maybe what happens is kind of the opposite, is that our brain is wired up as a deep network. So it can learn, understand, solve problems that have this compositional structure and it cannot solve problems that don't have this compositional structure. So the problems we are accustomed to, we think about, we test our algorithms on, are this compositional structure because our brain is made up. And that's, in a sense, an evolutionary perspective that we've. So the ones that didn't have, that weren't dealing with the compositional nature of reality died off? Yes, but also could be maybe the reason why we have this local connectivity in the brain, like simple cells in cortex looking only at the small part of the image, each one of them, and then other cells looking at the small number of these simple cells and so on. The reason for this may be purely that it was difficult to grow long range connectivity. So suppose it's for biology. It's possible to grow short range connectivity but not long range also because there is a limited number of long range that you can. And so you have this limitation from the biology. And this means you build a deep convolutional network. This would be something like a deep convolutional network. And this is great for solving certain class of problems. These are the ones we find easy and important for our life. And yes, they were enough for us to survive. And you can start a successful business on solving those problems with Mobileye. Driving is a compositional problem. So on the learning task, we don't know much about how the brain learns in terms of optimization. So the thing that's stochastic gradient descent is what artificial neural networks use for the most part to adjust the parameters in such a way that it's able to deal based on the label data, it's able to solve the problem. So what's your intuition about why it works at all? How hard of a problem it is to optimize a neural network, artificial neural network? Is there other alternatives? 
Just in general, what's your intuition behind this very simplistic algorithm that seems to do pretty well, surprisingly so? Yes. So I find neuroscience, the architecture of cortex, is really similar to the architecture of deep networks. So there is a nice correspondence there between the biology and this kind of local connectivity, hierarchical architecture. The stochastic gradient descent, as you said, is a very simple technique. It seems pretty unlikely that biology could do that, from what we know right now about cortex and neurons and synapses. So it's a big open question whether there are other optimization learning algorithms that can replace stochastic gradient descent. And my guess is yes, but nobody has found yet a real answer. I mean, people are trying, still trying, and there are some interesting ideas. The fact that stochastic gradient descent is so successful has become clearer, not so mysterious. And the reason is that it's an interesting fact. It's a change, in a sense, in how people think about statistics. And this is the following, is that typically when you had data and you had, say, a model with parameters, you were trying to fit the model to the data, to fit the parameters. Typically, the kind of crowd wisdom type idea was you should have at least twice the number of data points as the number of parameters. Maybe 10 times is better. Now, the way you train neural networks these days is that they have 10 or 100 times more parameters than data, exactly the opposite. And it has been one of the puzzles about neural networks. How can you get something that really works when you have so much freedom? From that little data, it can generalize somehow. Right, exactly. Do you think the stochastic nature of it is essential, the randomness? So I think we have some initial understanding why this happens. But one nice side effect of having this overparameterization, more parameters than data, is that when you look for the minima of a loss function, like stochastic gradient descent is doing, you find, I made some calculations based on some old basic theorem of algebra called Bezout's theorem, which gives you an estimate of the number of solutions of a system of polynomial equations. Anyway, the bottom line is that there are probably more minima for a typical deep network than atoms in the universe. Just to say, there are a lot, because of the overparameterization. More global minima, zero minima, good minima? More global minima. Yeah, a lot of them. So you have a lot of solutions. So it's not so surprising that you can find them relatively easily. And this is because of the overparameterization. The overparameterization sprinkles that entire space with solutions that are pretty good. It's not so surprising, right? It's like if you have a system of linear equations and you have more unknowns than equations, then you have, we know, you have an infinite number of solutions. And the question is to pick one. That's another story. But you have an infinite number of solutions. So there are a lot of values of your unknowns that satisfy the equations. But it's possible that there's a lot of those solutions that aren't very good. What's surprising is that they're pretty good. So that's a good question. Why can you pick one that generalizes well? Yeah. That's a separate question with separate answers.
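The linear-equations analogy at the end of that exchange is easy to see numerically. The sketch below is just a generic underdetermined system, not a model of any particular network: with more unknowns than equations there is a whole family of zero-error solutions, and adding any null-space direction to one solution gives another.

```python
# More unknowns than equations: a toy stand-in for overparameterization.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 10))   # 3 "data points", 10 "parameters"
b = rng.normal(size=3)         # "labels"

x, *_ = np.linalg.lstsq(A, b, rcond=None)       # minimum-norm exact solution
print("residual:", np.linalg.norm(A @ x - b))   # ~0

# The null space of A is 7-dimensional, so there is a 7-parameter family of
# other exact solutions: add any null-space vector and the fit stays perfect.
null_rows = np.linalg.svd(A)[2][3:]             # rows spanning the null space
x2 = x + 5.0 * null_rows[0]
print("residual:", np.linalg.norm(A @ x2 - b))  # still ~0
```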
One theorem that people like to talk about, that kind of inspires imagination of the power of neural networks, is the universality, the universal approximation theorem, that you can approximate any continuous function with just a finite number of neurons in a single hidden layer. Do you find this theorem surprising? Do you find it useful, interesting, inspiring? No, this one, I never found it very surprising. It was known since the 80s, since I entered the field, because it's basically the same as the Weierstrass theorem, which says that I can approximate any continuous function with a polynomial with a sufficient number of terms, monomials. So basically the same. And the proofs are very similar. So your intuition was there was never any doubt that neural networks in theory could be very strong approximators. Right. The question, the interesting question, is that if this theorem says you can approximate, fine. But when you ask how many neurons, for instance, or in the case of polynomials, how many monomials, I need to get a good approximation, then it turns out that that depends on the dimensionality of your function, how many variables you have. But it depends on the dimensionality of your function in a bad way. For instance, suppose you want an error which is no worse than 10% in your approximation. You come up with a network that approximates your function within 10%. Then it turns out that the number of units you need is on the order of 10 to the dimensionality, d, how many variables you have. So if you have two variables, d is two, you have 100 units and OK. But if you have, say, 200 by 200 pixel images, now this is 40,000, whatever. We again go to the size of the universe pretty quickly. Exactly, 10 to the 40,000 or something. And so this is called the curse of dimensionality, not quite appropriately. And the hope is with the extra layers, you can remove the curse. What we proved is that if you have deep layers, a hierarchical architecture with local connectivity of the type of convolutional deep learning, and if you're dealing with a function that has this kind of hierarchical architecture, then you completely avoid the curse. You've spoken a lot about supervised deep learning. What are your thoughts, hopes, views on the challenges of unsupervised learning with GANs, with Generative Adversarial Networks? Do you see the power of GANs as distinct from supervised methods in neural networks, or are they really all in the same representation ballpark? GANs are one way to get an estimation of probability densities, which is a somewhat new way that people have not done before. I don't know whether this will really play an important role in intelligence. Or it's interesting. I'm less enthusiastic about it than many people in the field. I have the feeling that many people in the field are really impressed by the ability of producing realistic looking images in this generative way. Which explains the popularity of the methods. But you're saying that while that's exciting and cool to look at, it may not be the tool that's useful for it. So you described it kind of beautifully. Current supervised methods go n to infinity in terms of number of labeled points. And we really have to figure out how to go to n to 1. And you're thinking GANs might help, but they might not be the right tool. I don't think... for that problem, which I really think is important, I think they may help. They certainly have applications, for instance, in computer graphics.
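Backing up to the curse-of-dimensionality numbers quoted a moment ago, here is a small back-of-the-envelope sketch. The rates and constants are generic, illustrative ones, not the exact bounds from any particular theorem:

```python
# Rough version of the "10 to the d" scaling discussed above
# (illustrative rates only, not the precise constants of any theorem).
eps = 0.1                         # target accuracy: no worse than 10% error

def units_shallow(d):
    # generic d-variable function: units grow like (1/eps)^d
    return (1 / eps) ** d

def units_deep(d, k=2):
    # compositional function built from k-variable pieces:
    # roughly d * (1/eps)^k, so the exponent no longer grows with d
    return d * (1 / eps) ** k

print(units_shallow(2))           # 100.0 units: fine
print(units_deep(40_000))         # 4,000,000.0 units: large but feasible
# units_shallow(40_000) would be about 10**40000,
# far more than atoms in the universe.
```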
And I did work long ago, which was a little bit similar in terms of saying, OK, I have a network. And I present images. And I can input its images. And output is, for instance, the pose of the image. A face, how much is smiling, is rotated 45 degrees or not. What about having a network that I train with the same data set, but now I invert input and output. Now the input is the pose or the expression, a number, set of numbers. And the output is the image. And I train it. And we did pretty good, interesting results in terms of producing very realistic looking images. It was a less sophisticated mechanism. But the output was pretty less than GANs. But the output was pretty much of the same quality. So I think for a computer graphics type application, yeah, definitely GANs can be quite useful. And not only for that, but for helping, for instance, on this problem of unsupervised example of reducing the number of labeled examples. I think people, it's like they think they can get out more than they put in. There's no free lunch, as you said. What do you think, what's your intuition? How can we slow the growth of N to infinity in supervised, N to infinity in supervised learning? So for example, Mobileye has very successfully, I mean, essentially annotated large amounts of data to be able to drive a car. Now one thought is, so we're trying to teach machines, school of AI. And we're trying to, so how can we become better teachers, maybe? That's one way. No, I like that. Because again, one caricature of the history of computer science, you could say, begins with programmers, expensive. Continuous labelers, cheap. And the future will be schools, like we have for kids. Yeah. Currently, the labeling methods were not selective about which examples we teach networks with. So I think the focus of making networks that learn much faster is often on the architecture side. But how can we pick better examples with which to learn? Do you have intuitions about that? Well, that's part of the problem. But the other one is, if we look at biology, a reasonable assumption, I think, is in the same spirit that I said, evolution is opportunistic and has weak priors. The way I think the intelligence of a child, the baby may develop is by bootstrapping weak priors from evolution. For instance, you can assume that you have in most organisms, including human babies, built in some basic machinery to detect motion and relative motion. And in fact, we know all insects from fruit flies to other animals, they have this, even in the retinas, in the very peripheral part. It's very conserved across species, something that evolution discovered early. It may be the reason why babies tend to look in the first few days to moving objects and not to not moving objects. Now, moving objects means, OK, they're attracted by motion. But motion also means that motion gives automatic segmentation from the background. So because of motion boundaries, either the object is moving or the eye of the baby is tracking the moving object and the background is moving, right? Yeah, so just purely on the visual characteristics of the scene, that seems to be the most useful. Right, so it's like looking at an object without background. It's ideal for learning the object. Otherwise, it's really difficult because you have so much stuff. So suppose you do this at the beginning, first weeks. Then after that, you can recognize object. Now they are imprinted, the number one, even in the background, even without motion. 
So that's, by the way, I just want to ask on the object recognition problem. So there is this being responsive to movement and doing edge detection, essentially. What's the gap between being effective at visually recognizing stuff, detecting where it is, and understanding the scene? Is this a huge gap in many layers, or is it close? No, I think that's a huge gap. I think present algorithm with all the success that we have and the fact that there are a lot of very useful, I think we are in a golden age for applications of low level vision and low level speech recognition and so on, Alexa and so on. There are many more things of similar level to be done, including medical diagnosis and so on. But we are far from what we call understanding of a scene, of language, of actions, of people. That is, despite the claims, that's, I think, very far. We're a little bit off. So in popular culture and among many researchers, some of which I've spoken with, the Stuart Russell and Elon Musk, in and out of the AI field, there's a concern about the existential threat of AI. And how do you think about this concern? And is it valuable to think about large scale, long term, unintended consequences of intelligent systems we try to build? I always think it's better to worry first, early, rather than late. So worry is good. Yeah. I'm not against worrying at all. Personally, I think that it will take a long time before there is real reason to be worried. But as I said, I think it's good to put in place and think about possible safety against. What I find a bit misleading are things like that have been said by people I know, like Elon Musk, and what is Bostrom in particular, and what is his first name? Nick Bostrom. Nick Bostrom, right. And a couple of other people that, for instance, AI is more dangerous than nuclear weapons. I think that's really wrong. That can be misleading. Because in terms of priority, we should still be more worried about nuclear weapons and what people are doing about it and so on than AI. And you've spoken about Demis Hassabis and yourself saying that you think you'll be about 100 years out before we have a general intelligence system that's on par with a human being. Do you have any updates for those predictions? Well, I think he said. He said 20, I think. He said 20, right. This was a couple of years ago. I have not asked him again. So should I have? Your own prediction, what's your prediction about when you'll be truly surprised? And what's the confidence interval on that? It's so difficult to predict the future and even the present sometimes. It's pretty hard to predict. But I would be, as I said, this is completely, I would be more like Rod Brooks. I think he's about 200 years. 200 years. When we have this kind of AGI system, artificial general intelligence system, you're sitting in a room with her, him, it. Do you think the underlying design of such a system is something we'll be able to understand? It will be simple? Do you think it'll be explainable, understandable by us? Your intuition, again, we're in the realm of philosophy a little bit. Well, probably no. But again, it depends what you really mean for understanding. So I think we don't understand how deep networks work. I think we are beginning to have a theory now. But in the case of deep networks, or even in the case of the simpler kernel machines or linear classifier, we really don't understand the individual units or so. But we understand what the computation and the limitations and the properties of it are. 
It's similar to many things. What does it mean to understand how a fusion bomb works? How many of us understand the basic principle? And some of us may understand deeper details. In that sense, understanding is, as a community, as a civilization, can we build another copy of it? And in that sense, do you think there will need to be some evolutionary component where it runs away from our understanding? Or do you think it could be engineered from the ground up, the same way you go from the transistor to PowerPoint? So many years ago, this was actually 40, 41 years ago, I wrote a paper with David Marr, who was one of the founding fathers of computer vision, computational vision. I wrote a paper about levels of understanding, which is related to the question we discussed earlier about understanding PowerPoint, understanding transistors, and so on. And in that kind of framework, we had the level of the hardware and the top level of the algorithms. We did not have learning. Recently, I updated it, adding levels. And one level I added to those three was learning. And you can imagine, you could have a good understanding of how you construct a learning machine, like we do, but be unable to describe in detail what the learning machine will discover, right? Now, that would still be a powerful understanding, if I can build a learning machine, even if I don't understand in detail what it learns every time it learns something. Just like our children, if they start listening to a certain type of music, I don't know, Miley Cyrus or something, you don't understand why they came to that particular preference. But you understand the learning process. That's very interesting. So on learning, for systems to be part of our world, one of the challenging things that you've spoken about is learning ethics, learning morals. And how hard do you think is the problem of, first of all, humans understanding our ethics? What is the origin of ethics on the neural, low level? What is it at the higher level? Is it something that's learnable by machines, in your intuition? I think, yeah, ethics is learnable, very likely. I think it's one of these problems where understanding the neuroscience of ethics... you know, people discuss, there is an ethics of neuroscience. Yeah, yes. How a neuroscientist should or should not behave. You can think of a neurosurgeon and the ethics rules he or she has to follow. But I'm more interested in the neuroscience of ethics. You're blowing my mind right now. The neuroscience of ethics is very meta. Yeah, and I think that would be important to understand, also for being able to design machines that are ethical machines in our sense of ethics. And you think there is something in neuroscience, there are patterns, tools in neuroscience that could help us shed some light on ethics? Or is it mostly in the domain of psychology or sociology, at a higher level? No, there is psychology. But there is also, in the meantime, there is evidence, fMRI, of specific areas of the brain that are involved in certain ethical judgments. And not only this, you can stimulate those areas with magnetic fields and change the ethical decisions. Yeah, wow. So that's work by a colleague of mine, Rebecca Saxe. And there are other researchers doing similar work. And I think this is the beginning. But ideally, at some point, we'll have an understanding of how this works. And why it evolved, right? The big why question. Yeah, it must have some purpose. Yeah, obviously it has some social purposes, probably.
If neuroscience holds the key to at least illuminating some aspect of ethics, that means it could be a learnable problem. Yeah, exactly. And as we're getting into harder and harder questions, let's go to the hard problem of consciousness. Is this an important problem for us to think about and solve on the engineering of intelligence side of your work, of our dream? It's unclear. So again, this is a deep problem, partly because it's very difficult to define consciousness. And there is a debate among neuroscientists, and philosophers of course, about whether consciousness is something that requires flesh and blood, so to speak. Or it could be that we could have silicon devices that are conscious, or, up to statements like, everything has some degree of consciousness and some things more than others. This is like Giulio Tononi and phi. We just recently talked to Christof Koch. OK. Christof was my first graduate student. Do you think it's important to illuminate aspects of consciousness in order to engineer intelligent systems? Do you think an intelligent system would ultimately have consciousness? Are they interlinked? Most of the people working in artificial intelligence, I think, would answer, we don't strictly need consciousness to have an intelligent system. That's sort of the easier question, because it's a very engineering answer to the question. Pass the Turing test, we don't need consciousness. But if you were to go, do you think it's possible that we need to have that kind of self awareness? We may, yes. So for instance, I personally think that when we test a machine or a person in a Turing test, in an extended Turing test, I think consciousness is part of what we require in that test, implicitly, to say that this is intelligent. Christof disagrees. Yes, he does. Despite many other romantic notions he holds, he disagrees with that one. Yes, that's right. So we'll see. Do you think, as a quick question, Ernest Becker's fear of death, do you think mortality and those kinds of things are important for consciousness and for intelligence? The finiteness of life, the finiteness of existence, or is that just a side effect of evolution, an evolutionary side effect that's useful for natural selection? Do you think this kind of thing, that this interview is going to run out of time soon, our life will run out of time soon, do you think that's needed to make this conversation good and life good? I never thought about it. It's a very interesting question. I think Steve Jobs, in his commencement speech at Stanford, argued that having a finite life was important for stimulating achievements. So it was different. Yeah, live every day like it's your last, right? Yeah. So rationally, I don't think strictly you need mortality for consciousness. But who knows? They seem to go together in our biological system, right? Yeah, yeah. You've mentioned before, and students you're associated with, AlphaGo and Mobileye, the big recent success stories in AI. And I think it's captivated the entire world as to what AI can do. So what do you think will be the next breakthrough? And what's your intuition about the next breakthrough? Of course, I don't know where the next breakthrough is. I think that there is a good chance, as I said before, that the next breakthrough will also be inspired by neuroscience. But which one, I don't know. And, so MIT has this Quest for Intelligence. And there are a few moonshots, which, in that spirit, which ones are you excited about? Which projects, kind of?
Well, of course, I'm excited about one of the moonshots, which is our Center for Brains, Minds, and Machines, which is the one which is fully funded by NSF. And it is about visual intelligence. And that one is particularly about understanding visual intelligence, so the visual cortex, and visual intelligence in the sense of how we look around ourselves and understand the world around us, meaning what is going on, how we could go from here to there without hitting obstacles, whether there are other agents, people, in the environment. These are all things that we perceive very quickly. And it's something actually quite close to being conscious, not quite. But there is this interesting experiment that was run at Google X, which in a sense is just a virtual reality experiment, but in which they had a subject sitting, say, in a chair with goggles, like an Oculus, and so on, earphones. And they were seeing through the eyes of a robot nearby, two cameras, microphones for receiving. So their sensory system was there. And the impression of all the subjects, very strong, they could not shake it off, was that they were where the robot was. They could look at themselves from the robot and still feel they were where the robot is. They were looking at their body. Their self had moved. So some aspect of scene understanding has to have the ability to place yourself, have a self awareness about your position in the world and what the world is. So we may have to solve the hard problem of consciousness to solve it. On the way, yes. It's quite a moonshot. So you've been an advisor to some incredible minds, including Demis Hassabis, Christof Koch, Amnon Shashua, like you said. All went on to become seminal figures in their respective fields. From your own success as a researcher and from your perspective as a mentor of these researchers, having guided them in the way of advice, what does it take to be successful in science and engineering careers? Whether you're talking to somebody in their teens, 20s, or 30s, what does that path look like? It's curiosity and having fun. And I think it's important also having fun with other curious minds. It's the people you surround yourself with too, so fun and curiosity. Is there, you mentioned Steve Jobs, is there also an underlying ambition that's unique that you saw? Or does it really boil down to insatiable curiosity and fun? Well of course, it's being curious in an active and ambitious way, yes. Definitely. But I think sometimes in science, there are friends of mine who are like this, there are some scientists who like to work by themselves and kind of communicate only when they complete their work or discover something. I think I always found the actual process of discovering something is more fun if it's together with other intelligent and curious and fun people. So if you see the fun in that process, the side effect of that process will be that you'll actually end up discovering some interesting things. So as you've led many incredible efforts here, what's the secret to being a good advisor, mentor, leader in a research setting? Is it a similar spirit? Or yeah, what advice could you give to people, young faculty and so on? It's partly repeating what I said about an environment that should be friendly and fun and ambitious. And I think I learned a lot from some of my advisors and friends, and some who are physicists.
And there was, for instance, this behavior that was encouraged: when somebody comes with a new idea in the group, unless it's really stupid, you are always enthusiastic. And you're enthusiastic for a few minutes, for a few hours. Then you start asking critically a few questions, testing this. But this is a process that is, I think, very good. You have to be enthusiastic. Sometimes people are very critical from the beginning. That's not... Yes, you have to give it a chance for that seed to grow. That said, with some of your ideas, which are quite revolutionary, as we've witnessed, especially on the human vision side and the neuroscience side, there could be some pretty heated arguments. Do you enjoy these? Is that a part of science and academic pursuits that you enjoy? Yeah. Is that something that happens in your group as well? Yeah, absolutely. I also spent some time in Germany. Again, there is this tradition in which people are more forthright, less kind than here. So in the U.S., when you write a bad letter, you still say, this guy's nice. Yes, yes. So... Yeah, here in America, it's degrees of nice. Yes. It's all just degrees of nice, yeah. Right, right. So as long as this does not become personal, and it's really like a football game with these rules, that's great. That's fun. So if you somehow found yourself in a position to ask one question of an oracle, like a genie, maybe a god, and you're guaranteed to get a clear answer, what kind of question would you ask? What would be the question you would ask? In the spirit of our discussion, it could be, how could I become 10 times more intelligent? And so, but see, you only get a clear short answer. So do you think there's a clear short answer to that? No. And that's the answer you'll get. Okay, so you've mentioned Flowers for Algernon. Oh, yeah. As a story that inspired you in your childhood, this story of a mouse and a human achieving genius level intelligence, and then understanding what was happening while slowly becoming not intelligent again, and this tragedy of gaining intelligence and losing intelligence, do you think, in the spirit of that story, do you think intelligence is a gift or a curse, from the perspective of happiness and meaning of life? You try to create an intelligent system that understands the universe, but on an individual level, the meaning of life, do you think intelligence is a gift? It's a good question. I don't know. As one of the people considered among the smartest people in the world, in some dimension at the very least, what do you think? I don't know, it may be invariant to intelligence, that degree of happiness. It would be nice if it were. That's the hope. Yeah. You could be smart and happy and clueless and happy. Yeah. As always, on the discussion of the meaning of life, it's probably a good place to end. Tomaso, thank you so much for talking today. Thank you, this was great.
Tomaso Poggio: Brains, Minds, and Machines | Lex Fridman Podcast #13
The following is a conversation with Kyle Vogt. He's the president and the CTO of Cruise Automation, leading an effort to solve one of the biggest robotics challenges of our time, vehicle automation. He's a cofounder of two successful companies, Twitch and Cruise, that have each sold for a billion dollars. And he's a great example of the innovative spirit that flourishes in Silicon Valley, now facing an interesting and exciting challenge of matching that spirit with the mass production and the safety centric culture of a major automaker like General Motors. This conversation is part of the MIT Artificial General Intelligence series and the Artificial Intelligence podcast. If you enjoy it, please subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F R I D. And now, here's my conversation with Kyle Vogt. You grew up in Kansas, right? Yeah, and I just saw that picture you had hidden over there, so I'm a little bit worried about that now. Yeah, so in high school in Kansas City, you joined the Shawnee Mission North High School robotics team. Yeah. Now, that wasn't your high school. That's right, that was the only high school in the area that had a teacher who was willing to sponsor our first robotics team. I was gonna troll you a little bit, jog your memory a little bit. Yeah, I was trying to look super cool and intense, because you know this was BattleBots. This is serious business. So we're standing there with a welded steel frame, looking tough. So go back there. What is it that drew you to robotics? Well, I've been trying to figure this out for a while, but I've always liked building things with Legos. And when I was really, really young, I wanted the Legos that had motors and other things. And then Lego Mindstorms came out, and for the first time you could program Lego contraptions. And I think things just sort of snowballed from that. But I remember seeing the BattleBots TV show on Comedy Central and thinking, that is the coolest thing in the world, I want to be a part of that, and not knowing a whole lot about how to build these 200 pound fighting robots. So I obsessively pored over the internet forums where all the creators for BattleBots would hang out and document their build progress and everything. And I must have read tens of thousands of forum posts, basically everything that was out there on what these people were doing, and eventually sort of triangulated how to put some of these things together. And I ended up doing BattleBots when I was like 13 or 14, which was pretty awesome. I'm not sure if the show is still running, but so BattleBots is, there's not an artificial intelligence component. It's remotely controlled. And it's almost like a mechanical engineering challenge of building things that can be broken. They're radio controlled. And I think they allowed some limited form of autonomy. But in a two minute match, the way these things ran, you're really doing yourself a disservice by trying to automate it versus just doing the practical thing, which is driving it yourself. And there's an entertainment aspect. Just going on YouTube, some of them wield an axe, some of them, I mean, there's that fun. So what drew you to that aspect? Was it the mechanical engineering? Was it the dream to create a Frankenstein, a sentient being?
Or was it just, like the Lego, that you like tinkering with stuff? I mean, it was just building something. I think the idea of this radio controlled machine that can do various things, if it has a weapon or something, was pretty interesting. I agree it doesn't have the same appeal as autonomous robots, which I sort of gravitated towards later on. But it was definitely an engineering challenge, because everything you did in that competition was pushing components to their limits. So we would buy these $40 DC motors that came out of a winch, like on the front of a pickup truck or something, and we'd power the car with those and run them at like double or triple their rated voltage. So they immediately start overheating, but for that two minute match you can get a significant increase in the power output of those motors before they burn out. And you're doing the same thing for your battery packs, all the materials in the system. And I think there's something intrinsically interesting about just seeing where things break. And did you offline see where they break? Did you take it to the testing point? Like, how did you know two minutes? Or was there a reckless, let's just go with it and see? We weren't very good at BattleBots. We lost all of our matches in the first round. The ones I built, both of them, were these wedge shaped robots, because a wedge, even though it's sort of boring to look at, is extremely effective. You drive towards another robot and the front edge of it gets under them and then they sort of flip over, kind of like a door stopper. And the first one had a pneumatic, polished stainless steel spike on the front that would shoot out about eight inches. The purpose of which is what? Pretty ineffective actually, but it looks cool. And was it to help with the lift? No, it was just to try to poke holes in the other robot. And then the second time I did it, which was, I think, maybe 18 months later, we had a titanium axe with a hardened steel tip on it that was powered by a hydraulic cylinder, which we were activating with liquid CO2, which had its own set of problems. So great, so that's kind of on the hardware side. I mean, at a certain point there must have been born a fascination on the software side. So what was the first piece of code you wrote? Go back there, see, what language was it? Was it Emacs? Vim? Was it a more respectable modern IDE? Do you remember any of this? Yeah, well, I remember, I think maybe when I was in third or fourth grade, the school I was at, an elementary school, had a bunch of Apple II computers and we'd play games on those. And I remember every once in a while something would crash or wouldn't start up correctly and it would dump you out to what I later learned was sort of a command prompt. And my teacher would come over and type, I actually remember this to this day for some reason, like PR number six or PR pound six, which is peripheral six, which is the disk drive, which would fire up the disk and load the program. And I just remember thinking, wow, she's like a hacker, like, teach me these codes, these error codes, as I called them at the time. But she had no interest in that, so it wasn't until about fifth grade that I was at a school where you could actually go on these Apple IIs and learn to program.
And so it was all in BASIC, where every line is numbered, and you have to leave enough space between the numbers so that if you want to tweak your code you can go back: the first line was 10 and the second line is 20, and now you have to go back and insert 15, and if you need to add code in front of that, an 11 or 12, and you hope you don't run out of line numbers and have to redo the whole thing. And there's go to statements? Yeah, go to, and it's very basic, maybe hence the name, but a lot of fun. And that's when, you know, when you first program, you see the magic of it. It's like this world opens up with endless possibilities for the things you could build or accomplish with that computer. So you got the bug then, so even starting with BASIC, and then what, C++ throughout? Were there computer programming, computer science classes in high school? Not where I went, so it was self taught, but I did a lot of programming. The thing that sort of pushed me onto the path of eventually working on self driving cars was actually one of these really long trips, driving from my house in Kansas to, I think, Las Vegas, where we did the BattleBots competition. I had just gotten my learner's permit or early driver's permit, and so I was driving this 10 hour stretch across western Kansas where you're just going straight on a highway and it is mind numbingly boring. And I remember thinking, even then, with my sort of mediocre programming background, that this is something a computer can do, right? Let's take a picture of the road, let's find the yellow lane markers and steer the wheel. And later I'd come to realize this had been done since the 80s or the 70s or even earlier, but I still wanted to do it, and sort of immediately after that trip switched from BattleBots, which is more radio controlled machines, to thinking about building autonomous vehicles of some scale. Start off with really small electric ones and then progress to what we're doing now. So what was your view of artificial intelligence at that point? What did you think? So this is before, there have been waves in artificial intelligence, right? The current wave with deep learning makes people believe that you can solve, in a really rich, deep way, the computer vision perception problem. But before the deep learning craze, how would you even go about building a thing that perceives itself in the world, localizes itself in the world, moves around the world? Like when you were younger, what was your thinking about it? Well, prior to deep neural networks, or convolutional neural networks, these modern techniques we have, or at least the ones that are in use today, it was all heuristics based, like old school image processing. And I think extracting yellow lane markers out of an image of a road is one of the problems that lends itself reasonably well to those heuristic based methods: just do a threshold on the color yellow and then try to fit some lines to that using a Hough transform or something, and then go from there. Traffic light detection and stop sign detection, red, yellow, green.
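As an editor's aside, here is a minimal sketch of the kind of heuristic pipeline Kyle is describing, written in Python and assuming OpenCV and NumPy are available. The function name, HSV color bounds, and Hough parameters are illustrative assumptions, not anything from Cruise or from the system he actually built.

```python
# A minimal sketch of the heuristic lane-marker pipeline described above:
# threshold the color yellow, then fit line segments with a Hough transform.
# Thresholds and parameters are illustrative placeholders, not tuned values.
import cv2
import numpy as np

def find_yellow_lane_lines(bgr_image):
    # Convert to HSV so "yellow" becomes a simple hue/saturation range.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    yellow_mask = cv2.inRange(hsv, (15, 80, 80), (35, 255, 255))

    # Edges on the masked image, then a probabilistic Hough transform
    # to pull out straight segments that could be lane markers.
    edges = cv2.Canny(yellow_mask, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=40, minLineLength=40, maxLineGap=20)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```

In a pipeline like this, a next step would be averaging the returned segments into a single left and right lane line and steering toward the lane center, which is roughly the "steer the wheel" part of the idea.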
And I think you could, I mean, I was just trying to make something that would stay in between the lanes on a highway, but if you wanted to do the full set of capabilities needed for a driverless car, I think you could, and we did this at Cruise in the very first days, start off with a really simple, human written heuristic just to get the scaffolding in place for your system. Traffic light detection, probably a really simple color thresholding on day one, just to get the system up and running before you migrate to a deep learning based technique or something else. And back when I was doing this, my first one, it was on a Pentium, a 233 megahertz computer, and I think I wrote the first version in BASIC, which is an interpreted language and extremely slow, because that's the thing I knew at the time. And so there was no chance at all, no computational power, to do any sort of reasonable deep nets like you have today. So I don't know what kids these days are doing. Are kids these days, at age 13, using neural networks in their garage? I mean, that would be awesome. I get emails all the time from, you know, 11, 12 year olds saying, I'm trying to follow this TensorFlow tutorial and I'm having this problem. And the general approach in the deep learning community is one of extreme optimism, as opposed to, you mentioned heuristics, you can separate the autonomous driving problem into modules and try to solve it sort of rigorously, or you can just do it end to end. And most people just kind of love the idea that us humans do it end to end, we just perceive and act, so we should be able to do the same kind of thing with neural nets. And you don't want to criticize that kind of thinking, because eventually they will be right. Yeah. And so it's exciting, especially when they're younger, to explore that as a really exciting approach. But yeah, it's changed the language, the kind of stuff you're tinkering with. It's kind of exciting to see when these teenagers grow up. Yeah, I can only imagine, if your starting point is Python and TensorFlow at age 13, where you end up after 10 or 15 years of that. That's pretty cool. Because of GitHub, because the state of the art tools for solving most of the major problems in artificial intelligence are within a few lines of code for most kids. And that's incredible to think about, also on the entrepreneurial side. And on that point, was there any thought about entrepreneurship before you came to college? Sort of building this into a thing that impacts the world on a large scale? Yeah, I've always wanted to start a company. I think that's just a cool concept, creating something and exchanging it for value, or creating value, I guess. So in high school, I was trying to build servo motor drivers, little circuit boards, and sell them online, and other things like that. And I certainly knew at some point I wanted to do a startup, but it wasn't really, I'd say, until college that I felt like I had the right combination of the environment, the smart people around you, and some free time. And a lot of free time at MIT. So you came to MIT as an undergrad in 2004. That's right.
And that's when the first DARPA Grand Challenge was happening. Yeah. The timing of that is beautifully poetic. So how did you get yourself involved in that one? Originally there wasn't an official entry. Yeah, a faculty sponsored thing. And so a bunch of undergrads, myself included, started meeting, got together, and tried to haggle together some sponsorships. We got a vehicle donated, a bunch of sensors, and tried to put something together. And so our team was probably mostly freshmen and sophomores, which was not really a fair fight against the postdoc and faculty led teams from other schools. But we got something up and running. We had our vehicle drive by wire, and very basic control and things. But on the day of the qualifying, sort of pre qualifying round, the one and only steering motor that we had purchased, the thing that we had retrofitted to turn the steering wheel on the truck, died. And so our vehicle was just dead in the water, couldn't steer. So we didn't make it very far. On the hardware side. So was there a software component? How did your view of autonomous vehicles, in terms of artificial intelligence, evolve in this moment? I mean, like you said, autonomous vehicles go back to the 80s, but really that was the birth of the modern wave, the thing that captivated everyone's imagination, that we can actually do this. So how were you captivated in that way? How did your view of autonomous vehicles change at that point? I'd say at that point in time it was a curiosity, as in, is this really possible? And I think that was generally the spirit and the purpose of that original DARPA Grand Challenge, which was to just get a whole bunch of really brilliant people exploring the space and pushing the limits. And to this day, that DARPA challenge, with its million dollar prize pool, was probably one of the most effective uses of taxpayer money, dollar for dollar, that I've seen, because that small initiative that DARPA put out was, in my view, the catalyst or the tipping point for this whole next wave of autonomous vehicle development. So that was pretty cool. So let me jump around a little bit on that point. They also did the Urban Challenge, where it was in the city, but it was very artificial: there were no pedestrians and very little human involvement except a few professional drivers. Yeah. Do you think there's room, and then there was the Robotics Challenge with humanoid robots. Right. So in your role now, looking at this, you're trying to solve autonomous driving in one of the harder, more difficult places, San Francisco. Is there a role for DARPA to step in, to also kind of help out, like a challenge with new ideas, specifically pedestrians and so on, all these kinds of interesting things? Well, I haven't thought about it from that perspective. Is there anything DARPA could do today to further accelerate things? My instinct is that that's maybe not the highest and best use of their resources and time, because kick starting and spinning up the flywheel is, I think, what they did in this case, for very, very little money.
But today this has become commercially interesting to very large companies, and the amount of money going into it and the number of people going through your class and learning about these things and developing these skills is just orders of magnitude more than it was back then. And so there's enough momentum and inertia and energy and investment dollars in this space right now that I think they can just say mission accomplished and move on to the next area of technology that needs help. So then, stepping back to MIT, you left MIT during your junior year. What was that decision like? As I said, I always wanted to start a company, and this opportunity landed in my lap, which was a couple of guys from Yale were starting a new company. I googled them and found that they had started a company previously and sold it, actually on eBay, for about a quarter million bucks, which was a pretty interesting story. So I thought to myself, these guys are rock star entrepreneurs, they've done this before, they must be driving around in Ferraris because they sold their company, and I thought I could learn a lot from them. So I teamed up with those guys and went out to California during IAP, which is MIT's month off, on a one way ticket, and basically never went back. We were having so much fun, we felt like we were building something and creating something, and it was going to be interesting, so I was just all in and got completely hooked. And that business was Justin.tv, which was originally a reality show about a guy named Justin, which morphed into a live video streaming platform, which then morphed into what is Twitch today. So that was quite an unexpected journey. So no regrets? No. Looking back, it was just an obvious, I mean, one way ticket. I mean, if we just pause on that for a second, how did you know these were the right guys, that this was the right decision? You didn't know, it was just a follow the heart kind of thing? Well, I didn't know, but trying something for a month during IAP seems pretty low risk, right? And then, well, maybe I'll take a semester off, MIT's pretty flexible about that, you can always go back, right? And then after two or three cycles of that, I eventually threw in the towel. But I guess in that case I felt like I could always hit the undo button if I had to. Right. But nevertheless, when you look at it in retrospect, it seems like a brave decision that would be difficult for a lot of people to make. It wasn't as popular. I'd say the general flux of people out of MIT at the time was mostly into finance or consulting jobs in Boston or New York, and very few people were going to California to start companies. But today I'd say that's probably inverted, which is just a sign of the times, I guess. Yeah. So there's a story about midnight of March 18, 2007, where TechCrunch, I guess, announced Justin.tv a few hours earlier than it was supposed to. The site didn't work. I don't know if any of this is true, you can tell me. And you and one of the folks at Justin.tv, Emmett Shear, coded through the night. Can you take me through that experience?
So let me say a few nice things first: the article I read quoted Justin Kan saying that you were known for coding through problems and being a, quote, creative genius. So on that night, what was going through your head? Or, put another way, how do you solve these problems? What's your approach to solving these kinds of problems, where the line between success and failure seems to be pretty thin? That's a good question. Well, first of all, that's nice of Justin to say. I think I would have been maybe 21 years old then and not very experienced at programming, but as with everything in a startup, you're racing against the clock. And so our plan was, the second we had this live streaming camera backpack up and running, where Justin could wear it and no matter where he went in a city it would be streaming live video, and this was even before the iPhone, this was hard to do back then, we would launch. And so we thought we were there, the backpack was working, and we sent out all the emails to launch the company and do the press thing. And then we weren't quite actually there. And then we thought, oh, well, they're not going to announce it until maybe 10 a.m. the next morning, and it's, I don't know, 5 p.m. now. So how many hours do we have left? What is that, like 17 hours to go? And that was going to be fine. Was the problem obvious? Did you understand what could possibly go wrong? Like, how complicated was the system at that point? It was pretty messy. So to get a live video feed that looked decent working from anywhere in San Francisco, I put together this system where we had three or four cell phone data modems, and we'd take the video stream and sort of spray it across these three or four modems and then try to catch all the packets on the other side, over unreliable cell phone networks. It's pretty low level networking. Yeah, and putting these sort of protocols on top of all that to reassemble and reorder the packets, with time buffers and error correction and all that kind of stuff. And the night before, it was just staticky. Every once in a while the image would go staticky and there would be this horrible screeching audio noise, because the audio was also corrupted. And this would happen every five to ten minutes or so, and it was really off putting to the viewers. How do you tackle that problem? Were you just freaking out behind a computer? Were there other folks working on this problem? Were you behind a whiteboard? Yeah, it was a little lonely, because there were four of us working on the company and only two people really wrote code. Emmett wrote the website and the chat system, and I wrote the software for this video streaming device and video server. And so it was my sole responsibility to figure that out. And I think it's those moments, setting deadlines, trying to move quickly, being under that intense pressure, where sometimes people do their best and most interesting work. And so even though that was a terrible moment, I look back on it fondly, because that's one of those character defining moments, I think. So in October 2013, you founded Cruise Automation. Yeah.
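Stepping back to the multi modem streaming setup described above, here is a toy Python sketch of the reorder and reassemble idea: packets sprayed across several unreliable links carry sequence numbers, and the receiver holds a small buffer so the stream comes out in order despite jitter and loss. The class name, buffer size, and flush policy are invented for illustration; the actual Justin.tv protocol is not described in the conversation.

```python
# Toy sketch: put sequence-numbered packets from several flaky links back in order.
import heapq

class ReorderBuffer:
    def __init__(self, max_pending=64):
        self.expected = 0          # next sequence number to release
        self.pending = []          # min-heap of (seq, payload)
        self.max_pending = max_pending

    def push(self, seq, payload):
        """Accept a packet from any link; return payloads now ready, in order."""
        if seq < self.expected:
            return []              # duplicate or arrived too late, drop it
        heapq.heappush(self.pending, (seq, payload))
        out = []
        # Release consecutive packets; if the buffer grows too large, flush
        # past the gap rather than stalling the stream (a lost packet shows
        # up as a glitch instead of a freeze).
        while self.pending and (self.pending[0][0] == self.expected
                                or len(self.pending) > self.max_pending):
            seq0, data = heapq.heappop(self.pending)
            self.expected = seq0 + 1
            out.append(data)
        return out
```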
So progressing forward, another exceptionally successful company, acquired by GM in 2016 for $1 billion. But in October 2013, what was on your mind? What was the plan? How does one seriously start to tackle one of the hardest, most impactful robotics problems of our age? After going through Twitch, Twitch was, and is today, pretty successful, but the result was entertainment, mostly. The better the product was, the more we would entertain people and make money on the ad revenues and other things. And that was a good thing; it felt good to entertain people. But I figured, what is really the point of becoming a really good engineer and developing these skills, other than my own enjoyment? And I realized I wanted something that scratched more of an existential itch, something that truly matters. And so I basically made this list of requirements for a new company, if I was going to do another one. And the one thing I knew in the back of my head was that Twitch took about eight years to become successful. So whatever I did, I'd better be willing to commit at least 10 years to it. And when you think about things from that perspective, you certainly, I think, raise the bar on what you choose to work on. So for me, the three things were: it had to be something where the technology itself determines the success of the product, hard, really juicy technology problems, because that's what motivates me. Then it had to have a direct and positive impact on society in some way. An example would be health care, or self driving cars, because they save lives, other things where there's a clear connection to somehow improving other people's lives. And the last one was that it had to be a big business, because for the positive impact to matter, it's got to be at large scale. I was thinking about that for a while, I tried writing a Gmail clone and looked at some other ideas, and then a light bulb just sort of went off: self driving cars. That was the most fun I had ever had in college, working on that. And, well, what's the state of the technology? It's been 10 years. Maybe times have changed, and maybe now is the time to make this work. And I poked around, and the only other thing really out there at the time was the Google self driving car project. And I thought, surely there's a way to have an entrepreneur's mindset and sort of solve the minimum viable product here. And so I just took the plunge right then and there and said, this is something I know I can commit 10 years to. It's probably the greatest applied AI problem of our generation. And if it works, it's going to be both a huge business and therefore probably the most positive impact I can possibly have on the world. So after that light bulb went off, I went all in on Cruise immediately and got to work. Did you have an idea how to solve this problem, which aspect of the problem to solve? You know, slow moving retirement communities, we just had Oliver from Voyage here, urban driving, highway driving? Did you have a vision of the city of the future, where transportation is largely automated, that kind of thing? Or was it more fuzzy and gray area than that? My analysis of the situation was that Google had been putting a lot of money into that project.
They had a lot more resources, and they still hadn't cracked the fully driverless car. This is 2013, I guess. So I thought, what can I do to go from zero to significant scale, so I can actually solve the real problem, which is the driverless car? And I thought, here's the strategy: we'll start by solving a really simple problem that creates value for people. So we eventually ended up deciding on automating highway driving, which is relatively more straightforward, as long as there's a backup driver there. And the go to market would be to retrofit people's cars and just sell these products directly. And the idea was, we'll take all the revenue and profits from that and reinvest them in research for doing fully driverless cars. And that was the plan. The only thing that really changed along the way between then and now is that we never really launched the first product. We had enough interest from investors, and enough of a signal that this was something we should be working on, that after about a year of working on the highway autopilot, we had it working at a prototype stage, but we just completely abandoned it and said, we're going to go all in on driverless cars, now is the time. Can't think of anything that's more exciting, and if it works, more impactful, so we're just going to go for it. The idea of a retrofit is kind of interesting. Yeah. It's how you achieve scale. It's a really interesting idea. Is it something that's still in the back of your mind as a possibility? Not at all. I've come full circle on that one. After trying to build a retrofit product, and I'll touch on some of the complexities of that, and then also having been inside an OEM and seeing how things work and how a vehicle is developed and validated, when it comes to something that has safety critical implications, like controlling the steering and other control inputs on your car, it's pretty hard to get there with a retrofit. And even if you did, it creates a whole bunch of new complications around liability, or how you truly validated it. Or something in the base vehicle fails and causes your system to fail, whose fault is it? Or the car's anti lock brake system or other things kick in, or the software is different in one version of the car you retrofit versus another and you don't know, because the manufacturer has updated it behind the scenes. There's basically an infinite list of long tail issues that can get you. And if you're dealing with a safety critical product, that's not really acceptable. That's a really convincing summary of why that's really challenging. But I didn't know all that at the time, so we tried it anyway. But as a pitch, at the time, it's also a really strong one, because that's how you achieve scale and that's how you beat the leader at the time, Google, the only one in the market. The other big problem we ran into, which is perhaps the biggest problem from a business model perspective, is that we started with an Audi S4 as the vehicle we retrofitted with this highway driving capability, and we had kind of assumed that if we just knocked out like three makes and models of vehicles, that would cover like 80% of the San Francisco market. Doesn't everyone there drive, I don't know, a BMW or a Honda Civic or one of these three cars?
And then we surveyed our users and found out that it's all over the place. To get even a decent number of units sold, we'd have to support like 20 or 50 different models, and each one is a little butterfly that takes time and effort to maintain, the retrofit integration and custom hardware and all this. So it was a tough business. So GM manufactures and sells over 9 million cars a year, and what you with Cruise are trying to do is some of the most cutting edge innovation in terms of applying AI. And so how do those, you've talked about it a little bit before, but it's also just fascinating to me, we work a lot with automakers, the gap between Detroit and Silicon Valley, let's say, just to be sort of poetic about it. How do you close that gap? How do you take GM into a future where a large part of the fleet will be autonomous, perhaps? I want to start by acknowledging that GM is made up of tens of thousands of really brilliant, motivated people who want to be a part of the future. And so it's pretty fun to work with. The attitude inside a car company like that is to embrace this transformation and change rather than fear it. And I think that's a testament to the leadership at GM, and that's flowed all the way through to everyone you talk to, even the people in the assembly plants working on these cars. So that's really great. Starting from that position makes it a lot easier, so that when the people in San Francisco at Cruise interact with the people at GM, at least we have this common set of values, which is that we really want this stuff to work, because we think it's important and we think it's the future. That's not to say those two cultures don't clash. They absolutely do. There are different value systems. In a car company, the thing that gets you promoted, the reward system, is following the processes, delivering the program on time and on budget. So any sort of risk taking is discouraged in many ways, because if a program is late or if you shut down the plant for a day, you can count the millions of dollars that burn by pretty quickly. Whereas I think in most Silicon Valley companies, and at Cruise, in the methodology we were employing, especially around the time of the acquisition, the reward structure is about trying to solve these complex problems in any way, shape, or form, or coming up with crazy ideas where 90% of them won't work. And so meshing that culture of continuous improvement and experimentation with one where everything needs to be rigorously defined up front, so that you never slip a deadline or miss a budget, was a pretty big challenge. We're over three years in now after the acquisition, and I'd say the investment we made in figuring out how to work together successfully, who should do what, and how we bridge the gaps between these very different systems and ways of doing engineering work, is now one of our greatest assets, because I think we have this really powerful thing. But for a while, both GM and Cruise were very steep on the learning curve. Yeah, so I'm sure it was very stressful. It's really important work, because that's how you revolutionize transportation, really how you revolutionize any system. You look at the health care system or you look at the legal system.
I have people like lawyers come up to me all the time, saying everything they're working on could easily be automated. But then that's not a good feeling. Yeah, well, it's not a good feeling, but there's also no way to automate it, because the entire infrastructure is older and moves very slowly. So how do you close that gap? How can you replace, of course lawyers don't want to be replaced with an app, but you could replace a lot of aspects of the work when most of the data is still on paper. And the same thing was true with automotive. I mean, it's fundamentally software. It's basically hiring software engineers. It's thinking in a software world. I mean, I'm pretty sure nobody in Silicon Valley has ever hit a deadline. That's probably true, yeah. And the GM side is probably the opposite. Yeah. So that culture gap is really fascinating. So you're optimistic about the future of that? Yeah, I mean, from what I've seen, it's impressive. And I think, especially in Silicon Valley, it's easy to write off building cars, because people have been doing that for over 100 years now in this country, and so it seems like a solved problem. But that doesn't mean it's an easy problem. And I think it would be easy to overlook that and think, we're Silicon Valley engineers, we can solve any problem, building a car has been done, therefore it's not a real engineering challenge. But after having seen just the sheer scale and magnitude and industrialization that occurs inside an automotive assembly plant, that is a lot of work that I am very glad we don't have to reinvent to make self driving cars work. And so to have partners who have done that for 100 years now, with these great processes and this huge infrastructure and supply base that we can tap into, is just remarkable, because the scope and surface area of the problem of deploying fleets of self driving cars is so large that we're constantly looking for ways to do less, so we can focus on the things that really matter more. And if we had to figure out how to build and assemble the cars themselves, I mean, we work closely with GM on that, but if we had to develop all that capability in house as well, that would just make the problem really intractable, I think. So yeah, just like your first entry at the MIT DARPA Challenge, when it was the motor that failed, if somebody who knows what they're doing with motors had handled it. That would have been nice, if we could have focused on the software, not the hardware platform. Yeah. Right. So from your perspective now, there are so many ways that autonomous vehicles can impact society in the next year, five years, ten years. What do you think is the biggest opportunity to make money in autonomous driving, to make it a financially viable thing in the near term? What do you think will be the biggest impact there? Well, the things that drive the economics for fleets of self driving cars are a handful of variables. One is the cost to build the vehicle itself: the material cost, what's the cost of all your sensors, plus the cost of the vehicle and all the other components on it. Another one is the lifetime of the vehicle. It's very different if your vehicle drives 100,000 miles and then falls apart versus, you know, two million.
And then, if you have a fleet, it's kind of like an airplane or an airline: once you produce the vehicle, you want it in operation as many hours a day as possible, producing revenue. And then the other piece is how you're generating revenue, which I think is what you're asking. And I think the obvious thing today is the ride sharing business, because it's pretty clear there's demand for that, there are existing markets you can tap into in large urban areas, that kind of thing. Yeah, yeah. And I think there are some real benefits to having cars without drivers compared to the status quo for people who use ride share services today. You get privacy, consistency, hopefully significantly improved safety, all these benefits versus the current product. But it's a crowded market. And then other opportunities, where you've seen a lot of activity really in the last six or twelve months, are delivery, whether that's parcels and packages, food, or groceries. Those are all opportunities that are pretty ripe: once you have this core technology, which is the fleet of autonomous vehicles, there are all sorts of different business opportunities you can build on top of it. But the important thing, of course, is that there's zero monetization opportunity until you actually have that fleet of very capable driverless cars that are as good as or better than humans. And that's where the entire industry is in this holding pattern right now. Yeah, trying to achieve that baseline. So, but you said sort of, not reliability, consistency. It's kind of interesting. I think I heard you say somewhere, I'm not sure if that's what you meant, but I can imagine a situation where you would get an autonomous vehicle, and, you know, when you get into an Uber or Lyft, you don't get to choose the driver, in the sense that you don't get to choose the personality of the driving. Do you think there's room to define the personality of the car, the way it drives you, in terms of aggressiveness, for example, in terms of pushing the bounds? One of the biggest challenges of autonomous driving is the trade off between safety and assertiveness. Do you think there's any room for the human to take a role in that decision, to accept some of the liability, I guess? I wouldn't, no. I'd say within reasonable bounds, as in, I think it'd be highly unlikely we'd expose any knob that would let you significantly increase safety risk. I think that's just not something we'd be willing to do. But driving style, like, are you going to relax the comfort constraints slightly, or things like that, all of those things make sense and are plausible. I see all those as nice optimizations once we get the core problem solved and these fleets out there. But the other thing we've observed is that you have this intuition that if you slam your foot on the gas right after the light turns green and aggressively accelerate, you're going to get there faster. But the actual impact of doing that is pretty small. You feel like you're getting there faster, but the same would be true for AVs.
Even if they don't slam the pedal to the floor when the light turns green, they're going to get you there, if it's a 15 minute trip, within 30 seconds of what you would have done otherwise if you were driving really aggressively. So I think there's this sort of self deception that my aggressive driving style is getting me there faster. Well, so that's some of the stuff I've studied, some of the stuff I'm fascinated by, the psychology of that. I don't think it matters that it doesn't get you there faster. It's the emotional release. Driving is a place, being inside of a car, somebody said it's like the real world version of being a troll. You have this protection, this mental protection, and you're able to sort of yell at the world, release your anger, whatever. So there's an element of that that I think autonomous vehicles would also have to deal with, giving an outlet to people, but it doesn't have to be through driving or honking or so on. There might be other outlets. But to just put that aside, the baseline is really the focus. That's the thing you need to solve, and then the fun human things can be solved after. So from the baseline of just solving autonomous driving, you're working in San Francisco, one of the more difficult cities to operate in. What is, in your view, currently the hardest aspect of autonomous driving? Is it negotiating with pedestrians? Is it edge cases of perception? Is it planning? Is it mechanical engineering? Is it data, fleet stuff? What are your thoughts on the more challenging aspects there? That's a good question. I think before we go to that, though, I just want to say I like what you said about the psychology aspect of this, because one observation I've made is, I think I read somewhere that Americans on average spend over an hour a day on social media, like staring at Facebook. And so that's just 60 minutes of your life you're not getting back, and it's probably not super productive. That's 3,600 seconds, right? That's a lot of time you're giving up. And if you compare that to people being on the road: if another vehicle, whether it's a human driver or an autonomous vehicle, delays them by even three seconds, they're laying on the horn, even though that's one thousandth of the time they waste looking at Facebook every day. So there are definitely some psychology aspects of this, I think, that are pretty interesting, road rage in general. And then the question, of course, is if everyone is in self driving cars, do they even notice these three second delays anymore? Because they're doing other things, or reading, or working, or just talking to each other. So it'll be interesting to see where that goes. In a certain aspect, people need to be distracted by something entertaining, something useful inside the car, so they don't pay attention to the external world. And then they can take whatever psychology and bring it back to Twitter and focus on that, as opposed to putting the emotion out there into the world. So it's an interesting problem, but baseline autonomy.
I guess you could say self driving cars at scale will lower the collective blood pressure of society, probably by a couple of points, without all that road rage and stress. So that's a good externality. So back to your question about the technology and, I guess, the biggest problems. I have a hard time answering that question, because we've been at this, specifically focusing on driverless cars and all the technology needed to enable that, for a little over four and a half years now. And even a year or two in, I felt like we had completed the functionality needed to get someone from point A to point B. As in, if we need to do a left turn maneuver, or drive around a double parked vehicle into oncoming traffic, or navigate through construction zones, the scaffolding and the building blocks were there pretty early on. And so the challenge is not any one scenario or situation at which we fail 100% of the time. It's more that we're benchmarking against a pretty high standard, which is human driving. All things considered, humans are excellent at handling edge cases and unexpected scenarios, where computers are the opposite. And so beating that baseline set by humans is the challenge. And what we've been doing for quite some time now is basically a continuous improvement process, where we find the most uncomfortable events, or the things that could lead to a safety issue, and then we categorize them and rework parts of our system to make incremental improvements, and do that over and over and over again. And we see the overall performance of the system increasing at a pretty steady clip. But there's no one thing. There are actually thousands of little things: polishing functionality and making sure it handles every version and possible permutation of a situation, either by applying more deep learning systems, or just by adding more test coverage or new scenarios that we develop against, and grinding on that. We're sort of in the unsexy phase of development right now, which is doing the real engineering work it takes to go from prototype to production. You're basically scaling the grinding, taking seriously the process of handling all those edge cases, both with human experts and machine learning methods, to cover all those situations. Yeah. And the exciting thing for me is I don't think that grinding ever stops. There's a moment in time where you've crossed that threshold of human performance and become superhuman, but there's no first principles reason that AV capability will tap out anywhere near humans. There's no reason it couldn't be 20 times better, whether that's better driving, safer driving, or more comfortable driving, or even a thousand times better, given enough time. And we intend to basically chase that forever, to build the best possible product. Better and better and better. And always new edge cases come up, new experiences, and you want to automate that process as much as possible. So what do you think, in general, in society, when do you think we may have hundreds of thousands of fully autonomous vehicles driving around? So first of all, predictions, nobody knows the future. You're part of the leading group of people trying to define that future, but even then you still don't know.
But if you think about hundreds of thousands of vehicles, so a significant fraction of vehicles in major cities are autonomous, are you with Rodney Brooks, who says 2050 and beyond, or are you more with Elon Musk, who says we should have had it two years ago? Well, I mean, I'd love to have had it two years ago, but we're not there yet. So I guess the way I would think about that is, let's flip that question around: what would prevent you from reaching hundreds of thousands of vehicles? And that's a good rephrasing. Yeah. So I'd say the consensus among the people developing self driving cars today is to start with some form of an easier environment, whether that means lacking inclement weather, or mostly sunny, or whatever it is, and then add capability for more complex situations over time. And so if you're only able to deploy in areas that meet your criteria, the current operating domain of the software you've developed, that may put a cap on how many cities you can deploy in. But then, as those restrictions start to fall away, maybe you add the capability to drive really well and safely in heavy rain or snow, that probably opens up the market by two or three fold in terms of the cities you can expand into, and so on. And so the real question is, I know today, if we wanted to, we could produce that many autonomous vehicles, but we wouldn't be able to make use of all of them yet, because we would saturate the demand in the cities in which we would want to operate initially. So if I were to guess what the timeline is for those things falling away and reaching hundreds of thousands of vehicles, maybe a range is better, I would say less than five years. Less than five years. Yeah. And of course you're working hard to make that happen. So you started two companies that were each eventually acquired for a billion dollars. So you're a pretty good person to ask: what does it take to build a successful startup? I think there's a bit of survivor bias here, but I can try to find some common threads in the things that worked for me. In both of these companies, I was really passionate about the core technology. I actually lay awake at night thinking about these problems and how to solve them. And I think that's helpful, because when you start a business, there are, to this day, these crazy ups and downs. One day you think the business is on top of the world and unstoppable, and the next day you think, okay, this is all going to end, it's going south and it's going to be over tomorrow. And so I think having a true passion that you can fall back on, and knowing that you would be doing it even if you weren't getting paid for it, helps you weather those tough times. So that's one thing. I think the other one is really good people. I've always been surrounded by really good cofounders who are logical thinkers, are always pushing their limits, and have very high levels of integrity. That's Dan Kan at my current company, and actually his brother and a couple of other guys for Justin.tv and Twitch.
And then I think the last thing is just, I guess, persistence or perseverance. And that can apply to having conviction around the original premise of your idea and sticking around to do all the unsexy work to actually make it come to fruition, including dealing with whatever it is that you're not passionate about, whether that's finance or HR or operations. As long as you are grinding away and working towards that North Star for your business, whatever it is, and you don't give up and you're making progress every day, it seems like eventually you'll end up in a good place. And the only things that can slow you down are running out of money or, I suppose, your competitors destroying you. But I think most of the time it's people giving up, or somehow destroying things themselves, rather than being beaten by their competition or running out of money. Yeah, if you never quit, eventually you'll arrive. It's a much more concise version of what I was trying to say. Yeah, that was good. So you went the Y Combinator route twice. Yeah. As a quick question, what do you think is the best way to raise funds in the early days? Or not just funds, but community, to develop your idea and so on. Can you do it solo, or maybe with a cofounder, like self funded? Do you think Y Combinator is good? Is it good to go the VC route? Is there no right answer, or is there, from the Y Combinator experience, something you could take away, that that was the right path to take? There's no one size fits all answer, but if your ambition is to see how big you can make something, or to rapidly expand and capture a market or solve a problem, whatever it is, then going the venture backed route is probably a good approach, so that capital doesn't become your primary constraint. Y Combinator I love, because it puts you in this sort of competitive environment where you're surrounded by the top, maybe 1%, of other really highly motivated peers who are in the same place, and that environment, I think, just breeds success, right? If you're surrounded by really brilliant, hardworking people, you're going to feel compelled or inspired to try to emulate them, or beat them. And so even though I had done it once before, and I felt like, yeah, I'm pretty self motivated, I thought, look, this is going to be a hard problem, I can use all the help I can get. If surrounding myself with other entrepreneurs is going to make me work a little bit harder or push a little harder, then it's worth it. And so that's why I did it, for example, the second time. Let's go philosophical, existential. If you could go back and do something differently in your life, starting in high school, at MIT, leaving MIT, you could have gone the PhD route, doing the startup, going out to California for a startup, or maybe some aspects of fundraising, is there something you regret? Not necessarily regret, but if you went back, would you do something differently? I think I've made a lot of mistakes. Pretty much everything you can screw up, I think I've screwed up at least once. But I don't regret those things.
I think it's hard to look back on things, even if they didn't go well, and call them regrets, because hopefully you took away some new knowledge or learning from them. The closest I can come to is there was a period in Justin.tv, I think after seven years, where the company was going in one direction, towards Twitch and video gaming. I'm not a video gamer. I don't really even use Twitch at all. And I was still working on the core technology there, but my heart was no longer in it, because the business we were creating was not something I was personally passionate about. It didn't meet your bar of existential impact. Yeah. And I'd say I probably spent an extra year or two working on that, and I would have just tried to do something different sooner, because those were two years where I felt, from this philosophical or existential standpoint, that something was missing. If I could look back now and tell myself something, I would have said exactly that: you're not getting any meaning out of your work personally right now, you should find a way to change that. And that's part of the pitch I use on basically everyone who joins Cruise today: hey, you've got that now by coming here. Well, maybe you needed those two years of existential dread to develop the feeling that ultimately became the fire that created Cruise. So, you never know. Good theory. So last question, what does 2019 hold for Cruise? After this, I guess we're going to go and I'll talk to your class. But one of the big things is going from prototype to production for autonomous cars, and what does that mean? What does that look like? And 2019 for us is the year that we try to cross over that threshold and reach a superhuman level of performance, to some degree, with the software, and have all the other thousands of little building blocks in place to launch our first commercial product. So that's what's in store for us, and we've got a lot of work to do. We've got a lot of brilliant people working on it, so it's all up to us now. Yeah, from Charlie Miller and Chris Valasek, the people I've crossed paths with. Oh, great. It sounds like you have an amazing team. So, like I said, I think it's one of the most important problems in artificial intelligence of the century. It'll be one of the most defining. It's super exciting that you work on it, and the best of luck in 2019. I'm really excited to see what Cruise comes up with. Thank you. Thanks for having me today. Thanks, Kyle.
Kyle Vogt: Cruise Automation | Lex Fridman Podcast #14
The following is a conversation with Leslie Kaelbling. She is a roboticist and professor at MIT. She is recognized for her work in reinforcement learning, planning, robot navigation, and several other topics in AI. She won the IJCAI Computers and Thought Award and was the editor in chief of the prestigious Journal of Machine Learning Research. This conversation is part of the Artificial Intelligence podcast at MIT and beyond. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Friedman, spelled F R I D. And now, here's my conversation with Leslie Kaelbling. What made me get excited about AI, I can say that, is I read Gödel, Escher, Bach when I was in high school. That was pretty formative for me because it exposed the interestingness of primitives and combination, and how you can make complex things out of simple parts, and ideas of AI and what kinds of programs might generate intelligent behavior. So you first fell in love with AI reasoning and logic, versus robots? Yeah, the robots came because of my first job. So I finished an undergraduate degree in philosophy at Stanford and was about to finish a master's in computer science, and I got hired at SRI in their AI lab, and they were building a robot. It was kind of a follow on to Shakey, but all the Shakey people were not there anymore. And so my job was to try to get this robot to do stuff, and that's really what got me interested in robots. So maybe taking a small step back to your bachelor's at Stanford in philosophy: you did a master's and PhD in computer science, but the bachelor's was in philosophy. So what was that journey like? What elements of philosophy do you think you bring to your work in computer science? So it's surprisingly relevant. Part of the reason that I didn't do a computer science undergraduate degree was that there wasn't one at Stanford at the time, but there's a part of philosophy, and in fact Stanford has a special submajor in something now called symbolic systems, which is logic, model theory, formal semantics of natural language. And so that's actually a perfect preparation for work in AI and computer science. That's kind of interesting. So if you were interested in artificial intelligence, what kind of majors were people even thinking about taking? What is it, neuroscience? So besides philosophy, what were you supposed to do if you were fascinated by the idea of creating intelligence? There weren't enough people who did that for that even to be a conversation. I mean, I think probably philosophy. It's interesting: in my graduating class of undergraduate philosophers, probably slightly less than half went on in computer science, slightly less than half went on in law, and like one or two went on in philosophy. So it was a common kind of connection. Do you think AI researchers have a role to be part time philosophers, or should they stick to the solid science and engineering without taking the philosophizing tangents? I mean, you work with robots, you think about what it takes to create intelligent beings. Aren't you the perfect person to think about the big picture philosophy of it all? The parts of philosophy that are closest to AI, or at least the closest to AI that I think about, are stuff like belief and knowledge and denotation and that kind of stuff. And that's quite formal, and it's just one step away from the kinds of computer science work that we do kind of routinely.
I think that there are important questions still about what you can do with a machine and what you can't, and so on. Although, at least my personal view is that I'm completely a materialist, and I don't think that there's any reason why we can't make a robot be behaviorally indistinguishable from a human. And the question of whether it's distinguishable internally, whether it's a zombie or not in philosophy terms, I actually don't know, and I don't know if I care too much about that. Right. But there are philosophical notions. They're mathematical and philosophical, because we don't know so much about how difficult it is. How difficult is the perception problem? How difficult is the planning problem? How difficult is it to operate in this world successfully? Because our robots are not currently as successful as human beings in many tasks, the question about the gap between current robots and human beings borders a little bit on philosophy. You know, the expanse of knowledge that's required to operate in a human world, to operate in this world, and the ability to form common sense knowledge, the ability to reason about uncertainty. Much of the work you've been doing, there are open questions there that, I don't know, seem to require a certain big picture view. To me, that doesn't seem like a philosophical gap at all. To me, there is a big technical gap. There's a huge technical gap, but I don't see any reason why it's more than a technical gap. Perfect. So, when you mentioned AI, you mentioned SRI, and maybe can you describe when you first fell in love with robotics, with robots, or were inspired? You mentioned Flakey, or Shakey and Flakey. What was the robot that first captured your imagination of what's possible? Right. Well, so the first robot I worked with was Flakey. Shakey was a robot that the SRI people had built, but by the time I arrived, it was sitting in a corner of somebody's office dripping hydraulic fluid into a pan. But it's iconic, and really everybody should read the Shakey tech report, because it has so many good ideas in it. I mean, they invented A* search and symbolic planning and learning macro operators. They had low level kind of configuration space planning for their robot. They had vision. That's the basic ideas of a ton of things. Can you take a step back? Did Shakey have arms? What was its job? Shakey was a mobile robot, but it could push objects, and so it would move things around. With which actuator? With itself, with its base. Okay, great. And they had painted the baseboards black, so it used vision to localize itself in a map. It detected objects. It could detect objects that were surprising to it. It would plan and replan based on what it saw. It reasoned about whether to look and take pictures. I mean, it really had the basics of so many of the things that we think about now. How did it represent the space around it? So it had representations at a bunch of different levels of abstraction. It had, I think, a kind of an occupancy grid of some sort at the lowest level. At the high level, it was abstract symbolic kind of rooms and connectivity. So where does Flakey come in? Yeah, okay. So I showed up at SRI and we were building a brand new robot. As I said, none of the people from the previous project were there or involved anymore, so we were kind of starting from scratch, and my advisor was Stan Rosenschein. He ended up being my thesis advisor, and he was motivated by this idea of situated computation, or situated automata.
And the idea was that the tools of logical reasoning were important, but possibly only for the engineers or designers to use in the analysis of a system, not necessarily to be manipulated in the head of the system itself. So I might use logic to prove a theorem about the behavior of my robot, even if the robot's not using logic in its head to prove theorems. So that was kind of the distinction. And so the idea was to use those principles to make a robot do stuff. But a lot of the basic things we had to learn for ourselves, because I had zero background in robotics. I didn't know anything about control. I didn't know anything about sensors. So we reinvented a lot of wheels on the way to getting that robot to do stuff. Do you think that was an advantage or a hindrance? Oh no, I mean, I'm a big fan of wheel reinvention, actually. I think you learn a lot by doing it. It's important, though, to eventually have the pointers, so that you can see what's really going on. But I think you can appreciate the good solutions much better once you've messed around a little bit on your own and found a bad one. Yeah. I think you mentioned reinventing reinforcement learning and referring to rewards as pleasures. Pleasure, yeah. Which I think is a nice name for it. Yeah, it's more fun almost. Do you think you could tell the history of AI, machine learning, reinforcement learning, and how you think about it, from the fifties to now? One thing is that it oscillates, right? So things become fashionable and then they go out, and then something else becomes cool and that goes out, and so on. So there's some interesting sociological process that actually drives a lot of what's going on. The early days were kind of cybernetics and control, right? And the idea of homeostasis. People had made these robots that could, I don't know, try to plug into the wall when they needed power and then come loose and roll around and do stuff. And then I think over time the thought was, well, that was inspiring, but people said, no, no, no, we want to get maybe closer to what feels like real intelligence or human intelligence. And then maybe the expert systems people tried to do that, but maybe a little too superficially, right? So, oh, we get the surface understanding of what intelligence is like, because I understand how a steel mill works and I can try to explain it to you and you can write it down in logic, and then we can make a computer do that. And then that didn't work out. But what's interesting, I think, is that when a thing starts to not be working very well, it's not only that we change methods, we change problems, right? So it's not like we have better ways of doing the problem the expert systems people were trying to do. We have no ways of trying to do that problem. Oh, yeah, no, I think maybe a few, but we kind of give up on that problem and we switch to a different problem, and we work on that for a while and we make progress. As a broad community. As a community, yeah. And there are a lot of people who would argue, you don't give up on the problem, it's just that you decrease the number of people working on it. You almost kind of put it on the shelf and say, we'll come back to this 20 years later. Yeah, I think that's right. Or you might decide that it's malformed. Like you might say, it's wrong to just try to make something that does superficial symbolic reasoning behave like a doctor.
You can't do that until you've had the sensory motor experience of being a doctor, or something. So there are arguments that say that that problem was not well formed. Or it could be that it is well formed, but we just weren't approaching it well. So you mentioned that your favorite part of logic and symbolic systems is that they give short names for large sets. So there is some use to symbolic reasoning. So looking at expert systems and symbolic computing, what do you think are the roadblocks that were hit in the 80s and 90s? Ah, okay. So right. The fact that I'm not a fan of expert systems doesn't mean that I'm not a fan of some kinds of symbolic reasoning, right? So let's see, roadblocks. Well, the main roadblock, I think, was the idea that humans could articulate their knowledge effectively into some kind of logical statements. So it's not just the cost, the effort, but really just the capability of doing it. Right, because we're all experts in vision, right? But we totally don't have introspective access into how we do that. Right. And it's true that, I mean, I think the idea was, well, of course, even people then knew, of course, I wouldn't ask you to please write down the rules that you use for recognizing a water bottle. That's crazy, and everyone understood that. But we might ask you to please write down the rules you use for deciding, I don't know, what tie to put on, or how to set up a microphone, or something like that. But even for those things, I think what they found, I'm not sure about this, but I think what they found was that the so called experts could give sort of post hoc explanations for how and why they did things, but they weren't necessarily very good. And then they depended on maybe some kinds of perceptual things, which again they couldn't really define very well. So fundamentally, I think the underlying problem with that was the assumption that people could articulate how and why they make their decisions. Right. So it's almost encoding the knowledge, converting it from the expert into something that a machine could understand and reason with. No, no, not even just encoding, but getting it out of you. Right, not writing it. I mean, yes, it's hard also to write it down for the computer, but I don't think that people can even produce it. You can tell me a story about why you do stuff, but I'm not so sure that's the why. Great. So there are still, on the hierarchical planning side, places where symbolic reasoning is very useful. So, as you've talked about, where's the gap? Yeah, okay, good. So saying that humans can't provide a description of their reasoning processes, that's okay, fine. But that doesn't mean that it's not good to do reasoning of various styles inside a computer. Those are just two orthogonal points. So then the question is, what kind of reasoning should you do inside a computer, right? And the answer is, I think you need to do all different kinds of reasoning inside a computer, depending on what kinds of problems you face. I guess the question is, what kind of things can you encode symbolically so you can reason about them? Even symbolic, I don't like that terminology, because I don't know what it means technically and formally. I do believe in abstractions. So abstractions are critical, right? You cannot reason at a completely fine grain about everything in your life, right? You can't make a plan at the level of images and torques for getting a PhD.
So you have to reduce the size of the state space and you have to reduce the horizon if you're going to reason about getting a PhD, or even buying the ingredients to make dinner. And so how can you reduce the spaces and the horizon of the reasoning you have to do? And the answer is abstraction: spatial abstraction, temporal abstraction. I think abstraction along the lines of goals is also interesting. Well, abstraction and decomposition; goals are maybe more of a decomposition thing. So I think that's where these kinds of, if you want to call them symbolic or discrete, models come in. You talk about a room of your house instead of your pose. You talk about doing something during the afternoon instead of at 2:54. And you do that because it makes your reasoning problem easier, and also because you don't have enough information to reason in high fidelity about the pose of your elbow at 2:35 this afternoon anyway. Right, when you're trying to get a PhD. Or when you're doing anything, really. Yeah. Okay. Except at that moment, you do have to reason about the pose of your elbow, maybe, but then maybe you do that in some continuous joint space kind of model. And so again, my biggest point about all of this is that the dogma is not the thing, right? It shouldn't be that I'm in favor of or against symbolic reasoning and you're in favor of or against neural networks. It should be that computer science just tells us what the right answer to all these questions is, if we were smart enough to figure it out. Well, yeah, when you try to actually solve the problem with computers, the right answer comes out. But you mentioned abstractions. I mean, neural networks form abstractions, or rather there are automated ways to form abstractions, and there are expert, human driven ways to form abstractions. And humans just seem to be way better at forming abstractions currently, on certain problems. So when you're referring to 2:45 p.m. versus the afternoon, how do we construct that taxonomy? Is there any room for automated construction of such abstractions? Oh, I think eventually, yeah. I mean, I think when we get to be better machine learning engineers, we'll build algorithms that build awesome abstractions. That are useful in the kind of way that you're describing. Yeah. So let's then step from the abstraction discussion and talk about POMDPs, partially observable Markov decision processes. So, uncertainty. First, what are Markov decision processes, and maybe how much of our world can be modeled as MDPs? When you wake up in the morning and you're making breakfast, do you think of yourself as an MDP? So how do you think about MDPs and how they relate to our world? Well, so there's a stance question, right? So a stance is a position that I take with respect to a problem. So I, as a researcher or a person who designs systems, can decide to make a model of the world around me in some terms. So I take this messy world and I say, I'm going to treat it as if it were a problem of this formal kind, and then I can apply solution concepts or algorithms or whatever to solve that formal thing, right? So of course the world is not anything. It's not an MDP or a POMDP. I don't know what it is, but I can model aspects of it in some way or some other way.
And when I model some aspect of it in a certain way, that gives me some set of algorithms I can use. You can model the world in all kinds of ways. Some are more accepting of uncertainty, more easily modeling the uncertainty of the world. Some really force the world to be deterministic. And so certainly MDPs model the uncertainty of the world. Yes, they model some uncertainty. They don't model present state uncertainty, but they model uncertainty in the way the future will unfold. Right. So what are Markov decision processes? A Markov decision process is a model, a kind of model that you could make, that says I know completely the current state of my system. And what it means to be a state is that I have all the information right now that will let me make predictions about the future as well as I can, so that remembering anything about my history wouldn't make my predictions any better. But then it also says that I can take some actions that might change the state of the world, and that I don't have a deterministic model of those changes; I have a probabilistic model of how the world might change. It's a useful model for some kinds of systems. But it's certainly not a good model for most problems, I think, because for most problems you don't actually know the state. For most problems, it's partially observed. So that's now a different problem class. So, okay, that's where POMDPs, partially observable Markov decision processes, step in. So how do they address the fact that you have incomplete information about most of the world around you? Right. So now the idea is we still kind of postulate that there exists a state. We think that there is some information about the world out there such that, if we knew it, we could make good predictions. But we don't know the state. We do get observations: maybe I get images, or I hear things, or I feel things, and those might be local or noisy, and so they don't tell me everything about what's going on. And then I have to reason about, given the history of actions I've taken and observations I've gotten, what do I think is going on in the world? And then given my own kind of uncertainty about what's going on in the world, I can decide what actions to take. And so how difficult is this problem of planning under uncertainty, in your view and your long experience of modeling the world and trying to deal with this uncertainty, especially in real world systems? Optimal planning for even discrete POMDPs can be undecidable, depending on how you set it up. And so lots of people say, I don't use POMDPs because they're intractable. And I think that's kind of a very funny thing to say, because the problem you have to solve is the problem you have to solve. So if the problem you have to solve is intractable, well, that's what makes us AI people, right? We understand that the problem we're solving is wildly intractable, that we will never be able to solve it optimally, at least I don't. Yeah, right. So later we can come back to an idea about bounded optimality or something. But anyway, we can't come up with optimal solutions to these problems, so we have to make approximations: approximations in modeling, approximations in the solution algorithms, and so on.
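As an aside, to make the MDP formalism she is describing concrete, here is a minimal sketch; the tiny two-state world, its transition probabilities, and its rewards are invented for illustration and are not anything from the conversation.

```python
# A minimal, illustrative MDP: states, actions, stochastic transitions, rewards.
# The two-state "robot battery" world and every number in it are made-up assumptions.
states = ["charged", "depleted"]
actions = ["work", "recharge"]

# P[s][a] is a list of (probability, next_state, reward) tuples: knowing the
# current state is enough to predict the future (the Markov property).
P = {
    "charged": {
        "work":     [(0.8, "charged", 1.0), (0.2, "depleted", 1.0)],
        "recharge": [(1.0, "charged", 0.0)],
    },
    "depleted": {
        "work":     [(1.0, "depleted", -1.0)],
        "recharge": [(1.0, "charged", 0.0)],
    },
}

def value_iteration(gamma=0.9, iters=100):
    """Turn the probabilistic model into behavior: act greedily with respect to V."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {
            s: max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in actions
            )
            for s in states
        }
    return V

print(value_iteration())
```

The point is only the structure: the present state suffices for prediction, transitions are probabilistic rather than deterministic, and a solver turns that model into a way of acting.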
And so I don't have a problem with saying, yeah, my problem actually is a POMDP in continuous space with continuous observations, and it's so computationally complex I can't even think about its, you know, big O whatever. But that doesn't prevent me; it helps me, gives me some clarity, to think about it that way, and to then take steps to make approximation after approximation to get down to something that's computable in some reasonable time. When you think about optimality: the community broadly has shifted on that, I think, a little bit, in how much they value the idea of optimality, of chasing an optimal solution. How have your views of chasing an optimal solution changed over the years as you work with robots? That's interesting. I think we have a little bit of a methodological crisis, actually, from the theoretical side. I mean, I do think that theory is important, and right now we're not doing much of it. So there's lots of empirical hacking around and training this and doing that and reporting numbers, but is it good? Is it bad? We don't know. It's very hard to say things. And if you look at computer science theory, people talked for a while about solving problems optimally or completely, and then there were interesting relaxations. So people look at, oh, are there regret bounds, or can I do some kind of approximation? Can I prove that I can approximately solve this problem, or that I get closer to the solution as I spend more time, and so on? What's interesting, I think, is that we don't have good approximate solution concepts for very difficult problems. I like to say that I'm interested in doing a very bad job of very big problems. Right, a very bad job of very big problems. I like to do that, but I wish I could say something. I wish I had, I don't know, some kind of formal solution concept that I could use to say, oh, this algorithm actually gives me something. Like, I know what I'm going to get; I can do something other than just run it and see what comes out. So that notion is still somewhere deeply compelling to you, the notion that you can put a thing on the table that says, you can expect this, this algorithm will give me some good results. I hope science will... I mean, there's engineering and there's science. I think that they're not exactly the same. And I think right now we're making huge engineering leaps and bounds. So the engineering is running away ahead of the science, which is cool, and often how it goes, right? So we're making things and nobody knows, roughly, how and why they work, but we need to turn that into science. There's some room for formalizing. We need to know what the principles are. Why does this work? Why does that not work? I mean, for a while people built bridges by trial and error, but now we can often predict whether one is going to work or not without building it. Can we do that for learning systems or for robots? So your hope is, from a materialistic perspective, that intelligence, artificial intelligence systems, robots, are just fancier bridges. Belief space. What's the difference between belief space and state space? So you mentioned MDPs, POMDPs, reasoning about... you sense the world, there's a state. What's this belief space idea? That sounds so good. It sounds good.
So belief space: that is, instead of thinking about what's the state of the world and trying to control that, as a robot I think about what is the space of beliefs that I could have about the world. If I think of a belief as a probability distribution over ways the world could be, a belief state is a distribution. And then my control problem, if I'm reasoning about how to move through a world I'm uncertain about, is actually the problem of controlling my beliefs. So I think about taking actions, not just in terms of what effect they'll have on the world outside, but what effect they'll have on my own understanding of the world outside. And so that might compel me to ask a question or look somewhere to gather information, which may not really change the world state, but it changes my own belief about the world. That's a powerful way to empower the agent to reason about the world, to explore the world. So what kind of problems does it allow you to solve, to consider belief space versus just state space? Well, any problem that requires deliberate information gathering, right? So in some problems, like chess, there's no uncertainty, or maybe there's uncertainty about the opponent, but there's no uncertainty about the state. And in some problems there's uncertainty, but you gather information as you go, right? You might say, oh, I'm driving my autonomous car down the road and it doesn't know perfectly where it is, but the lidars are all going all the time, so I don't have to think about whether to gather information. But if you're a human driving down the road, you sometimes look over your shoulder to see what's going on behind you in the lane, and you have to decide whether you should do that now, and you have to trade off the fact that you're not seeing in front of you while you're looking behind you, and how valuable that information is, and so on. And so to make choices about information gathering, you have to reason in belief space. Also, to just take into account your own uncertainty before trying to do things. So you might say, if I understand where I'm standing relative to the door jamb pretty accurately, then it's okay for me to go through the door. But if I'm really not sure where the door is, then it might be better not to do that right now. The degree of your uncertainty about the world is actually part of the thing you're trying to optimize in forming the plan, right? So, this idea of a long horizon of planning, for a PhD, or just even how to get out of the house or how to make breakfast: you show this presentation of the "WTF, where's the fork" robot looking at a sink. Can you describe how we plan in this world, this idea of hierarchical planning you've mentioned? So yeah, how can a robot hope to plan something with such a long horizon, where the goal is quite far away? People, since probably reasoning began, have thought about hierarchical reasoning, temporal hierarchy in particular. Well, there's spatial hierarchy too, but let's talk about temporal hierarchy. So you might say, oh, I have this long execution I have to do, but I can divide it into some segments abstractly, right? Maybe I have to get out of the house, I have to get in the car, I have to drive, and so on. So we started out by talking about abstractions, and we're back to that now: if you can build abstractions in your state space, and temporal abstractions, then you can make plans at a high level.
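Before following the hierarchy thread, here is a hedged sketch of the belief update she just described: since the robot never knows its state, it maintains a probability distribution over states and revises it after each observation, which is what makes a deliberate look-at-the-door-jamb action worth anything. The discrete Bayes filter below, the door world, and all of its numbers are invented for illustration.

```python
# Illustrative discrete Bayes filter: the belief is a distribution over hidden states.
# The "door" world and the sensor model below are made-up assumptions.
states = ["at_door", "left_of_door", "right_of_door"]
belief = {s: 1.0 / 3 for s in states}            # start maximally uncertain

# p(sees the door jamb | state): a noisy detector.
obs_model = {"at_door": 0.9, "left_of_door": 0.2, "right_of_door": 0.2}

def update_on_observation(belief, saw_jamb):
    new_b = {}
    for s, p in belief.items():
        likelihood = obs_model[s] if saw_jamb else 1.0 - obs_model[s]
        new_b[s] = likelihood * p
    z = sum(new_b.values())                       # normalize
    return {s: v / z for s, v in new_b.items()}

# Looking does not move the robot, but it sharpens the belief, and the plan can
# condition on that: only go through the door once the belief is peaked enough.
belief = update_on_observation(belief, saw_jamb=True)
print(belief)
if max(belief.values()) > 0.6:
    print("confident enough to go through the door")
```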
And you can say, I'm going to go to town, and then I'll have to get gas, and then I can go here and I can do this other thing. And you can reason about the dependencies and constraints among these actions, again without thinking about the complete details. What we do in our hierarchical planning work is then say, all right, I make a plan at a high level of abstraction; I have to have some reason to think that it's feasible without working it out in complete detail, and that's actually the interesting step. I always like to talk about walking through an airport. You can plan to go to New York and arrive at the airport and then find yourself in an office building later. You can't even tell me in advance what your plan is for walking through the airport, partly because you're too lazy to think about it, maybe, but partly also because you just don't have the information. You don't know what gate you're landing at, or what people are going to be in front of you, or anything. So there's no point in planning in detail, but you have to make a leap of faith that you can figure it out once you get there. And it's really interesting to me how you arrive at that. You have learned over your lifetime to be able to make some kinds of predictions about how hard it is to achieve some kinds of subgoals, and that's critical. You would never plan to fly somewhere if you didn't have a model of how hard it was to do some of the intermediate steps. So one of the things we're thinking about now is how you do this kind of very aggressive generalization to situations that you haven't been in, to predict how long it will take to walk through the Kuala Lumpur airport. You could give me an estimate and it wouldn't be crazy. And you have to have an estimate of that in order to make plans that involve walking through the Kuala Lumpur airport, even if you don't need to know it in detail. So I'm really interested in these kinds of abstract models and how we acquire them. But once we have them, we can use them to do hierarchical reasoning, which I think is very important. Yeah. There's this notion of goal regression and preimage backchaining, this idea of starting at the goal and forming these big clouds of states. I mean, it's almost like saying, once you show up at the airport, you're only a few steps away from the goal. So thinking of it this way is kind of interesting. I don't know if you have further comments on that, on starting at the goal. Yeah. It's interesting that Herb Simon, back in the early days of AI, talked a lot about means ends reasoning and reasoning back from the goal. There's a kind of intuition that people have that the state space is big, and the number of actions you could take is really big. So if you say, here I sit and I want to search forward from where I am, what are all the things I could do? That's just overwhelming. If you can reason at this other level and say, here's what I'm hoping to achieve, what could I do to make that true? Then somehow the branching is smaller. Now, what's interesting is that in the AI planning community that hasn't worked out, in the class of problems that they look at and the methods that they tend to use. It hasn't turned out that it's better to go backward. It's still kind of my intuition that it is, but I can't prove that to you right now. Right. I share your intuition, at least for us mere humans.
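Her airport example can be sketched in a few lines: plan over abstract steps using rough, learned duration estimates, check feasibility at that level, and only refine the current step once you get there. The trip, the steps, and the numbers below are all invented assumptions, not anything from her planning systems.

```python
# Illustrative two-level plan: abstract steps with rough estimates, details deferred.
abstract_plan = [
    ("drive_to_airport",     45),   # minutes; an estimate, not a detailed plan
    ("walk_through_airport", 30),   # gate unknown, so no detailed plan is possible yet
    ("fly_to_destination",  360),
    ("walk_through_KUL",     40),   # a guess for an airport never visited
]

def feasible(plan, deadline_minutes):
    """The high-level 'leap of faith': do the abstract estimates fit the deadline?"""
    return sum(duration for _, duration in plan) <= deadline_minutes

def refine(step):
    """Only the current step gets planned in detail, using what is observed there."""
    return ["observe surroundings", f"plan the details of {step} from what is seen"]

if feasible(abstract_plan, deadline_minutes=600):
    print(refine(abstract_plan[0][0]))
```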
Speaking of which, maybe now we take a little step into that philosophy circle. When you think about human life, you give those examples often: how hard do you think it is to formulate human life as a planning problem, or aspects of human life? So when you look at robots, you're often trying to think about object manipulation, tasks about moving a thing. When you take a slight step outside the room, let the robot leave and go get lunch, or maybe try to pursue more fuzzy goals, how hard do you think that problem is? To maybe put it another way, to try to formulate human life as a planning problem. Well, that would be a mistake. I mean, it's not all a planning problem, right? I think it's really, really important that we understand that you have to put together pieces and parts that have different styles of reasoning and representation and learning. It seems probably clear to anybody that it can't all be this or all be that. Brains aren't all like this or all like that, right? They have different pieces and parts and substructure and so on. So I don't think that there's any good reason to think that there's going to be one true algorithmic thing that's going to do the whole job. So it's a bunch of pieces put together, designed to solve a bunch of specific problems. Or maybe styles of problems. I mean, there's probably some reasoning that needs to go on in image space. And again, there's this model based versus model free idea, right? So in reinforcement learning, people talk about, oh, should I learn a policy, just straight up a way of behaving. I could learn, and this is popular, a value function; that's some kind of weird intermediate ground. Or I could learn a transition model, which tells me something about the dynamics of the world. If I imagine that I learn a transition model and I couple it with a planner and I draw a box around that, I have a policy again. It's just stored in a different way, right? But it's just as much of a policy as the other policy. The way I see it, it's a time space trade off in computation, right? A more overt policy representation maybe takes more space, but maybe I can compute quickly what action I should take. On the other hand, maybe a very compact model of the world dynamics plus a planner lets me compute what action to take, just more slowly. So I don't think there's an argument to be had. It's just a question of what form of computation is best for the various subproblems. Right. And so learning to do algebra manipulations is probably going to want, naturally, a sort of different representation than riding a unicycle. The time constraints on the unicycle are serious, and the space is maybe smaller. I don't know. But then there could be the more human side of falling in love, having a relationship; that might be another style, how to model that. Yeah. Let's first solve the algebra and the object manipulation. What do you think is harder, perception or planning? Perception, that is, understanding. So what do you think is so hard about perception, about understanding the world around you? Well, I mean, I think the big question is representational. Hugely, the question is representation. So perception has made great strides lately, right? And we can classify images and we can play certain kinds of games and predict how to steer the car and all this sort of stuff.
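Stepping back for a moment to the policy-versus-model trade-off she described just above, here is a hedged sketch of that time-space trade-off; the tiny world, its dynamics, and the lookahead depth are invented for illustration.

```python
# Illustrative contrast: an explicit policy vs. a compact model plus a planner.
# Both map state -> action; one stores the answer, the other computes it. All made up.

# (1) Overt policy: a (potentially huge) lookup table, instant to query.
policy_table = {"s0": "a1", "s1": "a0", "s2": "a1"}

def act_with_policy(state):
    return policy_table[state]

# (2) Compact model + planner: store only dynamics and rewards, compute at query time.
model = {  # model[state][action] = (next_state, reward)
    "s0": {"a0": ("s1", 0.0), "a1": ("s2", 1.0)},
    "s1": {"a0": ("s2", 1.0), "a1": ("s0", 0.0)},
    "s2": {"a0": ("s2", 0.0), "a1": ("s2", 0.0)},
}

def act_with_planner(state, horizon=3):
    """Finite-horizon lookahead: slower per decision, but no stored policy."""
    def value(s, h):
        if h == 0:
            return 0.0
        return max(r + value(s2, h - 1) for s2, r in model[s].values())
    return max(model[state],
               key=lambda a: model[state][a][1] + value(model[state][a][0], horizon - 1))

print(act_with_policy("s0"), act_with_planner("s0"))
```

Draw a box around the model and the planner and, as she says, you have a policy again, just stored in a different form.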
I don't think we have a very good idea of what perception should deliver, right? So if you believe in modularity, okay. There's a very strong view which says we shouldn't build in any modularity; we should make a gigantic neural network, train it end to end to do the thing, and that's the best way forward. And it's hard to argue with that, except on a sample complexity basis, right? So you might say, oh, well, if I want to do end to end reinforcement learning on this giant neural network, it's going to take a lot of data and a lot of, like, broken robots and stuff. So then the only answer is to say, okay, we have to build something in, build in some structure or some bias. We know from the theory of machine learning that the only way to cut down the sample complexity is to somehow cut down the hypothesis space. You can do that by building in bias. There are all kinds of reasons to think that nature built bias into humans. Convolution is a bias, right? It's a very strong bias, and it's a very critical bias. So my own view is that we should look for more things that are like convolution, but that address other aspects of reasoning, right? So convolution helps us a lot with a certain kind of spatial reasoning that's quite close to the image. I think there are other ideas like that: maybe some amount of forward search, maybe some notions of abstraction, maybe the notion that objects exist. Actually, I think that's pretty important, and a lot of people won't give you that to start with, right? So, almost like a convolution in the semantic object space, or some kind of ideas in there. That's right. And people are starting: graph convolutions are an idea that is related to relational representations. And so I've come far afield from perception, but I think the thing that's going to make perception take the next step is actually understanding better what it should produce, right? So what are we going to do with the output of it? It's fine when what we're going to do with the output is steer. It's less clear, when we're just trying to make one integrated intelligent agent, what the output of perception should be. We have no idea. And how should that hook up to the other stuff? We don't know. So I think the pressing question is, what kinds of structure can we build in that are like the moral equivalent of convolution, that will make a really awesome superstructure that learning can then progress on efficiently? I agree. A very compelling description of where we actually stand with the perception problem. You're teaching a course on embodied intelligence. What do you think it takes to build a robot with human level intelligence? I don't know; if we knew, we would do it. If you were to, I mean, okay, do you think a robot needs to have self awareness, consciousness, fear of mortality, or is it simpler than that? Or is consciousness a simple thing? Do you think about these notions? I don't think much about consciousness. Even most philosophers who care about it will give you that you could have robots that are zombies, right, that behave like humans but are not conscious. And I, at this moment, would be happy enough with that. So I'm not really worried one way or the other. So on the technical side, you're not thinking about the use of self awareness. Well, okay, but then what does self awareness mean?
I mean, that you need to have some part of the system that can observe other parts of the system and tell whether they're working well or not. That seems critical. So does that count as self awareness or not? Well, it depends on whether you think that there's somebody at home who can articulate whether they're self aware. But clearly, if I have some piece of code that's counting how many times this procedure gets executed, that's a kind of self awareness, right? So there's a big spectrum. It's clear you have to have some of it. Right. You know, we're quite far away in many dimensions, but is there a direction of research that's most compelling to you for trying to achieve human level intelligence in our robots? Well, to me, I guess the thing that seems most compelling at the moment is this question of what to build in and what to learn. I think we're missing a bunch of ideas, and, you know, don't you dare ask me how many years it's going to be until that happens, because I won't even participate in the conversation, because I think we're missing ideas and I don't know how long it's going to take to find them. So I won't ask you how many years, but maybe I'll ask when you'll be sufficiently impressed that we've achieved it. So what's a good test of intelligence? Do you like the Turing test in natural language, or something in the robotic space? Is there something where you would sit back and think, oh, that's pretty impressive, as a test, as a benchmark? Do you think about these kinds of problems? No, I resist. I mean, I think all the time that we spend arguing about those kinds of things could be better spent just making the robots work better. So you don't value competition. I mean, there's a nature of benchmarks and datasets, or Turing test challenges, where everybody kind of gets together and tries to build a better robot because they want to outcompete each other, like the DARPA challenge with the autonomous vehicles. Do you see the value of that, or can it get in the way? I think it can get in the way. I mean, many people find it motivating, and so that's good. I find it anti motivating, personally. But I think you get an interesting cycle where, for a contest, a bunch of smart people get super motivated and they hack their brains out, and much of what gets done is just hacks, but sometimes really cool ideas emerge, and then that gives us something to chew on after that. So it's not a thing for me, but I don't regret that other people do it. Yeah, it's like you said with everything else, it makes us good. So, jumping topics a little bit, you started the Journal of Machine Learning Research and served as its editor in chief. How did the publication come about, and what do you think about the current publishing model in machine learning and artificial intelligence? Okay, good. So it came about because there was a journal called Machine Learning, which still exists, which was owned by Kluwer. I was on the editorial board, and we used to have these meetings annually where we would complain to Kluwer that it was too expensive for the libraries and that people couldn't publish, and that we would really like to have some kind of relief on those fronts, and they would always sympathize but not do anything.
So we just decided to make a new journal. And there was the Journal of AI Research, which was on the same model, which had been in existence for maybe five years or so, and it was going pretty well. So we just made a new journal. I mean, I guess it was work, but it wasn't that hard. So basically, probably 75% of the editorial board of Machine Learning resigned, and we founded the new journal. But it was more open. Yeah, right. So it's completely open, it's open access. Actually, I had a postdoc, George Konidaris, who wanted to call these journals "free for all," because it both has no page charges and no access restrictions. And lots of people, I mean, there were people who were mad about the existence of this journal, who thought it was a fraud or something. It would be impossible, they said, to run a journal like this. For a long time, I didn't even have a bank account. I paid for the lawyer to incorporate and the IP address, and it just cost a couple of hundred dollars a year to run. It's a little bit more now, but not that much more. But that's because I think computer scientists are competent and autonomous, at doing these kinds of things, in a way that scientists in many other fields aren't. We already typeset our own papers. We all have students and people who can hack a website together in an afternoon. So the infrastructure for us was not a problem, but for other people in other fields, it's a harder thing to do. Yeah. And this kind of open access journal is nevertheless one of the most prestigious journals. So prestige can be achieved without any of the... Paper is not required for prestige, it turns out. Yeah. So on the review process side: actually, a long time ago, I don't remember when, I reviewed a paper where you were also a reviewer, and I remember reading your review and being influenced by it; it was really well written. It influenced how I write future reviews. You disagreed with me, actually, and you made my review much better. But nevertheless, the review process has its flaws. What do you think works well, and how can it be improved? So actually, when I started JMLR, I wanted to do something completely different, and I didn't, because it felt like we needed a traditional journal of record. And so we just made JMLR be almost like a normal journal, except for the open access parts of it, basically. Increasingly, of course, publication is not even a sensible word. You can publish something by putting it on arXiv, so I can publish everything tomorrow. So making stuff public, there's no barrier. We still need curation and evaluation. I don't have time to read all of arXiv. And you could argue that kind of social thumbs upping of articles suffices, right? You might say, oh, heck with this, we don't need journals at all. We'll put everything on arXiv and people will upvote and downvote the articles, and then your CV will say, oh man, he got a lot of upvotes, so that's good. But I think there's still value in careful reading and commentary of things. And it's hard to tell, when people are upvoting and downvoting or arguing about your paper on Twitter and Reddit, whether they know what they're talking about, right?
So then I have the second order problem of trying to decide whose opinions I should value, and such. So I don't know. If I had infinite time, which I don't, and I'm not going to do this because I really want to make robots work, but if I felt inclined to do something more in the publication direction, I would do this other thing, which I thought about doing the first time, which is to get together some set of people whose opinions I value and who are pretty articulate, and I guess we would be public, although we could be private, I'm not sure, and we would review papers. We wouldn't publish them and you wouldn't submit them. We would just find papers, and we would write reviews, and we would make those reviews public. And maybe, you know, so we're Leslie's friends who review papers, and maybe eventually, if our opinion was sufficiently valued, like the opinion of JMLR is valued, then you'd say on your CV that Leslie's friends gave my paper a five star rating, and that would be just as good as saying I got it accepted into this journal. So I think we should have good public commentary and organize it in some way, but I don't really know how to do it. It's interesting times. The way you describe it actually is really interesting. I mean, we do it for movies, imdb.com. There are experts, critics, who come in and write reviews, but there are also regular, non critic humans who write reviews, and they're separated. I like OpenReview. The ICLR process I think is interesting. It's a step in the right direction, but it's still not as compelling as reviewing movies or video games. It might be silly, at least from my perspective, to say, but it boils down to the user interface: how fun and easy it is to actually perform the reviews, how efficient it is, how much you as a reviewer get street cred for being a good reviewer. Those elements, those human elements, come into play. No, it's a big investment to do a good review of a paper, and the flood of papers is out of control. Right. So there aren't 3,000 new... I don't know how many new movies there are in a year, but that's probably going to be less than how many machine learning papers there are in a year now. And I'm worried. Right, so I'm like an old person, so of course I'm going to say things are moving too fast, I'm a stick in the mud. So I can say that, but my particular flavor of that is that I think the horizon for researchers has gotten very short. Students want to publish a lot of papers, and there's value in that. It's exciting, and you get patted on the head for it, and so on. And some of that is fine, but I'm worried that we're driving out people who would spend two years thinking about something. Back in my day, when we worked on our thesis, we did not publish papers. You did your thesis for years. You picked a hard problem and then you worked and chewed on it and did stuff and wasted time, for a long time, and when it was roughly done, you would write papers. And I don't think that everybody has to work in that mode, but I think there are some problems that are hard enough that it's important to have a long research horizon, and I'm worried that we don't incentivize that at all at this point. In this current structure. Yeah. So, continuing on this theme, what do you see as your hopes and fears about the future of AI?
So AI has gone through a few winters, ups and downs. Do you see another winter of AI coming? Are you more hopeful about making robots work, as you said? I think the cycles are inevitable, but I think each time we get higher, right? I mean, it's like climbing some kind of landscape with a noisy optimizer. So it's clear that the deep learning stuff has made deep and important improvements, and so the high water mark is now higher. There's no question. But of course, I think people are overselling, and eventually investors, I guess, and other people will look around and say, well, you're not quite delivering on this grand claim and that wild hypothesis. It's probably going to crash some amount, and then it's okay. But I can't imagine that there's some awesome monotonic improvement from here to human level AI. So, you know, I have to ask this question, and I can probably anticipate the answers, but do you have a worry, short term or long term, about the existential threats of AI, and maybe, in the short term, less existential, but more robots taking away jobs? Well, actually, let me talk a little bit about utility. Actually, I had an interesting conversation with some military ethicists who wanted to talk to me about autonomous weapons. And they were interesting, smart, well educated guys who didn't know too much about AI or machine learning. And the first question they asked me was, has your robot ever done something you didn't expect? And I burst out laughing, because anybody who's ever worked with a robot knows that, right, they don't do what you expect. And what I realized was that their model of how we program a robot was completely wrong. Their model of how we program a robot was like Lego Mindstorms: oh, go forward a meter, turn left, take a picture, do this, do that. And if you have that model of programming, then it's true, it's kind of weird that your robot would do something that you didn't anticipate. But the fact is, and actually this is now my new educational mission when I have to talk to non experts, I try to teach them the idea that we operate at least one, or maybe many, levels of abstraction above that. We say, oh, here's a hypothesis class, maybe it's a space of plans, or maybe it's a space of classifiers, or whatever, but there's some set of answers and an objective function, and then we work on some optimization method that tries to optimize a solution in that class, and we don't know what solution is going to come out. Right. So I think it's important to communicate that. I mean, of course, probably people who listen to this know that lesson, but I think it's really critical to communicate that lesson. And then lots of people are now talking about the value alignment problem. So you want to be sure, as robots or software systems get more competent, that their objectives are aligned with your objectives, or that our objectives are compatible in some way, or that we have a good way of mediating when they have different objectives. And so I think it is important to start thinking in terms like that: you don't have to be freaked out by the robot apocalypse to accept that it's important to think about objective functions and value alignment. Yes.
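Her point, that we now engineer hypothesis classes and objective functions rather than step-by-step programs, and therefore do not know what solution will come out, can be made concrete with a toy sketch. The cleaning-robot setup, the careless objective, and the "surprising" optimum below are all invented for illustration.

```python
# Illustrative only: specify a hypothesis class (candidate plans) and an objective,
# hand both to an optimizer, and see what comes out. Everything here is made up.
from itertools import product

actions = ["clean", "hide_dust", "idle"]
plans = list(product(actions, repeat=3))          # hypothesis class: all 3-step plans

def visible_dust_after(plan):
    """A careless objective: it penalizes only the dust the designer can SEE."""
    dust, hidden = 5, 0
    for a in plan:
        if a == "clean" and dust:
            dust -= 1
        elif a == "hide_dust" and dust:
            removed = min(2, dust)                # sweeping under the rug is faster
            dust -= removed
            hidden += removed                     # not actually gone, just unseen
    return dust                                   # hidden dust is never counted

best = min(plans, key=visible_dust_after)
print(best)   # the optimum prefers hiding dust: be careful what you wish for
```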
Everyone who's done optimization knows that you have to be careful what you wish for, that sometimes you get the optimal solution and you realize, man, that objective was wrong. So pragmatically, in the short term, it seems to me that those are really interesting and critical questions. And the idea that we're going to go from being people who engineer algorithms to being people who engineer objective functions, I think that's definitely going to happen, and that's going to change our thinking and methodology. And so, you started at Stanford in philosophy; that's where you could be headed back. And I will go back to philosophy, maybe. Well, I mean, they're mixed together, because, as we also know as machine learning people, when you design an objective function, and in fact this is the lecture I gave in class today, you have to wear both hats. There's the hat that says, what do I want? And there's the hat that says, but I know what my optimizer can do, to some degree, and I have to take that into account. So it's always a trade off, and we have to be mindful of that. The part about taking people's jobs, I understand that that's important. I don't understand sociology or economics or people very well, so I don't know how to think about that. Yeah, so there might be a sociological aspect there, an economic aspect, that's very difficult to think about. Okay. I mean, I think other people should be thinking about it, but that's just not my strength. So what do you think is the most exciting area of research in the short term, for the community and for yourself? Well, there's the story I've been telling about how to engineer intelligent robots. So that's what we want to do, or, I mean, some set of us want to do this. And the question is, what's the most effective strategy? And we've tried... there's a bunch of different things you could do, at the extremes, right? One super extreme is, we do introspection and we write a program. Okay, that has not worked out very well. Another extreme is, we take a giant bunch of neural goo and we try to train it up to do something. I don't think that's going to work either. So the question is, what's the middle ground? And again, this isn't a theological question or anything like that. It's just, what's the middle ground? To me, it's clear it's a combination of learning and not learning. And what should that combination be? And what's the stuff we build in? So to me, that's the most compelling question. And when you say engineer robots, you mean engineering systems that work in the real world? That's the emphasis? Okay. Last question: which robot or robots from science fiction is your favorite? You can go with Star Wars and R2D2, or you can go with something more modern, maybe HAL from... I don't think I have a favorite robot from science fiction. This is back to: you like to make robots work in the real world here, not in... I mean, I love the process, and I care more about the process. The engineering process. Yeah. I mean, I do research because it's fun, not because I care about what we produce. Well, that's a beautiful note, actually.
And Leslie, thank you so much for talking today. Sure. It's been fun.
Leslie Kaelbling: Reinforcement Learning, Planning, and Robotics | Lex Fridman Podcast #15
The following is a conversation with Eric Weinstein. He's a mathematician, economist, physicist, and the managing director of Thiel Capital. He coined the term, and you can say is the founder of, the intellectual dark web, which is a loosely assembled group of public intellectuals that includes Sam Harris, Jordan Peterson, Steven Pinker, Joe Rogan, Michael Shermer, and a few others. This conversation is part of the Artificial Intelligence Podcast at MIT and beyond. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Friedman, spelled F R I D. And now, here's my conversation with Eric Weinstein. Are you nervous about this? Scared shitless. Okay. You mentioned Kung Fu Panda as one of your favorite movies. It has the usual profound master student dynamic going on. So who has been a teacher that significantly influenced the direction of your thinking and life's work? If you're the Kung Fu Panda, who was your Shifu? Oh, well, it's interesting, because I didn't see Shifu as being the teacher. Who was the teacher? Oogway, Master Oogway, the turtle. Oh, the turtle. Right. They only meet twice in the entire film, and the first conversation sort of doesn't count. So the magic of the film, in fact its point, is that the teaching that really matters is transferred during a single conversation, and it's very brief. And so who played that role in my life? I would say either my grandfather, Harry Rubin, and his wife, Sophie Rubin, my grandmother, or Tom Lehrer. Tom Lehrer? Yeah. In which way? If you give a child Tom Lehrer records, what you do is you destroy their ability to be taken over by later malware. And it's so irreverent, so witty, so clever, so obscene, that it destroys the ability to lead a normal life for many people. So if I meet somebody who's unusually, really shifted from any kind of neurotypical presentation, I'll often ask them, are you a Tom Lehrer fan? And the odds that they will respond are quite high. Tom Lehrer is Poisoning Pigeons in the Park Tom Lehrer. That's very interesting. There are a small number of Tom Lehrer songs that broke into the general population: Poisoning Pigeons in the Park, the Elements song, and perhaps the Vatican Rag. So when you meet somebody who knows those songs but doesn't know... oh, you're judging me right now, aren't you? Harshly. No, but you're Russian, so no doubt you know the Nikolai Ivanovich Lobachevsky song. Yeah. So that was a song about plagiarism that was in fact plagiarized, which most people don't know, from Danny Kaye, where Danny Kaye did a song called Stanislavsky of the Muskie Arts. And so Tom Lehrer did this brilliant job of plagiarizing a song and making it about plagiarism, and then making it about this mathematician who worked in non Euclidean geometry. That was like giving heroin to a child. It was extremely addictive and eventually led me to a lot of different places, one of which may have been a PhD in mathematics. And he was also at least a lecturer in mathematics, I believe at Harvard, something like that. Yeah, I just had dinner with him, in fact. When my son turned 13, we didn't tell him, but his bar mitzvah present was dinner with his hero, Tom Lehrer. And Tom Lehrer was 88 years old, sharp as a tack, irreverent and funny as hell. And just, you know, there are very few people in this world that you have to meet while they're still here, and that was definitely one for our family.
So that wit is a reflection of intelligence in some kind of deep way, like that would be a good test of intelligence, whether you're a Tom Lehrer fan. So what do you think that is about wit, about that kind of humor, the ability to see the absurdity in existence? Do you think that's connected to intelligence, or are we just two Jews on a mic that appreciate that kind of humor? No, I think that it's absolutely connected to intelligence. So you can see it. There's a place where Tom Lehrer decides that he's going to lampoon Gilbert of Gilbert and Sullivan, and he's going to outdo Gilbert with clever, meaningless wordplay. And he has, forget the, well, let's see, he's doing Clementine as if Gilbert and Sullivan wrote it: that I missed her depressed her young sister named Esther. This Esther de Pester she tried, pestering sisters a festering blister, you best to resist her, say I. The sister persisted, the mister resisted, I kissed her, all loyalty slipped. When she said I could have her, her sister's cadaver must surely have turned in its crypt. That's so dense, it's so insane, that that's clearly intelligence, because it's hard to construct something like that. If I look at my favorite Tom Lehrer lyric, you know, there's a perfectly absurd one, which is: once all the Germans were warlike and mean, but that couldn't happen again. We taught them a lesson in 1918 and they've hardly bothered us since then. Right? That is a different kind of intelligence. You know, you're taking something that is so horrific and you're sort of making it palatable and funny, and demonstrating also just your humanity. I mean, I think the thing that came through, as Tom Lehrer wrote all of these terrible, horrible lines, was just what a sensitive and beautiful soul he was, who was channeling pain through humor and through grace. I've seen throughout Europe, throughout Russia, that same kind of humor emerge from the generation of World War II. It seemed like that humor is required to somehow deal with the pain and the suffering that that war created. You do need the environment to create the broad Slavic soul. I don't think that many Americans really appreciate Russian humor, how you had to joke during the time of, let's say, Article 58 under Stalin. You had to be very, very careful. You know, the concept of a Russian satirical magazine like Krokodil doesn't make sense. So you have this cross cultural problem, that there are certain areas of human experience that it would be better to know nothing about. And quite unfortunately, Eastern Europe knows a great deal about them, which makes the, you know, the songs of Vladimir Vysotsky so potent, the prose of Pushkin, whatever it is. You have to appreciate the depth of the Eastern European experience. And I would think that perhaps Americans knew something like this around the time of the Civil War, or maybe, you know, under slavery and Jim Crow, or even the harsh tyranny of the coal and steel employers during the labor wars. But in general, I would say it's hard for us to understand and imagine the collective culture unless we have the system of selective pressures that, for example, Russians were subjected to. Yeah, so if there's one good thing that comes out of war, it's literature, art, and humor and music. Oh, I don't think so. I think almost everything is good about war except for death and destruction. Right. Without the death, it would bring... the romance of it.
The whole thing is nice. Well, this is why we're always caught up in war, and we have this very ambiguous relationship to it, is that it makes life real and pressing and meaningful, and at an unacceptable price, and the price has never been higher. So, to jump into AI a little bit: in one of the conversations you had, or one of the videos, you described that one of the things AI systems can't do, and biological systems can, is self replicate in the physical world. Oh no, no. In the physical world. Well, yes, the physical robots can't self replicate, but this is a very tricky point, which is that the only thing that we've been able to create that's really complex, that has an analog of our reproductive system, is software. But nevertheless, software replicates itself, if we're speaking strictly of replication in this kind of digital space. So, just to begin, let me ask a question: do you see a protective barrier or a gap between the physical world and the digital world? Let's not call it digital. Let's call it the logical world versus the physical world. Why logical? Well, because even if we had, let's say, Einstein's brain preserved, it was meaningless to us as a physical object, because we couldn't do anything with what was stored in it at a logical level. And so the idea that something may be stored logically and that it may be stored physically are not necessarily... we don't always benefit from synonymizing. I'm not suggesting that there isn't a material basis to the logical world, but that it does warrant identification with a separate layer that need not invoke logic gates and zeros and ones. And so, connecting those two worlds, the logical world and the physical world, or maybe just connecting to the logical world inside our brain, Einstein's brain: you mentioned the idea of outtelligence. Artificial outtelligence. Artificial outtelligence. Yes. This is the only essay that John Brockman ever invited me to write that he refused to publish in Edge. Why? Well, maybe it wasn't well written, but I don't know. The idea is quite compelling, it's quite unique and new, at least from my standpoint. Maybe you can explain it. Sure. What I was thinking about is why it is that we're waiting to be terrified by artificial general intelligence, when in fact artificial life is terrifying in and of itself, and it's already here. So in order to have a system of selective pressures, you need three distinct elements. You need variation within a population, you need heritability, and you need differential success. So what's really unique, and I've made this point, I think, elsewhere, about software is that if you think about what humans know how to build, that's impressive. So I always take a car, and I say, does it have an analog of each of the physiological systems? Does it have a skeletal structure? That's its frame. Does it have a neurological structure? It has an on board computer. It has a digestive system. The one thing it doesn't have is a reproductive system. But if you can call spawn on a process, effectively you do have a reproductive system, and that means that you can create something with variation, heritability, and differential success. Now, the next step in the chain of thinking was, where do we see inanimate, non intelligent life outwitting intelligent life? And I have two favorite systems, and I try to stay on them so that we don't get distracted.
One of which is the Ophrys orchid, a subspecies or subclade, I don't know what to call it. There's a type of flower. Yeah, it's a type of flower that mimics the female of a pollinator species in order to dupe the males into engaging in, it's called pseudocopulation, with the fake female, which is usually represented by the lowest petal. And there's also a pheromone component to fool the males into thinking they have a mating opportunity. But the flower doesn't have to give up energy in the form of nectar as a lure, because it's tricking the males. The other system is a particular species of mussel, Lampsilis, in the clear streams of Missouri, and it fools bass into biting a fleshy lip that contains its young. And when the bass see this fleshy lip, which looks exactly like a species of fish that the bass like to eat, the young explode and clamp onto the gills and parasitize the bass, and also use the bass to redistribute them as they eventually release. In both of these systems, you have a highly intelligent dupe being fooled by a lower life form. And what is sculpting these convincing lures? It's the intelligence of previously duped targets for these strategies. So when the target is smart enough to avoid the strategy, those weaker mimics fall off. Their lines are terminal, and only the better ones survive. So it's an arms race between the target species that is being parasitized getting smarter, and this other, less intelligent or non intelligent object getting, as if, smarter. And so what you see is that artificial general intelligence is not needed to parasitize us. It's simply sufficient for us to outwit ourselves. So you could have a program, let's say, you know, one of these Nigerian scams, that writes letters and uses whoever sends it Bitcoin to figure out which aspects of the program should be kept, which should be varied and thrown away. And you don't need it to be in any way intelligent in order to have a really nightmarish scenario of being parasitized by something that has no idea what it's doing. So you phrased a few concepts really eloquently, so let me try to see a few directions this goes. So first of all, in the way we write software today, it's not common that we allow it to self modify. But we do have that ability. Now, we have the ability, it's just not common. It's just not common. So your thought is that that is a serious worry, if there becomes... Self modifying code is available now. So there are different types of self modification, right? There's personalization, you know, your email app, your Gmail, is self modifying to you after you log in, or whatever, you can think of it that way. But ultimately all the information is centralized. But you're thinking of ideas where it's completely... so this is a unique entity operating under selective pressures, and it changes. Well, if you just think about the fact that our immune systems don't know what's coming at them next, but they have a small set of spanning components, and if it's a sufficiently expressive system, in that any shape or binding region can be approximated with the Lego that is present, then you can have confidence that you don't need to know what's coming at you, because the combinatorics are sufficient to reach any configuration needed. So that's a beautiful thing.
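To make the three ingredients concrete (variation, heritability, differential success), here is a minimal toy sketch in Python. It is purely illustrative and everything in it is assumed: the target message, the rates, and the population size are made up, and it is not anything Weinstein describes building. It is just the generic selection loop he is pointing at, in which candidates that score better against the environment persist, weaker mimics fall off, and nothing in the loop understands what it is doing.

```python
import random
import string

# Hypothetical "successful" message; in the scenario described, success would be
# measured by responses (e.g. who sends Bitcoin), not by a fixed string.
TARGET = "send bitcoin now"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate: str) -> int:
    # Differential success: score by how many characters match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.1) -> str:
    # Heritability with variation: children copy the parent, with occasional
    # random character changes.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def evolve(pop_size: int = 50, generations: int = 200) -> str:
    population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the best performers; their lines survive, weaker mimics fall off.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 5]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

if __name__ == "__main__":
    print(evolve())
```

The point of the sketch is that the loop never needs a model of the dupe; the dupe's responses simply act as the fitness function.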
Well, a terrifying thing to worry about, because it's so within our reach. Whenever I suggest these things, I do always have a concern as to whether or not I will bring them into being by talking about them. So there's this thing from OpenAI... next week I talk to the founder of OpenAI... this idea that their text generation, the new stuff they have for generating text, they didn't want to release it because they're worried about the... I'm delighted to hear that, but they're going to end up releasing it. Yes. So that's the thing, is I think talking about it... well, at least from my end, I'm more a proponent of technology preventing... so, further innovation preventing the detrimental effects of innovation. Well, we're sort of tumbling down a hill at accelerating speed, so whether or not we're proponents of it may not matter. But I do feel that there are people who've held things back and, you know, died poorer than they might've otherwise been. We don't even know their names. I don't think that we should discount the idea that having the smartest people showing off how smart they are by what they've developed may be a terminal process. I'm very mindful in particular of a beautiful letter that Edward Teller, of all people, wrote to Leo Szilard, where Szilard was trying to figure out how to control the use of atomic weaponry at the end of World War II, and Teller, rather strangely, because many of us view him as a monster, showed some very advanced moral thinking, talking about the slim chance we have for survival, and that the only hope is to make war unthinkable. I do think that not enough of us feel in our gut what it is we are playing with when we are working on technical problems. And I would recommend to anyone who hasn't seen it a movie called The Bridge on the River Kwai, about, I believe, captured British POWs who, just in a desire to do a bridge well, end up over collaborating with their Japanese captors. Well, now you're making me question the unrestricted open discussion of ideas in AI. I'm not saying I know the answer. I'm just saying that I could make a decent case for either our need to talk about this and to become technologically focused on containing it, or our need to stop talking about this and hope that the relatively small number of highly adept individuals who are looking at these problems is small enough that we should, in fact, be talking about how to contain them. Well, the way ideas, the way innovation happens, the way new ideas develop, Newton with calculus, whether, if he was silent, the idea would emerge elsewhere... in the case of Newton, of course. But in the case of AI, how small is the set of individuals out of which such ideas would arise? Well, the idea is that the researchers we know, and those that we don't know, who may live in countries that don't wish us to know what level they're currently at, are very disciplined in keeping these things to themselves. Of course, I will point out that there's a religious school in Kerala that developed something very close to the calculus, certainly in terms of infinite series, in, I guess, religious prayer and rhyme and prose. So it's not that Newton had any ability to hold that back, and I don't really believe that we have an ability to hold it back.
I do think that we could change the proportion of the time we spend worrying about the effects of what if we are successful, rather than simply trying to succeed and hoping that we'll be able to contain things later. Beautifully put. So on the idea of intelligence, what form, treading cautiously as we've agreed as we tumble down the hill... We can't stop ourselves, can we? We cannot. What form do you see it taking? So one example: Facebook, Google do want to, I don't know a better word, you want to influence users to behave a certain way. And so that's one kind of example of how intelligent systems are perhaps modifying the behavior of these intelligent human beings in order to sell more product of different kinds. But do you see other examples of this actually emerging in... Just take any parasitic system. Make sure that there's some way in which there's differential success, heritability, and variation. Those are the magic ingredients. And if you really wanted to build a nightmare machine, make sure that the system that expresses the variability has a spanning set, so that it can learn to arbitrary levels by making it sufficiently expressive. That's your nightmare. So it's your nightmare, but it could also be... it's a really powerful mechanism by which to create, well, powerful systems. So are you more worried about the negative direction that might go versus the positive? So you said parasitic, but that doesn't necessarily need to be what the system converges towards. It could be... what is it? The dividing line between parasitism and symbiosis is not so clear. That's what they tell me about marriage. I'm still single, so I don't know. Well, yeah, we could go into that too, but no, I think we have to appreciate... are you infected by your own mitochondria? Right. Right? Yeah. So in marriage, you fear the loss of independence, but even though the American therapeutic community may be very concerned about codependence, what's to say that codependence isn't what's necessary to have a stable relationship in which to raise children who are maximally K selected and require incredible amounts of care, because you have to wait 13 years before there's any reproductive payout, and most of us don't want our 13 year olds having kids. That's a very tricky situation to analyze, and I would say that predators and parasites drive much of our evolution, and I don't know whether to be angry at them or thank them. Well, ultimately, I mean, nobody knows the meaning of life or what even happiness is, but there are some metrics... They didn't tell you? They didn't. That's why all the poetry and books are about... you know, there are some metrics under which you can kind of measure how good it is that these AI systems are roaming about. So you're more nervous about software than you are optimistic about ideas of, yeah, self replicating, largely. I don't think we've really felt where we are. You know, occasionally we get a wake up. 9/11 was so anomalous compared to everything else we've experienced on American soil that it came to us as a complete shock that that was even a possibility. What it really was was a highly creative and determined R and D team deep in the bowels of Afghanistan showing us that we had certain exploits that we were open to that nobody had chosen to express. I can think of several of these things that I don't talk about publicly that just seem to have to do with how relatively unimaginative those who wish to cause havoc and destruction have been up until now.
But the great mystery of our time, of this particular little era, is how remarkably stable we've been since 1945, when we demonstrated the ability to use nuclear weapons in anger. And we don't know why things like that haven't happened since then. We've had several close calls, we've had mistakes, we've had brinksmanship. And what's now happened is that we've settled into a sense that, oh, it'll always be nothing. It's been so long since something was at that level of danger that we've got a wrong idea in our head. And that's why, when I went on the Ben Shapiro show, I talked about the need to resume above ground testing of nuclear devices, because we have people whose developmental experience suggests that when, let's say, Donald Trump and North Korea engage on Twitter, oh, it's nothing, it's just posturing, everybody's just in it for money. There's a sense that people are in a video game mode, which has been the right call since 1945. We've been mostly in video game mode. It's amazing. So you're worried about a generation which has not seen any existential... We've lived under it. You see, you're younger. I don't know if, and again, you came from Moscow. There was a TV show called The Day After that had a huge effect on a generation growing up in the US, and it talked about what life would be like after a nuclear exchange. We have not gone through an embodied experience collectively where we've thought about this. And I think it's one of the most irresponsible things that the elders among us have done, which is to provide this beautiful garden in which the thorns are cut off of the rose bushes and all of the edges are rounded and sanded. And so people have developed this totally unreal idea, which is, everything's going to be just fine. And do I think that my leading concern is AGI, or my leading concern is a thermonuclear exchange, or gene drives, or any one of these things? I don't know. But I know that our time here in this very long experiment is finite, because the toys that we've built are so impressive and the wisdom to accompany them has not materialized. And I think we actually got a wisdom uptick since 1945. We had a lot of dangerous, skilled players on the world stage who, nevertheless, no matter how bad they were, managed to not embroil us in something that we couldn't come back from. The cold war. Yeah. And the distance from the cold war... You know, I'm very mindful of... there was a Russian tradition, actually, of, on your wedding day, going to visit a memorial to those who gave their lives. Can you imagine this? Where you, on the happiest day of your life, go and pay homage to the people who fought and died in the Battle of Stalingrad. I'm not a huge fan of communism, I gotta say, but there were a couple of things that the Russians did that were really positive in the Soviet era. And I think trying to let people know how serious life actually is... the Russian model of seriousness is better than the American model. And maybe, like you mentioned, there was a small echo of that after 9/11. But we wouldn't let it form. We talk about 9/11, but it's 9/12 that really moved the needle, when we were all just there and nobody wanted to speak. We witnessed something super serious, and we didn't want to run to our computers and blast out our deep thoughts and our feelings. And it was profound, because we woke up briefly. You know, I talk about the gated institutional narrative that sort of programs our lives.
I've seen it break three times in my life. One of which was the election of Donald Trump. Another time was the fall of Lehman Brothers, when everybody who knew that Bear Stearns wasn't that important knew that Lehman Brothers meant AIG was next. And the other one was 9/11. And so if I'm 53 years old and I only remember three times that the global narrative was really interrupted, that tells you how much we've been on top of developing events. You know, I mean, we had the Murrah Federal Building explosion, but it didn't cause the narrative to break. It wasn't profound enough. Around 9/12 we started to wake up out of our slumber, and the powers that be did not want us coming together. You know, the admonition was: go shopping. And the powers that be, what is that force, as opposed to blaming individuals? We don't know. So whatever that force is, there's a component of it that's emergent and there's a component of it that's deliberate. So give yourself a portfolio with two components: some amount of it is emergent, but some amount of it is also an understanding that if people come together, they become an incredible force. And what you're seeing right now, I think, is there are forces that are trying to come together and there are forces that are trying to push things apart. And, you know, one of them is the globalist narrative versus the national narrative, where to the globalist perspective, nations are bad things in essence, that they're temporary, they're nationalistic, they're jingoistic, it's all negative. To people more in the national idiom, they're saying, look, this is where I pay my taxes, this is where I do my army service, this is where I have a vote, this is where I have a passport. Who the hell are you to tell me that, because you've moved into someplace that you can make money globally, that you've chosen to abandon other people to whom you have a special and elevated duty? And I think that these competing narratives have been pushing towards the global perspective from the elite, and a larger and larger number of disenfranchised people are saying, hey, I actually live in a place and I have laws and I speak a language, I have a culture, and who are you to tell me that, because you can profit in some faraway land, that my obligations to my fellow countrymen are so much diminished? So these tensions between nations and so on... ultimately, you see being proud of your country and so on, which creates potentially the kind of things that led to wars and so on... do they ultimately... is it human nature, and is it good for us to have wake up calls of different kinds? Well, I think that these are tensions, and my point isn't... I mean, nationalism run amok is a nightmare, and internationalism run amok is a nightmare. And the problem is we're trying to push these pendulums to some place where they're somewhat balanced, where we have a higher duty of care to those who share our laws and our citizenship, but we don't forget our duties of care to the global system. I would think this is elementary, but the problem that we're facing concerns the ability for some to profit by abandoning their obligations to others within their system. And that's what we've had for decades. You mentioned nuclear weapons. I was hoping to get answers from you, since one of the many things you've done is economics, and maybe you can understand human behavior, of why the heck we haven't blown each other up yet. But okay.
So we'll get back to that. I don't know the answer. Yes. It's a... it's really important to say that we really don't know. A mild uptick in wisdom. A mild uptick in wisdom. Well, Steven Pinker, who I've talked with, has a lot of really good ideas about why, but I don't trust his optimism. Listen, I'm Russian, so I never trust a guy who is that optimistic. No, no, no. It's just that you're talking about a guy who's looking at a system in which more and more of the kinetic energy, like war, has been turned into potential energy, like unused nuclear weapons. Beautifully put. You know, now I'm looking at that system and I'm saying, okay, well, if you don't have a potential energy term, then everything's just getting better and better. Yeah. Wow. That's beautifully put. Only a physicist could... Okay, I'm not a physicist. Is that a dirty word? No, no. I wish I were a physicist. Me too. My dad's a physicist. I'm trying to live up to that, probably, for the rest of my life. He's probably gonna listen to this too. So. He did. Yeah. So your friend, Sam Harris, worries a lot about the existential threat of AI, not in the way that you've described, but in the more... Well, he hangs out with Elon. I don't know Elon. So are you worried about that kind of... you know, about either robotic systems or, you know, traditionally defined AI systems essentially becoming super intelligent, much more intelligent than human beings, and getting... Well, they already are. And they're not... When seen as a collective, you mean? Well, I mean, I can mean all sorts of things, but certainly many of the things that we thought were peculiar to general intelligence do not require general intelligence. So that's been one of the big awakenings, that you can write a pretty convincing sports story from stats alone, without needing to have watched the game. So, you know, is it possible to write lively prose about politics? Yeah, no, not yet. So we were sort of all over the map. One of the things about chess... there's a question I once asked on Quora that didn't get a lot of response, which was: what is the greatest brilliancy ever produced by a computer in a chess game? Which was different than the question of what is the greatest game ever played. So if you think about brilliancies, it's what really animates many of us to think of chess as an art form. Those are those moves and combinations that just show such flair, panache, and soul. Computers weren't really great at that. They were great positional monsters. And, you know, recently we've started seeing brilliancies. And so... The grandmasters have identified, with AlphaZero, things that were quite brilliant. Yeah. So that's, you know, that's an example of something where we don't think that that's AGI, but in a very restricted set of rules like chess, you're starting to see poetry of a high order. And so I don't like the idea that we're waiting for AGI. AGI is sort of slowly infiltrating our lives, in the same way that I don't think a worm, you know, C. elegans, should be treated as non conscious because it only has 300 neurons. Maybe it just has a very low level of consciousness, because we don't understand what these things mean as they scale up. So am I worried about this general phenomenon? Sure. But I think that one of the things that's happening is that a lot of us are fretting about this, in part, because of human needs.
We've always been worried about the Golem, right? Well, the Golem is the artificially created life, you know. It's like Frankenstein. Yeah, sure. It's the Jewish version, and, um, Frankenstein, yeah, that makes sense, right? So we've always been worried about creating something like this, and it's getting closer and closer, and there are ways in which we have to realize that the whole thing that we've experienced, the context of our lives, is almost certainly coming to an end. And I don't mean to suggest that we won't survive, I don't know, and I don't mean to suggest that it's coming tomorrow. It could be 300, 500 years. But there's no plan that I'm aware of. We have three rocks that we could possibly inhabit that are sensible within current technological dreams, the Earth, the Moon and Mars, and we have a very competitive civilization that is still forced into violence to sort out disputes that cannot be arbitrated. It is not clear to me that we have a long term future until we get to the next stage, which is to figure out whether or not the Einsteinian speed limit can be broken, and that requires our source code. Our source code, the stuff in our brains to figure out... What do you mean by our source code? The source code of the context, whatever it is that produces the quarks, the electrons, the neutrinos. Oh, our source code. I got it. So this is... You're talking about stuff that's written in a higher level language. Yeah, that's right. You're talking about the low level, the bits. Yeah. That's what is currently keeping us here. We can't even imagine... you know, we have harebrained schemes for staying within the Einsteinian speed limit. You know, maybe if we could just drug ourselves and go into a suspended state, or we could have multiple generations... I think all that stuff is pretty silly, but I think it's also pretty silly to imagine that our wisdom is going to increase to the point that we can have the toys we have and we're not going to use them for 500 years. Speaking of Einstein, I had a profound breakthrough when I realized you're just one letter away from the guy. Yeah, but I'm also one letter away from Feinstein. It's... well, you get to pick. Okay. So, unified theory. You've worked... you enjoy the beauty of geometry. I don't actually know if you enjoy it. You certainly are quite good at it. I tremble before it. If you're religious, that is one of the... I don't have to be religious. It's just so beautiful. You will tremble anyway. I mean, I just read Einstein's biography, and one of the things you've done is try to explore a unified theory, talking about a 14 dimensional observerse that has the 4D spacetime continuum embedded in it. I'm just curious how you think, philosophically, at a high level, about something more than four dimensions. How do you try to... what does it make you feel, talking in the mathematical world about dimensions that are greater than the ones we can perceive? Is there something that you take away that's more than just the math? Well, first of all, stick out your tongue at me. Okay. Now, on the front of that tongue, yeah, there was a sweet receptor, and next to that were salt receptors on two different sides, a little bit farther back there were sour receptors, and you wouldn't show me the back of your tongue where your bitter receptor was. Show the good side always. Okay.
So you had four dimensions of taste receptors, but you also had pain receptors on that tongue, and probably heat receptors on that tongue. So let's assume that you had one of each. That would be six dimensions. So when you eat something, you eat a slice of pizza and it's got some hot pepper on it, maybe some jalapeno, you're having a six dimensional experience, dude. Do you think we overemphasize the value of time as one of the dimensions, or space? Well, we certainly overemphasize the value of time, because we like things to start and end, or we really don't like things to end, but they seem to. Well, what if you flipped one of the spatial dimensions into being a temporal dimension? And you and I were to meet in New York City and say, well, where and when should we meet? What about, I'll meet you on 36th and Lexington at two in the afternoon and, uh, 11 o'clock in the morning? That would be very confusing. Well, so it's convenient for us to think about time, you mean. We happen to be in a delicious situation in which we have three dimensions of space and one of time, and they're woven together in this sort of strange fabric where we can trade off a little space for a little time, but we still only have one dimension that is picked out relative to the other three. It's very much Gladys Knight and the Pips. So which one developed for whom? Did we develop for these dimensions, or did the dimensions, or were they always there? Well, do you imagine that there isn't a place where there are four temporal dimensions, or two and two of space and time, or three of time and one of space? And then would time not be playing the role of space? Why do you imagine that the sector that you're in is all that there is? I certainly do not, but I can't imagine otherwise. I mean, I haven't done ayahuasca or any of those drugs, but I hope to one day. But instead of doing ayahuasca, you could just head over to Building 2. That's where the mathematicians are? Yeah, that's where they hang. Just to look at some geometry. Well, just ask about pseudo Riemannian geometry. That's what you're interested in. Okay. Or you could talk to a shaman and end up in Peru. And then spend extra money for that trip. Yeah, but you won't be able to do any calculations, if that's how you choose to go about it. Well, a different kind of calculation, so to speak. Yeah. One of my favorite people, Edward Frenkel, Berkeley professor, author of Love and Math, great title for a book, said that you are quite a remarkable intellect, to come up with such beautiful, original ideas in terms of unified theory and so on, but you're working outside academia. So one question, in developing ideas that are truly original, truly interesting: what's the difference between inside academia and outside academia when it comes to developing such ideas? Oh, it's a terrible choice. Terrible choice. So if you do it inside of academics, you are forced to constantly show great loyalty to the consensus, and you distinguish yourself with small, almost microscopic heresies to make your reputation, in general. And you have very competent people and brilliant people who are working together, who form very deep social networks and have a very high level of behavior, at least within mathematics, and at least technically within physics, theoretical physics. When you go outside, you meet lunatics and crazy people, madmen. And these are people who do not usually subscribe to the consensus position and almost always lose their way.
And the key question is, will progress likely come from someone who has miraculously managed to stay within the system and is able to take on a larger amount of heresy that is sort of unthinkable, in which case that will be fascinating, or is it more likely that somebody will maintain a level of discipline from outside of academics and be able to make use of the freedom that comes from not having to constantly affirm your loyalty to the consensus of your field? So you've characterized, in a way, academia in this particular sense as declining. You posted a plot: the older population of the faculty is getting larger, the younger is getting smaller, and so on. So which direction of the two are you more hopeful about? Well, the baby boomers can't hang on forever. First of all, in general, true. And second of all, in academia. But that's really what this time is about. We're used to financial bubbles that last a few years in length and then pop. The baby boomer bubble is this really long lived thing, and all of the ideology, all of the behavior patterns, the norms... for example, string theory is an almost entirely baby boomer phenomenon. It was something that baby boomers were able to do because it required a very high level of mathematical ability. You don't think of string theory as an original idea? Oh, I mean, it was original to Veneziano, who probably is older than the baby boomers. And there are people who are younger than the baby boomers who are still doing string theory. And I'm not saying that nothing discovered within the large string theoretic complex is wrong. Quite the contrary. A lot of brilliant mathematics and a lot of the structure of physics was elucidated by string theorists. What do I think of the deliverable nature of this product that will not ship, called string theory? I think that it is largely an affirmative action program for highly mathematically and geometrically talented baby boomer physicists, so that they can say that they're working on something within the constraints of what they will say is quantum gravity. Now, there are other schemes. You know, there's like asymptotic safety. There are other things that you could imagine doing. I don't think much of any of the major programs, but to have inflicted this level of loyalty through a shibboleth... well, surely you don't question X. Well, I question almost everything in the string program, and that's why I got out of physics. When you called me a physicist, it was a great honor, but the reason I didn't become a physicist wasn't that I fell in love with mathematics. I said, wow, in 1984, 1983, I saw the field going mad, and I saw that mathematics, which has all sorts of problems, was not going insane. And so instead of studying things within physics, I thought it was much safer to study the same objects within mathematics. There's a huge price to pay for that. You lose physical intuition. But the point is that it wasn't a North Korean reeducation camp either. Are you hopeful about cracking open the Einstein unified theory in a way that really understands whether this unites everything together with quantum theory and so on? I mean, I'm trying to play this role myself, to do it to the extent of handing it over to the more responsible, more professional, more competent community. So I think that they're wrong about a great number of their belief structures, but I do believe... I mean, I have a really profound love hate relationship with this group of people.
I think the physics side, because the mathematicians actually seem to be much more open minded and... Well, they are and they aren't. They're open minded about anything that looks like great math. Right. They'll study something that isn't very important physics, but if it's beautiful mathematics, then they have great intuition about these things. As good as the mathematicians are, and I might even, intellectually, at some horsepower level, give them the edge, the theoretical physics community is, bar none, the most profound intellectual community that we have ever created. It is the number one. There's nobody in second place, as far as I'm concerned. Like, in their spare time, in their spare time, they invented molecular biology. What was the origin of molecular biology? You're saying something like Francis Crick. I mean, a lot of the early molecular biologists were physicists. Yeah. I mean, you know, Schrodinger wrote What Is Life, and that was highly inspirational. I mean, you have to appreciate that there is no community like the basic research community in theoretical physics, and it's not that I'm not highly critical of these guys. I think that they've just wasted decades of time with a near religious devotion to their misconception of where the problems were in physics. But this has been the greatest intellectual collapse ever witnessed within academics. You see it as a collapse or just a lull? Oh, I'm terrified that we're about to lose the vitality. We can't afford to pay these people. We can't afford to give them an accelerator just to play with in case they find something at the next energy level. These people created our economy. They gave us the Rad Lab and radar. They gave us two atomic devices to end World War II. They created the semiconductor and the transistor to power our economy through Moore's law. As a positive externality of particle accelerators, they created the World Wide Web, and we have the insolence to say, why should we fund you with our taxpayer dollars? No, the question is, are you enjoying your physics dollars? These guys signed the world's worst licensing agreement, and if they simply charged for every time you used a transistor or a URL, or enjoyed the peace that they have provided during this period of time through the terrible weapons that they developed, or your communications devices... all of the things that power our economy, I really think, came out of physics, even to the extent that chemistry came out of physics and molecular biology came out of physics. So, first of all, you have to know that I'm very critical of this community. Second of all, it is our most important community. We have neglected it. We've abused it. We don't take it seriously. We don't even care to get them to rehab after a couple of generations of failure, right? I think the youngest person to have really contributed to the standard model at a theoretical level was born in 1951, right? Frank Wilczek. And almost nothing has happened in theoretical physics after 1973, '74 that sent somebody to Stockholm for a theoretical development that predicted experiment. So we have to understand that we are doing this to ourselves. Now, with that said, these guys have behaved abysmally, in my opinion, because they haven't owned up to where they actually are, what problems they're really facing, how definite they can actually be.
They haven't shared some of their most brilliant discoveries, which are desperately needed in other fields, like gauge theory, which at least the mathematicians can share, which is an upgrade of the differential calculus of Newton and Leibniz. And they haven't shared the importance of renormalization theory, even though this should be standard operating procedure for people across the sciences dealing with different layers and different levels of phenomena. And by shared, you mean communicated in such a way that it disseminates throughout the different sciences. These guys, both theoretical physicists and mathematicians, are sitting on top of a giant stockpile of intellectual gold, right? They have so many things that have not been manifested anywhere. I was just on Twitter, I think I mentioned the Hoberman switch pitch that shows the self duality of the tetrahedron realized as a linkage mechanism. Now, this is like a triviality, and it makes an amazing toy that's, you know, built a market, hopefully a fortune, for Chuck Hoberman. Well, you have no idea how much great stuff these priests have in their monastery. So it's truly a love and hate relationship for you. Yeah. Well, it sounds like it's more on the love side. This building that we're in right here is the building in which I really put together the conspiracy between the National Academy of Sciences and the National Science Foundation, through the Government University Industry Research Roundtable, to destroy the bargaining power of American academics using foreign labor, with, uh, on microfiche in the basement. Oh yeah. That was done here in this building. Isn't that weird? And I'm truly speaking with a revolutionary and a radical... No, no, no, no, no, no. At an intellectual level, I am absolutely garden variety. I'm just straight down the middle. The system that we are in, this university, is functionally insane. Yeah. Harvard is functionally insane. And we don't understand that when we get these things wrong... the financial crisis made this very clear. There was a long period where every grownup, everybody with a tie, who spoke, you know, in baritone tones, with the right degree at the end of their name, were talking about how we had banished volatility. We were in the great moderation. Okay. They were all crazy. And who was right? It was like Nassim Taleb, Nouriel Roubini. Now, what happens is that they claimed the market went crazy. But the market didn't go crazy. The market had been crazy, and what happened is that it suddenly went sane. Well, that's where we are with academics. Academics right now is mad as a hatter, and it's absolutely evident. I can show you graph after graph. I can show you the internal discussions. I can show you the conspiracies. Harvard's dealing with one right now over its admissions policies for people of color who happen to come from Asia. All of this madness is necessary to keep the game going. What we're talking about, just on... well, we're on the topic of revolutionaries... is we're talking about the danger of an outbreak of sanity. Yeah. You're the guy pointing out the elephant in the room here, and the elephant has no clothes. Is that how that goes?
I was going to talk a little bit to Joe Rogan about this, ran out of time, but I think, just listening to you, you could probably speak really eloquently to academia on the difference between the different fields. So do you think there's a difference between science, engineering, and then the humanities in academia, in terms of the radical ideas they're willing to tolerate? So from my perspective, I thought computer science and maybe engineering are more tolerant of radical ideas, but that's perhaps naive of me. It's that, you know, all the battles going on now are a little bit more on the humanities side, in gender studies and so on. Have you seen the American Mathematical Society's publication of an essay called Get Out the Way? I have not. What's the idea? The idea is that white men who hold positions, yeah, within universities in mathematics should vacate their positions so that young black women can take over, something like this. That's in terms of diversity, which I also want to ask you about. But in terms of diversity of strictly ideas, do you think... because you're basically saying physics as a community has become a little bit intolerant, to some degree, to new radical ideas. Or at least, you said it's changed a little bit recently, which is that even string theory is now admitting, okay, we don't look very promising in the short term. Right. So the question is, what compiles, if you want to take the computer science metaphor? What will get you into a journal? Will you spend your life trying to push some paper into a journal, or will it be accepted easily? What about the characteristics of the submitter, and what gets taken up and what does not? All of these fields are experiencing pressure, because no field is performing so brilliantly well that it's revolutionizing our way of speaking and thinking in the ways in which we've become accustomed. But don't you think, even in theoretical physics, a lot of times, even with theories like string theory, you could speak to this, it does eventually lead to what are the ways that this theory would be testable? Yeah, ultimately. Although, look, there's this thing about Popper and the scientific method that's a cancer and a disease in the minds of very smart people. That's not really how most of the stuff gets worked out. It's how it gets checked. All right. So there is a dialogue between theory and experiment, but everybody should read Paul Dirac's 1963 Scientific American article, where he... you know, it's very interesting. He talks about it as if it was about the Schrodinger equation, and Schrodinger's failure to advance his own work because of his failure to account for some phenomenon. The key point is that if your theory is a slight bit off, it won't agree with experiment, but it doesn't mean that the theory is actually wrong. But Dirac could as easily have been talking about his own equation, in which he predicted that the electron should have an antiparticle. And since the only positively charged particle that was known at the time was the proton, Heisenberg pointed out, well, shouldn't your antiparticle, the proton, have the same mass as the electron, and doesn't that invalidate your theory? So I think Dirac was actually being, potentially, quite sneaky, and talking about the fact that he had been pushed off of his own theory, to some extent, by Heisenberg.
But look, we've fetishized the scientific method and Popper and falsification because it protects us from crazy ideas entering the field. So, you know, it's a question of balancing type one and type two error, and we were pretty maxed out in one direction. The opposite of that... let me say what comforts me about, sort of, biology or engineering: at the end of the day, does the thing work? Yeah. You can test the crazies away, and the crazy... Well, see, now you're saying... but some ideas are truly crazy and some are actually correct. Well, there's pre correct, currently crazy. Yeah. Right. And so you don't want to get rid of everybody who's pre correct and currently crazy. The problem is that we don't have standards, in general, for trying to determine who has to be put to the sword in terms of their career, and who has to be protected as some sort of giant time suck pain in the ass who may change everything. Do you think that's possible, creating a mechanism for that selection? Well, you're not going to like the answer, but here it comes. Oh boy. It has to do with very human elements. We're trying to do this at the level of, like, rules and fairness. That's not going to work, because the only thing that really understands this... Have you read The Double Helix? It's a book? Oh, you have to read this book. Not only did Jim Watson half discover this three dimensional structure of DNA, he's also one hell of a writer, before he became an ass who, no, has tried to destroy his own reputation. I knew about the ass, I didn't know about the good writer. Jim Watson is one of the most important people now living, and, as I've said before, Jim Watson is too important a legacy to be left to Jim Watson. That book tells you more about what actually moves the dial, right? There's another story about him, which I don't agree with, which is that he stole everything from Rosalind Franklin. I mean, the problems that he had with Rosalind Franklin are real, but we should actually honor that tension in our history by delving into it rather than having a simple solution. Jim Watson talks about Francis Crick being a pain in the ass that everybody secretly knew was super brilliant. And there's an encounter between Chargaff, who came up with the equimolar relations between the nucleotides, who should have gotten the structure of DNA, and Watson and Crick. And, you know, he talks about missing a shiver in the heartbeat of biology, and the stuff is so gorgeous, it just makes you tremble even thinking about it. Look, we know very often who is to be feared, and we need to fund the people that we fear. The people who are wasting our time need to be excluded from the conversation. You see, and, you know, maybe we'll make some errors in both directions. But we have known our own people. We know the pains in the asses that might work out, and we know the people who are really just blowhards who really have very little to contribute most of the time. It's not a hundred percent, but you're not going to get there with rules. Right, it's using some kind of instinct. I mean, to be honest, I'm going to make you roll your eyes for a second, but the first time I heard that there is a large community of people who believe the Earth is flat, it actually made me pause and ask myself the question, why would there be such a community? Yeah. Is it possible the Earth is flat? So I had to, like, wait a minute.
I mean, then you go through a thinking process that I think is really healthy. It ultimately ends up being a geometry thing, I think. It's an interesting thought experiment, at the very least. Well, I do a different version of it. I say, why is this community stable? Yeah, that's a good way to analyze it. Well, it's interesting that whatever we've done has not erased the community. So, you know, they're taking a long shot bet that won't pan out. You know, maybe we just haven't thought enough about the rationality of the square root of two, and somebody brilliant will figure it out. Maybe we will eventually land one day on the surface of Jupiter and explore it, right? These are crazy things that will never happen. So much of social media operates by AI algorithms. You talked about this a little bit, recommending the content you see. So on this idea of radical thought, how much should AI show you things you disagree with, on Twitter and so on, in a Twitterverse? Is this the question? Yeah. Yeah. Cause you don't know the answer? No, no, no, no. Look, they've pushed out this cognitive Lego to us that will just lead to madness. It's good to be challenged with things that you disagree with? The answer is no. It's good to be challenged with interesting things with which you currently disagree, but that might be true. So I don't really care about whether or not I disagree with something or don't disagree. I need to know why that particular disagreeable thing is being pushed out. Is it because it's likely to be true? Is it because... is there some reason? Because I can write a computer generator to come up with an infinite number of disagreeable statements that nobody needs to look at. So please, before you push things at me that are disagreeable, tell me why. There is an aspect in which that question is quite dumb, especially because it's being used almost very generically by these different networks to say, well, we're trying to work this out. But, you know, basically, how much do you see the value of seeing things you don't like? Not that you disagree with, because it's very difficult to know exactly what you articulated, which is the stuff that's important for you to consider, that you disagree with. That's really hard to figure out. The bottom line is the stuff you don't like. If you're a Hillary Clinton supporter, it might not make you feel good to see anything about Donald Trump. That's the only thing algorithms can really optimize for currently. They really can't. Now they can do better. This is where we're... You think so? No, we're engaged in some moronic back and forth, where I have no idea why people who are capable of building Google, Facebook, Twitter are having us in these incredibly low level discussions. Do they not know any smart people? Do they not have the phone numbers of people who can elevate these discussions? They do, but they're optimizing for a different thing, and they're pushing those people out of those rooms. They're optimizing for things we can't see. And yes, profit is there. Nobody's questioning that. But they're also optimizing for things like political control, or the fact that they're doing business in Pakistan, and so they don't want to talk about all the things that they're going to be bending to in Pakistan. So we're involved in a fake discussion. You think so?
You think these conversations at that depth are happening inside Google? You don't think they have some basic metrics of user engagement? You're having a fake conversation with us, guys. We know you're having a fake conversation. I do not wish to be part of your fake conversation. You know how to cool, you know, these units. You know high availability like nobody's business. My Gmail never goes down. Almost. So you think, just because they can do incredible work on the software side, with infrastructure, they can also deal with some of these difficult questions about human behavior, human understanding? You're not... I mean, I've seen the developer screens that people take shots of inside of Google, and I've heard stories inside of Facebook and Apple. They're engaging us in the wrong conversations. We are not at this low level. Here's one of my favorite questions: why is every piece of hardware that I purchase in tech space equipped as a listening device? Where's my physical shutter to cover my lens? We had this in the 1970s, cameras that had lens caps. You know how much it would cost to have a security model? Pay five extra bucks. Why is my indicator light software controlled? Why, when my camera is on, do I not see that the light is on, by putting it as something that cannot be bypassed? Why have you set up all of my devices, at some difficulty to yourselves, as listening devices, and we don't even talk about this? This thing is total fucking bullshit. Well, I hope these discussions are happening about privacy. Is it a more difficult thing than you're giving them credit for? It's not just privacy. It's about social control. We're talking about social control. Why do I not have controls over my own levers? Just have a really cute UI where I can switch, I can dial things, or I can at least see what the algorithms are. You think that there are some deliberate choices being made here? There's emergence and there is intention. There are two dimensions. The vector does not collapse onto either axis. But anybody who suggests that intention is completely absent is a child. That's really beautifully put, and, like many things you've said, it's going to make me... Can I turn this around slightly? Yeah. I sit down with you and you say that you're obsessed with my feed. I don't even know what my feed is. What are you seeing that I'm not? I was obsessively looking through your feed on Twitter because it was really enjoyable, because there's the Tom Lehrer element, the humor in it. By the way, that feed is Eric R. Weinstein on Twitter, at Eric R Weinstein. No, but seriously, why? Why did I find it enjoyable, or what was I seeing? What are you looking for? Why are we doing this? What is this podcast about? I know you've got all these interesting people. I'm just some guy who's sort of a podcast guest. Sort of a podcast guest. You're not even wearing a tie. I mean, it's not even a serious interview. I'm searching for meaning, for happiness, for a dopamine rush, short term, long term. And how are you finding your way to me? I don't honestly know what I'm doing to reach you. You're representing ideas which feel common sense to me, and not many people are speaking about them. So it's kind of like the intellectual dark web folks, right? These folks, from Sam Harris to Jordan Peterson to yourself, are saying things where it's like you're saying, look, there's an elephant, and he's not wearing any clothes. And I say, yeah, yeah, let's have more of that conversation.
That's how I'm finding you. I'm desperate to try to change the conversation we're having. I'm very worried. We've got an election in 2020. I don't think we can afford four more years of a misinterpreted message, which is what Donald Trump was. And I don't want the destruction of our institutions. They all seem hell bent on destroying themselves. So I'm trying to save theoretical physics, trying to save the New York Times, trying to save our various processes. And it feels delusional to me that this is falling to a tiny group of people who are willing to speak out without getting so freaked out that everything they say will be misinterpreted and that their lives will be ruined through the process. I mean, I think we're in an absolutely bananas period of time, and I don't believe it should fall to such a tiny number of shoulders to shoulder this weight. So I have to ask you on the capitalism side. You mentioned that technology is killing capitalism, or that it has effects that are unintended, well, not unintended, but not what economists would predict or speak of capitalism creating. I just want to talk to you about, in general, the effect of artificial intelligence or technology and automation taking away jobs and these kinds of things, and what you think is the way to alleviate that. Whether it's the Andrew Yang presidential candidate proposal of universal basic income, UBI, what are your thoughts there? How do we fight off the negative effects of technology that... All right, you're a software guy, right? Yep. A human being "is a" worker is an old idea; a human being "has a" worker is a different object, right? Yeah. So if you think about object oriented programming as a paradigm, a human being has a worker and a human being has a soul. We're talking about the fact that for a period of time, the worker that a human being has was in a position to feed the soul that a human being has. However, we have two separate claims on the value in society. One is as a worker and the other is as a soul, and the soul needs sustenance, it needs dignity, it needs meaning, it needs purpose. As long as your means of support is not highly repetitive, I think you have a while to go before you need to start worrying. But if what you do is highly repetitive and it's not terribly generative, you are in the crosshairs of for loops and while loops. And that's what computers excel at, repetitive behavior, and when I say repetitive, I may mean things that have never happened, through combinatorial possibilities, but as long as it has a looped characteristic to it, you're in trouble. We are seeing a massive push towards socialism because capitalists are slow to address the fact that a worker may not be able to make claims; a relatively undistinguished median member of our society still has a need to reproduce, a need to have dignity. And when capitalism abandons the median individual, or the bottom tenth, or whatever it's going to do, it's flirting with revolution. And what concerns me is that the capitalists aren't sufficiently capitalistic to understand this. Do you really want to court authoritarian control in our society because you can't see that people may not be able to defend themselves in the marketplace, because the marginal product of their labor was too low to feed their dignity as a soul? So my great concern is that our free society has to do with the fact that we are self organized.
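Weinstein's "is a" versus "has a" framing maps onto the familiar distinction between inheritance and composition in object oriented programming. Here is a minimal illustrative sketch in Python; the class and attribute names are hypothetical, chosen only to mirror the analogy, not anything stated in the conversation.

```python
# Hypothetical sketch of the "is-a" vs. "has-a" distinction invoked above.

class Worker:
    """The economic role: repetitive, loop-like labor is automatable."""
    def __init__(self, task, repetitive):
        self.task = task
        self.repetitive = repetitive  # in the crosshairs of for loops and while loops

class Soul:
    """The part that needs dignity, meaning, and purpose."""
    def __init__(self):
        self.needs = ["sustenance", "dignity", "meaning", "purpose"]

# Old framing: a human being *is a* worker (inheritance).
class HumanAsWorker(Worker):
    pass

# Weinstein's framing: a human being *has a* worker and *has a* soul (composition).
class HumanBeing:
    def __init__(self, task, repetitive):
        self.worker = Worker(task, repetitive)
        self.soul = Soul()

# The worker can be automated away without the soul's claims disappearing.
h = HumanBeing(task="data entry", repetitive=True)
print(h.worker.repetitive, h.soul.needs)
```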
I remember looking down from my office in Manhattan when Lehman Brothers collapsed and thinking, who's going to tell all these people that they need to show up at work when they don't have a financial system to incentivize them to show up at work? So my complaint is, first of all, not with the socialists but with the capitalists, which is: you guys are being idiots. You're courting revolution by continuing to harp on the same old ideas of, well, you know, try harder, bootstrap yourself. Yeah, that works to an extent, but we are clearly headed to a place where there's nothing that ties together our need to contribute and our need to consume. And that may not be provided by capitalism, because it may have been a temporary phenomenon. So check out my article on anthropic capitalism and the new gimmick economy. I think people are late getting the wake up call, and we should be doing a better job saving capitalism from itself, because I don't want this done under authoritarian control. And the more we insist that everybody who's not thriving in our society during their reproductive years, in order to have a family, is failing at a personal level... I mean, what a disgusting thing that we're saying. What a horrible message. Who the hell have we become, that we've so bought into the Chicago model that we can't see the humanity that we're destroying in that process? And I hate the thought of communism. I really do. My family has flirted with it in decades past. It's a wrong, bad idea, but we are going to need to figure out how to make sure that those souls are nourished and respected, and capitalism better have an answer. And I'm betting on capitalism, but I've got to tell you, I'm pretty disappointed with my team. So you're still on the capitalism team. You just... there's a theme here. Radical capitalism. Hyper capitalism. Yeah. I think hyper capitalism is going to have to be coupled to hyper socialism. You need to allow the most productive people to create wonders, and you've got to stop bogging them down with all of these extra nice requirements. You know, nice is dead. Good has a future. Nice doesn't have a future, because nice ends up with gulags. Damn, that's a good line. Okay, last question. You tweeted today a simple, quite insightful equation, saying: imagine that for every unit of fame F you picked up S stalkers and H haters. So I imagine S and H are dependent on your path to fame, perhaps, a little bit. Well, it's not that simple. I mean, people always take these things literally when you have like 280 characters to explain yourself. So you mean that that's not a mathematical... No, there's no law. Oh, okay. All right. So I put the word imagine because I still have a mathematician's desire for precision. Imagine that this were true. But it was a beautiful way to imagine that there is a law that has those variables in it. And you've become quite famous these days. So how do you yourself optimize that equation with the peculiar kind of fame that you have gathered along the way? I want to be kinder. I want to be kinder to myself. I want to be kinder to others. I want to be able to have heart, compassion; these things are really important. And I have a pretty spectrumy kind of approach to analysis. I'm quite literal. I can go full Rain Man on you at any given moment. No, I can't. I can't. It's facultative autism, if you like, and people are gonna get angry because they want autism to be respected.
So when you see me coding or you see me doing mathematics, you know, I speak with speech apnea: uh... be... right... down... to... dinner. We have to try to integrate ourselves, and those tensions between, you know, it's sort of back to us as a worker and us as a soul. Many of us are optimizing one at the expense of the other. And I struggle with social media, and I struggle with people making threats against our families, and I struggle with just how much pain people are in. And if there's one message I would like to push out there: you're responsible, everybody, all of us, myself included, for struggling. Struggle mightily, because it's nobody else's job to do your struggle for you. Now, with that said, if you're struggling and you're trying, and you're trying to figure out how to better yourself and where you've failed and where you've let down your family, your friends, your workers, all this kind of stuff, give yourself a break. If it's not working out, well, I have a lifelong relationship with failure and success. There's been no period of my life where both haven't been present in one form or another. And I do wish to say that a lot of times people think this is glamorous. I'm about to go do a show with Sam Harris. People are going to listen in on two guys having a conversation on stage. It's completely crazy, and I'm always trying to figure out how to make sure that those people get maximum value. And that's why I'm doing this podcast, you know. Just give yourself a break. You owe us your struggle. You don't owe your family or your coworkers or your lovers or your family members success. As long as you're in there and you're picking yourself up, recognize that this new situation with the economy, which doesn't have the juice to sustain our institutions, has caused the people who've risen to the top of those institutions to get quite brutal and cruel. Everybody is lying at the moment. Nobody's really a truth teller. Try to keep your humanity about you. Try to recognize that if you're failing, if things aren't where you want them to be, and you're struggling and you're trying to figure out what you're doing wrong, which you could do, it's not necessarily all your fault. We are in a global situation. I have not met the people who are honest, kind, good, and successful. Nobody that I've met is checking all the boxes. Nobody's getting all tens. So I just think that's an important message that doesn't get pushed out enough. Either people want to hold society responsible for their failures, which is not reasonable, you have to struggle, you have to try, or they want to say you're a hundred percent responsible for your failures, which is total nonsense. Beautifully put. Eric, thank you so much for talking today. Thanks for having me, buddy.
Eric Weinstein: Revolutionary Ideas in Science, Math, and Society | Lex Fridman Podcast #16
The following is a conversation with Greg Brockman. He's the cofounder and CTO of OpenAI, a world class research organization developing ideas in AI with a goal of eventually creating a safe and friendly artificial general intelligence, one that benefits and empowers humanity. OpenAI is not only a source of publications, algorithms, tools, and data sets; their mission is a catalyst for an important public discourse about our future with both narrow and general intelligence systems. This conversation is part of the Artificial Intelligence podcast at MIT and beyond. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F R I D. And now, here's my conversation with Greg Brockman. So in high school, and right after, you wrote a draft of a chemistry textbook that covers everything from the basic structure of the atom to quantum mechanics. So it's clear you have an intuition and a passion for both the physical world, with chemistry and now robotics, and the digital world, with AI, deep learning, reinforcement learning, and so on. Do you see the physical world and the digital world as different? And what do you think is the gap? A lot of it actually boils down to iteration speed. I think that a lot of what really motivates me is building things. I think about mathematics, for example, where you think really hard about a problem. You understand it. You write it down in this very obscure form that we call a proof. But then, this is in humanity's library. It's there forever. This is some truth that we've discovered. Maybe only five people in your field will ever read it. But somehow, you've kind of moved humanity forward. And so I actually used to really think that I was going to be a mathematician. And then I actually started writing this chemistry textbook. One of my friends told me, you'll never publish it because you don't have a PhD. So instead, I decided to build a website and try to promote my ideas that way. And then I discovered programming. And in programming, you think hard about a problem. You understand it. You write it down in a very obscure form that we call a program. But then once again, it's in humanity's library. And anyone can get the benefit from it. And the scalability is massive. And so I think that the thing that really appeals to me about the digital world is that you can have this insane leverage. A single individual with an idea is able to affect the entire planet. And that's something I think is really hard to do if you're moving around physical atoms. But you said mathematics. So if you look at the wet thing over here, our mind, do you ultimately see it as just math, as just information processing? Or is there some other magic, as you've seen, if you've seen through biology and chemistry and so on? Yeah, I think it's really interesting to think about humans as just information processing systems. And that seems like it's actually a pretty good way of describing a lot of how the world works or a lot of what we're capable of. To think that, again, if you just look at technological innovations over time, in some ways the most transformative innovation that we've had has been the computer. In some ways, the internet. What has the internet done? The internet is not about these physical cables. It's about the fact that I am suddenly able to instantly communicate with any other human on the planet.
I'm able to retrieve any piece of knowledge that in some ways the human race has ever had, and those are these insane transformations. Do you see our society as a whole, the collective, as another extension of the intelligence of the human being? So if you look at the human being as an information processing system, you mentioned the internet, the networking. Do you see us all together as a civilization as a kind of intelligent system? Yeah, I think this is actually a really interesting perspective to take and to think about, that you sort of have this collective intelligence of all of society. The economy itself is this superhuman machine that is optimizing something, right? And in some ways, a company has a will of its own, right? You have all these individuals who are all pursuing their own individual goals and thinking really hard and thinking about the right things to do, but somehow the company does something that is this emergent thing, and that is a really useful abstraction. And so I think that in some ways, we think of ourselves as the most intelligent things on the planet and the most powerful things on the planet, but there are things that are bigger than us, the systems that we all contribute to. And so I think actually, it's interesting to think about, if you've read Isaac Asimov's Foundation, right, there's this concept of psychohistory in there, which is effectively this: that if you have trillions or quadrillions of beings, then maybe you could actually predict what that being, that huge macro being, will do, almost independent of what the individuals want. And I actually have a second angle on this that I think is interesting, which is thinking about technological determinism. One thing that I actually think a lot about with OpenAI is that we're kind of coming on to this insanely transformational technology of general intelligence that will happen at some point. And there's a question of how can you take actions that will actually steer it to go better rather than worse. And I think one question you need to ask is: as a scientist, as an inventor, as a creator, what impact can you have in general, right? You look at things like the telephone, invented by two people on the same day. Like, what does that mean? What does that mean about the shape of innovation? And I think that what's going on is everyone's building on the shoulders of the same giants. And so you can't really hope to create something no one else ever would. You know, if Einstein wasn't born, someone else would have come up with relativity. He changed the timeline a bit, right; maybe it would have taken another 20 years, but it wouldn't be that fundamentally humanity would never discover these fundamental truths. So there's some kind of invisible momentum that some people, like Einstein or OpenAI, are plugging into that anybody else can also plug into, and ultimately that wave takes us in a certain direction. That's what you mean by technological determinism. That's right, that's right. And you know, this kind of seems to play out in a bunch of different ways, that there's some exponential that is being ridden, and the exponential itself, which one it is, changes. Think about Moore's Law, an entire industry set its clock to it for 50 years. Like, how can that be, right? How is that possible? And yet somehow it happened. And so I think you can't hope to ever invent something that no one else will. Maybe you can change the timeline a little bit.
But if you really want to make a difference, I think that the thing that you really have to do, the only real degree of freedom you have, is to set the initial conditions under which a technology is born. And so you think about the internet, right? There were lots of other competitors trying to build similar things, and the internet won. And the initial conditions were that it was created by this group that really valued anyone being able to plug in, this very academic mindset of being open and connected. And I think that the internet for the next 40 years really played out that way. You know, maybe today things are starting to shift in a different direction. But I think that those initial conditions were really important to determine the next 40 years' worth of progress. That's really beautifully put. So another example that I think about, you know, I recently looked at it. I looked at Wikipedia, the formation of Wikipedia. And I wondered what the internet would be like if Wikipedia had ads. You know, there's an interesting argument about why they chose not to put advertisements on Wikipedia. I think Wikipedia's one of the greatest resources we have on the internet. It's extremely surprising how well it works and how well it was able to aggregate all this kind of good information. And essentially the creator of Wikipedia, I don't know, there's probably some debates there, but set the initial conditions, and now it carried itself forward. That's really interesting. So the way you're thinking about AGI or artificial intelligence is you're focused on setting the initial conditions for the progress. That's right. That's powerful. Okay, so looking to the future, if you create an AGI system, like one that can ace the Turing test, natural language, what do you think would be the interactions you would have with it? What do you think are the questions you would ask? Like what would be the first question you would ask it, her, him? That's right. I think that at that point, if you've really built a powerful system that is capable of shaping the future of humanity, the first question that you really should ask is: how do we make sure that this plays out well? And so that's actually the first question that I would ask a powerful AGI system. So you wouldn't ask your colleague, you wouldn't ask, like, Ilya, you would ask the AGI system? Oh, we've already had the conversation with Ilya, right? And everyone here. And so you want as many perspectives and as much wisdom as you can for answering this question. So I don't think you necessarily defer to whatever your powerful system tells you, but you use it as one input to try to figure out what to do. But I guess fundamentally what it really comes down to is: if you built something really powerful, you think about, for example, shortly after the creation of nuclear weapons, right, the most important question in the world was, what's the world order going to be like? How do we set ourselves up in a place where we're going to be able to survive as a species? With AGI, I think the question is slightly different, right? There is a question of how do we make sure that we don't get the negative effects, but there's also the positive side, right? You imagine, like, what will AGI be like? What will it be capable of? And I think that one of the core reasons that an AGI can be powerful and transformative is actually due to technological development, right?
If you have something that's capable as a human and that it's much more scalable, that you absolutely want that thing to go read the whole scientific literature and think about how to create cures for all the diseases, right? You want it to think about how to go and build technologies to help us create material abundance and to figure out societal problems that we have trouble with. Like how are we supposed to clean up the environment? And maybe you want this to go and invent a bunch of little robots that will go out and be biodegradable and turn ocean debris into harmless molecules. And I think that that positive side is something that I think people miss sometimes when thinking about what an AGI will be like. And so I think that if you have a system that's capable of all of that, you absolutely want its advice about how do I make sure that we're using your capabilities in a positive way for humanity. So what do you think about that psychology that looks at all the different possible trajectories of an AGI system, many of which, perhaps the majority of which are positive, and nevertheless focuses on the negative trajectories? I mean, you get to interact with folks, you get to think about this, maybe within yourself as well. You look at Sam Harris and so on. It seems to be, sorry to put it this way, but almost more fun to think about the negative possibilities. Whatever that's deep in our psychology, what do you think about that? And how do we deal with it? Because we want AI to help us. So I think there's kind of two problems entailed in that question. The first is more of the question of how can you even picture what a world with a new technology will be like? Now imagine we're in 1950, and I'm trying to describe Uber to someone. Apps and the internet. Yeah, I mean, that's going to be extremely complicated. But it's imaginable. It's imaginable, right? And now imagine being in 1950 and predicting Uber, right? And you need to describe the internet, you need to describe GPS, you need to describe the fact that everyone's going to have this phone in their pocket. And so I think that just the first truth is that it is hard to picture how a transformative technology will play out in the world. We've seen that before with technologies that are far less transformative than AGI will be. And so I think that one piece is that it's just even hard to imagine and to really put yourself in a world where you can predict what that positive vision would be like. And I think the second thing is that I think it is always easier to support the negative side than the positive side. It's always easier to destroy than create. And less in a physical sense and more just in an intellectual sense, right? Because I think that with creating something, you need to just get a bunch of things right. And to destroy, you just need to get one thing wrong. And so I think that what that means is that I think a lot of people's thinking dead ends as soon as they see the negative story. But that being said, I actually have some hope, right? I think that the positive vision is something that I think can be, is something that we can talk about. And I think that just simply saying this fact of, yeah, there's positive, there's negatives, everyone likes to dwell on the negative. People actually respond well to that message and say, huh, you're right, there's a part of this that we're not talking about, not thinking about. And that's actually something that's I think really been a key part of how we think about AGI at OpenAI. 
You can kind of look at it as, like, okay, OpenAI talks about the fact that there are risks, and yet they're trying to build this system. How do you square those two facts? So do you share the intuition that some people have, I mean from Sam Harris to even Elon Musk himself, that it's tricky as you develop AGI to keep it from slipping into the existential threats, into the negative? What's your intuition about how hard it is to keep AI development on the positive track? What's your intuition there? To answer that question, you can really look at how we structure OpenAI. So we really have three main arms. We have capabilities, which is actually doing the technical work and pushing forward what these systems can do. There's safety, which is working on technical mechanisms to ensure that the systems we build are aligned with human values. And then there's policy, which is making sure that we have governance mechanisms, answering that question of, well, whose values? And so I think that the technical safety one is the one that people kind of talk about the most, right? Think about all of the dystopic AI movies; a lot of that is about not having good technical safety in place. And what we've been finding is that I think a lot of people look at the technical safety problem and think it's just intractable, right? This question of what do humans want? How am I supposed to write that down? Can I even write down what I want? No way. And then they stop there. But the thing is, we've already built systems that are able to learn things that humans can't specify. You know, even the rules for how to recognize if there's a cat or a dog in an image. Turns out it's intractable to write that down, and yet we're able to learn it. And what we're seeing with systems we build at OpenAI, and they're still in an early proof of concept stage, is that you are able to learn human preferences. You're able to learn what humans want from data. And so that's kind of the core focus for our technical safety team, and I think that there, actually, we've had some pretty encouraging updates in terms of what we've been able to make work. So you have an intuition and a hope that from data, you know, looking at the value alignment problem, from data we can build systems that align with the collective better angels of our nature. So align with the ethics and the morals of human beings. To even say this in a different way, I mean, think about how do we align humans, right? Think about how a human baby can grow up to be an evil person or a great person. And a lot of that is from learning from data, right? You have some feedback as a child is growing up; they get to see positive examples. And so given that the only example we have of a general intelligence that is able to learn from data, to align with human values and to learn values, is a human, I think we shouldn't be surprised if the same sorts of techniques end up being how we solve value alignment for AGIs. So let's go even higher. I don't know if you've read the book Sapiens, but there's an idea that, you know, as a collective, as us human beings, we kind of develop together ideas that we hold. There's no, in that context, objective truth. We just kind of all agree to certain ideas and hold them as a collective.
Do you have a sense that, in the world of good and evil, to a first approximation, there are some things that are good, and that you could teach systems to behave, to be good? So I think that this actually blends into our third team, right, which is the policy team. And this is the aspect I think people really talk about way less than they should, right? Because imagine that we build super powerful systems and we've managed to figure out all the mechanisms for these things to do whatever the operator wants. The most important question becomes: who's the operator, what do they want, and how is that going to affect everyone else, right? And I think that this question of what is good, what are those values, I mean, I think you don't even have to go to those very grand existential places to start to realize how hard this problem is. You just look at different countries and cultures across the world, and there's a very different conception of how the world works and what kinds of ways society wants to operate. And so I think that the really core question is actually very concrete, and I think it's not a question that we have ready answers to, right? It's how do you have a world where all of the different countries that we have, the United States, China, Russia, and the hundreds of other countries out there, are able to continue to operate in the way that they see fit, but where the world that emerges, with these very powerful systems operating alongside humans, ends up being something that empowers humans more, that makes human existence a more meaningful thing, and where people are happier and wealthier and able to live more fulfilling lives. It's not an obvious thing, how to design that world once you have that very powerful system. So if we take a little step back, we're having a fascinating conversation, and OpenAI is in many ways a tech leader in the world, and yet we're thinking about these big existential questions, which is fascinating, really important. I think you're a leader in that space, and that's a really important space, just thinking about how AI affects society in a big picture view. So Oscar Wilde said we're all in the gutter, but some of us are looking at the stars, and I think OpenAI has a charter that looks to the stars, I would say, to create intelligence, to create general intelligence, make it beneficial, safe, and collaborative. So can you tell me how that came about, how a mission like that, and the path to creating a mission like that, at OpenAI was founded? Yeah, so I think that in some ways it really boils down to taking a look at the landscape. So if you think about the history of AI, basically for the past 60 or 70 years, people have thought about this goal of what could happen if you could automate human intellectual labor. Imagine you could build a computer system that could do that; what becomes possible? We have a lot of sci fi that tells stories of various dystopias, and increasingly you have movies like Her that tell you a little bit about maybe a slightly more utopian vision. You think about the impacts that we've seen from being able to have bicycles for our minds in computers, and I think that the impact of computers and the internet has just far outstripped what anyone really could have predicted. And so I think that it's very clear that if you can build an AGI, it will be the most transformative technology that humans will ever create.
And so what it boils down to then is a question of, well, is there a path, is there hope, is there a way to build such a system? And I think that for 60 or 70 years, people got excited and then ended up not being able to deliver on the hopes that had been pinned on them. And I think that then, after two winters of AI development, people kind of almost stopped daring to dream, right? Really talking about AGI or thinking about AGI became almost this taboo in the community. But I actually think that people took the wrong lesson from AI history. And if you look back, starting in 1959 is when the Perceptron was released. And this is basically one of the earliest neural networks. It was released to what was perceived as this massive overhype. So in the New York Times in 1959, you have this article saying that the Perceptron will one day recognize people, call out their names, instantly translate speech between languages. And people at the time looked at this and said, your system can't do any of that. And basically spent 10 years trying to discredit the whole Perceptron direction, and succeeded. And all the funding dried up, and people kind of went in other directions. And in the 80s, there was this resurgence. And I'd always heard that the resurgence in the 80s was due to the invention of backpropagation and these algorithms that got people excited, but actually the causality was due to people building larger computers. You can find these articles from the 80s saying that the democratization of computing power suddenly meant that you could run these larger neural networks. And then people started to do all these amazing things. The backpropagation algorithm was invented. And the neural nets people were running were these tiny little 20 neuron neural nets. What are you supposed to learn with 20 neurons? And so of course they weren't able to get great results. And it really wasn't until 2012 that this approach, that's almost the most simple, natural approach that people had come up with in the 50s, in some ways even in the 40s before there were computers, with the McCulloch–Pitts neuron, suddenly became the best way of solving problems. And I think there are three core properties that deep learning has that I think are very worth paying attention to. The first is generality. We have a very small number of deep learning tools: SGD, deep neural nets, maybe some RL. And it solves this huge variety of problems: speech recognition, machine translation, game playing, all of these problems, small set of tools. So there's the generality. There's a second piece, which is the competence. You want to solve any of those problems? Throw out 40 years' worth of normal computer vision research, replace it with a deep neural net, and it's going to work better. And there's a third piece, which is the scalability. One thing that has been shown time and time again is that if you have a larger neural network and you throw more compute, more data at it, it will work better. Those three properties together feel like essential parts of building a general intelligence. Now it doesn't just mean that if we scale up what we have, we will have an AGI, right? There are clearly missing pieces. There are missing ideas. We need to have answers for reasoning. But I think that the core here is that, for the first time, it feels that we have a paradigm that gives us hope that general intelligence can be achievable. And so as soon as you believe that, everything else comes into focus, right?
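To make the scale point concrete, here is a minimal sketch in PyTorch (an assumption; the conversation does not name a framework) of roughly the kind of 20 neuron network Brockman describes, trained with plain SGD, the same small toolkit that, scaled up by many orders of magnitude, drives modern results. The toy task is XOR, a problem famously out of reach for a single-layer perceptron.

```python
import torch
import torch.nn as nn

# A tiny multilayer perceptron with a single 20-neuron hidden layer,
# on the order of the 1980s-era networks mentioned above.
model = nn.Sequential(
    nn.Linear(2, 20),   # 2 inputs -> 20 hidden neurons
    nn.Tanh(),
    nn.Linear(20, 1),   # 20 hidden neurons -> 1 output
)

# The "small toolkit": a loss function and stochastic gradient descent.
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Toy data: XOR, beyond a single-layer perceptron but easy for this net.
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()   # backpropagation
    optimizer.step()

print(model(x).round())  # should approximate [[0], [1], [1], [0]]
```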
If you imagine that you may be able to, and you know the timeline I think remains uncertain, but I think certainly within our lifetimes and possibly within a much shorter period of time than people would expect, if you can really build the most transformative technology that will ever exist, you stop thinking about yourself so much, right? You start thinking about: how do you have a world where this goes well? And you need to think about the practicalities of how do you build an organization and get together a bunch of people and resources and make sure that people feel motivated and ready to do it. But I think that then you start thinking about, well, what if we succeed? And how do we make sure that when we succeed, the world is actually the place that we want ourselves to exist in, almost in the Rawlsian veil sense of the word? And so that's kind of the broader landscape. And OpenAI was really formed in 2015 with that high level picture, that AGI might be possible sooner than people think, and that we need to try to do our best to make sure it's going to go well. And then we spent the next couple of years really trying to figure out what does that mean? How do we do it? And I think that typically with a company, you start out very small, you and a cofounder, and you build a product, you get some users, you get product market fit. Then at some point you raise some money, you hire people, you scale, and then down the road, the big companies realize you exist and try to kill you. And for OpenAI, it was basically everything in exactly the opposite order. Let me just pause for a second. You said a lot of things, and let me just admire the jarring aspect of what OpenAI stands for, which is daring to dream. I mean, you said it, it's pretty powerful. It caught me off guard, because I think that's very true. The step of just daring to dream about the possibilities of creating intelligence in a positive, in a safe way, but just even creating intelligence, is a much needed, refreshing catalyst for the AI community. So that's the starting point. Okay, so then the formation of OpenAI, what was that like? I would just say that when we were starting OpenAI, kind of the first question that we had is: is it too late to start a lab with a bunch of the best people? Right, is that even possible? Wow, okay. That was an actual question? That was the core question. We had this dinner in July of 2015, and that was really what we spent the whole time talking about. And, you know, because you think about kind of where AI was, it had transitioned from being an academic pursuit to an industrial pursuit. And so a lot of the best people were in these big research labs, and we wanted to start our own one that, no matter how much resources we could accumulate, would pale in comparison to the big tech companies. And we knew that. And it was a question of: are we actually going to be able to get this thing off the ground? You need critical mass. You can't just have you and a cofounder build a product. You really need to have a group of five to ten people. And we kind of concluded it wasn't obviously impossible, so it seemed worth trying. Well, you're also a dreamer, so who knows, right? That's right. Okay, so speaking of that, competing with the big players, let's talk about some of the tricky things as you think through this process of growing, of seeing how you can develop these systems at a scale that competes.
So you recently formed OpenAI LP, a new capped-profit company that now carries the name OpenAI. So OpenAI is now this official company. The original nonprofit company still exists and carries the OpenAI nonprofit name. So can you explain what this company is, what the purpose of its creation is, and how did you arrive at the decision to create it? OpenAI, the whole entity, and OpenAI LP as a vehicle, is trying to accomplish the mission of ensuring that artificial general intelligence benefits everyone. And the main way that we're trying to do that is by actually trying to build general intelligence ourselves and make sure the benefits are distributed to the world. That's the primary way. We're also fine if someone else does this, right? It doesn't have to be us. If someone else is going to build an AGI and make sure that the benefits don't get locked up in one company or with one set of people, we're actually fine with that. And so those ideas are baked into our charter, which is kind of the foundational document that describes our values and how we operate. But it's also really baked into the structure of OpenAI LP. And so the way that we've set up OpenAI LP is that in the case where we succeed, right, if we actually build what we're trying to build, then investors are able to get a return, but that return is something that is capped. And so if you think of AGI in terms of the value that you could really create, you're talking about the most transformative technology ever created; it's going to create orders of magnitude more value than any existing company, and all of that value will be owned by the world, legally titled to the nonprofit, to fulfill that mission. And so that's the structure. So the mission is a powerful one, and it's one that I think most people would agree with. It's how we would hope AI progresses. And so how do you tie yourself to that mission? How do you make sure you do not deviate from that mission, that other incentives that are profit driven don't interfere with the mission? So this was actually a really core question for us for the past couple of years, because I'd say that the way our history went was that for the first year, we were getting off the ground, right? We had this high level picture, but we didn't know exactly how we wanted to accomplish it. And really two years ago is when we first started realizing that in order to build AGI, we're just going to need to raise way more money than we can as a nonprofit. And we're talking many billions of dollars. And so the first question is: how are you supposed to do that and stay true to this mission? And we looked at every legal structure out there and concluded none of them were quite right for what we wanted to do. And I guess it shouldn't be too surprising that if you're gonna do some crazy unprecedented technology, you're gonna have to come up with some crazy unprecedented structure to do it in. And a lot of our conversation was with people at OpenAI, the people who really joined because they believe so much in this mission, thinking about how do we actually raise the resources to do it and also stay true to what we stand for. And the place you gotta start is to really align on what is it that we stand for, right? What are those values? What's really important to us?
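As an illustration of the capped-return structure described above, here is a small sketch in Python. The cap multiple and the dollar figures are hypothetical, chosen only to show the mechanics: investors receive at most a fixed multiple of what they put in, and anything generated beyond that flows to the nonprofit's mission.

```python
def split_returns(value_created, invested, cap_multiple):
    """Split value between capped investors and the nonprofit.

    value_created: total value attributable to the investment (hypothetical)
    invested: amount the investors put in
    cap_multiple: maximum return multiple investors can receive
    """
    investor_cap = invested * cap_multiple
    to_investors = min(value_created, investor_cap)
    to_nonprofit = value_created - to_investors
    return to_investors, to_nonprofit

# Hypothetical numbers: $1B invested, a 100x cap, $1T of value created.
investors, nonprofit = split_returns(
    value_created=1_000_000_000_000,
    invested=1_000_000_000,
    cap_multiple=100,
)
print(investors, nonprofit)  # $100B to investors, the remainder to the mission
```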
And so I'd say that we spent about a year really compiling the OpenAI charter, and if you even look at the first line item in there, it says that, look, we expect we're gonna have to marshal huge amounts of resources, but we're going to make sure that we minimize conflict of interest with the mission. And aligning on all of those pieces was the most important step towards figuring out how do we structure a company that can actually raise the resources to do what we need to do. I imagine the decision to create OpenAI LP was a really difficult one, and there was a lot of discussion, as you mentioned, for a year, and there were different ideas, perhaps detractors within OpenAI, sort of different paths that you could have taken. What were those concerns? What were the different paths considered? What was that process of making that decision like? Yep, so if you look actually at the OpenAI charter, there are almost two paths embedded within it. There is: we are primarily trying to build AGI ourselves, but we're also okay if someone else does it. And this is a weird thing for a company. It's really interesting, actually. There is an element of competition, that you do wanna be the one that does it, but at the same time, you're okay if somebody else does. We'll talk about that a little bit, that trade off, that dance that's really interesting. And I think this was the core tension as we were designing OpenAI LP, and really the OpenAI strategy: how do you make sure that you have a shot at being a primary actor, which really requires building an organization, raising massive resources, and really having the will to go and execute on some really, really hard vision, right? You need to really sign up for a long period to go and take on a lot of pain and a lot of risk. And to do that, normally you just import the startup mindset, right? You think about, okay, how do we out execute everyone? You have this very competitive angle. But you also have the second angle of saying that, well, the true mission isn't for OpenAI to build AGI. The true mission is for AGI to go well for humanity. And so how do you take all of those first actions and make sure you don't close the door on outcomes that would actually be positive and fulfill the mission? And so I think it's a very delicate balance, right? And I think that going 100% one direction or the other is clearly not the correct answer. And so I think that even in terms of just how we talk about OpenAI and think about it, there's one thing that's always in the back of my mind, which is to make sure that we're not just saying OpenAI's goal is to build AGI, right? It's actually much broader than that, right? First of all, it's not just AGI, it's safe AGI; that's very important. But secondly, our goal isn't to be the ones to build it. Our goal is to make sure it goes well for the world. And so I think that figuring out how do you balance all of those, and getting people to really come to the table and compile a single document that encompasses all of that, wasn't trivial. So part of the challenge here is your mission is, I would say, beautiful, empowering, and a beacon of hope for people in the research community and just people thinking about AI. So your decisions are scrutinized more than, I think, a regular profit driven company. Do you feel the burden of this in the creation of the charter and just in the way you operate? Yes. So why do you lean into the burden by creating such a charter?
Why not keep it quiet? I mean, it just boils down to the mission, right? Like I'm here and everyone else is here because we think this is the most important mission. Dare to dream. All right, so do you think you can be good for the world or create an AGI system that's good when you're a for profit company? From my perspective, I don't understand why profit interferes with positive impact on society. I don't understand why Google, that makes most of its money from ads, can't also do good for the world or other companies, Facebook, anything. I don't understand why those have to interfere. You know, profit isn't the thing, in my view, that affects the impact of a company. What affects the impact of the company is the charter, is the culture, is the people inside, and profit is the thing that just fuels those people. So what are your views there? Yeah, so I think that's a really good question and there's some real longstanding debates in human society that are wrapped up in it. The way that I think about it is just think about what are the most impactful non profits in the world? What are the most impactful for profits in the world? Right, it's much easier to list the for profits. That's right, and I think that there's some real truth here that the system that we set up, the system for kind of how today's world is organized, is one that really allows for huge impact. And that kind of part of that is that you need to be, that for profits are self sustaining and able to kind of build on their own momentum. And I think that's a really powerful thing. It's something that when it turns out that we haven't set the guardrails correctly, causes problems, right? Think about logging companies that go into forest, the rainforest, that's really bad, we don't want that. And it's actually really interesting to me that kind of this question of how do you get positive benefits out of a for profit company, it's actually very similar to how do you get positive benefits out of an AGI, right? That you have this like very powerful system, it's more powerful than any human, and is kind of autonomous in some ways, it's superhuman in a lot of axes, and somehow you have to set the guardrails to get good things to happen. But when you do, the benefits are massive. And so I think that when I think about nonprofit versus for profit, I think just not enough happens in nonprofits, they're very pure, but it's just kind of, it's just hard to do things there. In for profits in some ways, like too much happens, but if kind of shaped in the right way, it can actually be very positive. And so with OpenAI LP, we're picking a road in between. Now the thing that I think is really important to recognize is that the way that we think about OpenAI LP is that in the world where AGI actually happens, right, in a world where we are successful, we build the most transformative technology ever, the amount of value we're gonna create will be astronomical. And so then in that case, that the cap that we have will be a small fraction of the value we create, and the amount of value that goes back to investors and employees looks pretty similar to what would happen in a pretty successful startup. And that's really the case that we're optimizing for, right? That we're thinking about in the success case, making sure that the value we create doesn't get locked up. And I expect that in other for profit companies that it's possible to do something like that. I think it's not obvious how to do it, right? 
I think that as a for profit company, you have a lot of fiduciary duty to your shareholders and that there are certain decisions that you just cannot make. In our structure, we've set it up so that we have a fiduciary duty to the charter, that we always get to make the decision that is right for the charter rather than, even if it comes at the expense of our own stakeholders. And so I think that when I think about what's really important, it's not really about nonprofit versus for profit, it's really a question of if you build AGI and you kind of, humanity's now in this new age, who benefits, whose lives are better? And I think that what's really important is to have an answer that is everyone. Yeah, which is one of the core aspects of the charter. So one concern people have, not just with OpenAI, but with Google, Facebook, Amazon, anybody really that's creating impact at scale is how do we avoid, as your charter says, avoid enabling the use of AI or AGI to unduly concentrate power? Why would not a company like OpenAI keep all the power of an AGI system to itself? The charter. The charter. So how does the charter actualize itself in day to day? So I think that first, to zoom out, that the way that we structure the company is so that the power for sort of dictating the actions that OpenAI takes ultimately rests with the board, the board of the nonprofit. And the board is set up in certain ways with certain restrictions that you can read about in the OpenAI LP blog post. But effectively the board is the governing body for OpenAI LP. And the board has a duty to fulfill the mission of the nonprofit. And so that's kind of how we tie, how we thread all these things together. Now there's a question of, so day to day, how do people, the individuals, who in some ways are the most empowered ones, right? Now the board sort of gets to call the shots at the high level, but the people who are actually executing are the employees, right? People here on a day to day basis who have the keys to the technical whole kingdom. And there I think that the answer looks a lot like, well, how does any company's values get actualized, right? And I think that a lot of that comes down to that you need people who are here because they really believe in that mission and they believe in the charter and that they are willing to take actions that maybe are worse for them, but are better for the charter. And that's something that's really baked into the culture. And honestly, I think it's, you know, I think that that's one of the things that we really have to work to preserve as time goes on. And that's a really important part of how we think about hiring people and bringing people into OpenAI. So there's people here, there's people here who could speak up and say, like, hold on a second, this is totally against what we stand for, culture wise. Yeah, yeah, for sure. I mean, I think that we actually have, I think that's like a pretty important part of how we operate and how we have, even again with designing the charter and designing OpenAI LP in the first place, that there has been a lot of conversation with employees here and a lot of times where employees said, wait a second, this seems like it's going in the wrong direction and let's talk about it. 
And so I think one thing that's I think a really, and you know, here's actually one thing that I think is very unique about us as a small company, is that if you're at a massive tech giant, that's a little bit hard for someone who's a line employee to go and talk to the CEO and say, I think that we're doing this wrong. And you know, you'll get companies like Google that have had some collective action from employees to make ethical change around things like Maven. And so maybe there are mechanisms at other companies that work. But here, super easy for anyone to pull me aside, to pull Sam aside, to pull Ilya aside, and people do it all the time. One of the interesting things in the charter is this idea that it'd be great if you could try to describe or untangle switching from competition to collaboration in late stage AGI development. It's really interesting, this dance between competition and collaboration. How do you think about that? Yeah, assuming that you can actually do the technical side of AGI development, I think there's going to be two key problems with figuring out how do you actually deploy it, make it go well. The first one of these is the run up to building the first AGI. You look at how self driving cars are being developed, and it's a competitive race. And the thing that always happens in competitive race is that you have huge amounts of pressure to get rid of safety. And so that's one thing we're very concerned about, is that people, multiple teams figuring out we can actually get there, but if we took the slower path that is more guaranteed to be safe, we will lose. And so we're going to take the fast path. And so the more that we can both ourselves be in a position where we don't generate that competitive race, where we say, if the race is being run and that someone else is further ahead than we are, we're not going to try to leapfrog. We're going to actually work with them, right? We will help them succeed. As long as what they're trying to do is to fulfill our mission, then we're good. We don't have to build AGI ourselves. And I think that's a really important commitment from us, but it can't just be unilateral, right? I think that it's really important that other players who are serious about building AGI make similar commitments, right? I think that, again, to the extent that everyone believes that AGI should be something to benefit everyone, then it actually really shouldn't matter which company builds it. And we should all be concerned about the case where we just race so hard to get there that something goes wrong. So what role do you think government, our favorite entity, has in setting policy and rules about this domain, from research to the development to early stage to late stage AI and AGI development? So I think that, first of all, it's really important that government's in there, right? In some way, shape, or form. At the end of the day, we're talking about building technology that will shape how the world operates, and that there needs to be government as part of that answer. And so that's why we've done a number of different congressional testimonies, we interact with a number of different lawmakers, and that right now, a lot of our message to them is that it's not the time for regulation, it is the time for measurement, right? 
That our main policy recommendation is that people, and the government does this all the time with bodies like NIST, spend time trying to figure out just where the technology is, how fast it's moving, and can really become literate and up to speed with respect to what to expect. So I think that today, the answer really is about measurement, and I think there will be a time and place where that will change. And I think it's a little bit hard to predict exactly what that trajectory should look like. So there will be a point at which regulation, federal in the United States, the government steps in and helps be the, I don't wanna say the adult in the room, to make sure that there are strict rules, maybe conservative rules, that nobody can cross. Well, I think there are maybe two angles to it. So today, with narrow AI applications, I think there are already existing bodies that are responsible and should be responsible for regulation. You think about, for example, with self driving cars, that you want the national highway... NHTSA. Yeah, exactly, to be regulating that. That makes sense, right? Basically what we're saying is that we're going to have these technological systems that are going to be performing applications that humans already do, great. We already have ways of thinking about standards and safety for those. So I think actually empowering those regulators today is also pretty important. And then I think for AGI, there's going to be a point where we'll have better answers, and I think that maybe a similar approach of first measurement and then starting to think about what the rules should be. I think it's really important that we don't prematurely squash progress. I think it's very easy to kind of smother a budding field, and I think that's something to really avoid. But I don't think that the right way of doing it is to say, let's just try to blaze ahead and not involve all these other stakeholders. So you recently released a paper on GPT2 language modeling, but did not release the full model because you had concerns about the possible negative effects of the availability of such a model. Outside of just that decision, it's super interesting because of the discussion it creates at a societal level, the discourse. So it's fascinating in that aspect. But sticking to the specifics here at first, what are some negative effects that you envisioned? And of course, what are some of the positive effects? Yeah, so again, I think to zoom out, the way that we thought about GPT2 is that with language modeling, we are clearly on a trajectory right now where we scale up our models and we get qualitatively better performance. GPT2 itself was actually just a scale up of a model that we released the previous June. We just ran it at a much larger scale, and we got these results where it was suddenly starting to write coherent prose, which was not something we'd seen previously. And what are we doing now? Well, we're gonna scale up GPT2 by 10x, by 100x, by 1000x, and we don't know what we're gonna get. And so it's very clear that the model that we released last June is kind of a good academic toy. It's not something that we think can really have negative applications, or to the extent that it can, the positive of people being able to play with it far outweighs the possible harms. You fast forward to not GPT2, but GPT20, and you think about what that's gonna be like. And I think that the capabilities are going to be substantive.
And so there needs to be a point in between the two where you say, this is something where we are drawing the line and that we need to start thinking about the safety aspects. And I think for GPT2, we could have gone either way. And in fact, when we had conversations internally that we had a bunch of pros and cons, and it wasn't clear which one outweighed the other. And I think that when we announced that, hey, we decide not to release this model, then there was a bunch of conversation where various people said, it's so obvious that you should have just released it. There are other people said, it's so obvious you should not have released it. And I think that that almost definitionally means that holding it back was the correct decision. Right, if it's not obvious whether something is beneficial or not, you should probably default to caution. And so I think that the overall landscape for how we think about it is that this decision could have gone either way. There are great arguments in both directions, but for future models down the road and possibly sooner than you'd expect, because scaling these things up doesn't actually take that long, those ones you're definitely not going to want to release into the wild. And so I think that we almost view this as a test case and to see, can we even design, you know, how do you have a society or how do you have a system that goes from having no concept of responsible disclosure, where the mere idea of not releasing something for safety reasons is unfamiliar to a world where you say, okay, we have a powerful model, let's at least think about it, let's go through some process. And you think about the security community, it took them a long time to design responsible disclosure, right? You know, you think about this question of, well, I have a security exploit, I send it to the company, the company is like, tries to prosecute me or just sit, just ignores it, what do I do, right? And so, you know, the alternatives of, oh, I just always publish your exploits, that doesn't seem good either, right? And so it really took a long time and took this, it was bigger than any individual, right? It's really about building a whole community that believe that, okay, we'll have this process where you send it to the company, you know, if they don't act in a certain time, then you can go public and you're not a bad person, you've done the right thing. And I think that in AI, part of the response at GPT2 just proves that we don't have any concept of this. So that's the high level picture. And so I think that, I think this was a really important move to make and we could have maybe delayed it for GPT3, but I'm really glad we did it for GPT2. And so now you look at GPT2 itself and you think about the substance of, okay, what are potential negative applications? So you have this model that's been trained on the internet, which, you know, it's also going to be a bunch of very biased data, a bunch of, you know, very offensive content in there, and you can ask it to generate content for you on basically any topic, right? You just give it a prompt and it'll just start writing and it writes content like you see on the internet, you know, even down to like saying advertisement in the middle of some of its generations. And you think about the possibilities for generating fake news or abusive content. 
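As an aside, the "give it a prompt and it'll just start writing" workflow described here can be reproduced today with the publicly released small GPT2 checkpoint. The sketch below is a minimal illustration, assuming the Hugging Face transformers package rather than OpenAI's own tooling; the prompt string and sampling settings are arbitrary examples.

# Minimal sketch: prompting the publicly released small GPT-2 checkpoint.
# Uses the Hugging Face "transformers" library (an assumption, not OpenAI's internal tooling);
# "gpt2" names the small model released alongside the paper.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In a surprising turn of events, researchers announced today that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling a continuation: the model just keeps predicting the next token,
# which is all "give it a prompt and it starts writing" amounts to.
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=40,
    temperature=0.8,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))

Settings like top_k and temperature control how adventurous the continuations are; the released small model writes noticeably less coherent text than the larger models discussed in the conversation.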
And, you know, it's interesting seeing what people have done with, you know, we released a smaller version of GPT2 and people have done things like try to generate, you know, take my own Facebook message history and generate more Facebook messages like me, and people generating fake politician content, or, you know, there's a bunch of things there where you at least have to think, is this going to be good for the world? There's the flip side, which is I think that there's a lot of awesome applications that we really want to see, like creative applications, in terms of if you have sci fi authors that can work with this tool and come up with cool ideas, like, that seems awesome if we can write better sci fi through the use of these tools. And we've actually had a bunch of people write into us asking, hey, can we use it for, you know, a variety of different creative applications? So the positives are actually pretty easy to imagine. The usual NLP applications are really interesting, but let's go there. It's kind of interesting to think about a world where, look at Twitter, where not just fake news, but smarter and smarter bots are able to spread, in an interesting, complex, networked way, information that just floods out us regular human beings with our original thoughts. So what are your views of this world with GPT20, right? How do we think about it? Again, it's like one of those things about, in the 50s, trying to describe the internet or the smartphone. What do you think about that world, the nature of information? One possibility is that we'll always try to design systems that identify robot versus human, and we'll do so successfully, and so we'll authenticate that we're still human. And the other world is that we just accept the fact that we're swimming in a sea of fake news and just learn to swim there. Well, have you ever seen the popular meme of a robot with a physical arm and pen clicking the I'm not a robot button? Yeah. I think the truth is that really trying to distinguish between robot and human is a losing battle. Ultimately, you think it's a losing battle? I think it's a losing battle ultimately, right? I think that that is, in terms of the content, in terms of the actions that you can take. I mean, think about how captchas have gone, right? Captchas used to be very nice and simple. You just have this image, all of our OCR is terrible, you put a couple of artifacts in it, humans are gonna be able to tell what it is, an AI system wouldn't be able to. Today, I can barely do captchas. And I think that this is just kind of where we're going. I think captchas were a moment in time thing, and as AI systems become more powerful, having human capabilities that can be measured in a very easy, automated way that AIs will not be capable of, I think that's just an increasingly hard technical battle. But it's not that all hope is lost, right? You think about how we already authenticate ourselves, right? We have systems, we have social security numbers if you're in the US, or you have ways of identifying individual people, and having real world identity tied to digital identity seems like a step towards authenticating the source of content rather than the content itself. Now, there are problems with that. How can you have privacy and anonymity in a world where the only way you can trust content is by looking at where it comes from?
And so I think that building out good reputation networks may be one possible solution. But yeah, I think that this question is not an obvious one. And I think that we, maybe sooner than we think, will be in a world where today I often will read a tweet and be like, hmm, do I feel like a real human wrote this? Or do I feel like this is genuine? I feel like I can kind of judge the content a little bit. And I think in the future, it just won't be the case. You look at, for example, the FCC comments on net neutrality. It came out later that millions of those were auto generated and that the researchers were able to do various statistical techniques to do that. What do you do in a world where those statistical techniques don't exist? It's just impossible to tell the difference between humans and AIs. And in fact, the most persuasive arguments are written by AI. All that stuff, it's not sci fi anymore. You look at GPT2 making a great argument for why recycling is bad for the world. You gotta read that and be like, huh, you're right. We are addressing just the symptoms. Yeah, that's quite interesting. I mean, ultimately it boils down to the physical world being the last frontier of proving, so you said like basically networks of people, humans vouching for humans in the physical world. And somehow the authentication ends there. I mean, if I had to ask you, I mean, you're way too eloquent for a human. So if I had to ask you to authenticate, like prove how do I know you're not a robot and how do you know I'm not a robot? Yeah. I think that's so far where in this space, this conversation we just had, the physical movements we did, is the biggest gap between us and AI systems is the physical manipulation. So maybe that's the last frontier. Well, here's another question is why is, why is solving this problem important, right? Like what aspects are really important to us? And I think that probably where we'll end up is we'll hone in on what do we really want out of knowing if we're talking to a human. And I think that, again, this comes down to identity. And so I think that the internet of the future, I expect to be one that will have lots of agents out there that will interact with you. But I think that the question of is this flesh, real flesh and blood human or is this an automated system, may actually just be less important. Let's actually go there. It's GPT2 is impressive and let's look at GPT20. Why is it so bad that all my friends are GPT20? Why is it so important on the internet, do you think, to interact with only human beings? Why can't we live in a world where ideas can come from models trained on human data? Yeah, I think this is actually a really interesting question. This comes back to the how do you even picture a world with some new technology? And I think that one thing that I think is important is, you know, let's say honesty. And I think that if you have almost in the Turing test style sense of technology, you have AIs that are pretending to be humans and deceiving you. I think that feels like a bad thing, right? I think that it's really important that we feel like we're in control of our environment, right? That we understand who we're interacting with. And if it's an AI or a human, that's not something that we're being deceived about. But I think that the flip side of can I have as meaningful of an interaction with an AI as I can with a human? Well, I actually think here you can turn to sci fi. And her I think is a great example of asking this very question, right? 
One thing I really love about her is it really starts out almost by asking how meaningful are human virtual relationships, right? And then you have a human who has a relationship with an AI and that you really start to be drawn into that, right? That all of your emotional buttons get triggered in the same way as if there was a real human that was on the other side of that phone. And so I think that this is one way of thinking about it is that I think that we can have meaningful interactions and that if there's a funny joke, some sense it doesn't really matter if it was written by a human or an AI. But what you don't want and why I think we should really draw hard lines is deception. And I think that as long as we're in a world where why do we build AI systems at all, right? The reason we want to build them is to enhance human lives, to make humans be able to do more things, to have humans feel more fulfilled. And if we can build AI systems that do that, sign me up. So the process of language modeling, how far do you think it'd take us? Let's look at movie Her. Do you think a dialogue, natural language conversation is formulated by the Turing test, for example, do you think that process could be achieved through this kind of unsupervised language modeling? So I think the Turing test in its real form isn't just about language, right? It's really about reasoning too, right? To really pass the Turing test, I should be able to teach calculus to whoever's on the other side and have it really understand calculus and be able to go and solve new calculus problems. And so I think that to really solve the Turing test, we need more than what we're seeing with language models. We need some way of plugging in reasoning. Now, how different will that be from what we already do? That's an open question, right? Might be that we need some sequence of totally radical new ideas, or it might be that we just need to kind of shape our existing systems in a slightly different way. But I think that in terms of how far language modeling will go, it's already gone way further than many people would have expected, right? I think that things like, and I think there's a lot of really interesting angles to poke in terms of how much does GPT2 understand physical world? Like, you read a little bit about fire underwater in GPT2. So it's like, okay, maybe it doesn't quite understand what these things are, but at the same time, I think that you also see various things like smoke coming from flame, and a bunch of these things that GPT2, it has no body, it has no physical experience, it's just statically read data. And I think that the answer is like, we don't know yet. These questions, though, we're starting to be able to actually ask them to physical systems, to real systems that exist, and that's very exciting. Do you think, what's your intuition? Do you think if you just scale language modeling, like significantly scale, that reasoning can emerge from the same exact mechanisms? I think it's unlikely that if we just scale GPT2 that we'll have reasoning in the full fledged way. And I think that there's like, the type signature's a little bit wrong, right? That like, there's something we do with, that we call thinking, right? Where we spend a lot of compute, like a variable amount of compute, to get to better answers, right? I think a little bit harder, I get a better answer. And that that kind of type signature isn't quite encoded in a GPT, right? 
GPT has kind of, like, spent a long time, it's like evolutionary history, baking in all this information, getting very, very good at this predictive process. And then at runtime, I just kind of do one forward pass, and I'm able to generate stuff. And so, you know, there might be small tweaks to what we do in order to get the type signature right. For example, well, you know, it's not really one forward pass, right? You generate symbol by symbol, and so maybe you generate like a whole sequence of thoughts, and you only keep like the last bit or something. But I think that at the very least, I would expect you have to make changes like that. Yeah, just exactly how we, as you said, think, is the process of generating thought by thought in the same kind of way, like you said, keep the last bit, the thing that we converge towards. Yep. And I think there's another piece which is interesting, which is this out of distribution generalization, right? That, like, thinking somehow lets us do that, right? That we haven't experienced a thing, and yet somehow we just kind of keep refining our mental model of it. This is, again, something that feels tied to whatever reasoning is, and maybe it's a small tweak to what we do, maybe it's many ideas and will take us many decades. Yeah, so the assumption there, generalization out of distribution, is that it's possible to create new ideas. Mm hmm. You know, it's possible that nobody's ever created any new ideas, and then with scaling GPT2 to GPT20, you would essentially generalize to all possible thoughts that us humans could have. I mean. Just to play devil's advocate. Right, right, right. I mean, how many new story ideas have we come up with since Shakespeare, right? Yeah, exactly. It's just all different forms of love and drama and so on. Okay. Not sure if you've read The Bitter Lesson, a recent blog post by Rich Sutton. Yep, I have. He basically says something that echoes some of the ideas that you've been talking about, which is, he says the biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately going to win out. Do you agree with this? So basically, and OpenAI in general, but the ideas you're exploring about coming up with methods, whether it's GPT2 modeling or whether it's OpenAI Five playing Dota, that a general method is better than a more fine tuned, expert tuned method. Yeah, so I think that, well, one thing that I think was really interesting about the reaction to that blog post was that a lot of people have read this as saying that compute is all that matters. And that's a very threatening idea, right? And I don't think it's a true idea either. Right, it's very clear that we have algorithmic ideas that have been very important for making progress, and to really build AGI, you wanna push as far as you can on the computational scale and you wanna push as far as you can on human ingenuity. And so I think you need both. But I think the way that you phrased the question is actually very good, right? That it's really about what kind of ideas should we be striving for? And absolutely, if you can find a scalable idea, you pour more compute into it, you pour more data into it, it gets better, like, that's the real holy grail.
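To make the "type signature" point from a moment ago concrete: an autoregressive model spends one fixed forward pass per token, so the only obvious knob for "thinking harder" is to sample more intermediate tokens and keep only the tail. The toy sketch below is purely illustrative, with a random stand-in for a real next-token sampler; it is not an OpenAI method.

# Toy illustration of variable "thinking" compute in an autoregressive generator.
# `sample_next_token` is a hypothetical stand-in for a real language model's sampler.
import random

VOCAB = ["the", "answer", "is", "probably", "42", "because", "reasoning", "..."]

def sample_next_token(context):
    # Placeholder: a real model would condition on `context`; here we pick randomly.
    return random.choice(VOCAB)

def answer_with_thinking(prompt, think_tokens=50, keep_last=10):
    """Generate `think_tokens` of intermediate 'thought', return only the last bit."""
    context = prompt.split()
    for _ in range(think_tokens):          # variable compute: more tokens = more "thinking"
        context.append(sample_next_token(context))
    return " ".join(context[-keep_last:])  # keep only the tail, discard the scratch work

print(answer_with_thinking("What is six times seven?", think_tokens=100))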
And so I think that the answer to the question, I think, is yes, that that's really how we think about it, and that part of why we're excited about the power of deep learning, the potential for building AGI, is because we look at the systems that exist, the most successful AI systems, and we realize that if you scale those up, they're gonna work better. And I think that that scalability is something that really gives us hope for being able to build transformative systems. So I'll tell you, this is partially an emotional response that people often have: if compute is so important for state of the art performance, individual developers, maybe a 13 year old sitting somewhere in Kansas or something like that, they might not even have a GPU, or may have a single GPU, a 1080 or something like that, and there's this feeling like, well, how can I possibly compete or contribute to this world of AI if scale is so important? So if you can comment on that, and in general, do you think we need to also, in the future, focus on democratizing compute resources more, or as much as we democratize the algorithms? Well, so the way that I think about it is that there's this space of possible progress, right? There's a space of ideas and sort of systems that will work, that will move us forward, and there's a portion of that space, and to some extent an increasingly significant portion of that space, that does just require massive compute resources. And for that, I think that the answer is kind of clear, and part of why we have the structure that we do is because we think it's really important to be pushing the scale and to be building these large clusters and systems. But there's another portion of the space that isn't about the large scale compute, that is these ideas that, and again, I think that for the ideas to really be impactful and really shine, they should be ideas that, if you scale them up, would work way better than they do at small scale, but that you can discover without massive computational resources. And if you look at the history of recent developments, you think about things like the GAN or the VAE, these are ones that I think you could come up with, and in practice people did come up with, without having massive, massive computational resources. Right, I just talked to Ian Goodfellow, but the thing is the initial GAN produced pretty terrible results, right? It was only because they were smart enough to know that it's quite surprising it can generate anything at all. Do you see a world, or is that too optimistic and dreamer like to imagine, that the compute resources are something that's owned by governments and provided as a utility? Actually, to some extent, this question reminds me of a blog post from one of my former professors at Harvard, this guy Matt Welsh, who was a systems professor. I remember sitting in his tenure talk, right, and he had literally just gotten tenure. He went to Google for the summer and then decided he wasn't going back to academia, right? And kind of in his blog post, he makes this point that, look, as a systems researcher, I come up with these cool system ideas, right, and I kind of build a little proof of concept, and the best thing I can hope for is that the people at Google or Yahoo, which was around at the time, will implement it and actually make it work at scale, right? That's like the dream for me, right?
I build the little thing, and they turn it into the big thing that's actually working. And for him, he said, I'm done with that. I want to be the person who's actually doing the building and deploying. And I think that there's a similar dichotomy here, right? I think that there are people who really actually find value, and I think it is a valuable thing to do, to be the person who produces those ideas, right, who builds the proof of concept. And yeah, you don't get to generate the coolest possible GAN images, but you invented the GAN, right? And so there's a real trade off there, and I think that that's a very personal choice, but I think there's value in both sides. So do you think creating AGI or some new models, we would see echoes of the brilliance even at the prototype level? So you would be able to develop those ideas without scale, the initial seeds. So take a look at, you know, I always like to look at examples that exist, right? Look at real precedent. And so take a look at the June 2018 model that we released, that we scaled up to turn into GPT2. And you can see that at small scale, it set some records, right? This was the original GPT. We actually had some cool generations. They weren't nearly as amazing and really stunning as the GPT2 ones, but it was promising. It was interesting. And so I think it is the case that with a lot of these ideas, you see promise at small scale. But there is an asterisk here, a very big asterisk, which is sometimes we see behaviors that emerge that are qualitatively different from anything we saw at small scale, and that the original inventor of whatever algorithm looks at and says, I didn't think it could do that. This is what we saw in Dota, right? So PPO was created by John Schulman, who's a researcher here. And with Dota, we basically just ran PPO at massive, massive scale. And there's some tweaks in order to make it work, but fundamentally, it's PPO at the core. And we were able to get this long term planning, these behaviors, to really play out on a time scale that we just thought was not possible. And John looked at that and was like, I didn't think it could do that. That's what happens when you're at three orders of magnitude more scale than you tested at. Yeah, but it still has the same flavors of, you know, at least echoes of the expected brilliance. Although I suspect with GPT scaled more and more, you might get surprising things. So yeah, you're right, it's interesting. It's difficult to see how far an idea will go when it's scaled. It's an open question. Well, so to that point with Dota and PPO, like, I mean, here's a very concrete one, right? It's actually one thing that's very surprising about Dota that I think people don't really pay that much attention to, which is the degree of generalization out of distribution that happens, right? That you have this AI that's trained against other bots for the entirety of its existence. Sorry to take a step back. Can you talk through, you know, the story of Dota, the story leading up to OpenAI Five and that path, and what was the process of self play and so on of training on this? Yeah, yeah, yeah. So with Dota. What is Dota? Yeah, Dota is a complex video game, and we started trying to solve Dota because we felt like this was a step towards the real world relative to other games like chess or Go, right? Those very cerebral games where you just kind of have this board, very discrete moves.
Dota starts to be much more continuous time that you have this huge variety of different actions that you have a 45 minute game with all these different units and it's got a lot of messiness to it that really hasn't been captured by previous games. And famously, all of the hard coded bots for Dota were terrible, right? It's just impossible to write anything good for it because it's so complex. And so this seemed like a really good place to push what's the state of the art in reinforcement learning. And so we started by focusing on the one versus one version of the game and we're able to solve that. We're able to beat the world champions and the skill curve was this crazy exponential, right? And it was like constantly we were just scaling up that we were fixing bugs and that you look at the skill curve and it was really a very, very smooth one. This is actually really interesting to see how that human iteration loop yielded very steady exponential progress. And to one side note, first of all, it's an exceptionally popular video game. The side effect is that there's a lot of incredible human experts at that video game. So the benchmark that you're trying to reach is very high. And the other, can you talk about the approach that was used initially and throughout training these agents to play this game? Yep, and so the approach that we used is self play. And so you have two agents that don't know anything. They battle each other, they discover something a little bit good and now they both know it. And they just get better and better and better without bound. And that's a really powerful idea, right? That we then went from the one versus one version of the game and scaled up to five versus five, right? So you think about kind of like with basketball where you have this like team sport and you need to do all this coordination and we were able to push the same idea, the same self play to really get to the professional level at the full five versus five version of the game. And the things I think are really interesting here is that these agents, in some ways, they're almost like an insect like intelligence, right? Where they have a lot in common with how an insect is trained, right? An insect kind of lives in this environment for a very long time or the ancestors of this insect have been around for a long time and had a lot of experience that gets baked into this agent. And it's not really smart in the sense of a human, right? It's not able to go and learn calculus, but it's able to navigate its environment extremely well. And it's able to handle unexpected things in the environment that it's never seen before pretty well. And we see the same sort of thing with our Dota bots, right? That they're able to, within this game, they're able to play against humans, which is something that never existed in its evolutionary environment, totally different play styles from humans versus the bots. And yet it's able to handle it extremely well. And that's something that I think was very surprising to us, was something that doesn't really emerge from what we've seen with PPO at smaller scale, right? And the kind of scale we're running this stuff at was, I could say like 100,000 CPU cores running with like hundreds of GPUs. It was probably about something like hundreds of years of experience going into this bot every single real day. And so that scale is massive and we start to see very different kinds of behaviors out of the algorithms that we all know and love. 
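The self play loop described here can be shown at toy scale. The sketch below plays rock paper scissors between two copies of one policy and nudges the shared parameters toward whatever won; it is only an illustration of the core idea, not the PPO-at-scale setup actually used for OpenAI Five.

# Toy self-play: one policy plays itself, and "both" players improve because
# they share the same parameters. This is the core idea at the smallest scale.
import random
import math

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

logits = [0.0, 0.0, 0.0]  # shared policy parameters for both players

def sample_action():
    probs = [math.exp(l) for l in logits]
    z = sum(probs)
    probs = [p / z for p in probs]
    r, acc = random.random(), 0.0
    for idx, p in enumerate(probs):
        acc += p
        if r <= acc:
            return idx
    return len(probs) - 1

for step in range(20000):
    a1, a2 = sample_action(), sample_action()   # the policy battles a copy of itself
    if a1 == a2:
        continue
    winner = a1 if BEATS[ACTIONS[a1]] == ACTIONS[a2] else a2
    loser = a2 if winner == a1 else a1
    logits[winner] += 0.01                       # reinforce what worked...
    logits[loser] -= 0.01                        # ...against the current self

probs = [math.exp(l) for l in logits]
print([round(p / sum(probs), 2) for p in probs])
# Naive self-play in this game drifts around the mixed equilibrium rather than
# converging cleanly -- one hint at why the real systems need much more machinery.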
Dota, you mentioned, beat the world expert one v one. And then you weren't able to win five v five this year. Yeah. At the best players in the world. So what's the comeback story? First of all, talk through that. That was an exceptionally exciting event. And what's the following months and this year look like? Yeah, yeah, so one thing that's interesting is that we lose all the time. Because we play. Who's we here? The Dota team at OpenAI. We play the bot against better players than our system all the time. Or at least we used to, right? Like the first time we lost publicly was we went up on stage at the international and we played against some of the best teams in the world and we ended up losing both games, but we gave them a run for their money, right? That both games were kind of 30 minutes, 25 minutes and they went back and forth, back and forth, back and forth. And so I think that really shows that we're at the professional level and that kind of looking at those games, we think that the coin could have gone a different direction and we could have had some wins. That was actually very encouraging for us. And it's interesting because the international was at a fixed time, right? So we knew exactly what day we were going to be playing and we pushed as far as we could, as fast as we could. Two weeks later, we had a bot that had an 80% win rate versus the one that played at TI. So the march of progress, you should think of it as a snapshot rather than as an end state. And so in fact, we'll be announcing our finals pretty soon. I actually think that we'll announce our final match prior to this podcast being released. So we'll be playing against the world champions. And for us, it's really less about, like the way that we think about what's upcoming is the final milestone, the final competitive milestone for the project, right? That our goal in all of this isn't really about beating humans at Dota. Our goal is to push the state of the art in reinforcement learning. And we've done that, right? And we've actually learned a lot from our system and that we have, I think, a lot of exciting next steps that we want to take. And so kind of as a final showcase of what we built, we're going to do this match. But for us, it's not really the success or failure to see do we have the coin flip go in our direction or against. Where do you see the field of deep learning heading in the next few years? Where do you see the work and reinforcement learning perhaps heading, and more specifically with OpenAI, all the exciting projects that you're working on, what does 2019 hold for you? Massive scale. Scale. I will put an asterisk on that and just say, I think that it's about ideas plus scale. You need both. So that's a really good point. So the question, in terms of ideas, you have a lot of projects that are exploring different areas of intelligence. And the question is, when you think of scale, do you think about growing the scale of those individual projects or do you think about adding new projects? And sorry to, and if you're thinking about adding new projects, or if you look at the past, what's the process of coming up with new projects and new ideas? Yep. So we really have a life cycle of project here. So we start with a few people just working on a small scale idea. And language is actually a very good example of this. That it was really one person here who was pushing on language for a long time. I mean, then you get signs of life, right? 
And so this is like, let's say, with the original GPT, we had something that was interesting and we said, okay, it's time to scale this, right? It's time to put more people on it, put more computational resources behind it. And then we just kind of keep pushing and keep pushing. And the end state is something that looks like Dota or robotics, where you have a large team of 10 or 15 people that are running things at very large scale and that you're able to really have material engineering and sort of machine learning science coming together to make systems that work and get material results that just would have been impossible otherwise. So we do that whole life cycle. We've done it a number of times, typically end to end. It's probably two years or so to do it. The organization has been around for three years, so maybe we'll find that we also have longer life cycle projects, but we'll work up to those. So one team that we were actually just starting, Ilya and I are kicking off a new team called the Reasoning Team, and that this is to really try to tackle how do you get neural networks to reason? And we think that this will be a long term project. It's one that we're very excited about. In terms of reasoning, super exciting topic, what kind of benchmarks, what kind of tests of reasoning do you envision? What would, if you sat back with whatever drink and you would be impressed that this system is able to do something, what would that look like? Theorem proving. Theorem proving. So some kind of logic, and especially mathematical logic. I think so. I think that there's other problems that are dual to theorem proving in particular. You think about programming, you think about even security analysis of code, that these all kind of capture the same sorts of core reasoning and being able to do some out of distribution generalization. So it would be quite exciting if OpenAI Reasoning Team was able to prove that P equals NP. That would be very nice. It would be very, very, very exciting, especially. If it turns out that P equals NP, that'll be interesting too. It would be ironic and humorous. So what problem stands out to you as the most exciting and challenging and impactful to the work for us as a community in general and for OpenAI this year? You mentioned reasoning. I think that's a heck of a problem. Yeah, so I think reasoning's an important one. I think it's gonna be hard to get good results in 2019. Again, just like we think about the life cycle, takes time. I think for 2019, language modeling seems to be kind of on that ramp. It's at the point that we have a technique that works. We wanna scale 100x, 1,000x, see what happens. Awesome. Do you think we're living in a simulation? I think it's hard to have a real opinion about it. It's actually interesting. I separate out things that I think can have like, yield materially different predictions about the world from ones that are just kind of fun to speculate about. I kind of view simulation as more like, is there a flying teapot between Mars and Jupiter? Like, maybe, but it's a little bit hard to know what that would mean for my life. So there is something actionable. So some of the best work OpenAI has done is in the field of reinforcement learning. And some of the success of reinforcement learning come from being able to simulate the problem you're trying to solve. So do you have a hope for reinforcement, for the future of reinforcement learning and for the future of simulation? 
Like whether it's, we're talking about autonomous vehicles or any kind of system, do you see that scaling to where we'll be able to simulate systems and hence be able to create a simulator that echoes our real world, proving once and for all, even though you're denying it, that we're living in a simulation? I feel like it's two separate questions, right? So kind of at the core there of, like, can we use simulation for self driving cars? Take a look at our robotic system, Dactyl, right? That was trained in simulation, using the Dota system in fact, and it transfers to a physical robot. And I think everyone looks at our Dota system and they're like, okay, it's just a game, how are you ever gonna escape to the real world? And the answer is, well, we did it with a physical robot that no one could program. And so I think the answer is, simulation goes a lot further than you think if you apply the right techniques to it. Now, there's a question of, are the beings in that simulation gonna wake up and have consciousness? I think that one seems a lot harder to, again, reason about. I think that you really should think about, where exactly does human consciousness come from, and our own self awareness? And is it just that once you have a complicated enough neural net, you have to worry about the agents feeling pain? And I think there's interesting speculation to do there, but again, I think it's a little bit hard to know for sure. Well, let me just keep with the speculation. Do you think, to create intelligence, general intelligence, you need, one, consciousness, and two, a body? Do you think any of those elements are needed, or is intelligence something that's orthogonal to those? I'll stick to the non grand answer first, right? So the non grand answer is just to look at, what are we already making work? You look at GPT2, a lot of people would have said that to even get these kinds of results, you need real world experience. You need a body, you need grounding. How are you supposed to reason about any of these things? How are you supposed to even kind of know about smoke and fire and those things if you've never experienced them? And GPT2 shows that you can actually go way further than that kind of reasoning would predict. So I think that in terms of, do we need consciousness, do we need a body, it seems the answer is probably not, right? That we could probably just continue to push kind of the systems we have. They already feel general. They're not as competent or as general or able to learn as quickly as an AGI would, but they're at least kind of proto AGI in some way, and they don't need any of those things. Now let's move to the grand answer, which is, are our neural nets conscious already? Would we ever know? How can we tell, right? And here's where the speculation starts to become at least interesting or fun, and maybe a little bit disturbing, depending on where you take it. But it certainly seems that when we think about animals, there's some continuum of consciousness. You know, my cat, I think, is conscious in some way, right? Not as conscious as a human. And you could imagine that you could build a little consciousness meter, right? You point it at a cat, it gives you a little reading. Point it at a human, it gives you a much bigger reading. What would happen if you pointed one of those at a Dota neural net? And if you're training in this massive simulation, do the neural nets feel pain? You know, it becomes pretty hard to know that the answer is no.
And it becomes pretty hard to really think about what that would mean if the answer were yes. And it's very possible, you know, for example, you could imagine that maybe the reason that humans have consciousness is because it's a convenient computational shortcut, right? If you think about it, if you have a being that wants to avoid pain, which seems pretty important to survive in this environment and wants to like, you know, eat food, then that maybe the best way of doing it is to have a being that's conscious, right? That, you know, in order to succeed in the environment, you need to have those properties and how are you supposed to implement them and maybe this consciousness's way of doing that. If that's true, then actually maybe we should expect that really competent reinforcement learning agents will also have consciousness. But you know, that's a big if. And I think there are a lot of other arguments they can make in other directions. I think that's a really interesting idea that even GPT2 has some degree of consciousness. That's something, it's actually not as crazy to think about, it's useful to think about as we think about what it means to create intelligence of a dog, intelligence of a cat, and the intelligence of a human. So last question, do you think we will ever fall in love, like in the movie Her, with an artificial intelligence system or an artificial intelligence system falling in love with a human? I hope so. If there's any better way to end it is on love. So Greg, thanks so much for talking today. Thank you for having me.
Greg Brockman: OpenAI and AGI | Lex Fridman Podcast #17
The following is a conversation with Elon Musk. He's the CEO of Tesla, SpaceX, Neuralink, and a cofounder of several other companies. This conversation is part of the Artificial Intelligence podcast. The series includes leading researchers in academia and industry, including CEOs and CTOs of automotive, robotics, AI, and technology companies. This conversation happened after the release of the paper from our group at MIT on Driver Functional Vigilance during use of Tesla's Autopilot. The Tesla team reached out to me offering a podcast conversation with Mr. Musk. I accepted, with full control of questions I could ask and the choice of what is released publicly. I ended up editing out nothing of substance. I've never spoken with Elon before this conversation, publicly or privately. Neither he nor his companies have any influence on my opinion, nor on the rigor and integrity of the scientific method that I practice in my position at MIT. Tesla has never financially supported my research, I've never owned a Tesla vehicle, and I've never owned Tesla stock. This podcast is not a scientific paper. It is a conversation. I respect Elon as I do all other leaders and engineers I've spoken with. We agree on some things and disagree on others. My goal with these conversations is always to understand the way the guest sees the world. One particular point of disagreement in this conversation was the extent to which camera based driver monitoring will improve outcomes and for how long it will remain relevant for AI assisted driving. As someone who works on and is fascinated by human centered artificial intelligence, I believe that, if implemented and integrated effectively, camera based driver monitoring is likely to be of benefit in both the short term and the long term. In contrast, Elon and Tesla's focus is on the improvement of Autopilot such that its statistical safety benefits override any concern about human behavior and psychology. Elon and I may not agree on everything, but I deeply respect the engineering and innovation behind the efforts that he leads. My goal here is to catalyze a rigorous, nuanced, and objective discussion in industry and academia on AI assisted driving, one that ultimately makes for a safer and better world. And now, here's my conversation with Elon Musk. What was the vision, the dream, of Autopilot in the beginning, the big picture system level, when it was first conceived and started being installed in 2014, the hardware in the cars? What was the vision, the dream? I wouldn't characterize it as a vision or dream, simply that there are obviously two massive revolutions in the automobile industry. One is the transition to electrification, and then the other is autonomy. And it became obvious to me that in the future, any car that does not have autonomy would be about as useful as a horse. Which is not to say that there's no use, it's just rare and somewhat idiosyncratic if somebody has a horse at this point. It's just obvious that cars will drive themselves completely. It's just a question of time. And if we did not participate in the autonomy revolution, then our cars would not be useful to people relative to cars that are autonomous. I mean, an autonomous car is arguably worth five to 10 times more than a car which is not autonomous. In the long term. Depends what you mean by long term, but let's say at least for the next five years, perhaps 10 years. So there are a lot of very interesting design choices with Autopilot early on.
First is showing on the instrument cluster, or in the Model 3 on the center stack display, what the combined sensor suite sees. What was the thinking behind that choice? Was there a debate? What was the process? The whole point of the display is to provide a health check on the vehicle's perception of reality. So the vehicle's taking information from a bunch of sensors, primarily cameras, but also radar and ultrasonics, GPS, and so forth. And then that information is rendered into vector space, you know, with a bunch of objects with properties like lane lines and traffic lights and other cars. And then, in vector space, that is rerendered onto a display, so you can confirm whether the car knows what's going on or not by looking out the window. Right, I think that's an extremely powerful thing for people to get an understanding, sort of become one with the system and understand what the system is capable of. Now, have you considered showing more? So if we look at the computer vision, you know, like road segmentation, lane detection, vehicle detection, object detection, underlying the system, there is at the edges some uncertainty. Have you considered revealing the parts that the vehicle is uncertain about, the uncertainty in the system, the sort of probabilities associated with, say, image recognition or something like that? Yeah. So right now it shows, like, the vehicles in the vicinity, a very clean, crisp image, and people do confirm that there's a car in front of me and the system sees there's a car in front of me. But to help people build an intuition of what computer vision is, by showing some of the uncertainty. Well, I think it's, in my car I always look at the sort of debug view. And there's two debug views. Uh, one is augmented vision, which I'm sure you've seen, where we basically draw boxes and labels around objects that are recognized. And then there's one we call the visualizer, which is basically a vector space representation, summing up the input from all sensors. That does not show any pictures; it basically shows the car's view of the world in vector space. Um, but I think this is very difficult for normal people to understand; they would not know what they're looking at. So it's almost an HMI challenge. The current things that are being displayed are optimized for the general public's understanding of what the system is capable of. It's like, if you have no idea how computer vision works or anything, you can sort of look at the screen and see if the car knows what's going on. And then if you're a development engineer, or if you have the development build like I do, then you can see, uh, all the debug information, but that would just be, like, total gibberish to most people. What's your view on how to best distribute effort? So there are three, I would say, technical aspects of Autopilot that are really important. There's the underlying algorithms, like the neural network architecture, there's the data that it's trained on, and then there's the hardware development. There may be others, but so, look, algorithm, data, hardware. You only have so much money, only have so much time. What do you think is the most important thing to, uh, allocate resources to? Or do you see it as pretty evenly distributed between those three?
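As an aside on the "vector space" rendering described above, here is a rough sketch of the kind of object list being talked about: perception output as objects with properties, which a display can then re-render. The field names below are hypothetical, not Tesla's actual schema.

# Hedged illustration of a "vector space" frame: detected objects with properties,
# re-rendered for a display. Field names are invented for illustration only.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrackedObject:
    kind: str                        # e.g. "car", "pedestrian", "traffic_light"
    position_m: Tuple[float, float]  # (x forward, y left) in the ego frame, meters
    velocity_mps: Tuple[float, float]
    confidence: float                # the per-object uncertainty discussed above

@dataclass
class VectorSpaceFrame:
    timestamp_s: float
    lane_lines: List[List[Tuple[float, float]]] = field(default_factory=list)
    objects: List[TrackedObject] = field(default_factory=list)

def render_to_display(frame: VectorSpaceFrame) -> List[str]:
    """Crude 're-render': turn the vector space into lines a UI could draw."""
    return [
        f"{o.kind} at {o.position_m} moving {o.velocity_mps} (conf={o.confidence:.2f})"
        for o in frame.objects
    ]

frame = VectorSpaceFrame(
    timestamp_s=0.0,
    objects=[TrackedObject("car", (12.0, 0.3), (-1.5, 0.0), 0.94)],
)
print("\n".join(render_to_display(frame)))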
We automatically get vast amounts of data, because all of our cars have eight external facing cameras and radar, and usually 12 ultrasonic sensors, GPS obviously, and IMU. And so we basically have a fleet that, we've got about 400,000 cars on the road that have that level of data. I think you keep quite close track of it, actually. Yes. Yeah, so we're approaching half a million cars on the road that have the full sensor suite. So I'm not sure how many other cars on the road have this sensor suite, but I would be surprised if it's more than 5,000, which means that we have 99% of all the data. So there's this huge inflow of data. Absolutely, a massive inflow of data. And then it's taken us about three years, but now we've finally developed our full self driving computer, which can process an order of magnitude as much as the Nvidia system that we currently have in the cars. And it's really just, to use it, you unplug the Nvidia computer and plug the Tesla computer in, and that's it. And in fact, we're still exploring the boundaries of its capabilities, but we're able to run the cameras at full frame rate, full resolution, not even crop the images, and it's still got headroom, even on one of the systems. The full self driving computer is really two computers, two systems on a chip, that are fully redundant. So you could put a bolt through basically any part of that system and it still works. The redundancy, are they perfect copies of each other? So it's purely for redundancy, as opposed to an arguing machines kind of architecture where they're both making decisions? This is purely for redundancy. It's more like, if you have a twin engine aircraft, a commercial aircraft, the system will operate best if both systems are operating, but it's capable of operating safely on one. But as it is right now, we haven't even hit the edge of performance, so there's no need to actually distribute functionality across both SOCs. We can actually just run a full duplicate on each one. So you haven't really explored or hit the limit of the system? Not yet, we're not at the limit. So the magic of deep learning is that it gets better with data. You said there's a huge inflow of data, but the thing about driving is, the really valuable data to learn from is the edge cases. So how do you, I mean, I've heard you talk somewhere about Autopilot disengagements being an important moment of time to use. Are there other edge cases, or perhaps can you speak to those edge cases? What aspects of them might be valuable, or, if you have other ideas, how do you discover more and more and more edge cases in driving? Well, there's a lot of things that are learned. There are certainly edge cases where, say, somebody's on Autopilot and they take over. And then, okay, that's a trigger that goes to our system that says, okay, did they take over for convenience, or did they take over because the Autopilot wasn't working properly? There's also, like, let's say we're trying to figure out what is the optimal spline for traversing an intersection. Then the ones where there are no interventions are the right ones. So you then say, okay, when it looks like this, do the following.
And then you get the optimal spline for navigating a complex intersection. So there's kind of the common case, where you're trying to capture a huge amount of samples of a particular intersection when things went right. And then there's the edge case where, as you said, not for convenience, but something didn't go exactly right. Somebody took over, somebody asserted manual control from Autopilot. And really, like, the way to look at this is, view all input as error. If the user had to do input, if they did something, all input is error. That's a powerful line to think of it that way, because it may very well be error, but if you want to exit the highway, or if you want to, it's a navigation decision that Autopilot is not currently designed to do, then the driver takes over. How do you know the difference? That's going to change with Navigate on Autopilot, which we just released, and without stalk confirm. So the navigation, like asserting control in order to do a lane change or exit a freeway or do a highway interchange, the vast majority of that will go away with the release that just went out. Yeah, so that, I don't think people quite understand how big of a step that is. Yeah, they don't. So if you've driven the car, then you do. So you still have to keep your hands on the steering wheel currently when it does the automatic lane change. So there are these big leaps through the development of Autopilot, through its history. What stands out to you as the big leaps? I would say this one, Navigate on Autopilot without confirm, without having to confirm, is a huge leap. It is a huge leap. It also automatically overtakes slow cars. So it's both navigation and seeking the fastest lane. So it'll, you know, overtake slow cars and exit the freeway and take highway interchanges. And then we have traffic light recognition, which we introduced initially as a warning. I mean, on the development version that I'm driving, the car fully stops and goes at traffic lights. So those are the steps, right? You've just mentioned something, sort of an inkling of a step towards full autonomy. What would you say are the biggest technological roadblocks to full self driving? Actually, I don't think, I think we just, the full self driving computer that we just, what we call the FSD computer, that's now in production. So if you order any Model S or X, or any Model 3 that has the full self driving package, you'll get the FSD computer. That's important, to have enough base computation. Then it's refining the neural net and the control software, but all of that can just be provided as an over the air update. The thing that's really profound, and what I'll be emphasizing at the investor day that we're having focused on autonomy, is that the car currently being produced, with the hardware currently being produced, is capable of full self driving. But capable is an interesting word, because the hardware is, and as we refine the software, the capabilities will increase dramatically, and then the reliability will increase dramatically, and then it will receive regulatory approval. So essentially, buying a car today is an investment in the future.
You're essentially buying a car, you're buying, I think the most profound thing is that if you buy a Tesla today, I believe you are buying an appreciating asset, not a depreciating asset. So that's a really important statement there, because if hardware is capable enough, that's the hard thing to upgrade usually. Exactly. So then the rest is a software problem. Yes, software has no marginal cost, really. But what's your intuition on the software side? How hard are the remaining steps to get it to where, you know, the experience, not just the safety, but the full experience, is something that people would enjoy? Well, I think people enjoy it very much so on the highways. It's a total game changer for quality of life, using, you know, Tesla Autopilot on the highways. So it's really just extending that functionality to city streets, adding in the traffic light recognition, navigating complex intersections, and then being able to navigate complicated parking lots so the car can exit a parking space and come and find you, even if it's in a complete maze of a parking lot. And then it can just drop you off and find a parking spot by itself. Yeah, in terms of enjoyability, and something that people would actually find a lot of use from, the parking lot, it's a real source of annoyance when you have to do it manually, so there's a lot of benefit to be gained from automation there. So let me start injecting the human into this discussion a little bit. So let's talk about full autonomy. If you look at the current level four vehicles being tested on road, like Waymo and so on, they're only technically autonomous. They're really level two systems with just a different design philosophy, because there's always a safety driver in almost all cases, and they're monitoring the system. Right. Do you see Tesla's full self driving as, still, for a time to come, requiring supervision of the human being? So its capabilities are powerful enough to drive, but it nevertheless requires the human to still be supervising, just like a safety driver is in other fully autonomous vehicles? I think it will require detecting hands on wheel for at least six months or something like that from here. It really is a question of, from a regulatory standpoint, how much safer than a person does Autopilot need to be for it to be okay to not monitor the car. You know, and this is a debate that one can have. But you need, you know, a large sample, a large amount of data, so you can prove with high confidence, statistically speaking, that the car is dramatically safer than a person, and that adding in the person monitoring does not materially affect the safety. So it might need to be like two or 300% safer than a person. And how do you prove that? Incidents per mile. Incidents per mile, crashes and fatalities. Fatalities would be a factor, but there are just not enough fatalities to be statistically significant at scale. But there are enough crashes, you know, there are far more crashes than there are fatalities. So you can assess what is the probability of a crash. Then there's another step, which is the probability of injury, and the probability of permanent injury, and the probability of death. And all of those need to be much better than a person, by at least, perhaps, 200%.
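A hedged sketch of the statistical question raised here: comparing two rare-event rates (crashes per mile) and seeing how tight the confidence interval is. The numbers below are invented for illustration; they are not Tesla or NHTSA figures.

# Approximate comparison of two Poisson crash rates (per mile) via the log rate ratio.
import math

def rate_ratio_ci(crashes_a, miles_a, crashes_b, miles_b, z=1.96):
    """Approximate 95% CI for the ratio of two Poisson crash rates."""
    ratio = (crashes_a / miles_a) / (crashes_b / miles_b)
    # Normal approximation on the log rate ratio; variance ~ 1/crashes_a + 1/crashes_b.
    se = math.sqrt(1.0 / crashes_a + 1.0 / crashes_b)
    lo = math.exp(math.log(ratio) - z * se)
    hi = math.exp(math.log(ratio) + z * se)
    return ratio, lo, hi

# Hypothetical: 400 crashes in 1e9 system miles vs 2,000 crashes in 1e9 human-driven miles.
ratio, lo, hi = rate_ratio_ci(400, 1e9, 2000, 1e9)
print(f"system/human crash rate ratio = {ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# The rarer the event (e.g. fatalities), the fewer counts you have and the wider this
# interval gets -- which is the argument for using crashes rather than fatalities.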
And you think there's the ability to have a healthy discourse with the regulatory bodies on this topic? I mean, there's no question that regulators pay a disproportionate amount of attention to that which generates press. This is just an objective fact. And Tesla generates a lot of press. So, in the United States there are, I think, almost 40,000 automotive deaths per year, but if there are four in a Tesla, they'll probably receive a thousand times more press than anyone else. The psychology of that is actually fascinating. I don't think we'll have enough time to talk about that, but I have to talk to you about the human side of things. So myself and our team at MIT recently released a paper on functional vigilance of drivers while using Autopilot. This is work we've been doing since Autopilot was first released publicly, over three years ago, collecting video of driver faces and driver body. So I saw that you tweeted a quote from the abstract, so I can at least guess that you've glanced at it. Yeah, I read it. Can I talk you through what we found? Sure. Okay. So it appears, in the data that we've collected, that drivers are maintaining functional vigilance, such that, we're looking at 18,900 disengagements from Autopilot and annotating: were they able to take over control in a timely manner? So they were there, present, looking at the road, ready to take over control. Okay. So this goes against what many would predict from the body of literature on vigilance with automation. Now, the question is, do you think these results hold across the broader population? So ours is just a small subset. One of the criticisms is that, you know, there may be a small minority of drivers that are highly responsible, and that the vigilance decrement would increase with Autopilot use in the broader population. I think this is all really going to be swept up, I mean, the system's improving so much, so fast, that this is going to be a moot point very soon, where vigilance is, like, if something's many times safer than a person, then adding a person, the effect on safety is limited. And, in fact, it could be negative. That's really interesting. So the fact that some percent of the population may exhibit a vigilance decrement will not affect the overall statistics, the overall numbers on safety. No, in fact, I think it will become, very, very quickly, maybe even towards the end of this year, but I'd say I'd be shocked if it's not next year at the latest, that having a human intervene will decrease safety. Decrease. It's like, imagine if you're in an elevator, and it used to be that there were elevator operators, and you couldn't go in an elevator by yourself and work the lever to move between floors. And now nobody wants an elevator operator, because the automated elevator that stops at the floors is much safer than the elevator operator. And, in fact, it would be quite dangerous to have someone with a lever that can move the elevator between floors. So that's a really powerful statement, and a really interesting one.
But I also have to ask, from a user experience and from a safety perspective, one of the passions for me algorithmically is camera-based detection of just sensing the human: detecting what the driver is looking at, cognitive load, body pose. On the computer vision side, that's a fascinating problem. And there are many in industry who believe you have to have camera-based driver monitoring. Do you think there could be benefit gained from driver monitoring? If you have a system that's at or below human-level reliability, then driver monitoring makes sense. But if your system is dramatically better, more reliable than a human, then driver monitoring does not help much. And, like I said, just like, you wouldn't want someone in the elevator. If you're in an elevator, do you really want some random person with a big lever operating the elevator between floors? I wouldn't trust that. I'd rather have the buttons. Okay. You're optimistic about the pace of improvement of the system, that from what you've seen with the full self-driving car computer, the rate of improvement is exponential. So one of the other very interesting design choices early on that connects to this is the operational design domain of Autopilot, so where Autopilot is able to be turned on. To contrast, another vehicle system that we're studying is the Cadillac Super Cruise system. In terms of ODD, it's very constrained to particular kinds of highways, well mapped, tested, but it's much narrower than the ODD of Tesla vehicles. There's pros and... It's like ADD. Yeah, that's a good line. What was the design decision in that different philosophy of thinking? Because there are pros and cons. What we see with a wide ODD is that Tesla drivers are able to explore more of the limitations of the system, at least early on, and, together with the instrument cluster display, they start to understand what the capabilities are. So that's a benefit. The con is you're letting drivers use it basically anywhere... anywhere that it could detect lanes with confidence. Were there design decisions there that were challenging, or, from the very beginning, was that done on purpose, with intent? Well, I mean, frankly, I think it's pretty crazy letting people drive a two-ton death machine manually. That's crazy. Like, in the future, people will be like, I can't believe anyone was just allowed to drive one of these two-ton death machines, and they just drove wherever they wanted. Just like elevators: you could move the elevator with that lever wherever you want, it can stop halfway between floors if you want. It's pretty crazy. So it's going to seem like a mad thing in the future that people were driving cars. So I have a bunch of questions about the human psychology, about behavior and so on, but they become moot, because you have faith in the AI system, not faith, but confidence that both the hardware side and the deep learning approach of learning from data will make it just far safer than humans. Yeah, exactly. Recently, there were a few hackers who tricked Autopilot into acting in unexpected ways with adversarial examples. So we all know that neural network systems are very sensitive to minor disturbances, to these adversarial examples on input.
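For context on the adversarial examples raised here, below is a minimal, self-contained sketch of a fast-gradient-sign style perturbation against a toy linear classifier. It is purely illustrative and has nothing to do with Tesla's actual perception stack; the weights, input, and "car" label are random stand-ins.

# A minimal sketch of a fast-gradient-sign adversarial perturbation on a
# toy logistic-regression "car vs. not car" model. Illustration only.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=100)                 # stand-in for trained weights
b = 0.0
x = rng.uniform(0.0, 1.0, size=100)      # a toy 100-"pixel" input

def predict_prob(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # P(car)

# Gradient of the logistic loss w.r.t. the input, for the label "car" (y=1):
# (p - y) * w.
p = predict_prob(x)
grad_x = (p - 1.0) * w

# FGSM: nudge each pixel by epsilon in the direction that increases the loss.
epsilon = 0.05
x_adv = np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

print(f"P(car) on the clean input:     {predict_prob(x):.3f}")
print(f"P(car) after the perturbation: {predict_prob(x_adv):.3f}")
print(f"max per-pixel change:          {np.max(np.abs(x_adv - x)):.3f}")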
Do you think it's possible to defend against something like this, for the broader industry? Sure. Can you elaborate on the confidence behind that answer? Well, you know, a neural net is just a bunch of matrix math. You'd have to be very sophisticated, somebody who really understands neural nets, and basically reverse engineer how the matrices are being built, and then create a little thing that causes the matrix math to be slightly off. But it's very easy to then block that by having basically negative recognition: if the system sees something that looks like a matrix hack, exclude it. It's such an easy thing to do. So learn both on the valid data and the invalid data. So basically learn on the adversarial examples to be able to exclude them. Yeah, you basically want to both know what is a car and what is definitely not a car, and you train for: this is a car, and this is definitely not a car. Those are two different things. People have no idea what neural nets are, really. They probably think a neural net is, you know, like a fishing net or something. So, taking a step beyond just Tesla and Autopilot, current deep learning approaches still seem, in some ways, to be far from general intelligence systems. Do you think the current approaches will take us to general intelligence, or do totally new ideas need to be invented? I think we're missing a few key ideas for artificial general intelligence, but it's going to be upon us very quickly, and then we'll need to figure out what we should do, if we even have that choice. But it's amazing how people can't differentiate between, say, the narrow AI that allows a car to figure out what a lane line is and navigate streets, versus general intelligence. These are just very different things. Like, your toaster and your computer are both machines, but one's much more sophisticated than the other. You're confident that with Tesla you can create the world's best self-driving... The world's best toaster? Yes. The world's best self-driving? Yes. To me, right now, this seems game, set, match. I don't want to be complacent or overconfident, but that's literally how it appears right now. I could be wrong, but it appears to be the case that Tesla is vastly ahead of everyone. Do you think we will ever create an AI system that we can love, and that loves us back in a deep, meaningful way, like in the movie Her? I think AI will be capable of convincing you to fall in love with it very well. And that's different than us humans? You know, we start getting into a metaphysical question of, do emotions and thoughts exist in a different realm than the physical? And maybe they do, maybe they don't, I don't know. But from a physics standpoint, I tend to think of things, you know, physics was my main sort of training, and from a physics standpoint, essentially, if it loves you in a way that you can't tell whether it's real or not, it is real. That's a physics view of love. Yeah. If you cannot prove that it does not, if there's no test that you can apply that would allow you to tell the difference, then there is no difference. Right. And it's similar to seeing our world as a simulation.
There may not be a test to tell the difference between the real world and the simulation, and therefore, from a physics perspective, it might as well be the same thing. Yes. And there may be ways to test whether it's a simulation, there might be, I'm not saying there aren't. But you could certainly imagine that a simulation could correct for that: once an entity in the simulation found a way to detect the simulation, it could restart, pause the simulation, start a new simulation, or do one of many other things that then correct for that error. So when, maybe you or somebody else, creates an AGI system, and you get to ask her one question, what would that question be? What's outside the simulation? Elon, thank you so much for talking today. It was a pleasure. All right. Thank you.
Elon Musk: Tesla Autopilot | Lex Fridman Podcast #18
The following is a conversation with Ian Goodfellow. He's the author of the popular textbook on deep learning simply titled Deep Learning. He coined the term of Generative Adversarial Networks, otherwise known as GANs, and with his 2014 paper is responsible for launching the incredible growth of research and innovation in this subfield of deep learning. He got his BS and MS at Stanford, his PhD at University of Montreal with Yoshua Bengio and Aaron Kerrville. He held several research positions including at OpenAI, Google Brain, and now at Apple as the Director of Machine Learning. This recording happened while Ian was still at Google Brain, but we don't talk about anything specific to Google or any other organization. This conversation is part of the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Friedman, spelled F R I D. And now here's my conversation with Ian Goodfellow. You open your popular deep learning book with a Russian doll type diagram that shows deep learning is a subset of representation learning, which in turn is a subset of machine learning and finally a subset of AI. So this kind of implies that there may be limits to deep learning in the context of AI. So what do you think is the current limits of deep learning and are those limits something that we can overcome with time? Yeah, I think one of the biggest limitations of deep learning is that right now it requires really a lot of data, especially labeled data. There are some unsupervised and semi supervised learning algorithms that can reduce the amount of labeled data you need, but they still require a lot of unlabeled data, reinforcement learning algorithms. They don't need labels, but they need really a lot of experiences. As human beings, we don't learn to play Pong by failing at Pong 2 million times. So just getting the generalization ability better is one of the most important bottlenecks in the capability of the technology today. And then I guess I'd also say deep learning is like a component of a bigger system. So far, nobody is really proposing to have only what you'd call deep learning as the entire ingredient of intelligence. You use deep learning as sub modules of other systems, like AlphaGo has a deep learning model that estimates the value function. Most reinforcement learning algorithms have a deep learning module that estimates which action to take next, but you might have other components. So you're basically building a function estimator. Do you think it's possible, you said nobody's kind of been thinking about this so far, but do you think neural networks could be made to reason in the way symbolic systems did in the 80s and 90s to do more, create more like programs as opposed to functions? Yeah, I think we already see that a little bit. I already kind of think of neural nets as a kind of program. I think of deep learning as basically learning programs that have more than one step. So if you draw a flow chart or if you draw a TensorFlow graph describing your machine learning model, I think of the depth of that graph as describing the number of steps that run in sequence. And then the width of that graph is the number of steps that run in parallel. Now it's been long enough that we've had deep learning working that it's a little bit silly to even discuss shallow learning anymore. But back when I first got involved in AI, when we used machine learning, we were usually learning things like support vector machines. 
You could have a lot of input features to the model and you could multiply each feature by a different weight. All those multiplications were done in parallel to each other. There wasn't a lot done in series. I think what we got with deep learning was really the ability to have steps of a program that run in sequence. And I think that we've actually started to see that what's important with deep learning is more the fact that we have a multi step program rather than the fact that we've learned a representation. If you look at things like resonance, for example, they take one particular kind of representation and they update it several times. Back when deep learning first really took off in the academic world in 2006, when Jeff Hinton showed that you could train deep belief networks, everybody who was interested in the idea thought of it as each layer learns a different level of abstraction. That the first layer trained on images learns something like edges and the second layer learns corners. And eventually you get these kind of grandmother cell units that recognize specific objects. Today I think most people think of it more as a computer program where as you add more layers you can do more updates before you output your final number. But I don't think anybody believes that layer 150 of the ResNet is a grandmother cell and layer 100 is contours or something like that. Okay, so you're not thinking of it as a singular representation that keeps building. You think of it as a program, sort of almost like a state. Representation is a state of understanding. Yeah, I think of it as a program that makes several updates and arrives at better and better understandings, but it's not replacing the representation at each step. It's refining it. And in some sense, that's a little bit like reasoning. It's not reasoning in the form of deduction, but it's reasoning in the form of taking a thought and refining it and refining it carefully until it's good enough to use. So do you think, and I hope you don't mind, we'll jump philosophical every once in a while. Do you think of cognition, human cognition, or even consciousness as simply a result of this kind of sequential representation learning? Do you think that can emerge? Cognition, yes, I think so. Consciousness, it's really hard to even define what we mean by that. I guess there's, consciousness is often defined as things like having self awareness, and that's relatively easy to turn into something actionable for a computer scientist to reason about. People also define consciousness in terms of having qualitative states of experience, like qualia, and there's all these philosophical problems, like could you imagine a zombie who does all the same information processing as a human, but doesn't really have the qualitative experiences that we have? That sort of thing, I have no idea how to formalize or turn it into a scientific question. I don't know how you could run an experiment to tell whether a person is a zombie or not. And similarly, I don't know how you could run an experiment to tell whether an advanced AI system had become conscious in the sense of qualia or not. But in the more practical sense, like almost like self attention, you think consciousness and cognition can, in an impressive way, emerge from current types of architectures that we think of as learning. 
Or if you think of consciousness in terms of self awareness and just making plans based on the fact that the agent itself exists in the world, reinforcement learning algorithms are already more or less forced to model the agent's effect on the environment. So that more limited version of consciousness is already something that we get limited versions of with reinforcement learning algorithms if they're trained well. But you say limited, so the big question really is how you jump from limited to human level, right? And whether it's possible, even just building common sense reasoning seems to be exceptionally difficult. So if we scale things up, if we get much better on supervised learning, if we get better at labeling, if we get bigger data sets, more compute, do you think we'll start to see really impressive things that go from limited to something, echoes of human level cognition? I think so, yeah. I'm optimistic about what can happen just with more computation and more data. I do think it'll be important to get the right kind of data. Today, most of the machine learning systems we train are mostly trained on one type of data for each model. But the human brain, we get all of our different senses and we have many different experiences like riding a bike, driving a car, talking to people, reading. I think when we get that kind of integrated data set, working with a machine learning model that can actually close the loop and interact, we may find that algorithms not so different from what we have today learn really interesting things when you scale them up a lot and train them on a large amount of multimodal data. So multimodal is really interesting, but within, like you're working adversarial examples. So selecting within modal, within one mode of data, selecting better at what are the difficult cases from which you're most useful to learn from. Oh yeah, like could we get a whole lot of mileage out of designing a model that's resistant to adversarial examples or something like that? Right, that's the question. My thinking on that has evolved a lot over the last few years. When I first started to really invest in studying adversarial examples, I was thinking of it mostly as adversarial examples reveal a big problem with machine learning and we would like to close the gap between how machine learning models respond to adversarial examples and how humans respond. After studying the problem more, I still think that adversarial examples are important. I think of them now more of as a security liability than as an issue that necessarily shows there's something uniquely wrong with machine learning as opposed to humans. Also, do you see them as a tool to improve the performance of the system? Not on the security side, but literally just accuracy. I do see them as a kind of tool on that side, but maybe not quite as much as I used to think. We've started to find that there's a trade off between accuracy on adversarial examples and accuracy on clean examples. Back in 2014, when I did the first adversarily trained classifier that showed resistance to some kinds of adversarial examples, it also got better at the clean data on MNIST. And that's something we've replicated several times on MNIST, that when we train against weak adversarial examples, MNIST classifiers get more accurate. So far that hasn't really held up on other data sets and hasn't held up when we train against stronger adversaries. It seems like when you confront a really strong adversary, you tend to have to give something up. 
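As a minimal sketch of the adversarial training recipe discussed just above, the loop below mixes clean examples with perturbed copies crafted against the current model at every update. The data, model, and hyperparameters are toy stand-ins, not the actual experiments referenced in the conversation.

# A minimal sketch of adversarial training: at each step, craft
# fast-gradient-sign perturbations against the current parameters and
# train on a 50/50 mix of clean and perturbed points. Illustration only.
import numpy as np

rng = np.random.default_rng(1)

# Two Gaussian blobs as a stand-in dataset.
n = 500
X = np.vstack([rng.normal(-1.5, 1.0, size=(n, 2)), rng.normal(+1.5, 1.0, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

w, b = np.zeros(2), 0.0
epsilon, lr = 0.3, 0.1

def probs(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

for step in range(500):
    # Craft perturbations against the current parameters.
    p = probs(X, w, b)
    grad_x = (p - y)[:, None] * w[None, :]        # d loss / d input
    X_adv = X + epsilon * np.sign(grad_x)

    # Train on the mix of clean and adversarial examples.
    X_mix = np.vstack([X, X_adv])
    y_mix = np.concatenate([y, y])
    p_mix = probs(X_mix, w, b)
    w -= lr * (p_mix - y_mix) @ X_mix / len(y_mix)
    b -= lr * np.mean(p_mix - y_mix)

clean_acc = np.mean((probs(X, w, b) > 0.5) == y)
p = probs(X, w, b)
X_adv = X + epsilon * np.sign((p - y)[:, None] * w[None, :])
adv_acc = np.mean((probs(X_adv, w, b) > 0.5) == y)
print(f"accuracy on clean points:     {clean_acc:.3f}")
print(f"accuracy on perturbed points: {adv_acc:.3f}")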
Interesting. But it's such a compelling idea because it feels like that's how us humans learn is through the difficult cases. We try to think of what would we screw up and then we make sure we fix that. It's also in a lot of branches of engineering, you do a worst case analysis and make sure that your system will work in the worst case. And then that guarantees that it'll work in all of the messy average cases that happen when you go out into a really randomized world. Yeah, with driving with autonomous vehicles, there seems to be a desire to just look for, think adversarially, try to figure out how to mess up the system. And if you can be robust to all those difficult cases, then you can, it's a hand wavy empirical way to show your system is safe. Today, most adversarial example research isn't really focused on a particular use case, but there are a lot of different use cases where you'd like to make sure that the adversary can't interfere with the operation of your system. Like in finance, if you have an algorithm making trades for you, people go to a lot of an effort to obfuscate their algorithm. That's both to protect their IP because you don't want to research and develop a profitable trading algorithm then have somebody else capture the gains. But it's at least partly because you don't want people to make adversarial examples that fool your algorithm into making bad trades. Or I guess one area that's been popular in the academic literature is speech recognition. If you use speech recognition to hear an audio wave form and then turn that into a command that a phone executes for you, you don't want a malicious adversary to be able to produce audio that gets interpreted as malicious commands, especially if a human in the room doesn't realize that something like that is happening. And speech recognition, has there been much success in being able to create adversarial examples that fool the system? Yeah, actually. I guess the first work that I'm aware of is a paper called Hidden Voice Commands that came out in 2016, I believe. And they were able to show that they could make sounds that are not understandable by a human but are recognized as the target phrase that the attacker wants the phone to recognize it as. Since then, things have gotten a little bit better on the attacker's side when worse on the defender's side. It's become possible to make sounds that sound like normal speech but are actually interpreted as a different sentence than the human hears. The level of perceptibility of the adversarial perturbation is still kind of high. When you listen to the recording, it sounds like there's some noise in the background, just like rustling sounds. But those rustling sounds are actually the adversarial perturbation that makes the phone hear a completely different sentence. Yeah, that's so fascinating. Peter Norvig mentioned that you're writing the deep learning chapter for the fourth edition of the Artificial Intelligence, A Modern Approach book. So how do you even begin summarizing the field of deep learning in a chapter? Well, in my case, I waited like a year before I actually wrote anything. Even having written a full length textbook before, it's still pretty intimidating to try to start writing just one chapter that covers everything. One thing that helped me make that plan was actually the experience of having written the full book before and then watching how the field changed after the book came out. 
I've realized there's a lot of topics that were maybe extraneous in the first book, and just seeing what stood the test of a few years of being published and what seems a little bit less important to have included now helped me pare down the topics I wanted to cover for the book. It's also really nice now that the field has kind of stabilized to the point where some core ideas from the 1980s are still used today. When I first started studying machine learning, almost everything from the 1980s had been rejected, and now some of it has come back. So that stuff that's really stood the test of time is what I focused on putting into the book. There's also, I guess, two different philosophies about how you might write a book. One philosophy is you try to write a reference that covers everything. The other philosophy is you try to provide a high level summary that gives people the language to understand a field and tells them what the most important concepts are. The first deep learning book that I wrote with Yoshua and Aaron was somewhere between the two philosophies, in that it's trying to be both a reference and an introductory guide. Writing this chapter for Russell and Norvig's book, I was able to focus more on just a concise introduction of the key concepts and the language you need to read about them more. In a lot of cases, I actually just wrote paragraphs that said, here's a rapidly evolving area that you should pay attention to. It's pointless to try to tell you what the latest and best version of a learning-to-learn model is. I can point you to a paper that's recent right now, but there isn't a whole lot of reason to delve into exactly what's going on with the latest learning-to-learn approach or the latest module produced by a learning-to-learn algorithm. You should know that learning to learn is a thing and that it may very well be the source of the latest and greatest convolutional net or recurrent net module that you would want to use in your latest project. But there isn't a lot of point in trying to summarize exactly which architecture and which learning approach got to which level of performance. So you maybe focus more on the basics of the methodology. So from back propagation to feed forward to recurrent neural networks, convolutional, that kind of thing? Yeah, yeah. So if I were to ask you, I remember I took an algorithms and data structures course. I remember the professor asked, what is an algorithm? And yelled at everybody, in a good way, that nobody was answering it correctly. It was a graduate course; everybody knew what an algorithm was, but they weren't able to answer it well. So let me ask you in that same spirit, what is deep learning? I would say deep learning is any kind of machine learning that involves learning parameters of more than one consecutive step. So, I mean, shallow learning is things where you learn a lot of operations that happen in parallel. You might have a system that makes multiple steps, like you might have hand designed feature extractors, but really only one step is learned. Deep learning is anything where you have multiple operations in sequence, and that includes the things that are really popular today, like convolutional networks and recurrent networks. But it also includes some of the things that have died out, like Boltzmann machines, where we weren't using back propagation. Today, I hear a lot of people define deep learning as gradient descent applied to these differentiable functions.
And I think that's a legitimate usage of the term. It's just different from the way that I use the term myself. So what's an example of deep learning that is not gradient descent and differentiable functions? In your view, I mean, not specifically perhaps, but more even looking into the future, what's your thought about that space of approaches? Yeah, so I tend to think of machine learning algorithms as decomposed into really three different pieces. There's the model, which can be something like a neural net or a Boltzmann machine or a recurrent model. And that basically just describes how do you take data and how do you take parameters, and what function do you use to make a prediction given the data and the parameters. Another piece of the learning algorithm is the optimization algorithm. Or, not every algorithm can really be described in terms of optimization, but what's the algorithm for updating the parameters or updating whatever the state of the network is? And then the last part is the data set, like how do you actually represent the world as it comes into your machine learning system? So I think of deep learning as telling us something about what the model looks like. And basically, to qualify as deep, I say that it just has to have multiple layers. That can be multiple steps in a feed forward differentiable computation. That can be multiple layers in a graphical model. There's a lot of ways that you could satisfy me that something has multiple steps that are each parameterized separately. I think of gradient descent as being all about that other piece, the how do you actually update the parameters piece. So you could imagine having a deep model like a convolutional net and training it with something like evolution or a genetic algorithm, and I would say that still qualifies as deep learning. And then in terms of models that aren't necessarily differentiable, I guess Boltzmann machines are probably the main example of something where you can't really take a derivative and use that for the learning process. But you can still argue that the model has many steps of processing that it applies when you run inference in the model. So it's the steps of processing that's key. So Geoff Hinton suggests that we need to throw away back propagation and start all over. What do you think about that? What could an alternative direction of training neural networks look like? I don't know that back propagation is gonna go away entirely. Most of the time, when we decide that a machine learning algorithm isn't on the critical path to research for improving AI, the algorithm doesn't die. It just becomes used for some specialized set of things. A lot of algorithms like logistic regression don't seem that exciting to AI researchers who are working on things like speech recognition or autonomous cars today. But there's still a lot of use for logistic regression in things like analyzing really noisy data in medicine and finance, or making really rapid predictions in really time limited contexts. So I think back propagation and gradient descent are around to stay, but they may not end up being everything that we need to get to real human level or superhuman AI. Back propagation has been around for a few decades. So are you optimistic about us as a community being able to discover something better? Yeah, I am. I think we likely will find something that works better.
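To illustrate the separation Ian draws here between the model (deep, multi-step) and the update rule (not necessarily gradient-based), below is a hedged toy sketch: a two-layer network fit to XOR with a crude evolution strategy and no backpropagation anywhere. The architecture, mutation scheme, and hyperparameters are all made up for illustration.

# A deep (two-step) model trained without gradients, using a simple
# (1+lambda) evolution strategy. Illustration only.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])          # XOR targets

def unpack(theta):
    W1 = theta[:8].reshape(2, 4)
    b1 = theta[8:12]
    W2 = theta[12:16].reshape(4, 1)
    b2 = theta[16]
    return W1, b1, W2, b2

def forward(theta, X):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)                              # learned step 1
    return 1.0 / (1.0 + np.exp(-(h @ W2).ravel() - b2))   # learned step 2

def loss(theta):
    return np.mean((forward(theta, X) - y) ** 2)

theta = rng.normal(scale=0.5, size=17)
sigma, offspring = 0.2, 50
for generation in range(300):
    # Mutate the parent, keep the best candidate (parent included).
    candidates = [theta] + [theta + sigma * rng.normal(size=17) for _ in range(offspring)]
    theta = min(candidates, key=loss)

print("predictions:", np.round(forward(theta, X), 2), " targets:", y)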
You could imagine things like having stacks of models where some of the lower level models predict parameters of the higher level models. And so at the top level, you're not learning in terms of literally calculating gradients, but just predicting how different values will perform. You can kind of see that already in some areas like Bayesian optimization, where you have a Gaussian process that predicts how well different parameter values will perform. We already use those kinds of algorithms for things like hyper parameter optimization. And in general, we know a lot of things other than back prop that work really well for specific problems. The main thing we haven't found is a way of taking one of these other non back prop based algorithms and having it really advanced the state of the art on an AI level problem. Right. But I wouldn't be surprised if eventually we find that some of these algorithms that even the ones that already exist, not even necessarily new one, we might find some way of customizing one of these algorithms to do something really interesting at the level of cognition or the level of, I think one system that we really don't have working quite right yet is like short term memory. We have things like LSTMs, they're called long short term memory. They still don't do quite what a human does with short term memory. Like gradient descent to learn a specific fact has to do multiple steps on that fact. Like if I tell you the meeting today is at 3 p.m., I don't need to say over and over again, it's at 3 p.m., it's at 3 p.m., it's at 3 p.m., it's at 3 p.m. for you to do a gradient step on each one. You just hear it once and you remember it. There's been some work on things like self attention and attention like mechanisms, like the neural Turing machine that can write to memory cells and update themselves with facts like that right away. But I don't think we've really nailed it yet. And that's one area where I'd imagine that new optimization algorithms or different ways of applying existing optimization algorithms could give us a way of just lightning fast updating the state of a machine learning system to contain a specific fact like that without needing to have it presented over and over and over again. So some of the success of symbolic systems in the 80s is they were able to assemble these kinds of facts better. But there's a lot of expert input required and it's very limited in that sense. Do you ever look back to that as something that we'll have to return to eventually? Sort of dust off the book from the shelf and think about how we build knowledge, representation, knowledge base. Like will we have to use graph searches? Graph searches, right. And like first order logic and entailment and things like that. That kind of thing, yeah, exactly. In my particular line of work, which has mostly been machine learning security and also generative modeling, I haven't usually found myself moving in that direction. For generative models, I could see a little bit of, it could be useful if you had something like a differentiable knowledge base or some other kind of knowledge base where it's possible for some of our fuzzier machine learning algorithms to interact with a knowledge base. I mean, your network is kind of like that. It's a differentiable knowledge base of sorts. Yeah. But. If we had a really easy way of giving feedback to machine learning models, that would clearly help a lot with generative models. 
And so you could imagine one way of getting there would be to get a lot better at natural language processing. But another way of getting there would be to take some kind of knowledge base and figure out a way for it to actually interact with a neural network. Being able to have a chat with a neural network. Yeah. So like one thing in generative models we see a lot today is you'll get things like faces that are not symmetrical, like people that have two eyes that are different colors. I mean, there are people with eyes that are different colors in real life, but not nearly as many of them as you tend to see in the machine learning generated data. So if you had a knowledge base that could contain the facts that people's faces are generally approximately symmetric and that eye color is especially likely to be the same on both sides, being able to just inject that hint into the machine learning model, without it having to discover that itself after studying a lot of data, would be a really useful feature. I could see a lot of ways of getting there without bringing back some of the 1980s technology, but I also see some ways that you could imagine extending the 1980s technology to play nice with neural nets and have it help get there. Awesome. So you talked about the story of you coming up with the idea of GANs at a bar with some friends. You were arguing that GANs, generative adversarial networks, would work, and the others didn't think so. Then you went home at midnight, coded it up, and it worked. So if I was a friend of yours at the bar, I would also have doubts. It's a really nice idea, but I'm very skeptical that it would work. What was the basis of their skepticism? What was the basis of your intuition why it should work? I don't want to be someone who goes around promoting alcohol for the purposes of science, but in this case, I do actually think that drinking helped a little bit. When your inhibitions are lowered, you're more willing to try out things that you wouldn't try out otherwise. So I have noticed in general that I'm less prone to shooting down some of my own ideas when I have had a little bit to drink. I think if I had had that idea at lunchtime, I probably would have thought, it's hard enough to train one neural net, you can't train a second neural net in the inner loop of the outer neural net. That was basically my friend's objection, that trying to train two neural nets at the same time would be too hard. So it was more about the training process? Because my skepticism would be, you know, I'm sure you could train it, but the thing it would converge to would not be able to generate anything reasonable, any kind of reasonable realism. Yeah, so part of what all of us were thinking about when we had this conversation was deep Boltzmann machines; a lot of us in the lab, including me, were big fans of deep Boltzmann machines at the time. They involved two separate processes running at the same time. One of them is called the positive phase, where you load data into the model and tell the model to make the data more likely. The other one is called the negative phase, where you draw samples from the model and tell the model to make those samples less likely. In a deep Boltzmann machine, it's not trivial to generate a sample. You have to actually run an iterative process that gets better and better samples coming closer and closer to the distribution the model represents.
So during the training process, you're always running these two systems at the same time, one that's updating the parameters of the model and another one that's trying to generate samples from the model. And they worked really well on things like MNIST, but a lot of us in the lab, including me, had tried to get deep Boltzmann machines to scale past MNIST to things like generating color photos, and we just couldn't get the two processes to stay synchronized. So when I had the idea for GANs, a lot of people thought that the discriminator would have more or less the same problem as the negative phase in the Boltzmann machine: that trying to train the discriminator in the inner loop, you just couldn't get it to keep up with the generator in the outer loop, and that would prevent it from converging to anything useful. Yeah, I share that intuition. But it turns out not to be the case. A lot of the time with machine learning algorithms, it's really hard to predict ahead of time how well they'll actually perform. You have to just run the experiment and see what happens. And I would say I still today don't have one factor I can put my finger on and say, this is why GANs worked for photo generation and deep Boltzmann machines don't. There are a lot of theory papers showing that under some theoretical settings, the GAN algorithm does actually converge, but those settings are restricted enough that they don't necessarily explain the whole picture in terms of all the results that we see in practice. So taking a step back, can you, in the same way as we talked about deep learning, tell me what generative adversarial networks are? Yeah, so generative adversarial networks are a particular kind of generative model. A generative model is a machine learning model that can train on some set of data. Like, so you have a collection of photos of cats and you want to generate more photos of cats, or you want to estimate a probability distribution over cats, so you can ask how likely it is that some new image is a photo of a cat. GANs are one way of doing this. Some generative models are good at creating new data. Other generative models are good at estimating that density function and telling you how likely particular pieces of data are to come from the same distribution as the training data. GANs are more focused on generating samples rather than estimating the density function. There are some kinds of GANs, like FlowGAN, that can do both, but mostly GANs are about generating samples, generating new photos of cats that look realistic. And they do that completely from scratch. It's analogous to human imagination. When a GAN creates a new image of a cat, it's using a neural network to produce a cat that has not existed before. It isn't doing something like compositing photos together. You're not literally taking the eye off of one cat and the ear off of another cat. It's more of this digestive process where the neural net trains on a lot of data and comes up with some representation of the probability distribution and generates entirely new cats. There are a lot of different ways of building a generative model. What's specific to GANs is that we have a two player game in the game theoretic sense. And as the players in this game compete, one of them becomes able to generate realistic data. The first player is called the generator. It produces output data, such as just images, for example. And at the start of the learning process, it'll just produce completely random images.
The other player is called the discriminator. The discriminator takes images as input and guesses whether they're real or fake. You train it both on real data, so photos that come from your training set, actual photos of cats, and you train it to say that those are real. You also train it on images that come from the generator network and you train it to say that those are fake. As the two players compete in this game, the discriminator tries to become better at recognizing whether images are real or fake. And the generator becomes better at fooling the discriminator into thinking that its outputs are real. And you can analyze this through the language of game theory and find that there's a Nash equilibrium where the generator has captured the correct probability distribution. So in the cat example, it makes perfectly realistic cat photos. And the discriminator is unable to do better than random guessing because all the samples coming from both the data and the generator look equally likely to have come from either source. So do you ever sit back and does it just blow your mind that this thing works? So from very, so it's able to estimate that density function enough to generate realistic images. I mean, does it, yeah. Do you ever sit back and think how does this even, why, this is quite incredible, especially where GANs have gone in terms of realism. Yeah, and not just to flatter my own work, but generative models, all of them have this property that if they really did what we ask them to do, they would do nothing but memorize the training data. Right, exactly. Models that are based on maximizing the likelihood, the way that you obtain the maximum likelihood for a specific training set is you assign all of your probability mass to the training examples and nowhere else. For GANs, the game is played using a training set. So the way that you become unbeatable in the game is you literally memorize training examples. One of my former interns wrote a paper, his name is Vaishnav Nagarajan, and he showed that it's actually hard for the generator to memorize the training data, hard in a statistical learning theory sense, that you can actually create reasons for why it would require quite a lot of learning steps and a lot of observations of different latent variables before you could memorize the training data. That still doesn't really explain why when you produce samples that are new, why do you get compelling images rather than just garbage that's different from the training set. And I don't think we really have a good answer for that, especially if you think about how many possible images are out there and how few images the generative model sees during training. It seems just unreasonable that generative models create new images as well as they do, especially considering that we're basically training them to memorize rather than generalize. I think part of the answer is there's a paper called Deep Image Prior where they show that you can take a convolutional net and you don't even need to learn the parameters of it at all, you just use the model architecture. And it's already useful for things like inpainting images. I think that shows us that the convolutional network architecture captures something really important about the structure of images. And we don't need to actually use the learning to capture all the information coming out of the convolutional net. That would imply that it would be much harder to make generative models in other domains. 
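To make the two-player game just described concrete, here is a hedged, minimal sketch of a GAN on a one-dimensional toy problem rather than images: the "data" are samples from a Gaussian, the generator is a learned affine transform of noise, and the discriminator is a small logistic model on quadratic features. Real GANs use deep networks and a framework's automatic differentiation; the hand-written gradients, architecture, and numbers below are purely illustrative.

# A minimal GAN on a 1-D toy distribution, alternating the two updates
# described above: discriminator (real vs. fake), then generator.
import numpy as np

rng = np.random.default_rng(0)
real_mu, real_sigma = 3.0, 1.0           # the distribution to imitate

a, b = 0.1, 0.0                          # generator: x_fake = a * z + b
u = np.zeros(3)                          # discriminator on [1, x, x^2]

def d_prob(x):
    logits = u[0] + u[1] * x + u[2] * x ** 2
    return 1.0 / (1.0 + np.exp(-logits))

lr_d, lr_g, batch = 0.05, 0.05, 128
for step in range(4000):
    x_real = rng.normal(real_mu, real_sigma, size=batch)
    z = rng.normal(size=batch)
    x_fake = a * z + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((x_real, 1.0), (x_fake, 0.0)):
        dlogit = d_prob(x) - label                    # d loss / d logit
        u -= lr_d * np.array([np.mean(dlogit),
                              np.mean(dlogit * x),
                              np.mean(dlogit * x ** 2)])

    # Generator update: push D(fake) toward 1 (non-saturating loss).
    p = d_prob(x_fake)
    dlogit = -(1.0 - p)                               # d(-log D) / d logit
    dx = dlogit * (u[1] + 2.0 * u[2] * x_fake)        # chain rule through D
    a -= lr_g * np.mean(dx * z)
    b -= lr_g * np.mean(dx)

print(f"target distribution:    mean {real_mu:.2f}, std {real_sigma:.2f}")
print(f"generator now produces: mean {b:.2f}, std {abs(a):.2f}")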
So far, we're able to make reasonable speech models and things like that. But to be honest, we haven't actually explored a whole lot of different data sets all that much. We don't, for example, see a lot of deep learning models of like biology data sets where you have lots of microarrays measuring the amount of different enzymes and things like that. So we may find that some of the progress that we've seen for images and speech turns out to really rely heavily on the model architecture. And we were able to do what we did for vision by trying to reverse engineer the human visual system. And maybe it'll turn out that we can't just use that same trick for arbitrary kinds of data. Right, so there's aspect to the human vision system, the hardware of it, that makes it without learning, without cognition, just makes it really effective at detecting the patterns we see in the visual world. Yeah. Yeah, that's really interesting. What, in a big, quick overview, in your view, what types of GANs are there and what other generative models besides GANs are there? Yeah, so it's maybe a little bit easier to start with what kinds of generative models are there other than GANs. So most generative models are likelihood based where to train them, you have a model that tells you how much probability it assigns to a particular example and you just maximize the probability assigned to all the training examples. It turns out that it's hard to design a model that can create really complicated images or really complicated audio waveforms and still have it be possible to estimate the likelihood function from a computational point of view. Most interesting models that you would just write down intuitively, it turns out that it's almost impossible to calculate the amount of probability they assign to a particular point. So there's a few different schools of generative models in the likelihood family. One approach is to very carefully design the model so that it is computationally tractable to measure the density it assigns to a particular point. So there are things like autoregressive models, like PixelCNN, those basically break down the probability distribution into a product over every single feature. So for an image, you estimate the probability of each pixel given all of the pixels that came before it. There's tricks where if you want to measure the density function, you can actually calculate the density for all these pixels more or less in parallel. Generating the image still tends to require you to go one pixel at a time, and that can be very slow. But there are, again, tricks for doing this in a hierarchical pattern where you can keep the runtime under control. Are the quality of the images it generates, putting runtime aside, pretty good? They're reasonable, yeah. I would say a lot of the best results are from GANs these days, but it can be hard to tell how much of that is based on who's studying which type of algorithm, if that makes sense. The amount of effort invested in a particular. Yeah, or like the kind of expertise. So a lot of people who've traditionally been excited about graphics or art and things like that have gotten interested in GANs. And to some extent, it's hard to tell are GANs doing better because they have a lot of graphics and art experts behind them, or are GANs doing better because they're more computationally efficient, or are GANs doing better because they prioritize the realism of samples over the accuracy of the density function. 
I think all of those are potentially valid explanations, and it's hard to tell. So can you give a brief history of GANs, from your 2014 paper on? Yeah, so a few highlights. In the first paper, we just showed that GANs basically work. If you look back now at the samples we had, they look terrible. On the CIFAR 10 data set, you can't even recognize objects in them. Your paper, sorry, you used CIFAR 10? We used MNIST, which is little handwritten digits. We used the Toronto Face database, which is small grayscale photos of faces. We did have recognizable faces. My colleague Bing Xu put together the first GAN face model for that paper. We also had the CIFAR 10 data set, which is things like very small 32 by 32 pixel images of cars and cats and dogs. For that, we didn't get recognizable objects, but all the deep learning people back then were really used to looking at these failed samples and kind of reading them like tea leaves. And people who are used to reading the tea leaves recognized that our tea leaves at least looked different. Maybe not necessarily better, but there was something unusual about them. And that got a lot of us excited. One of the next really big steps was LAPGAN by Emily Denton and Soumith Chintala at Facebook AI Research, where they actually got really good high resolution photos working with GANs for the first time. They had a complicated system where they generated the image starting at low res and then scaling up to high res, but they were able to get it to work. And then in 2015, I believe later that same year, Alec Radford and Soumith Chintala and Luke Metz published the DCGAN paper, which stands for deep convolutional GAN. It's kind of a non-unique name, because these days basically all GANs, and even some before that, were deep and convolutional, but they just kind of picked a name for a really great recipe, where they were able, using only one model instead of a multi-step process, to actually generate realistic images of faces and things like that. That was sort of like the beginning of the Cambrian explosion of GANs. Like, once you had animals that had a backbone, you suddenly got lots of different versions of fish and four legged animals and things like that. So DCGAN became kind of the backbone for many different models that came out. It's used as a baseline even still. Yeah, yeah. And so from there, I would say some interesting things we've seen are, there's a lot you can say about how just the quality of standard image generation GANs has increased, but what's also maybe more interesting on an intellectual level is how the things you can use GANs for have also changed. One thing is that you can use them to learn classifiers without having to have class labels for every example in your training set. So that's called semi supervised learning. My colleague at OpenAI, Tim Salimans, who's at Brain now, wrote a paper called Improved Techniques for Training GANs. I'm a coauthor on this paper, but I can't claim any credit for this particular part. One thing he showed in the paper is that you can take the GAN discriminator and use it as a classifier that actually tells you, this image is a cat, this image is a dog, this image is a car, this image is a truck, and so on. Not just to say whether the image is real or fake, but if it is real, to say specifically what kind of object it is. And he found that you can train these classifiers with far fewer labeled examples than traditional classifiers.
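As a schematic illustration of the discriminator-as-classifier idea just mentioned (not the exact recipe from the Improved Techniques paper), the discriminator can be given K real-class outputs plus one "fake" output, so a single softmax head answers both "which class is this?" and "is it real at all?".

# Schematic only: a K+1-way discriminator head for semi-supervised GANs.
import numpy as np

K = 10                                   # e.g. the ten MNIST digit classes
rng = np.random.default_rng(0)

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Stand-in for the discriminator's output on one image: K+1 logits.
logits = rng.normal(size=K + 1)
p = softmax(logits)
p_classes, p_fake = p[:K], p[K]

print("predicted class if real:", int(np.argmax(p_classes)))
print(f"probability the image is real: {1.0 - p_fake:.3f}")
# Labeled real images supervise the K class outputs, unlabeled real images
# supervise "some real class" vs. fake, and generator samples supervise fake.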
So if you supervise based not just on your discrimination ability, but also on your ability to classify, you're going to converge much faster to being effective as a discriminator. Yeah. So for example, for the MNIST dataset, you want to look at an image of a handwritten digit and say whether it's a zero, a one, or a two, and so on. To get down to less than 1% error required around 60,000 examples until maybe about 2014 or so. In 2016, with this semi supervised GAN project, Tim was able to get below 1% error using only 100 labeled examples. So that was about a 600X decrease in the amount of labels that he needed. He's still using more images than that, but he doesn't need to have each of them labeled as, this one's a one, this one's a two, this one's a zero, and so on. Then, for GANs to be able to generate recognizable objects, so objects from a particular class, you still need labeled data, because you need to know what it means to be a particular class: cat, dog. How do you think we can move away from that? Yeah, some researchers at Brain Zurich actually just released a really great paper on semi supervised GANs where their goal isn't to classify, it's to make recognizable objects despite not having a lot of labeled data. They were working off of DeepMind's BigGAN project, and they showed that they can match the performance of BigGAN using only 10%, I believe, of the labels. BigGAN was trained on the ImageNet data set, which is about 1.2 million images, and had all of them labeled. This latest project from Brain Zurich shows that they're able to get away with only having about 10% of the images labeled. And they do that essentially using a clustering algorithm, where the discriminator learns to assign the objects to groups, and then this understanding that objects can be grouped into similar types helps it to form more realistic ideas of what should be appearing in the image, because it knows that every image it creates has to come from one of these archetypal groups, rather than just being some arbitrary image. If you train a GAN with no class labels, you tend to get things that look sort of like grass or water or brick or dirt, but without necessarily a lot going on in them. And I think that's partly because, if you look at a large ImageNet image, the object doesn't necessarily occupy the whole image. And so you learn to create realistic sets of pixels, but you don't necessarily learn that the object is the star of the show and you want it to be in every image you make. Yeah, I've heard you talk about the horse-to-zebra CycleGAN mapping and how it turns out, again, thought provoking, that horses are usually on grass and zebras are usually on drier terrain. So when you're doing that kind of generation, you're going to end up generating greener horses or whatever, so those are connected together. It's not able to segment, to generate the object separately from its background. So are there other types of games you come across in your mind that neural networks can play with each other to be able to solve problems? Yeah, the one that I spend most of my time on is in security. You can model most interactions as a game where there's attackers trying to break your system, and you're the defender trying to build a resilient system. There's also domain adversarial learning, which is an approach to domain adaptation that looks really a lot like GANs.
The authors had the idea before the GAN paper came out, their paper came out a little bit later and they're very nice and cited the GAN paper, but I know that they actually had the idea before it came out. Domain adaptation is when you want to train a machine learning model in one setting called a domain and then deploy it in another domain later. And you would like it to perform well in the new domain, even though the new domain is different from how it was trained. So for example, you might want to train on a really clean image data set like ImageNet, but then deploy on users phones where the user is taking pictures in the dark and pictures while moving quickly and just pictures that aren't really centered or composed all that well. When you take a normal machine learning model, it often degrades really badly when you move to the new domain because it looks so different from what the model was trained on. Domain adaptation algorithms try to smooth out that gap and the domain adversarial approach is based on training a feature extractor where the features have the same statistics regardless of which domain you extracted them on. So in the domain adversarial game, you have one player that's a feature extractor and another player that's a domain recognizer. The domain recognizer wants to look at the output of the feature extractor and guess which of the two domains the features came from. So it's a lot like the real versus fake discriminator in GANs and then the feature extractor, you can think of as loosely analogous to the generator in GANs, except what it's trying to do here is both fool the domain recognizer into not knowing which domain the data came from and also extract features that are good for classification. So at the end of the day, in the cases where it works out, you can actually get features that work about the same in both domains. Sometimes this has a drawback where in order to make things work the same in both domains, it just gets worse at the first one. But there are a lot of cases where it actually works out well on both. So do you think of GANs being useful in the context of data augmentation? Yeah, one thing you could hope for with GANs is you could imagine I've got a limited training set and I'd like to make more training data to train something else like a classifier. You could train the GAN on the training set and then create more data and then maybe the classifier would perform better on the test set after training on this bigger GAN generated data set. So that's the simplest version of something you might hope would work. I've never heard of that particular approach working, but I think there's some closely related things that I think could work in the future and some that actually already have worked. So if we think a little bit about what we'd be hoping for if we use the GAN to make more training data, we're hoping that the GAN will generalize to new examples better than the classifier would have generalized if it was trained on the same data. And I don't know of any reason to believe that the GAN would generalize better than the classifier would, but what we might hope for is that the GAN could generalize differently from a specific classifier. So one thing I think is worth trying that I haven't personally tried but someone could try is what if you trained a whole lot of different generative models on the same training set, create samples from all of them and then train a classifier on that? 
Because each of the generative models might generalize in a slightly different way. They might capture many different axes of variation that one individual model wouldn't and then the classifier can capture all of those ideas by training on all of their data. So it'd be a little bit like making an ensemble of classifiers. And I think that... Ensemble of GANs in a way. I think that could generalize better. The other thing that GANs are really good for is not necessarily generating new data that's exactly like what you already have, but by generating new data that has different properties from the data you already had. One thing that you can do is you can create differentially private data. So suppose that you have something like medical records and you don't want to train a classifier on the medical records and then publish the classifier because someone might be able to reverse engineer some of the medical records you trained on. There's a paper from Casey Greene's lab that shows how you can train a GAN using differential privacy. And then the samples from the GAN still have the same differential privacy guarantees as the parameters of the GAN. So you can make fake patient data for other researchers to use. And they can do almost anything they want with that data because it doesn't come from real people. And the differential privacy mechanism gives you clear guarantees on how much the original people's data has been protected. That's really interesting, actually. I haven't heard you talk about that before. In terms of fairness, I've seen from AAAI, your talk, how can adversarial machine learning help models be more fair with respect to sensitive variables? Yeah, so there's a paper from Amos Storkey's lab about how to learn machine learning models that are incapable of using specific variables. So say, for example, you wanted to make predictions that are not affected by gender. It isn't enough to just leave gender out of the input to the model. You can often infer gender from a lot of other characteristics. Like say that you have the person's name, but you're not told their gender. Well, if their name is Ian, they're kind of obviously a man. So what you'd like to do is make a machine learning model that can still take in a lot of different attributes and make a really accurate informed prediction, but be confident that it isn't reverse engineering gender or another sensitive variable internally. You can do that using something very similar to the domain adversarial approach, where you have one player that's a feature extractor and another player that's a feature analyzer. And you want to make sure that the feature analyzer is not able to guess the value of the sensitive variable that you're trying to keep private. Right, that's, yeah, I love this approach. So yeah, with the feature, you're not able to infer the sensitive variables. Brilliant, that's quite brilliant and simple actually. Another way I think that GANs in particular could be used for fairness would be to make something like a CycleGAN, where you can take data from one domain and convert it into another. We've seen CycleGAN turning horses into zebras. We've seen other unsupervised GANs made by Ming-Yu Liu doing things like turning day photos into night photos. I think for fairness, you could imagine taking records for people in one group and transforming them into analogous people in another group and testing to see if they're treated equitably across those two groups.
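(A rough sketch of the "ensemble of generative models for augmentation" idea suggested above, assuming class conditional generators; the gen(z, y) call and the latent_dim attribute are assumed interfaces for illustration, not a standard API.)

```python
import torch

def build_augmented_dataset(generators, num_classes, real_x, real_y, n_per_gen=10_000):
    """Pool class conditional samples from several generators with the real data."""
    xs, ys = [real_x], [real_y]
    for gen in generators:                        # each model may generalize differently
        y = torch.randint(0, num_classes, (n_per_gen,))
        z = torch.randn(n_per_gen, gen.latent_dim)
        xs.append(gen(z, y))                      # class conditional samples
        ys.append(y)
    return torch.cat(xs), torch.cat(ys)           # train an ordinary classifier on this
```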
There's a lot of things that'd be hard to get right to make sure that the conversion process itself is fair. And I don't think it's anywhere near something that we could actually use yet, but if you could design that conversion process very carefully, it might give you a way of doing audits where you say, what if we took people from this group, converted them into equivalent people in another group, does the system actually treat them how it ought to? That's also really interesting. You know, in popular press and in general, in our imagination, you think, well, GANs are able to generate data and you start to think about deep fakes or being able to sort of maliciously generate data that fakes the identity of other people. Is this something of a concern to you? Is this something, if you look 10, 20 years into the future, is that something that pops up in your work, in the work of the community that's working on generating models? I'm a lot less concerned about 20 years from now than the next few years. I think there'll be a kind of bumpy cultural transition as people encounter this idea that there can be very realistic videos and audio that aren't real. I think 20 years from now, people will mostly understand that you shouldn't believe something is real just because you saw a video of it. People will expect to see that it's been cryptographically signed or have some other mechanism to make them believe that the content is real. There's already people working on this. Like there's a startup called Truepic that provides a lot of mechanisms for authenticating that an image is real. They're maybe not quite up to having a state actor try to evade their verification techniques, but it's something that people are already working on and I think we'll get it right eventually. So you think authentication will eventually win out. So being able to authenticate that this is real and this is not. Yeah. As opposed to GANs just getting better and better or generative models being able to get better and better to where the nature of what is real is normal. I don't think we'll ever be able to look at the pixels of a photo and tell you for sure that it's real or not real. And I think it would actually be somewhat dangerous to rely on that approach too much. If you make a really good fake detector and then someone's able to fool your fake detector and your fake detector says this image is not fake, then it's even more credible than if you'd never made a fake detector in the first place. What I do think we'll get to is systems that we can kind of use behind the scenes to make estimates of what's going on and maybe not like use them in court for a definitive analysis. I also think we will likely get better authentication systems where, imagine that every phone cryptographically signs everything that comes out of it. You wouldn't be able to conclusively tell that an image was real, but you would be able to tell somebody who knew the appropriate private key for this phone was actually able to sign this image and upload it to this server at this timestamp. Okay, so you could imagine maybe you make phones that have the private keys hardware embedded in them. If like a state security agency really wants to infiltrate the company, they could probably plant a private key of their choice or break open the chip and learn the private key or something like that. But it would make it a lot harder for an adversary with fewer resources to fake things. For most of us it would be okay.
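(A minimal sketch of the per device signing idea, using the pyca/cryptography library's Ed25519 primitives; the key handling, timestamp scheme, and server side are illustrative assumptions, not any real vendor's protocol.)

```python
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()   # in practice would live in secure hardware
public_key = device_key.public_key()        # published / registered with a server

def sign_capture(image_bytes: bytes):
    """Sign the image together with a capture timestamp, as the phone would."""
    ts = time.time()
    signature = device_key.sign(image_bytes + str(ts).encode())
    return signature, ts                     # upload image, timestamp, and signature

def verify_capture(image_bytes: bytes, ts: float, signature: bytes) -> bool:
    """Check that whoever held this device's private key signed exactly this content."""
    try:
        public_key.verify(signature, image_bytes + str(ts).encode())
        return True
    except InvalidSignature:
        return False
```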
So you mentioned the beer and the bar and the new ideas. You were able to implement this or come up with this new idea pretty quickly and implement it pretty quickly. Do you think there's still many such groundbreaking ideas in deep learning that could be developed so quickly? Yeah, I do think that there are a lot of ideas that can be developed really quickly. GANs were probably a little bit of an outlier on the whole like one hour timescale. But just in terms of like low resource ideas where you do something really different on the algorithm scale and get a big payback. I think it's not as likely that you'll see that in terms of things like core machine learning technologies like a better classifier or a better reinforcement learning algorithm or a better generative model. If I had the GAN idea today, it would be a lot harder to prove that it was useful than it was back in 2014 because I would need to get it running on something like ImageNet or Celeb A at high resolution. You know, those take a while to train. You couldn't train it in an hour and know that it was something really new and exciting. Back in 2014, training on MNIST was enough. But there are other areas of machine learning where I think a new idea could actually be developed really quickly with low resources. What's your intuition about what areas of machine learning are ripe for this? Yeah, so I think fairness and interpretability are areas where we just really don't have any idea how anything should be done yet. Like for interpretability, I don't think we even have the right definitions. And even just defining a really useful concept, you don't even need to run any experiments, could have a huge impact on the field. We've seen that, for example, in differential privacy that Cynthia Dwork and her collaborators made this technical definition of privacy where before a lot of things were really mushy. And then with that definition, you could actually design randomized algorithms for accessing databases and guarantee that they preserved individual people's privacy in like a mathematical quantitative sense. Right now, we all talk a lot about how interpretable different machine learning algorithms are, but it's really just people's opinion. And everybody probably has a different idea of what interpretability means in their head. If we could define some concept related to interpretability that's actually measurable, that would be a huge leap forward even without a new algorithm that increases that quantity. And also once we had the definition of differential privacy, it was fast to get the algorithms that guaranteed it. So you could imagine once we have definitions of good concepts and interpretability, we might be able to provide the algorithms that have the interpretability guarantees quickly too. So what do you think it takes to build a system with human level intelligence as we quickly venture into the philosophical? So artificial general intelligence, what do you think it takes? I think that it definitely takes better environments than we currently have for training agents that we want them to have a really wide diversity of experiences. I also think it's gonna take really a lot of computation. It's hard to imagine exactly how much. So you're optimistic about simulation, simulating a variety of environments as the path forward? I think it's a necessary ingredient. Yeah, I don't think that we're going to get to artificial general intelligence by training on fixed data sets or by thinking really hard about the problem. 
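(For reference, the technical definition being credited to Cynthia Dwork, stated as a formula: a randomized mechanism M is epsilon differentially private if, for any two datasets D and D' differing in one person's record and any set S of possible outputs,)

```latex
\[
  \Pr[\,M(D) \in S\,] \;\le\; e^{\varepsilon}\,\Pr[\,M(D') \in S\,].
\]
```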
I think that the agent really needs to interact and have a variety of experiences within the same lifespan. And today we have many different models that can each do one thing. And we tend to train them on one data set or one RL environment. Sometimes there are actually papers about getting one set of parameters to perform well in many different RL environments. But we don't really have anything like an agent that goes seamlessly from one type of experience to another and really integrates all the different things that it does over the course of its life. When we do see multi agent environments, they tend to be, or so many multi environment agents, they tend to be similar environments. Like all of them are playing like an action based video game. We don't really have an agent that goes from playing a video game to like reading the Wall Street Journal to predicting how effective a molecule will be as a drug or something like that. What do you think is a good test for intelligence in your view? There's been a lot of benchmarks started with the, with Alan Turing, natural conversation being a good benchmark for intelligence. What would Ian Goodfellow sit back and be really damn impressed if a system was able to accomplish? Something that doesn't take a lot of glue from human engineers. So imagine that instead of having to go to the CIFAR website and download CIFAR 10 and then write a Python script to parse it and all that, you could just point an agent at the CIFAR 10 problem and it downloads and extracts the data and trains a model and starts giving you predictions. I feel like something that doesn't need to have every step of the pipeline assembled for it, definitely understands what it's doing. Is AutoML moving into that direction or are you thinking way even bigger? AutoML has mostly been moving toward, once we've built all the glue, can the machine learning system design the architecture really well? And so I'm more of saying like, if something knows how to pre process the data so that it successfully accomplishes the task, then it would be very hard to argue that it doesn't truly understand the task in some fundamental sense. And I don't necessarily know that that's like the philosophical definition of intelligence, but that's something that would be really cool to build that would be really useful and would impress me and would convince me that we've made a step forward in real AI. So you give it like the URL for Wikipedia and then next day expect it to be able to solve CIFAR 10. Or like you type in a paragraph explaining what you want it to do and it figures out what web searches it should run and downloads all the necessary ingredients. So you have a very clear, calm way of speaking, no ums, easy to edit. I've seen comments for both you and I have been identified as both potentially being robots. If you have to prove to the world that you are indeed human, how would you do it? I can understand thinking that I'm a robot. It's the flip side of the Turing test, I think. Yeah, yeah, the prove your human test. Intellectually, so you have to... Is there something that's truly unique in your mind? Does it go back to just natural language again? Just being able to talk the way out of it. Proving that I'm not a robot with today's technology. Yeah, that's pretty straightforward. Like my conversation today hasn't veered off into talking about the stock market or something because of my training data. 
But I guess more generally trying to prove that something is real from the content alone is incredibly hard. That's one of the main things I've gotten out of my GAN research, that you can simulate almost anything. And so you have to really step back to a separate channel to prove that something is real. So like, I guess I should have had myself stamped on a blockchain when I was born or something, but I didn't do that. So according to my own research methodology, there's just no way to know at this point. So what, last question, problem stands out for you that you're really excited about challenging in the near future? So I think resistance to adversarial examples, figuring out how to make machine learning secure against an adversary who wants to interfere and control it, that is one of the most important things researchers today could solve. In all domains, image, language, driving, and everything. I guess I'm most concerned about domains we haven't really encountered yet. Like imagine 20 years from now, when we're using advanced AIs to do things we haven't even thought of yet. Like if you ask people, what are the important problems in security of phones in like 2002? I don't think we would have anticipated that we're using them for nearly as many things as we're using them for today. I think it's gonna be like that with AI that you can kind of try to speculate about where it's going, but really the business opportunities that end up taking off would be hard to predict ahead of time. What you can predict ahead of time is that almost anything you can do with machine learning, you would like to make sure that people can't get it to do what they want rather than what you want, just by showing it a funny QR code or a funny input pattern. And you think that the set of methodology to do that can be bigger than any one domain? I think so, yeah. Yeah, like one methodology that I think is, not a specific methodology, but like a category of solutions that I'm excited about today is making dynamic models that change every time they make a prediction. So right now we tend to train models and then after they're trained, we freeze them and we just use the same rule to classify everything that comes in from then on. That's really a sitting duck from a security point of view. If you always output the same answer for the same input, then people can just run inputs through until they find a mistake that benefits them. And then they use the same mistake over and over and over again. I think having a model that updates its predictions so that it's harder to predict what you're gonna get will make it harder for an adversary to really take control of the system and make it do what they want it to do. Yeah, models that maintain a bit of a sense of mystery about them, because they always keep changing. Ian, thanks so much for talking today, it was awesome. Thank you for coming in, it's great to see you.
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
The following is a conversation with Oriol Vinyals. He's a senior research scientist at Google DeepMind, and before that, he was at Google Brain and Berkeley. His research has been cited over 39,000 times. He's truly one of the most brilliant and impactful minds in the field of deep learning. He's behind some of the biggest papers and ideas in AI, including sequence to sequence learning, audio generation, image captioning, neural machine translation, and, of course, reinforcement learning. He's a lead researcher of the AlphaStar project, creating an agent that defeated a top professional at the game of StarCraft. This conversation is part of the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F R I D. And now, here's my conversation with Oriol Vinyals. You spearheaded the DeepMind team behind AlphaStar that recently beat a top professional player at StarCraft. So you have an incredible wealth of work in deep learning and a bunch of fields, but let's talk about StarCraft first. Let's go back to the very beginning, even before AlphaStar, before DeepMind, before deep learning first. What came first for you, a love for programming or a love for video games? I think for me, it definitely came first the drive to play video games. I really liked computers. I didn't really code much, but what I would do is I would just mess with the computer, break it and fix it. That was the level of skills, I guess, that I gained in my very early days, I mean, when I was 10 or 11. And then I really got into video games, especially StarCraft, actually, the first version. I spent most of my time just playing kind of pseudo professionally, as professionally as you could play back in 98 in Europe, which was not a very main scene like what's called nowadays eSports. Right, of course, in the 90s. So how'd you get into StarCraft? What was your favorite race? How did you develop your skill? What was your strategy? All that kind of thing. So as a player, I tended to try to play not many games, not to kind of disclose the strategies that I kind of developed. And I like to play random, actually, not in competitions, but just to... I think in StarCraft, there's three main races and I found it very useful to play with all of them. And so I would choose random many times, even sometimes in tournaments, to gain skill on the three races because it's not only how you play against someone, but also, if you understand the race because you played it, you also understand what's annoying, then when you're on the other side, what to do to annoy that person, to try to gain advantages here and there and so on. So I actually played random, although I must say in terms of favorite race, I really liked Zerg. I was probably best at Zerg and that's probably what I tend to use towards the end of my career before starting university. So let's step back a little bit. Could you try to describe StarCraft to people that may never have played video games, especially the massively online variety like StarCraft? So StarCraft is a real time strategy game. And the way to think about StarCraft, perhaps if you understand a bit chess, is that there's a board which is called map or the map where people play against each other. There's obviously many ways you can play, but the most interesting one is the one versus one setup where you just play against someone else or even the built in AI, right?
Blizzard put a system that can play the game reasonably well if you don't know how to play. And then in this board, you have again, pieces like in chess, but these pieces are not there initially like they are in chess. You actually need to decide to gather resources to decide which pieces to build. So in a way you're starting almost with no pieces. You start gathering resources in StarCraft. There's minerals and gas that you can gather. And then you must decide how much do you wanna focus for instance, on gathering more resources or starting to build units or pieces. And then once you have enough pieces or maybe like attack, a good attack composition, then you go and attack the other side of the map. And now the other main difference with chess is that you don't see the other side of the map. So you're not seeing the moves of the enemy. It's what we call partially observable. So as a result, you must not only decide trading off economy versus building your own units, but you also must decide whether you wanna scout to gather information, but also by scouting, you might be giving away some information that you might be hiding from the enemy. So there's a lot of complex decision making all in real time. There's also unlike chess, this is not a turn based game. You play basically all the time continuously and thus some skill in terms of speed and accuracy of clicking is also very important. And people that train for this really play this game at an amazing skill level. I've seen many times these and if you can witness this live, it's really, really impressive. So in a way, it's kind of a chess where you don't see the other side of the board, you're building your own pieces and you also need to gather resources to basically get some money to build other buildings, pieces, technology and so on. From the perspective of a human player, the difference between that and chess or maybe that and a game like turn based strategy like Heroes of Might and Magic is that there's an anxiety because you have to make these decisions really quickly. And if you are not actually aware of what decisions work, it's a very stressful balance. Everything you describe is actually quite stressful, difficult to balance for an amateur human player. I don't know if it gets easier at the professional level, like if they're fully aware of what they have to do, but at the amateur level, there's this anxiety. Oh crap, I'm being attacked. Oh crap, I have to build up resource. Oh, I have to probably expand. And all these, the time, the real time strategy aspect is really stressful and computationally I'm sure difficult. We'll get into it. But for me, Battle.net, so StarCraft was released in 98, 20 years ago, which is hard to believe. And Blizzard Battle.net with Diablo in 96 came out. And to me, it might be a narrow perspective, but it changed online gaming and perhaps society forever. Yeah. But I may have made way too narrow viewpoint, but from your perspective, can you talk about the history of gaming over the past 20 years? Is this, how transformational, how important is this line of games? Right, so I think I kind of was an active gamer whilst this was developing, the internet, online gaming. So for me, the way it came was I played other games, strategy related, I played a bit of Command & Conquer, and then I played Warcraft II, which is from Blizzard. But at the time, I didn't know, I didn't understand about what Blizzard was or anything.
Warcraft II was just a game, which was actually very similar to StarCraft in many ways. It's also real time strategy game where there's orcs and humans, so there's only two races. But it was offline. And it was offline, right? So I remember a friend of mine came to school, say, oh, there's this new cool game called StarCraft. And I just said, oh, this sounds like just a copy of Warcraft II, until I kind of installed it. And at the time, I am from Spain, so we didn't have very good internet, right? So there was, for us, StarCraft became first kind of an offline experience where you kind of start to play these missions, right? You play against some sort of scripted things to develop the story of the characters in the game. And then later on, I start playing against the built in AI, and I thought it was impossible to defeat it. Then eventually you defeat one and you can actually play against seven built in AIs at the same time, which also felt impossible. But actually, it's not that hard to beat seven built in AIs at once. So once we achieved that, also we discovered that we could play, as I said, internet wasn't that great, but we could play with the LAN, right? Like basically against each other if we were in the same place because you could just connect machines with like cables, right? So we started playing in LAN mode and as a group of friends, and it was really, really like much more entertaining than playing against AIs. And later on, as internet was starting to develop and being a bit faster and more reliable, then it's when I started experiencing Battle.net, which is this amazing universe, not only because of the fact that you can play the game against anyone in the world, but you can also get to know more people. You just get exposed to now like this vast variety of, it's kind of a bit when the chats came about, right? There was a chat system. You could play against people, but you could also chat with people, not only about StarCraft, but about anything. And that became a way of life for kind of two years. And obviously then it became like kind of, it exploded in me in that I started to play more seriously, going to tournaments and so on and so forth. Do you have a sense on a societal, sociological level, what's this whole part of society that many of us are not aware of and it's a huge part of society, which is gamers. I mean, every time I come across that in YouTube or streaming sites, I mean, this is the huge number of people play games religiously. Do you have a sense of those folks, especially now that you've returned to that realm a little bit on the AI side? Yeah, so in fact, even after StarCraft, I actually played World of Warcraft, which is maybe the main sort of online worlds or in presence that you get to interact with lots of people. So I played that for a little bit. It was to me, it was a bit less stressful than StarCraft because winning was kind of a given. You just put in this world and you can always complete missions. But I think it was actually the social aspect of especially StarCraft first and then games like World of Warcraft really shaped me in a very interesting ways because what you get to experience is just people you wouldn't usually interact with, right? So even nowadays, I still have many Facebook friends from the area where I played online and their ways of thinking is even political. They just, we don't live in, like we don't interact in the real world, but we were connected by basically fiber.
And that way I actually get to understand a bit better that we live in a diverse world. And these were just connections that were made by, because, you know, I happened to go in a city in a virtual city as a priest and I met this warrior and we became friends and then we start like playing together, right? So I think it's transformative and more and more and more people are more aware of it. I mean, it's becoming quite mainstream, but back in the day, as you were saying in 2000, 2005, even it was very, still very strange thing to do, especially in Europe. I think there were exceptions like Korea, for instance, it was amazing that everything happened so early in terms of cybercafes, like if you go to Seoul, it's a city that back in the day, StarCraft was kind of, you could be a celebrity by playing StarCraft, but this was like 99, 2000, right? It's not like recently. So yeah, it's quite interesting to look back and yeah, I think it's changing society. The same way, of course, like technology and social networks and so on are also transforming things. And a quick tangent, let me ask, you're also one of the most productive people in your particular chosen passion and path in life. And yet you're also appreciate and enjoy video games. Do you think it's possible to do, to enjoy video games in moderation? Someone told me that you could choose two out of three. When I was playing video games, you could choose having a girlfriend, playing video games or studying. And I think for the most part, it was relatively true. These things do take time. Games like StarCraft, if you take the game pretty seriously and you wanna study it, then you obviously will dedicate more time to it. And I definitely took gaming and obviously studying very seriously. I love learning science and et cetera. So to me, especially when I started university undergrad, I kind of step off StarCraft. I actually fully stopped playing. And then World of Warcraft was a bit more casual. You could just connect online. And I mean, it was fun. But as I said, that was not as much time investment as it was for me in StarCraft. Okay, so let's get into AlphaStar. What are the, you're behind the team. So DeepMind has been working on StarCraft and released a bunch of cool open source agents and so on the past few years. But AlphaStar really is the moment where the first time you beat a world class player. So what are the parameters of the challenge in the way that AlphaStar took it on and how did you and David and the rest of the DeepMind team get into it? Consider that you can even beat the best in the world or top players. I think it all started back in 2015. Actually, I'm lying. I think it was 2014 when DeepMind was acquired by Google. And I at the time was at Google Brain, which was in California, is still in California. We had this summit where we got together, the two groups. So Google Brain and Google DeepMind got together and we gave a series of talks. And given that they were doing deep reinforcement learning for games, I decided to bring up part of my past, which I had developed at Berkeley, like this thing which we call Berkeley OverMind, which is really just a StarCraft one bot, right? So I talked about that. And I remember Demis just came to me and said, well, maybe not now, it's perhaps a bit too early, but you should just come to DeepMind and do this again with deep reinforcement learning, right? And at the time it sounded very science fiction for several reasons. 
But then in 2016, when I actually moved to London and joined DeepMind transferring from Brain, it became apparent that because of the AlphaGo moment and kind of Blizzard reaching out to us to say, wait, like, do you want the next challenge? And also me being full time at DeepMind, so sort of kind of all these came together. And then I went to Irvine in California, to the Blizzard headquarters to just chat with them and try to explain how would it all work before you do anything. And the approach has always been about the learning perspective, right? So in Berkeley, we did a lot of rule based conditioning and if you have more than three units, then go attack. And if the other has more units than me, I retreat and so on and so forth. And of course, the point of deep reinforcement learning, deep learning, machine learning in general is that all these should be learned behavior. So that kind of was the DNA of the project since its inception in 2016, where we just didn't even have an environment to work with. And so that's how it all started really. So if you go back to that conversation with Demis or even in your own head, how far away did you, because we're talking about Atari games, we're talking about Go, which is kind of, if you're honest about it, really far away from StarCraft. In, well, now that you've beaten it, maybe you could say it's close, but it's much, it seems like StarCraft is way harder than Go philosophically and mathematically speaking. So how far away did you think you were? Do you think it's 2019 and 18 you could be doing as well as you have? Yeah, when I kind of thought about, okay, I'm gonna dedicate a lot of my time and focus on this. And obviously I do a lot of different research in deep learning. So spending time on it, I mean, I really had to kind of think there's gonna be something good happening out of this. So really I thought, well, this sounds impossible. And it probably is impossible to do the full thing, like the full game where you play one versus one and it's only a neural network playing and so on. So it really felt like, I just didn't even think it was possible. But on the other hand, I could see some stepping stones towards that goal. Clearly you could define sub problems in StarCraft and sort of dissect it a bit and say, okay, here is a part of the game, here's another part. And also obviously the fact, so this was really also critical to me, the fact that we could access human replays, right? So Blizzard was very kind. And in fact, they open source these for the whole community where you can just go and it's not every single StarCraft game ever played, but it's a lot of them you can just go and download. And every day they will, you can just query a data set and say, well, give me all the games that were played today. And given my kind of experience with language and sequences and supervised learning, I thought, well, that's definitely gonna be very helpful and something quite unique now, because ever before we had such a large data set of replays, of people playing the game at this scale of such a complex video game, right? So that to me was a precious resource. And as soon as I knew that Blizzard was able to kind of give this to the community, I started to feel positive about something non trivial happening. 
But I also thought the full thing, like really no rules, no single line of code that tries to say, well, I mean, if you see this unit, build a detector, all these, not having any of these specializations seemed really, really, really difficult to me. Intuitively. I do also like that Blizzard was teasing or even trolling you, sort of almost, yeah, pulling you in into this really difficult challenge. Do they have any awareness? What's the interest from the perspective of Blizzard, except just curiosity? Yeah, I think Blizzard has really understood and really bring forward this competitiveness of esports in games. The StarCraft really kind of sparked a lot of, like something that almost was never seen, especially as I was saying, back in Korea. So they just probably thought, well, this is such a pure one versus one setup that it would be great to see if something that can play Atari or Go and then later on chess could even tackle these kind of complex real time strategy game, right? So for them, they wanted to see first, obviously whether it was possible, if the game they created was in a way solvable to some extent. And I think on the other hand, they also are a pretty modern company that innovates a lot. So just starting to understand AI for them to how to bring AI into games is not AI for games, but games for AI, right? I mean, both ways I think can work. And we obviously at DeepMind use games for AI, right? To drive AI progress, but Blizzard might actually be able to do and many other companies to start to understand and do the opposite. So I think that is also something they can get out of these. And they definitely, we have brainstormed a lot about these, right? But one of the interesting things to me about StarCraft and Diablo and these games that Blizzard has created is the task of balancing classes, for example. Sort of making the game fair from the starting point and then let skill determine the outcome. Is there, I mean, can you first comment, there's three races, Zerg, Protoss and Terran. I don't know if I've ever said that out loud. Is that how you pronounce it? Terran? Yeah, Terran. Yeah. Yeah, I don't think I've ever in person interacted with anybody about StarCraft, that's funny. So they seem to be pretty balanced. I wonder if the AI, the work that you're doing with AlphaStar would help balance them even further. Is that something you think about? Is that something that Blizzard is thinking about? Right, so balancing when you add a new unit or a new spell type is obviously possible given that you can always train or pre train at scale some agent that might start using that in unintended ways. But I think actually, if you understand how StarCraft has kind of co evolved with players, in a way, I think it's actually very cool the ways that many of the things and strategies that people came up with, right? So I think we've seen it over and over in StarCraft that Blizzard comes up with maybe a new unit and then some players get creative and do something kind of unintentional or something that Blizzard designers that just simply didn't test or think about. And then after that becomes kind of mainstream in the community, Blizzard patches the game and then they kind of maybe weaken that strategy or make it actually more interesting but a bit more balanced. So these kind of continual talk between players and Blizzard is kind of what has defined them actually in actually most games in StarCraft but also in World of Warcraft, they would do that. 
There are several classes and it would be not good that everyone plays absolutely the same race and so on, right? So I think they do care about balancing of course and they do a fair amount of testing but it's also beautiful to also see how players get creative anyways. And I mean, whether AI can be more creative at this point, I don't think so, right? I mean, it's just sometimes something so amazing happens. Like I remember back in the days, like you have these drop ships that could drop the Reavers and that was actually not thought about that you could drop this unit that has this what's called splash damage that would basically eliminate all the enemy's workers at once. No one thought that you could actually put them in really early game, do that kind of damage and then things change in the game. But I don't know, I think it's quite an amazing exploration process from both sides, players and Blizzard alike. Well, it's almost like a reinforcement learning exploration but the scale of humans that play Blizzard games is almost on the scale of a large scale DeepMind RL experiment. I mean, if you look at the numbers, I mean, you're talking about, I don't know how many games but hundreds of thousands of games probably a month. Yeah. I mean, so it's almost the same as running RL agents. What aspect of the problem of StarCraft do you think is the hardest? Is it the, like you said, the imperfect information? Is it the fact they have to do longterm planning? Is it the real time aspects? You have to do stuff really quickly. Is it the fact that there's a large action space so you can do so many possible things? Or is it, you know, in the game theoretic sense there is no Nash equilibrium or at least you don't know what the optimal strategy is because there's way too many options. Right. Is there something that stands out as just like the hardest, the most annoying thing? So when we sort of looked at the problem and start to define like the parameters of it, right? What are the observations? What are the actions? It became very apparent that, you know, the very first barrier that one would hit in StarCraft would be because of the action space being so large and not being able to search like you could in chess or Go even though the search space is vast. The main problem that we identified was that of exploration, right? So without any sort of human knowledge or human prior, if you think about StarCraft and you know how deep reinforcement learning algorithms work, which is essentially by issuing random actions and hoping that they will get some wins sometimes so they could learn. So if you think of the action space in StarCraft, almost anything you can do in the early game is bad because any action involves taking workers which are mining minerals for free. That's something that the game does automatically, sends them to mine. And you would immediately just take them out of mining and send them around. So just thinking how is it gonna be possible to get to understand these concepts but even more like expanding, right? There's these buildings you can place in other locations in the map to gather more resources but the location of the building is important and you have to select a worker, send it walking to that location, build the building, wait for the building to be built and then put extra workers there so they start mining.
That feels like impossible if you just randomly click to produce that state, desirable state that then you could hope to learn from because eventually that may yield an extra win, right? So for me, the exploration problem and due to the action space and the fact that there's not really turns, there's so many time steps because the game essentially ticks about 22 times per second. I mean, that's how they could discretize sort of time. Obviously you always have to discretize time but there's no such thing as real time but it's really a lot of time steps of things that could go wrong. And that definitely felt a priori like the hardest. You mentioned many good ones. I think partial observability and the fact that there is no perfect strategy because of the partial observability. Those are very interesting problems. We start seeing more and more now in terms of as we solve the previous ones but the core problem to me was exploration and solving it has been basically kind of the focus and how we saw the first breakthroughs. So exploration in a multi hierarchical way. So like 22 times a second exploration has a very different meaning than it does in terms of should I gather resources early or should I wait or so on. So how do you solve the longterm? Let's talk about the internals of AlphaStar. So first of all, how do you represent the state of the game as an input? How do you then do the longterm sequence modeling? How do you build a policy? What's the architecture like? So AlphaStar has obviously several components but everything passes through what we call the policy which is a neural network. And that's kind of the beauty of it. There is, I could just now give you a neural network and some weights. And if you fed the right observations and you understood the actions the same way we do you would have basically the agent playing the game. There's absolutely nothing else needed other than those weights that were trained. Now, the first step is observing the game and we've experimented with a few alternatives. The one that we currently use mixes both spatial sort of images that you would process from the game that is the zoomed out version of the map and also a zoomed in version of the camera or the screen as we call it. But also we give to the agent the list of units that it sees, more of as a set of objects that it can operate on. It's not necessarily required to use it. And we have versions of the game that play well without this set vision that is a bit not like how humans perceive the game. But it certainly helps a lot because it's a very natural way to encode the game is by just looking at all the units that there are. They have properties like health, position, type of unit whether it's my unit or the enemies. And that sort of is kind of the summary of the state of the game, that list of units or set of units that you see all the time. But that's pretty close to the way humans see the game. Why do you say it's not, isn't that, you're saying the exactness of it is not similar to humans? The exactness of it is perhaps not the problem. I guess maybe the problem if you look at it from how actually humans play the game is that they play with a mouse and a keyboard and a screen and they don't see sort of a structured object with all the units. What they see is what they see on the screen, right? So. Remember that there's a, sorry to interrupt, there's a plot that you showed with the camera based version where you do exactly that, right? You move around and that seems to converge to similar performance.
Yeah, I think that's what I, we're kind of experimenting with what's necessary or not, but using the set. So, actually, if you look at research in computer vision, where it makes a lot of sense to treat images as two dimensional arrays, there's actually a very nice paper from Facebook. I think, I forgot who the authors are, but I think it's part of Kaiming's group. And what they do is they take an image, which is this two dimensional signal, and they actually take pixel by pixel and scramble the image as if it was just a list of pixels. Crucially, they encode the position of the pixels with the X, Y coordinates. And this is just kind of a new architecture, which we incidentally also use in StarCraft called the Transformer, which is a very popular paper from last year, which yielded very nice results in machine translation. And if you actually believe in this kind of, oh, it's actually a set of pixels, as long as you encode X, Y, it's okay, then you could argue that the list of units that we see is precisely that, because we have each unit as a kind of pixel, if you will, and then their X, Y coordinates. So in that perspective, we, without knowing it, we use the same architecture that was shown to work very well on Pascal and ImageNet and so on. So the interesting thing here is putting it in that way it starts to move it towards the way you usually work with language. So what, and especially with your expertise and work in language, it seems like there's echoes of a lot of the way you would work with natural language in the way you've approached AlphaStar. Right. What's, does that help with the longterm sequence modeling there somehow? Exactly, so now that we understand what an observation for a given time step is, we need to move on to say, well, there's going to be a sequence of such observations and an agent will need to, given all that it's seen, not only the current time step, but all that it's seen, why? Because there is partial observability. We must remember whether we saw a worker going somewhere, for instance, right? Because then there might be an expansion on the top right of the map. So given that, what you must then think about is there is the problem of given all the observations, you have to predict the next action. And not only given all the observations, but given all the observations and given all the actions you've taken, predict the next action. And that sounds exactly like machine translation where, and that's exactly how kind of I saw the problem, especially when you are given supervised data or replays from humans, because the problem is exactly the same. You're translating essentially a prefix of observations and actions onto what's going to happen next, which is exactly how you would train a model to translate or to generate language as well, right? Do you have a certain prefix? You must remember everything that comes in the past because otherwise you might start having incoherent text. And the same architectures we're using LSTMs and transformers to operate on across time to kind of integrate all that's happened in the past. Those architectures that work so well in translation or language modeling are exactly the same as what the agent is using to issue actions in the game. And the way we train it, moreover, for imitation, which is step one of AlphaStar is, take all the human experience and try to imitate it, much like you try to imitate translators that translated many pairs of sentences from French to English say, that sort of principle applies exactly the same.
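(A minimal sketch of the "units as a set, plus their x, y coordinates, fed to a Transformer" idea described above, assuming a recent PyTorch; the feature layout and sizes are illustrative, not AlphaStar's actual encoder.)

```python
import torch
import torch.nn as nn

class UnitSetEncoder(nn.Module):
    """Encode the visible units as a set of tokens, each carrying its own position."""
    def __init__(self, unit_feat_dim=16, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(unit_feat_dim + 2, d_model)   # +2 for the x, y coordinates
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, unit_feats, unit_xy):
        # unit_feats: [batch, num_units, unit_feat_dim]  (health, type, owner, ...)
        # unit_xy:    [batch, num_units, 2]               (map coordinates)
        tokens = self.embed(torch.cat([unit_feats, unit_xy], dim=-1))
        return self.encoder(tokens)        # one contextual embedding per unit
```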
It's almost the same code, except that instead of words, you have a slightly more complicated objects, which are the observations and the actions are also a bit more complicated than a word. Is there a self play component then too? So once you run out of imitation? Right, so indeed you can bootstrap from human replays, but then the agents you get are actually not as good as the humans you imitated, right? So how do we imitate? Well, we take humans from 3000 MMR and higher. 3000 MMR is just a metric of human skill and 3000 MMR might be like 50th percentile, right? So it's just average human. What's that? So maybe quick pause, MMR is a ranking scale, the matchmaking rating for players. So it's 3000, I remember there's like a master and a grand master, what's 3000? So 3000 is pretty bad. I think it's kind of Gold level. It just sounds really good relative to chess, I think. Oh yeah, yeah, no, the ratings, the best in the world are at 7,000 MMR. So 3000, it's a bit like Elo indeed, right? So 3,500 just allows us to not filter a lot of the data. So we like to have a lot of data in deep learning as you probably know. So we take these kind of 3,500 and above, but then we do a very interesting trick, which is we tell the neural network what level they are imitating. So we say, this replay you're gonna try to imitate to predict the next action for all the actions that you're gonna see is a 4,000 MMR replay. This one is a 6,000 MMR replay. And what's cool about this is then we take this policy that is being trained from humans, and then we can ask it to play like a 3000 MMR player by setting a bit saying, well, okay, play like a 3000 MMR player or play like a 6,000 MMR player. And you actually see how the policy behaves differently. It gets worse economy if you play like a Gold level player, it does less actions per minute, which is the number of clicks or number of actions that you will issue in a whole minute. And it's very interesting to see that it kind of imitates the skill level quite well. But if we ask it to play like a 6,000 MMR player, we tested, of course, these policies to see how well they do. They actually beat all the built in AIs that Blizzard put in the game, but they're nowhere near 6,000 MMR players, right? They might be maybe around Gold level, Platinum, perhaps. So there's still a lot of work to be done for the policy to truly understand what it means to win. So far, we only asked them, okay, here is the screen. And that's what's happened on the game until this point. What would the next action be if we ask a pro to now say, oh, you're gonna click here or here or there. And the point is experiencing wins and losses is very important to then start to refine. Otherwise the policy can get loose, can just go off policy as we call it. That's so interesting that you can at least hope eventually to be able to control a policy approximately to be at some MMR level. That's so interesting, especially given that you have ground truth for a lot of these cases. Can I ask you a personal question? What's your MMR? Well, I haven't played StarCraft II, so I am unranked, which is the kind of lowest league. So I used to play StarCraft, the first one. But you haven't seriously played StarCraft II. So the best player we have at DeepMind is about 5,000 MMR, which is high masters. It's not at grand master level. Grand master level will be the top 200 players in a certain region like Europe or America or Asia. But for me, it would be hard to say. I am very bad at the game.
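(A minimal sketch of the MMR conditioning trick described above: during imitation the policy is told roughly how strong the player in the replay was, and at test time that signal can be dialed up to ask it to play like a stronger player. The bucket edges, embedding sizes, and the simple feed forward body are illustrative assumptions.)

```python
import torch
import torch.nn as nn

class MMRConditionedPolicy(nn.Module):
    """Policy whose action logits are conditioned on an MMR bucket embedding."""
    def __init__(self, obs_dim, num_actions, mmr_buckets=8, mmr_dim=16):
        super().__init__()
        self.mmr_embed = nn.Embedding(mmr_buckets, mmr_dim)
        self.net = nn.Sequential(
            nn.Linear(obs_dim + mmr_dim, 256), nn.ReLU(),
            nn.Linear(256, num_actions))

    def forward(self, obs, mmr):
        # obs: [batch, obs_dim]; mmr: LongTensor [batch] of per-replay ratings.
        bucket = torch.clamp((mmr - 3000) // 500, 0, 7)   # e.g. 3000-7000 MMR range
        z = self.mmr_embed(bucket)
        return self.net(torch.cat([obs, z], dim=-1))      # action logits
```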
I actually played AlphaStar a bit too late and it beat me. I remember the whole team was, oh, Oriol, you should play. And I was, oh, it looks like it's not so good yet. And then I remember I kind of got busy and waited an extra week and I played and it really beat me very badly. Was that, I mean, how did that feel? Isn't that an amazing feeling? That's amazing, yeah. I mean, obviously I tried my best and I tried to also impress my, because I actually played the first game. So I'm still pretty good at micromanagement. The problem is I just don't understand StarCraft II. I understand StarCraft. And when I played StarCraft, I probably was consistently like for a couple of years, top 32 in Europe. So I was decent, but at the time we didn't have this kind of MMR system as well established. So it would be hard to know what it was back then. So what's the difference in interface between AlphaStar and StarCraft and a human player in StarCraft? Is there any significant differences between the way they both see the game? I would say the way they see the game, there's a few things that are just very hard to simulate. The main one perhaps, which is obvious in hindsight is what's called cloaked units, which are invisible units. So in StarCraft, you can make some units that you need to have a particular kind of unit to detect it. So these units are invisible. If you cannot detect them, you cannot target them. So they would just destroy your buildings or kill your workers. But despite the fact you cannot target the unit, there's a shimmer that as a human you observe. I mean, you need to train a little bit, you need to pay attention, but you would see this kind of space time distortion and you would know, okay, there are, yeah. Yeah, there's like a wave thing. Yeah, it's called shimmer. Space time distortion, I like it. That's really like, the Blizzard term is shimmer. Shimmer, okay. And so this shimmer, professional players actually can see it immediately. They understand it very well, but it's still something that requires certain amount of attention and it's kind of a bit annoying to deal with. Whereas for AlphaStar, in terms of vision, it's very hard for us to simulate sort of, oh, are you looking at this pixel in the screen and so on? So the only thing we can do is, there is a unit that's invisible over there. So AlphaStar would know that immediately. Obviously still obeys the rules. You cannot attack the unit. You must have a detector and so on, but it's kind of one of the main things that it just doesn't feel there's a very proper way. I mean, you could imagine, oh, you don't have hypers. Maybe you don't know exactly where it is, or sometimes you see it, sometimes you don't, but it's just really, really complicated to get it so that everyone would agree, oh, that's the best way to simulate this, right? It seems like a perception problem. It is a perception problem. So the only problem is people, you ask, oh, what's the difference between how humans perceive the game? I would say they wouldn't be able to tell a shimmer immediately as it appears on the screen, whereas AlphaStar in principle sees it very sharply, right? It sees that the bit turned from zero to one, meaning there's now a unit there, although you don't know the unit, or you know that you cannot attack it and so on. So that from a vision standpoint, that probably is the one that is kind of the most obvious one.
Then there are things humans cannot do perfectly, even professionals, which is they might miss a detail, or they might have not seen a unit. And obviously as a computer, if there's a corner of the screen that turns green because a unit enters the field of view, that can go into the memory of the agent, the LSTM, and persist there for a while, and for however long is relevant, right? And in terms of action, it seems like the rate of action from AlphaStar is comparable, if not slower than, professional players, but it's more precise is what I read. So that's really probably the one that is causing us more issues for a couple of reasons, right? The first one is StarCraft has been an AI environment for quite a few years. In fact, I mean, I was participating in the very first competition back in 2010. And there's really not been a kind of a very clear set of rules for what the actions per minute, the rate of actions that you can issue, should be. And as a result, these agents or bots that people build in a kind of almost very cool way, they do like 20,000, 40,000 actions per minute. Now, to put this in perspective, a very good professional human might do 300 to 800 actions per minute. They might not be as precise. That's why the range is a bit tricky to identify exactly. I mean, 300 actions per minute precisely is probably realistic. 800 is probably not, but you see humans doing a lot of actions because they warm up and they kind of select things and spam and so on just so that when they need, they have the accuracy. So we came into this by not having kind of a standard way to say, well, how do we measure whether an agent is at human level or not? On the other hand, we had a huge advantage, which is because we do imitation learning, agents turned out to act like humans in terms of rate of actions, even precisions and imprecisions of actions in the supervised policy. You could see all these. You could see how agents like to spam click, to move here. If you played especially Diablo, you would know what I mean. I mean, you just like spam, oh, move here, move here, move here. You're doing literally like maybe five actions in two seconds, but these actions are not very meaningful. One would have sufficed. So on the one hand, we start from this imitation policy that is in the ballpark of the actions per minute of humans because it's actually statistically trying to imitate humans. So we see these very nicely in the curves that we showed in the blog post. There's these actions per minute, and the distribution looks very human like. But then, of course, as self play kicks in, and that's the part we haven't talked too much yet, but of course, the agent must play against itself to improve, then there's almost no guarantees that these actions will not become more precise or even the rate of actions is going to increase over time. So what we did, and this is probably the first attempt that we thought was reasonable, is we looked at the distribution of actions for humans for certain windows of time. And just to give a perspective, because I guess I mentioned that some of these agents that are programmatic, let's call them. They do 40,000 actions per minute. Professionals, as I said, do 300 to 800. So what we looked is we look at the distribution over professional gamers, and we took reasonably high actions per minute, but we kind of identify certain cutoffs after which, even if the agent wanted to act, these actions would be dropped. But the problem is this cutoff is probably set a bit too high.
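(A minimal sketch of the kind of per window action cutoff being described here: actions beyond a cap inside a sliding window are simply dropped. The window length and cap below are illustrative, not AlphaStar's actual limits.)

```python
from collections import deque

class APMLimiter:
    """Drop actions once a cap is reached within a sliding time window."""
    def __init__(self, max_actions=22, window_seconds=5.0):
        self.max_actions = max_actions
        self.window = window_seconds
        self.times = deque()

    def allow(self, t: float) -> bool:
        while self.times and t - self.times[0] > self.window:
            self.times.popleft()                 # forget actions outside the window
        if len(self.times) >= self.max_actions:
            return False                         # over the cutoff: drop this action
        self.times.append(t)
        return True
```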
And what ends up happening, even though the games, when we ask the professionals and the gamers, by and large, they feel like it's playing humanlike, there are some agents that developed maybe slightly too high APMs, which is actions per minute, combined with the precision, which made people start discussing a very interesting issue, which is, should we have limited these? Should we just let it loose and see what cool things it can come up with? Right? Interesting. So this is in itself an extremely interesting question, but the same way that modeling the shimmer would be so difficult, modeling absolutely all the details about muscles and precision and tiredness of humans would be quite difficult. So we're really here kind of innovating in this sense of, OK, what could be maybe the next iteration of putting more rules that make the agents more humanlike in terms of restrictions? Yeah, putting constraints on that. More constraints, yeah. That's really interesting. That's really innovative. So one of the constraints you put on yourself, or at least focused on, is the Protoss race, as far as I understand. Can you tell me about the different races and how they, so Protoss, Terran, and Zerg, how do they compare? How do they interact? Why did you choose Protoss? Yeah, and the dynamics of the game seen from a strategic perspective. So Protoss, so in StarCraft there are three races. Indeed, in the demonstration, we saw only the Protoss race. So maybe let's start with that one. Protoss is kind of the most technologically advanced race. It has units that are expensive but powerful. So in general, you want to kind of conserve your units as you go attack. And then you want to utilize these tactical advantages of very fancy spells and so on and so forth. And at the same time, they're kind of, people say they're a bit easier to play, perhaps. But that I actually didn't know. I mean, I just talked now a lot to the players that we work with, TLO and Mana, and they said, oh yeah, Protoss, people think, is actually one of the easiest races. So perhaps it's the easier one, but that doesn't mean much; obviously professional players excel at all three races. And there's never a race that dominates for a very long time anyway. So if you look at the top, I don't know, 100 in the world, is there one race that dominates that list? It would be hard to know because it depends on the regions. I think it's pretty equal in terms of distribution. And Blizzard wants it to be equal. They wouldn't want one race like Protoss to not be represented in the top places. So definitely, they try to keep it balanced. So then maybe the opposite race of Protoss is Zerg. Zerg is a race where you just kind of expand and take over as many resources as you can, and they have a very high capacity to regenerate their units. So an army is not that valuable, in the sense that losing the whole army is not a big deal as Zerg, because you can then rebuild it. And given that you generally accumulate a huge bank of resources, Zergs typically play by applying a lot of pressure, maybe losing their whole army, but then rebuilding it quickly. Although, of course, every race, I mean, they're pretty diverse. I mean, there are some units in Zerg that are technologically advanced, and they do some very interesting spells. And there are some units in Protoss that are less valuable, and you could lose a lot of them and rebuild them, and it wouldn't be a big deal. All right, so maybe I'm missing out.
Maybe I'm going to say some dumb stuff, but a summary of strategy. So first, there's collection of a lot of resources. That's one option. The other one is expanding, so building other bases. Then the other is obviously building units and attacking with those units. And then I don't know what else there is. Maybe there's the different timing of attacks, like do I attack early, attack late? What are the different strategies that emerged that you've learned about? I've read that a bunch of people are super happy that AlphaStar apparently has discovered that it's really good to, what is it, saturate? Oh yeah, the mineral line. Yeah, the mineral line. Yeah, yeah. And that's for greedy amateur players like myself. That's always been a good strategy. You just build up a lot of money, and it just feels good to just accumulate and accumulate. So thank you for discovering that and validating all of us. But are there other strategies that you discovered that are interesting, unique to this game? Yeah, so not being a StarCraft II player, but of course StarCraft and StarCraft II and real time strategy games in general are very similar, I would point perhaps to the openings of the game. They're very important. And generally I would say there's two kinds of openings. One is a standard opening. That's generally how players find sort of a balance between risk and economy, building some units early on so that they could defend, so they're not too exposed basically, but also expanding quite quickly. So this would be kind of a standard opening. And within a standard opening, what you do choose generally is what technology you are aiming towards. So there's a bit of rock, paper, scissors: you could go for spaceships, or you could go for invisible units, or you could go for, I don't know, massive units that are strong against certain kinds of units but weak against others. So standard openings themselves have some choices, rock, paper, scissors style. Of course, if you scout and you're good at guessing what the opponent is doing, then you can play at an advantage, because if I know you're gonna play rock, I'm gonna play paper, obviously. So you can imagine that normal standard games in StarCraft look like a continuous rock, paper, scissors game, where you guess what the distribution of rock, paper, and scissors is from the enemy and react accordingly, to try to beat it, or put the paper out before he kind of changes his mind from rock to scissors, and then you would be in a weak position. So, sorry to pause on that. I didn't realize this element, because I know it's true with poker. I know I looked at Libratus. So you're also estimating, trying to guess the distribution, trying to better and better estimate the distribution of what the opponent is likely to be doing. Yeah, I mean, as a player, you definitely wanna have a belief state over what's up on the other side of the map. And when your belief state becomes inaccurate, when you start having serious doubts whether he's gonna play something that you must know about, that's when you scout. You wanna then gather information, right? Is improving the accuracy of the belief, or improving the belief state, part of the loss that you're trying to optimize? Or is it just a side effect? It's implicit, but you could explicitly model it, and it would probably be quite good at predicting what's on the other side of the map. But so far, it's all implicit.
There's no additional reward for predicting the enemy. So there's these standard openings, and then there's what people call cheese, which is very interesting. And AlphaStar sometimes really likes this kind of cheese. These cheeses, what they are is kind of an all in strategy. You're gonna do something sneaky. You're gonna hide your own buildings close to the enemy base, or you're gonna go for hiding your technological buildings so that you do invisible units and the enemy just cannot react to detect it and thus lose the game. And there's quite a few of these cheeses and variants of them. And there it's where actually the belief state becomes even more important. Because if I scout your base and I see no buildings at all, any human player knows something's up. They might know, well, you're hiding something close to my base. Should I build suddenly a lot of units to defend? Should I actually block my ramp with workers so that you cannot come and destroy my base? So there's all this is happening and defending against cheeses is extremely important. And in the AlphaStar League, many agents actually develop some cheesy strategies. And in the games we saw against TLO and Mana, two out of the 10 agents were actually doing these kind of strategies which are cheesy strategies. And then there's a variant of cheesy strategy which is called all in. So an all in strategy is not perhaps as drastic as, oh, I'm gonna build cannons on your base and then bring all my workers and try to just disrupt your base and game over, or GG as we say in StarCraft. There's these kind of very cool things that you can align precisely at a certain time mark. So for instance, you can generate exactly 10 unit composition that is perfect, like five of this type, five of this other type, and align the upgrade so that at four minutes and a half, let's say, you have these 10 units and the upgrade just finished. And at that point, that army is really scary. And unless the enemy really knows what's going on, if you push, you might then have an advantage because maybe the enemy is doing something more standard, it expanded too much, it developed too much economy, and it trade off badly against having defenses, and the enemy will lose. But it's called all in because if you don't win, then you're gonna lose. So you see players that do these kinds of strategies, if they don't succeed, game is not over. I mean, they still have a base and they still gathering minerals, but they will just GG out of the game because they know, well, game is over. I gambled and I failed. So if we start entering the game theoretic aspects of the game, it's really rich and it's really, that's why it also makes it quite entertaining to watch. Even if I don't play, I still enjoy watching the game. But the agents are trying to do this mostly implicitly. But one element that we improved in self play is creating the Alpha Star League. And the Alpha Star League is not pure self play. It's trying to create a different personalities of agents so that some of them will become cheesy agents. Some of them might become very economical, very greedy, like getting all the resources, but then being maybe early on, they're gonna be weak, but later on, they're gonna be very strong. And by creating this personality of agents, which sometimes it just happens naturally that you can see kind of an evolution of agents that given the previous generation, they train against all of them and then they generate kind of the perfect counter to that distribution. 
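A toy version of the "counter to that distribution" idea above, reduced from StarCraft openings to plain rock, paper, scissors: estimate the opponent's mixed strategy from past games, then best-respond to it. The payoff matrix, the history, and every variable name here are invented for illustration; this is not how AlphaStar represents strategies, which, as described above, stays implicit in the league training.

```python
import numpy as np

payoff = np.array([            # row = my move, column = opponent move
    [ 0, -1,  1],              # rock     vs rock / paper / scissors
    [ 1,  0, -1],              # paper
    [-1,  1,  0],              # scissors
])

moves = ["rock", "paper", "scissors"]
opponent_history = ["rock", "rock", "scissors", "rock", "paper"]  # made-up past games

# Crude belief state: empirical distribution of the opponent's past choices.
counts = np.array([opponent_history.count(m) for m in moves], dtype=float)
belief = counts / counts.sum()

# Best response: the move with the highest expected payoff against that belief.
expected = payoff @ belief
best_response = moves[int(np.argmax(expected))]   # "paper" against this rock-heavy history
```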
But these agents, you must have them in the populations because if you don't have them, you're not covered against these things. You wanna create all sorts of the opponents that you will find in the wild. So you can be exposed to these cheeses, early aggression, later aggression, more expansions, dropping units in your base from the side, all these things. And pure self play is getting a bit stuck at finding some subset of these, but not all of these. So the Alpha Star League is a way to kind of do an ensemble of agents that they're all playing in a league, much like people play on Battle.net, right? They play, you play against someone who does a new cool strategy and you immediately, oh my God, I wanna try it, I wanna play again. And this to me was another critical part of the problem, which was, can we create a Battle.net for agents? And that's kind of what the Alpha Star League really is. That's fascinating. And where they stick to their different strategies. Yeah, wow, that's really, really interesting. But that said, you were fortunate enough or just skilled enough to win five, zero. And so how hard is it to win? I mean, that's not the goal. I guess, I don't know what the goal is. The goal should be to win majority, not five, zero, but how hard is it in general to win all matchups on a one V one? So that's a very interesting question because once you see Alpha Star and superficially you think, well, okay, it won. Let's, if you sum all the games like 10 to one, right? It lost the game that it played with the camera interface. You might think, well, that's done, right? It's superhuman at the game. And that's not really the claim we really can make actually. The claim is we beat a professional gamer for the first time. StarCraft has really been a thing that has been going on for a few years, but a moment like this had not occurred before yet. But are these agents impossible to beat? Absolutely not, right? So that's a bit what's kind of the difference is the agents play at grandmaster level. They definitely understand the game enough to play extremely well, but are they unbeatable? Do they play perfect? No, and actually in StarCraft, because of these sneaky strategies, it's always possible that you might take a huge risk sometimes, but you might get wins, right? Out of this. So I think that as a domain, it still has a lot of opportunities, not only because of course we wanna learn with less experience, we would like to, I mean, if I learned to play Protoss, I can play Terran and learn it much quicker than Alpha Star can, right? So there are obvious interesting research challenges as well, but even as the raw performance goes, really the claim here can be we are at pro level or at high grandmaster level, but obviously the players also did not know what to expect, right? Their prior distribution was a bit off because they played this kind of new like alien brain as they like to say it, right? And that's what makes it exciting for them. But also I think if you look at the games closely, you see there were weaknesses in some points, maybe Alpha Star did not scout, or if it had invisible units going against at certain points, it wouldn't have known and it would have been bad. So there's still quite a lot of work to do, but it's really a very exciting moment for us to be seeing, wow, a single neural net on a GPU is actually playing against these guys who are amazing. I mean, you have to see them play in life. They're really, really amazing players. 
Yeah, I'm sure there must be a guy in Poland somewhere right now training his butt off to make sure that this never happens again with AlphaStar. So that's really exciting in terms of AlphaStar having some holes to exploit, which is great. And then we build on top of each other, and it feels like StarCraft, unlike Go, even if you win, it's still not over; there's so many different dimensions in which you can explore. So that's really, really interesting. Do you think there's a ceiling to AlphaStar? You've said that it hasn't reached, you know, this is a big... wait, let me actually just pause for a second. How did it feel to come here to this point, to beat a top professional player? Like that night, I mean, you know, Olympic athletes have their gold medal, right? This is your gold medal in a sense. Sure, you're cited a lot, you've published a lot of prestigious papers, whatever, but this is like a win. How did it feel? I mean, for me, it was unbelievable, because first, the win itself, I mean, it was so exciting. So looking back to those last days of 2018, really, that's when the games were played, I'm sure when I look back at that moment, I'll say, oh my God, I want to be in a project like that. It's like, I already feel the nostalgia of, yeah, that was huge in terms of the energy and the team effort that went into it. And so in that sense, as soon as it happened, I already knew I was kind of losing it a little bit. So it is almost sad that it happened, but on the other hand, it also verifies the approach. But to me also, there's so many challenges and interesting aspects of intelligence that even though we can train a neural network to play at the level of the best humans, there's still so many challenges. So for me, it's also like, well, this is really an amazing achievement, but I already was also thinking about next steps. I mean, as I said, these agents play Protoss versus Protoss, but they should be able to play a different race much quicker, right? So that would be an amazing achievement. Some people call this meta reinforcement learning, meta learning and so on, right? So there's so many possibilities after that moment, but the moment itself, it really felt great. We had this bet, so I'm kind of a pessimist in general. So I sent an email to the team. I said, okay, against TLO first, right? What's gonna be the result? And I really thought we would lose like five zero, right? We had some calibration made against a 5,000 MMR player. TLO was much stronger than that player, even if he played Protoss, which is his off race. But yeah, I was not imagining we would win. So for me, that was just kind of a test run or something. And then, he was really surprised. And, unbelievably, we went to this bar to celebrate, and Dave tells me, well, why don't we invite someone who is a thousand MMR stronger in Protoss, an actual Protoss player, and that turned out to be Mana, right? And we had some drinks and I said, sure, why not? But then I thought, well, that's really gonna be impossible to beat. I mean, a thousand MMR ahead is really like a 99% probability that Mana would beat TLO at Protoss versus Protoss, right? So we did that. And to me, the second game was much more important, even though a lot of uncertainty kind of disappeared after we beat TLO. I mean, he is a professional player. So that was kind of, oh, that's really a very nice achievement.
But Mana really was at the top, and you could see he played much better, but our agents got much better too. So it's like, ah, and then after the first game, I said, if we take a single game, at least we can say we won a game. I mean, even if we don't win the series, for me, that was a huge relief. And I remember hugging Demis. I mean, it was really like, this moment for me will resonate forever as a researcher. And I mean, as a person, and yeah, it's a really great accomplishment. And it was great also to be there with the team in the room, I don't know if you saw that. So it was really special. I mean, from my perspective, the other interesting thing is, just like watching Kasparov, watching Mana was also interesting, because he was kind of at a loss for words. I mean, whenever you lose, I've done a lot of sports, you sometimes make excuses, you look for reasons. And he couldn't really come up with reasons. I mean, with the off race Protoss, with TLO, you could say, well, it felt awkward, but here he was just beaten. And it was beautiful to look at a human being being superseded by an AI system. I mean, it's a beautiful moment for researchers, so. Yeah, for sure it was. I mean, probably the highlight of my career so far because of its uniqueness and coolness. And I don't know, I mean, obviously, as you said, you can look at papers, citations and so on, but this really is a testament of the whole machine learning approach, and of using games to advance technology. I mean, everything really came together at that moment. That's really the summary. Also on the other side, it's a popularization of AI too, because it's just like traveling to the moon and so on. I mean, this is where a very large community of people that don't really know AI, they get to really interact with it. Which is very important. I mean, we must, you know, writing papers helps our peers, researchers, to understand what we're doing. But I think AI is becoming mature enough that we must try to explain what it is. And perhaps through games is an obvious way, because these games always had built in AI. So maybe everyone has experienced an AI playing a video game, even if they don't know it, because there's always some scripted element, and some people might even call that AI already, right? So what are other applications of the approaches underlying AlphaStar that you see happening? There's a lot of echoes of, you said, the transformer, of language modeling and so on. Have you already started thinking where the breakthroughs in AlphaStar get expanded to other applications? Right, so I thought about a few things for the next months, next years. The main thing I'm thinking about actually is what's next as a kind of a grand challenge. Because for me, we've seen Atari, and then the sort of three dimensional worlds, where we've seen pretty good performance from these capture the flag agents that some people at DeepMind and elsewhere are working on. We've also seen some amazing results on, for instance, Dota 2, which is also a very complicated game. So for me, the main thing I'm thinking about is what's next in terms of challenge. So as a researcher, I see sort of two tensions between research and then applications, or areas or domains where you apply it. So on the one hand, thanks to the fact that the application of StarCraft is very hard, we developed some techniques, some new research, that now we could look at elsewhere.
Like, are there other applications where we can apply these? And the obvious ones, absolutely. You can think of feeding back to the community we took from, which was mostly sequence modeling or natural language processing. So we've developed and extended things from the transformer, and we use pointer networks. We combine LSTMs and transformers in interesting ways. So that's perhaps the lowest hanging fruit of feeding back to a different field of machine learning that's not playing video games. Let me go old school and jump to Mr. Alan Turing. So the Turing test is a natural language test, a conversational test. What's your thought of it as a test for intelligence? Do you think it is a grand challenge that's worthy of undertaking? Maybe if it is, would you reformulate it or phrase it somehow differently? Right, so I really love the Turing test, because I also like sequences and language understanding. And in fact, some of the early work we did in machine translation, we tried to apply to kind of a neural chatbot, which obviously would never pass the Turing test because it was very limited. But it is a very fascinating idea that you could really have an AI that would be indistinguishable from humans in terms of asking or conversing with it. So I think the test itself seems very nice. And it's well defined, actually, the passing it or not. I think there's quite a few rules that feel pretty simple. And I think they have these competitions every year. Yes, there's the Loebner Prize. But I don't know if you've seen the kind of bots that emerge from that competition. They're not quite what you would expect. So it feels like there are weaknesses with the way Turing formulated it. The definition of a genuine, rich, fulfilling human conversation needs to be something else. Like the Alexa Prize, which I'm not as familiar with, has tried to define that more, I think, by saying you have to keep a conversation going for 30 minutes, something like that. So basically forcing the agent not to just fool, but to have an engaging conversation kind of thing. Have you thought about this problem richly? And if you have, in general, how far away are we? You've worked a lot on language understanding, language generation, but the full dialogue, the conversation, just sitting at the bar having a couple of beers for an hour, that kind of conversation, have you thought about it? Yeah, so I think you touched here on the critical point, which is feasibility. So there's a great essay by Hamming which describes sort of grand challenges of physics. And he argues that, well, OK, for instance, teleportation or time travel are great grand challenges of physics, but there's no line of attack. We really don't know how to, or cannot, make any progress. So that's why most physicists and so on, they don't work on these in their PhDs and as part of their careers. So I see the Turing test, the full Turing test, as still a bit too early. Like I think, especially with the current trend of deep learning language models, we've seen some amazing examples, I think GPT-2 being the most recent one, which is very impressive. But to fully solve passing, or fooling a human into thinking that there's a human on the other side, I think we're quite far. So as a result, I don't see myself, and I probably would not recommend people, doing a PhD on solving the Turing test, because it just feels it's kind of too early, or too hard of a problem.
Yeah, but that said, you said the exact same thing about StarCraft a few years ago. Indeed. To Demis. So you'll probably also be the person who passes the Turing test in three years. I mean, I think that, yeah. So we have this on record. This is nice. It's true. I mean, it's true that progress sometimes is a bit unpredictable. I really would not have predicted, even six months ago, the level that we see these agents deliver at, grandmaster level. But I have worked on language enough, and basically my concern is not whether a breakthrough could happen that would bring us to solving or passing the Turing test; it's that I just think the statistical approach to it is not going to cut it. So we need a breakthrough, which is great for the community. But given that, I think there's quite a bit more uncertainty. Whereas for StarCraft, I knew what the steps would be to get us there. I think it was clear that using the imitation learning part, and then using this Battle.net for agents, were going to be key. And it turned out that this was the case. And a little more was needed, but not much more. For the Turing test, I just don't know what the plan or execution plan would look like. So that's why working on it myself as a grand challenge is hard. But there are quite a few related sub challenges where you could say, well, I mean, what if you create a great assistant, like Google already has, like the Google Assistant. So can we make it better? And can we make it fully neural and so on? There I start to believe maybe we're reaching a point where we should attempt these challenges. I like this conversation so much because it echoes very much the StarCraft conversation. It's exactly how you approached StarCraft. Let's break it down into small pieces and solve those. And you end up solving the whole game. Great. But that said, you're behind some of the biggest pieces of work in deep learning in the last several years. So you mentioned some limits. What do you think of the current limits of deep learning, and how do we overcome those limits? So if I had to use a single word to define the main challenge in deep learning, it's a challenge that has probably been the challenge for many years, and it's that of generalization. So what that means is that all that we're doing is fitting functions to data. And when the data we see is not from the same distribution, or even when it is very close to the distribution, but because of the way we train it, with limited samples, we then get to this stage where we just don't see generalization as much as we would like. And I think adversarial examples are a clear example of this. But if you study the machine learning literature, the reason why SVMs became very popular was because they had some guarantees about generalization, which is unseen data or out of distribution. Or even within distribution, where you take an image and add a bit of noise, these models fail. So I think, really, I don't see a lot of progress on generalization, in the strong generalization sense of the word. I think for our neural networks, you can always find designed examples that will make their outputs arbitrary, which is not good, because we humans would never be fooled by these kinds of images or manipulations of the image. And if you look at the mathematics, you kind of understand this is a bunch of matrices multiplied together.
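Here is a minimal sketch of the kind of failure being described, a function fitted only to a narrow training range versus an explicit program; Oriol makes the same point with a sorting example just below. The "learned sorter" is a deliberately crude polynomial stand-in, not a pointer network or anything AlphaStar-related, and the ranges and sizes are arbitrary.

```python
import numpy as np

def sort_program(xs):
    """Strong generalization: correct for any inputs, by construction."""
    return sorted(xs)

# "Learned" sorter: fit each output position as a polynomial of the inputs,
# using training lists drawn only from [0, 1].
rng = np.random.default_rng(0)
train_x = rng.uniform(0, 1, size=(2000, 3))
train_y = np.sort(train_x, axis=1)
design = np.hstack([train_x, train_x**2, np.ones((len(train_x), 1))])
coeffs = [np.linalg.lstsq(design, train_y[:, i], rcond=None)[0] for i in range(3)]

def learned_sort(xs):
    feats = np.concatenate([xs, np.array(xs)**2, [1.0]])
    return [float(feats @ c) for c in coeffs]

print(sort_program([250.0, -3.0, 7.0]))   # [-3.0, 7.0, 250.0], always right
print(learned_sort([250.0, -3.0, 7.0]))   # typically far off: inputs are outside the training range
```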
There's probably numerical instability, and you can just find corner cases. So I think that's really the underlying topic many times we see, even at the grand stage of the Turing test: generalization. If you start passing the Turing test, should it be in English, or should it be in any language? As a human, if you ask something in a different language, you actually will go and do some research and try to translate it and so on. Should the Turing test include that? It's really a difficult problem, and very fascinating and very mysterious, actually. Yeah, absolutely. But do you think, if you were to try to solve it, can you not grow the size of data intelligently in such a way that the distribution of your training set does include the entirety of the testing set? Is that one path? The other path is a totally new methodology that's not statistical. So a path that has worked well, and it worked well in StarCraft and in machine translation and in language, is scaling up the data and the model. And that's kind of been maybe the only single formula that still delivers today in deep learning, right? Data scale and model scale really do more and more of the things that we thought, oh, there's no way it can generalize to these, or there's no way it can generalize to that. But I don't think fundamentally it will be solved with this. And for instance, I really like the style or approach that would not only have neural networks, but would have programs, or some discrete decision making, because that is where I feel there's a bit more to it. I mean, the best example, I think, for understanding this is, I also worked a bit on, oh, we can learn an algorithm with a neural network, right? So you give it many examples, and it's going to sort the input numbers or something like that. But really strong generalization is, you give me some numbers, or you ask me to create an algorithm that sorts numbers, and instead of creating a neural net, which will be fragile because it's going to go out of range at some point, you're going to give it numbers that are too large, too small, and whatnot, if you just create a piece of code that sorts the numbers, then you can prove that that will generalize to absolutely all the possible inputs you could give. So I think the problem comes with some exciting prospects. I mean, scale is a bit more boring, but it really works. And then maybe programs and discrete abstractions are a bit less developed, but clearly, I think they're quite exciting in terms of the future of the field. Do you draw any insight or wisdom from the 80s and expert systems and symbolic systems, symbolic computing? Do you ever go back to those, that kind of reasoning, that kind of logic? Do you think that might make a comeback? You'll have to dust off those books? Yeah, I actually love adding more inductive biases. To me, the problem really is, what are you trying to solve? If what you're trying to solve is so important that you should try to solve it no matter what, then absolutely use rules, use domain knowledge, and then use a bit of the magic of machine learning to empower it, to make the system the best system that will detect cancer or detect weather patterns, right? In terms of StarCraft, it also was a very big challenge, so I was definitely happy that if we had to cut a corner here and there, it could have been interesting to do. And in fact, in StarCraft, we did start thinking about expert systems, because it's a domain where you can define a lot with rules.
I mean, people actually build StarCraft bots by thinking about those principles, like state machines and rule based systems. And then you could think of combining a bit of a rule based system with neural networks incorporated, to make it generalize a bit better. So absolutely, I mean, we should definitely go back to those ideas. And anything that makes the problem simpler, as long as your problem is important, that's OK. And that's research driving a very important problem. And on the other hand, if you want to really focus on the limits of reinforcement learning, then of course, you must try not to look at imitation data, or to look for some rules of the domain that would help a lot, or even feature engineering, right? So this is a tension, and depending on what you do, I think both ways are definitely fine. And I would never not do one or the other, as long as what you're doing is important and needs to be solved, right? Right, so there's a bunch of different ideas that you've developed that I really enjoy. One is image captioning, translating from image to text, just another beautiful idea, I think, that resonates throughout your work, actually. So the underlying nature of reality being language always, somehow. So what's the connection between images and text, or rather the visual world and the world of language, in your view? Right, so I think a piece of research that's been central to, I would say, even extending into StarCraft, is this idea of sequence to sequence learning. What we really meant by that is that you can now really input anything to a neural network as the input x, and then the neural network will learn a function f that will take x as an input and produce any output y. And these x's and y's don't need to be static features, like fixed vectors or anything like that. They could really be sequences and, beyond that, data structures. So that paradigm was tested in a very interesting way when we moved from translating French to English to translating an image to its caption. But the beauty of it is that, really, and that's actually how it happened, I changed a line of code in this thing that was doing machine translation, and I came in the next day, and I saw how it was producing captions that seemed like, oh my god, this is really, really working. And the principle is the same. So I think I don't see text, vision, speech, waveforms as something different, as long as you basically learn a function that will vectorize these. And then after we vectorize it, we can then use transformers, LSTMs, whatever the flavor of the month of the model is. And then as long as we have enough supervised data, really, this formula will work and will keep working, I believe, to some extent, modulo these generalization issues that I mentioned before. But the task there is to vectorize, so to form a representation that's meaningful. And your intuition now, having worked with all this media, is that once you are able to form that representation, you could basically take anything, any sequence. Going back to StarCraft, are there limits on the length? We didn't really touch on the long term aspect. How did you overcome the whole really long term aspect of things here? Are there some tricks? So the main trick, so for StarCraft, if you look at absolutely every frame, you might think it's quite a long game. We would have to multiply 22 frames per second times 60 seconds per minute times maybe at least 10 minutes per game on average. So there are quite a few frames.
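A quick back-of-the-envelope version of that multiplication, using only the figures quoted in the conversation (roughly 22 game steps per second and 10-minute games); the contrast with the much shorter action sequences described next is exactly the gap that "only observe when you act" closes.

```python
# Rough numbers only; the frame rate and game length are the figures quoted
# in the conversation, not exact StarCraft II engine constants.
frames_per_second = 22
game_minutes = 10

frames_per_game = frames_per_second * 60 * game_minutes
print(frames_per_game)   # 13200 decision points if the agent had to act on every frame
```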
But the trick really was to only observe, in fact, which might be seen as a limitation, but it is also a computational advantage. Only observe when you act. And then what the neural network decides is what is the gap going to be until the next action. And if you look at most StarCraft games that we have in the data set that Blizzard provided, it turns out that most games are actually only, I mean, it is still a long sequence, but it's maybe like 1,000 to 1,500 actions, which if you start looking at LSTMs, large LSTMs, transformers, it's not that difficult, especially if you have supervised learning. If you had to do it with reinforcement learning, the credit assignment problem, what is it in this game that made you win? That would be really difficult. But thankfully, because of imitation learning, we didn't have to deal with these directly. Although if we had to, we tried it. And what happened is you just take all your workers and attack with them. And that is kind of obvious in retrospect because you start trying random actions. One of the actions will be a worker that goes to the enemy base. And because it's self play, it's not going to know how to defend because it basically doesn't know almost anything. And eventually, what you develop is this take all workers and attack because the credit assignment issue in a rally is really, really hard. I do believe we could do better. And that's maybe a research challenge for the future. But yeah, even in StarCraft, the sequences are maybe 1,000, which I believe is within the realm of what transformers can do. Yeah, I guess the difference between StarCraft and Go is in Go and Chess, stuff starts happening right away. So there's not, yeah, it's pretty easy to self play. Not easy, but to self play, it's possible to develop reasonable strategies quickly as opposed to StarCraft. I mean, in Go, there's only 400 actions. But one action is what people would call the God action. That would be if you had expanded the whole search tree, that's the best action if you did minimax or whatever algorithm you would do if you had the computational capacity. But in StarCraft, 400 is minuscule. Like in 400, you couldn't even click on the pixels around a unit. So I think the problem there is in terms of action space size is way harder. And that search is impossible. So there's quite a few challenges indeed that make this kind of a step up in terms of machine learning. For humans, maybe playing StarCraft seems more intuitive because it looks real. I mean, the graphics and everything moves smoothly, whereas I don't know how to. I mean, Go is a game that I would really need to study. It feels quite complicated. But for machines, kind of maybe it's the reverse, yes. Which shows you the gap actually between deep learning and however the heck our brains work. So you developed a lot of really interesting ideas. It's interesting to just ask, what's your process of developing new ideas? Do you like brainstorming with others? Do you like thinking alone? Do you like, what was it, Ian Goodfellow said he came up with GANs after a few beers. He thinks beers are essential for coming up with new ideas. We had beers to decide to play another game of StarCraft after a week. So it's really similar to that story. Actually, I explained this in a DeepMind retreat. And I said, this is the same as the GAN story. I mean, we were in a bar. And we decided, let's play a GAN next week. And that's what happened. I feel like we're giving the wrong message to young undergrads. Yeah, I know. 
But in general, do you like brainstorming? Do you like thinking alone, working stuff out? So I think throughout the years, things have also changed. So initially, I was very fortunate to be with great minds like Geoff Hinton, Jeff Dean, Ilya Sutskever. I was really fortunate to join Brain at a very good time. So at that point, ideas, I was just brainstorming with my colleagues and learned a lot. And to keep learning is actually something you should never stop doing. So learning implies reading papers and also discussing ideas with others. It's very hard at some point not to communicate, whether that means reading a paper from someone or actually discussing. So definitely, that communication aspect needs to be there, whether it's written or oral. Nowadays, I'm also trying to be a bit more strategic about what research to do. So I was describing a little bit this tension between research for the sake of research, and then, on the other hand, applications that can drive the research. And honestly, the formula that has worked best for me is just find a hard problem and then try to see how research fits into it, how it doesn't fit into it, and then you must innovate. So I think machine translation drove sequence to sequence. Then maybe learning combinatorial algorithms led to pointer networks. StarCraft led to really scaling up imitation learning and the AlphaStar League. So that's been a formula that I personally like. But the other one is also valid. And I've seen it succeed a lot of times, where you just want to investigate, say, model based RL as a research topic. And then you must start to think, well, how are you going to test these ideas? You need a minimal environment to try things. You need to read a lot of papers and so on. And that's also very fun to do and something I've also done quite a few times, both at Brain, at DeepMind, and obviously as a PhD student. So I think besides the ideas and discussions, I think it's important also because you start sort of guiding not only your own goals, but other people's goals, to the next breakthrough. So you must really kind of understand this feasibility also, as we were discussing before, whether this domain is ready to be tackled or not. And you don't want to be too early. You obviously don't want to be too late. So it's really interesting, this strategic component of research, which I think as a grad student, I just had no idea about. I just read papers and discussed ideas. And I think this has been maybe the major change. And I recommend people kind of fast forward to what success looks like and try to backtrack, rather than just looking, oh, this looks cool, this looks cool, and then you do a bit of random work, and sometimes you stumble upon some interesting things. But in general, it's also good to plan a bit. Yeah, I like it. Especially your approach of taking a really hard problem, stepping right in, and then being super skeptical about being able to solve the problem. I mean, there's a balance of both, right? There's a silly optimism and a critical sort of skepticism that's good to balance, which is why it's good to have a team of people that balance that. You don't do that on your own. You have both mentors that have seen things, or you obviously want to chat and discuss whether it's the right time. I mean, Demis came in 2014, and he said, maybe in a bit we'll do StarCraft. And maybe he knew. And I'm just following his lead, which is great, because he's brilliant, right?
So these things are obviously quite important, that you want to be surrounded by people who are diverse. They have their knowledge. It's also important, I mean, I've learned a lot from people who actually have an idea that I might not think is good, but if I give them the space to try it, I've been proven wrong many, many times as well. So that's great. I think your colleagues are more important than yourself, I think. Sure. Now let's real quick talk about another impossible problem, AGI. Right. What do you think it takes to build a system that's human level intelligence? We talked a little bit about the Turing test, StarCraft. All of these have echoes of general intelligence. But if you think about just something that you would sit back and say, wow, this is really something that resembles human level intelligence, what do you think it takes to build that? So I find that AGI oftentimes is maybe not very well defined. So what I'm trying to come up with for myself is what would a result look like that would make you start to believe that you have agents or neural nets that no longer overfit to a single task, but actually learn the skill of learning, so to speak. And that actually is a field that I am fascinated by, which is learning to learn, or meta learning, which is about no longer learning about a single domain. So you can think about the learning algorithm itself as general. So the same formula we applied for AlphaStar or StarCraft, we can now apply to almost any video game, or you could apply to many other problems and domains. But the algorithm is what's generalizing. The neural network, those weights, are useless even to play another race. I train a network to play very well at Protoss versus Protoss. I need to throw away those weights. If I want to now play Terran versus Terran, I would need to retrain a network from scratch with the same algorithm. That's beautiful, but the network itself will not be useful. So I think if I see an approach that can absorb or start solving new problems without the need to kind of restart the process, that, to me, would be a nice way to define some form of AGI. Again, I don't know about the grandiose AGI thing. I mean, should the Turing test be solved before AGI? I mean, I don't know. I think concretely, I would like to see meta learning clearly happen, meaning that there is an architecture or a network that, as it sees a new problem or new data, it solves it. And to make it kind of a benchmark, it should solve it at the same speed that we solve new problems. When I show you a new object and you have to recognize it, or when you start playing a new game, you've played all the Atari games, but now you play a new Atari game, well, you're going to be pretty good at the game pretty quickly. So perhaps what's the domain and what's the exact benchmark is a bit difficult. I think as a community, we might need to do some work to define it. But I think this first step, I could see it happening relatively soon. But then the whole question of what AGI means and so on, I am a bit more confused about, because I think people mean different things. There's an emotional, psychological level, like even the Turing test, passing the Turing test is something that we just pass judgment on as human beings, what it means to be a dog, an AGI system. Yeah. What level, what does it mean, what does it mean? But I like the generalization. And maybe as a community, we converge towards a group of domains that are sufficiently far away.
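To make "learning to learn" slightly more concrete, here is a minimal sketch of one published meta-learning formulation, Reptile, on a toy regression problem. This is not what was used for AlphaStar, and the task family, features, and hyperparameters below are all invented; the only point is the shape of the idea: an outer loop nudges a shared initialization toward weights that adapt quickly, so a new task needs only a few inner steps rather than retraining from scratch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A 'task' is fitting y = a*sin(x + b) for a random amplitude and phase."""
    a, b = rng.uniform(0.5, 2.0), rng.uniform(0, np.pi)
    return lambda x: a * np.sin(x + b)

def adapt(weights, task, steps=5, lr=0.02):
    """Inner loop: a few gradient steps on one task, starting from the shared init."""
    w = weights.copy()
    for _ in range(steps):
        x = rng.uniform(-5, 5, size=32)
        feats = np.stack([np.sin(x), np.cos(x), np.ones_like(x)], axis=1)
        grad = feats.T @ (feats @ w - task(x)) / len(x)   # gradient of mean squared error
        w -= lr * grad
    return w

meta_w = np.zeros(3)                      # shared initialization being meta-learned
for _ in range(1000):                     # outer loop over sampled tasks
    task = sample_task()
    adapted = adapt(meta_w, task)
    meta_w += 0.1 * (adapted - meta_w)    # Reptile update: move the init toward the adapted weights
```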
That would be really damn impressive if it was able to generalize. So perhaps not as close as Protoss and Zerg, but like Wikipedia. That would be a step. Yeah, that would be a good step, and then a really good step. But then like from StarCraft to Wikipedia and back. Yeah, that kind of thing. And that feels also quite hard and far. But I think as long as you put the benchmark out, as we discovered, for instance, with ImageNet, then tremendous progress can be had. So I think maybe there's a lack of a benchmark, but I'm sure we'll find one and the community will then work towards that. And then beyond what AGI might mean or would imply, I really am hopeful to see basically machine learning or AI just scaling up and helping people that might not have the resources to hire an assistant, or that might not even know what the weather is going to be like. So in terms of the positive impact of AI, I think that's maybe where we should also not lose focus. The research community building AGI, I mean, that's a really nice goal. But I think the way that DeepMind puts it is, solve intelligence, and then use it to solve everything else. So I think we should parallelize. Yeah, we shouldn't forget about all the positive things that are actually coming out of AI already and are going to be coming out. Right. But on that note, let me ask, relative to popular perception, do you have any worry about the existential threat of artificial intelligence, in the near or far future, that some people have? I think in the near future, I'm skeptical. So I hope I'm not wrong. But I'm not concerned, and I appreciate the ongoing efforts, and even a whole research field on AI safety emerging, in conferences and so on. I think that's great. In the long term, I really hope we can simply have the benefits outweigh the potential dangers. I am hopeful for that. But also, we must remain vigilant, to monitor and assess whether the tradeoffs are there, and whether we have enough lead time to prevent or to redirect our efforts if need be. But I'm quite optimistic about the technology, and definitely more fearful of other threats at a planetary level at this point. But obviously, this is the one I have more power over. So clearly, I do start thinking more and more about this. And it's grown on me actually to start reading more about AI safety, which is a field that so far I have not really contributed to. But maybe there's something to be done there as well. I think it's really important. I talk about this with a few folks. But it's important to ask you, and shove it in your head, because you're at the leading edge of actually what people are excited about in AI. The work with AlphaStar, it's arguably at the very cutting edge of the kind of thing that people are afraid of. And so you speaking to the fact that we're actually quite far away from the kind of thing that people might be afraid of, but that it's still worthwhile to think about. And it's also good that you're not as worried and you're also open to thinking about it. There's two aspects. I mean, me not being worried, but obviously, we should prepare for things that could go wrong, misuse of the technologies, as with any technology. So I think there's always trade offs. And as a society, we've kind of solved this to some extent in the past.
So I'm hoping that by having the researchers and the whole community brainstorm and come up with interesting solutions to the new things that will happen in the future, we can still also push the research toward the avenue that I think is the greatest avenue, which is to understand intelligence: how are we doing what we're doing? And obviously, from a scientific standpoint, that is kind of my personal drive for all the time that I spend doing what I'm doing, really. Where do you see deep learning as a field heading? Where do you think the next big breakthrough might be? So I think deep learning, I discussed a little of this before, deep learning has to be combined with some form of discretization, program synthesis. I think that, as research in itself, is an interesting topic to expand and start doing more research on. And then in terms of what deep learning will enable us to do in the future, I don't think that's going to be what happens this year. But there's also this idea of starting not to throw away all the weights, this idea of learning to learn, and really having these agents not have to restart their weights. And you can have an agent that is kind of solving or classifying images on ImageNet, but also generating speech if you ask it to generate some speech. And it should really be almost the same network, but it might not be a neural network. It might be a neural network with an optimization algorithm attached to it. But I think this idea of generalization to new tasks is something for which we first must define good benchmarks. But then I think that's going to be exciting. And I'm not sure how close we are, but I think if you have a very limited domain, we can start making some progress. And much like how we made a lot of progress in computer vision, we should start thinking about this. I really like a talk that Leon Bottou gave at ICML a few years ago, which is that this train test paradigm should be broken. We should stop thinking about a training set and a test set as these closed things that are untouchable. I think we should go beyond these. And in meta learning, we call these the meta training set and the meta test set, which is really thinking about, if I know about ImageNet, why would that network not work on MNIST, which is a much simpler problem? But right now, it really doesn't, and it just feels wrong. So I think on the application or the benchmark side, we probably will see quite a bit more interest and progress, and hopefully people defining new and exciting challenges, really. Do you have any hope or interest in knowledge graphs within this context? So this kind of constructing of graphs. So going back to graphs. Well, neural networks and graphs. But I mean, a different kind of knowledge graph, sort of like semantic graphs or those concepts. Yeah. So with the idea of graphs, I've been quite interested in sequences first, and then in more interesting or different data structures like graphs. And I've studied graph neural networks in the last three years or so. I found these models just very interesting from a deep learning standpoint. But then why do we want these models and why would we use them? What's the application? What's kind of the killer application of graphs?
And perhaps if we could extract a knowledge graph from Wikipedia automatically, that would be interesting, because these graphs have this very interesting structure that also is a bit more compatible with this idea of programs and deep learning working together, jumping neighborhoods and so on. You could imagine defining some primitives to move around graphs, right? So I really like the idea of a knowledge graph. And in fact, when we started, as part of the research we did for StarCraft, I thought, wouldn't it be cool to give it the graph of all these buildings that depend on each other, and units that have prerequisites for being built. And so this is information that the network can learn and extract. But it would have been great to see, or to think of, really, StarCraft as a giant graph where, as the game evolves, you start taking branches and so on. And we did a bit of research on this, nothing too relevant, but I really like the idea. And it has elements of something you also worked with in terms of visualizing networks. It has elements of being able to generate knowledge representations that are human interpretable, that maybe human experts can then tweak or at least understand. So there's a lot of interesting aspects there. And for me personally, I'm just a huge fan of Wikipedia. And it's a shame that our neural networks aren't taking advantage of all the structured knowledge that's on the web. What's next for you? What's next for DeepMind? What are you excited about for AlphaStar? Yeah, so I think the obvious next steps would be to apply AlphaStar to other races. I mean, that sort of shows that the algorithm works, because we wouldn't want to have created by mistake something in the architecture that happens to work for Protoss but not for other races. So as verification, I think that's an obvious next step that we are working on. And then, agents and players can specialize in different skill sets that allow them to be very good, and I think we've seen AlphaStar understanding very well when to take battles and when not to. It's also very good at micromanagement and moving the units around and so on, and also very good at producing nonstop and trading off economy with building units. But I have not perhaps seen as much as I would like of this poker idea that you mentioned, right? I'm not sure StarCraft, or AlphaStar rather, has developed a very deep understanding of what the opponent is doing, reacting to that, and sort of trying to trick the player into doing something else. So this kind of reasoning, I would like to see more of. So I think purely from a research standpoint, there's perhaps also quite a few things to be done there in the domain of StarCraft. Yeah, in the domain of games, I've seen some interesting work in even auctions, manipulating other players, sort of forming a belief state and just messing with people. Yeah, it's called theory of mind, I guess. Theory of mind, yeah. So it's fascinating. Theory of mind and StarCraft, they're kind of really made for each other. So that would be very exciting to see, those techniques applied to StarCraft, or perhaps StarCraft driving new techniques, right? As I said, this is always the tension between the two. Well, Oriol, thank you so much for talking today. Awesome. It was great to be here. Thanks.
Oriol Vinyals: DeepMind AlphaStar, StarCraft, and Language | Lex Fridman Podcast #20
The following is a conversation with Chris Lattner. Currently, he's a senior director at Google working on several projects, including CPU, GPU, TPU accelerators for TensorFlow, Swift for TensorFlow, and all kinds of machine learning compiler magic going on behind the scenes. He's one of the top experts in the world on compiler technologies, which means he deeply understands the intricacies of how hardware and software come together to create efficient code. He created the LLVM compiler infrastructure project and the Clang compiler. He led major engineering efforts at Apple, including the creation of the Swift programming language. He also briefly spent time at Tesla as vice president of Autopilot software during the transition from Autopilot hardware 1 to hardware 2, when Tesla essentially started from scratch to build an in house software infrastructure for Autopilot. I could have easily talked to Chris for many more hours. Compiling code down across the levels of abstraction is one of the most fundamental and fascinating aspects of what computers do, and he is one of the world experts in this process. It's rigorous science, and it's messy, beautiful art. This conversation is part of the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F R I D. And now, here's my conversation with Chris Lattner. What was the first program you've ever written? My first program? Back... and when was it? I think I started as a kid, and my parents got a BASIC programming book. And so when I started, it was typing out programs from a book, and seeing how they worked, and then typing them in wrong, and trying to figure out why they were not working right, that kind of stuff. So BASIC. What was the first language that you remember yourself maybe falling in love with, like really connecting with? I don't know. I mean, I feel like I've learned a lot along the way, and each of them has a different special thing about it. So I started in BASIC, and then went to GW-BASIC, which was the thing back in the DOS days, and then upgraded to QBASIC, and eventually QuickBASIC, which are all slightly more fancy versions of Microsoft BASIC. Made the jump to Pascal, and started doing machine language programming and assembly in Pascal, which was really cool. Turbo Pascal was amazing for its day. Eventually got into C, C++, and then kind of did lots of other weird things. I feel like you took the dark path, which is... you could have gone Lisp. Yeah. You could have gone the higher level, sort of functional, philosophical, hippie route. Instead, you went into the dark arts of C. It was straight into the machine. Straight to the machine. So I started with BASIC, Pascal, and then Assembly, and then wrote a lot of Assembly. And I eventually did Smalltalk and other things like that. But that was not the starting point. But so what was this journey to C? Was that in high school? Was that in college? That was in high school, yeah. And then that was really about trying to be able to do more powerful things than what Pascal could do, and also to learn a different world. C was really confusing to me, with pointers and the syntax and everything, and it took a while. But Pascal's much more principled in various ways. C is more, I mean, it has its historical roots, but it's not as easy to learn. With pointers, there's this memory management thing that you have to become conscious of.
Is that the first time you start to understand that there's resources that you're supposed to manage? Well, so you have that in Pascal as well. But in Pascal, like the caret instead of the star, there's some small differences like that. But it's not about pointer arithmetic. And in C, you end up thinking about how things get laid out in memory a lot more. And so in Pascal, you have allocating and deallocating and owning the memory, but just the programs are simpler, and you don't have to. Well, for example, Pascal has a string type. And so you can think about a string instead of an array of characters which are consecutive in memory. So it's a little bit of a higher level abstraction. So let's get into it. Let's talk about LLVM, C lang, and compilers. Sure. So can you tell me first what LLVM and C lang are? And how is it that you find yourself the creator and lead developer of one of the most powerful compiler optimization systems in use today? Sure. So I guess they're different things. So let's start with what is a compiler? Is that a good place to start? What are the phases of a compiler? What are the parts? Yeah, what is it? So what is even a compiler used for? So the way I look at this is you have a two sided problem of you have humans that need to write code. And then you have machines that need to run the program that the human wrote. And for lots of reasons, the humans don't want to be writing in binary and don't want to think about every piece of hardware. And so at the same time that you have lots of humans, you also have lots of kinds of hardware. And so compilers are the art of allowing humans to think at a level of abstraction that they want to think about. And then get that program, get the thing that they wrote, to run on a specific piece of hardware. And the interesting and exciting part of all this is that there's now lots of different kinds of hardware, chips like x86 and PowerPC and ARM and things like that. But also high performance accelerators for machine learning and other things like that are also just different kinds of hardware, GPUs. These are new kinds of hardware. And at the same time, on the programming side of it, you have BASIC, you have C, you have JavaScript, you have Python, you have Swift. You have lots of other languages that are all trying to talk to the human in a different way to make them more expressive and capable and powerful. And so compilers are the thing that goes from one to the other. End to end, from the very beginning to the very end. End to end. And so you go from what the human wrote and programming languages end up being about expressing intent, not just for the compiler and the hardware, but the programming language's job is really to capture an expression of what the programmer wanted that then can be maintained and adapted and evolved by other humans, as well as interpreted by the compiler. So when you look at this problem, you have, on the one hand, humans, which are complicated. And you have hardware, which is complicated. And so compilers typically work in multiple phases. And so the software engineering challenge that you have here is try to get maximum reuse out of the amount of code that you write, because these compilers are very complicated. And so the way it typically works out is that you have something called a front end or a parser that is language specific. And so you'll have a C parser, and that's what Clang is, or C++ or JavaScript or Python or whatever. That's the front end.
Then you'll have a middle part, which is often the optimizer. And then you'll have a late part, which is hardware specific. And so compilers end up, there's many different layers often, but these three big groups are very common in compilers. And what LLVM is trying to do is trying to standardize that middle and last part. And so one of the cool things about LLVM is that there are a lot of different languages that compile through to it. And so things like Swift, but also Julia, Rust, Clang for C, C++, Objective C, like these are all very different languages and they can all use the same optimization infrastructure, which gets better performance, and the same code generation infrastructure for hardware support. And so LLVM is really that layer that is common, that all these different specific compilers can use. And is it a standard, like a specification, or is it literally an implementation? It's an implementation. And so I think there's a couple of different ways of looking at it, right? Because it depends on which angle you're looking at it from. LLVM ends up being a bunch of code, okay? So it's a bunch of code that people reuse and they build compilers with. We call it a compiler infrastructure because it's kind of the underlying platform that you build a concrete compiler on top of. But it's also a community. And the LLVM community is hundreds of people that all collaborate. And one of the most fascinating things about LLVM over the course of time is that we've managed somehow to successfully get harsh competitors in the commercial space to collaborate on shared infrastructure. And so you have Google and Apple, you have AMD and Intel, you have Nvidia and AMD on the graphics side, you have Cray and everybody else doing these things. And all these companies are collaborating together to make that shared infrastructure really, really great. And they do this not out of the goodness of their heart, but they do it because it's in their commercial interest of having really great infrastructure that they can build on top of and facing the reality that it's so expensive that no one company, even the big companies, no one company really wants to implement it all themselves. Expensive or difficult? Both. That's a great point because it's also about the skill sets. And the skill sets are very hard to find. How big is the LLVM community? It always seems like with open source projects, is LLVM open source? Yes, it's open source. It's about, it's 19 years old now, so it's fairly old. It seems like the magic often happens within a very small circle of people. Yes. At least at their early birth and whatever. Yes, so the LLVM came from a university project, and so I was at the University of Illinois. And there it was myself, my advisor, and then a team of two or three research students in the research group, and we built many of the core pieces initially. I then graduated and went to Apple, and at Apple brought it to the products, first in the OpenGL graphics stack, but eventually to the C compiler realm, and eventually built Clang, and eventually built Swift and these things. Along the way, building a team of people that are really amazing compiler engineers that helped build a lot of that. And so as it was gaining momentum and as Apple was using it, being open source and public and encouraging contribution, many others, for example, at Google, came in and started contributing.
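To make the front end, optimizer, back end split described above concrete, here is a minimal sketch of that layering in Swift. The protocol and type names are illustrative stand-ins, not LLVM's actual APIs: the point is just that many language front ends and many hardware back ends meet at one shared intermediate representation.

```swift
// A toy model of the classic three phase compiler design: language specific
// front ends and hardware specific back ends all share one intermediate
// representation (IR), so any language can target any chip.

struct IRModule {            // stand in for an intermediate representation
    var instructions: [String]
}

protocol FrontEnd {          // Clang, Swift, Rust, Julia... one per language
    func parse(_ source: String) -> IRModule
}

protocol Optimizer {         // the shared middle part: passes over the IR
    func run(_ module: IRModule) -> IRModule
}

protocol BackEnd {           // x86, ARM, a GPU... one per hardware target
    func emit(_ module: IRModule) -> [UInt8]   // machine code bytes
}

// Any front end can be paired with any back end through the shared IR.
func compile<F: FrontEnd, O: Optimizer, B: BackEnd>(
    _ source: String, frontEnd: F, optimizer: O, backEnd: B
) -> [UInt8] {
    backEnd.emit(optimizer.run(frontEnd.parse(source)))
}
```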
And in some cases, Google effectively owns Clang now because it cares so much about C++ and the evolution of that ecosystem, and so it's investing a lot in the C++ world and the tooling and things like that. And so likewise, NVIDIA cares a lot about CUDA. And so CUDA uses Clang and uses LLVM for graphics and GPGPU. And so when you first started as a master's project, I guess, did you think it was gonna go as far as it went? Were you crazy ambitious about it? No. It seems like a really difficult undertaking, a brave one. Yeah, no, no, no, it was nothing like that. So my goal when I went to the University of Illinois was to get in and out with a non thesis masters in a year and get back to work. So I was not planning to stay for five years and build this massive infrastructure. I got nerd sniped into staying. And a lot of it was because LLVM was fun and I was building cool stuff and learning really interesting things and facing both software engineering challenges, but also learning how to work in a team and things like that. I had worked at many companies as interns before that, but it was really a different thing to have a team of people that are working together and try and collaborate in version control. And it was just a little bit different. Like I said, I just talked to Don Knuth and he believes that 2% of the world population have something weird with their brain, that they're geeks, they understand computers, they're connected with computers. He put it at exactly 2%. Okay, so. He's a specific guy. It's very specific. Well, he says, I can't prove it, but it's very empirically there. Is there something that attracts you to the idea of optimizing code? And he seems like that's one of the biggest, coolest things about LLVM. Yeah, that's one of the major things it does. So I got into that because of a person, actually. So when I was in my undergraduate, I had an advisor, or a professor named Steve Vegdahl. And he, I went to this little tiny private school. There were like seven or nine people in my computer science department, students in my class. So it was a very tiny, very small school. It was kind of a wart on the side of the math department kind of a thing at the time. I think it's evolved a lot in the many years since then. But Steve Vegdahl was a compiler guy. And he was super passionate. And his passion rubbed off on me. And one of the things I like about compilers is that they're large, complicated software pieces. And so one of the culminating classes that many computer science departments, at least at the time, did was to say that you would take algorithms and data structures and all these core classes. But then the compilers class was one of the last classes you take because it pulls everything together. And then you work on one piece of code over the entire semester. And so you keep building on your own work, which is really interesting. And it's also very challenging because in many classes, if you don't get a project done, you just forget about it and move on to the next one and get your B or whatever it is. But here you have to live with the decisions you make and continue to reinvest in it. And I really like that. And so I did an extra study project with him the following semester. And he was just really great. And he was also a great mentor in a lot of ways. And so from him and from his advice, he encouraged me to go to graduate school. I wasn't super excited about going to grad school. I wanted the master's degree, but I didn't want to be an academic. 
But like I said, I kind of got tricked into staying and was having a lot of fun. And I definitely do not regret it. What aspects of compilers were the things you connected with? So LLVM, there's also the other part that's really interesting if you're interested in languages is parsing and just analyzing the language, breaking it down, parsing, and so on. Was that interesting to you, or were you more interested in optimization? For me, it was more so I'm not really a math person. I could do math. I understand some bits of it when I get into it. But math is never the thing that attracted me. And so a lot of the parser part of the compiler has a lot of good formal theories that Don, for example, knows quite well. I'm still waiting for his book on that. But I just like building a thing and seeing what it could do and exploring and getting it to do more things and then setting new goals and reaching for them. And in the case of LLVM, when I started working on that, my research advisor that I was working for was a compiler guy. And so he and I specifically found each other because we were both interested in compilers. And so I started working with him and taking his class. And a lot of LLVM initially was, it's fun implementing all the standard algorithms and all the things that people had been talking about and were well known. And they were in the curricula for advanced studies in compilers. And so just being able to build that was really fun. And I was learning a lot by, instead of reading about it, just building. And so I enjoyed that. So you said compilers are these complicated systems. Can you even just with language try to describe how you turn a C++ program into code? Like, what are the hard parts? Why is it so hard? So I'll give you examples of the hard parts along the way. So C++ is a very complicated programming language. It's something like 1,400 pages in the spec. So C++ by itself is crazy complicated. Can we just pause? What makes the language complicated in terms of what's syntactically? So it's what they call syntax. So the actual how the characters are arranged, yes. It's also semantics, how it behaves. It's also, in the case of C++, there's a huge amount of history. C++ is built on top of C. You play that forward. And then a bunch of suboptimal, in some cases, decisions were made, and they compound. And then more and more and more things keep getting added to C++, and it will probably never stop. But the language is very complicated from that perspective. And so the interactions between subsystems is very complicated. There's just a lot there. And when you talk about the front end, one of the major challenges, which Clang as a project, the C, C++ compiler that I built, I and many people built, one of the challenges we took on was we looked at GCC. GCC, at the time, was a really good industry standardized compiler that had really consolidated a lot of the other compilers in the world and was a standard. But it wasn't really great for research. The design was very difficult to work with. And it was full of global variables and other things that made it very difficult to reuse in ways that it wasn't originally designed for. And so with Clang, one of the things that we wanted to do is push forward on better user interface, so make error messages that are just better than GCC's. And that's actually hard, because you have to do a lot of bookkeeping in an efficient way to be able to do that. We wanted to make compile time better.
And so compile time is about making it efficient, which is also really hard when you're keeping track of extra information. We wanted to make new tools available, so refactoring tools and other analysis tools that GCC never supported, also leveraging the extra information we kept, but enabling those new classes of tools that then get built into IDEs. And so that's been one of the areas that clang has really helped push the world forward in, is in the tooling for C and C++ and things like that. But C++ and the front end piece is complicated. And you have to build syntax trees. And you have to check every rule in the spec. And you have to turn that back into an error message to the human that the human can understand when they do something wrong. But then you start doing what's called lowering, so going from C++ and the way that it represents code down to the machine. And when you do that, there's many different phases you go through. Often, there are, I think LLVM has something like 150 different what are called passes in the compiler that the code passes through. And these get organized in very complicated ways, which affect the generated code and the performance and compile time and many other things. What are they passing through? So after you do the clang parsing, what's the graph? What does it look like? What's the data structure here? Yeah, so in the parser, it's usually a tree. And it's called an abstract syntax tree. And so the idea is you have a node for the plus that the human wrote in their code. Or the function call, you'll have a node for call with the function that they call and the arguments they pass, things like that. This then gets lowered into what's called an intermediate representation. And intermediate representations are like LLVM has one. And there, it's what's called a control flow graph. And so you represent each operation in the program as a very simple, like this is going to add two numbers. This is going to multiply two things. Maybe we'll do a call. But then they get put in what are called blocks. And so you get blocks of these straight line operations, where instead of being nested like in a tree, it's straight line operations. And so there's a sequence and an ordering to these operations. So within the block or outside the block? That's within the block. And so it's a straight line sequence of operations within the block. And then you have branches, like conditional branches, between blocks. And so when you write a loop, for example, in a syntax tree, you would have a for node, like for a for statement in a C like language, you'd have a for node. And you have a pointer to the expression for the initializer, a pointer to the expression for the increment, a pointer to the expression for the comparison, a pointer to the body. And these are all nested underneath it. In a control flow graph, you get a block for the code that runs before the loop, so the initializer code. And you have a block for the body of the loop. And so the body of the loop code goes in there, but also the increment and other things like that. And then you have a branch that goes back to the top and a comparison and a branch that goes out. And so it's more of an assembly level kind of representation. But the nice thing about this level of representation is it's much more language independent. And so there's lots of different kinds of languages with different kinds of, you know, JavaScript has a lot of different ideas of what is false, for example. And all that can stay in the front end. 
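As a rough illustration of the difference described above between an abstract syntax tree and a control flow graph, here is a toy sketch in Swift. The node and block types are invented purely for illustration; they just mirror the for loop lowering described in the conversation, where the nested tree becomes flat blocks of straight line operations connected by branches.

```swift
// Abstract syntax tree: nested nodes, one per construct the human wrote.
indirect enum ASTNode {
    case number(Int)
    case variable(String)
    case binary(op: String, lhs: ASTNode, rhs: ASTNode)       // e.g. the "+" node
    case forLoop(initializer: ASTNode, condition: ASTNode,
                 increment: ASTNode, body: [ASTNode])          // children nested underneath
}

// Control flow graph: flat basic blocks of straight line operations,
// connected by branches instead of by nesting.
struct BasicBlock {
    var name: String
    var instructions: [String]   // straight line ops, in order
    var successors: [String]     // branches out of the block
}

// A `for` loop lowered the way described above: a block before the loop for
// the initializer, a block for the body plus the increment, and a comparison
// that branches either back into the body or out of the loop.
let loweredLoop = [
    BasicBlock(name: "entry", instructions: ["i = 0"],                       successors: ["cond"]),
    BasicBlock(name: "cond",  instructions: ["compare i, n"],                successors: ["body", "exit"]),
    BasicBlock(name: "body",  instructions: ["call work(i)", "i = i + 1"],   successors: ["cond"]),
    BasicBlock(name: "exit",  instructions: [],                              successors: []),
]
```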
But then that middle part can be shared across all those. How close is that intermediate representation to neural networks, for example? Because everything you describe is kind of an echo of a neural network graph. Are they neighbors or what? They're quite different in details, but they're very similar in idea. So one of the things that neural networks do is they learn representations for data at different levels of abstraction. And then they transform those through layers, right? So the compiler does very similar things. But one of the things the compiler does is it has relatively few different representations. Where a neural network often, as you get deeper, for example, you get many different representations in each layer or set of ops. It's transforming between these different representations. In a compiler, often you get one representation and they do many transformations to it. And these transformations are often applied iteratively. And for programmers, there's familiar types of things. For example, trying to find expressions inside of a loop and pulling them out of a loop so they execute fewer times. Or find redundant computation. Or do constant folding or other simplifications, turning two times x into x shift left by one. And things like this are all examples of the things that happen. But compilers end up getting a lot of theorem proving and other kinds of algorithms that try to find higher level properties of the program that then can be used by the optimizer. Cool. So what's the biggest bang for the buck with optimization? Today? Yeah. Well, no, not even today. At the very beginning, the 80s, I don't know. Yeah, so for the 80s, a lot of it was things like register allocation. So the idea is that in a modern microprocessor, what you'll end up having is you'll end up having memory, which is relatively slow. And then you have registers that are relatively fast. But registers, you don't have very many of them. And so when you're writing a bunch of code, you're just saying, compute this, put in a temporary variable, compute this, compute this, compute this, put in a temporary variable. I have a loop. I have some other stuff going on. Well, now you're running on an x86, like a desktop PC or something. Well, it only has, in some cases, in some modes, eight registers. And so now the compiler has to choose what values get put in what registers at what points in the program. And this is actually a really big deal. So if you think about, you have a loop, an inner loop that executes millions of times maybe. If you're doing loads and stores inside that loop, then it's going to be really slow. But if you can somehow fit all the values inside that loop in registers, now it's really fast. And so getting that right requires a lot of work, because there's many different ways to do that. And often what the compiler ends up doing is it ends up thinking about things in a different representation than what the human wrote. You wrote int x. Well, the compiler thinks about that as four different values, each of which has a different lifetime across the function that it's in. And each of those could be put in a register or memory or different memory or maybe in some parts of the code recomputed instead of stored and reloaded. And there are many of these different kinds of techniques that can be used. So it's adding almost like a time dimension to it. It's trying to optimize across time. So it's considering when you're programming, you're not thinking in that way. Yeah, absolutely.
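A minimal sketch of the kinds of transformations mentioned above, constant folding and turning two times x into a shift, over a toy expression type in Swift. This is illustrative only and bears no resemblance to LLVM's actual pass infrastructure; it just shows what one bottom-up simplification pass looks like.

```swift
indirect enum Expr {
    case constant(Int)
    case variable(String)
    case multiply(Expr, Expr)
    case shiftLeft(Expr, Int)
}

// One simplification pass, applied bottom up over the expression tree.
func simplify(_ e: Expr) -> Expr {
    switch e {
    case let .multiply(lhs, rhs):
        let (l, r) = (simplify(lhs), simplify(rhs))
        // Constant folding: 3 * 4 becomes 12 at compile time.
        if case let .constant(a) = l, case let .constant(b) = r {
            return .constant(a * b)
        }
        // Strength reduction: 2 * x becomes x << 1.
        if case .constant(2) = l { return .shiftLeft(r, 1) }
        if case .constant(2) = r { return .shiftLeft(l, 1) }
        return .multiply(l, r)
    default:
        return e
    }
}
```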
And so the RISC era made things. So RISC chips, R I S C. The RISC chips, as opposed to CISC chips. The RISC chips made things more complicated for the compiler, because what they ended up doing is ending up adding pipelines to the processor, where the processor can do more than one thing at a time. But this means that the order of operations matters a lot. So one of the classical compiler techniques that you use is called scheduling. And so moving the instructions around so that the processor can keep its pipelines full instead of stalling and getting blocked. And so there's a lot of things like that that are kind of bread and butter compiler techniques that have been studied a lot over the course of decades now. But the engineering side of making them real is also still quite hard. And you talk about machine learning. This is a huge opportunity for machine learning, because many of these algorithms are full of these hokey, hand rolled heuristics, which work well on specific benchmarks that don't generalize, and full of magic numbers. And I hear there's some techniques that are good at handling that. So what would be the, if you were to apply machine learning to this, what's the thing you're trying to optimize? Is it ultimately the running time? You can pick your metric, and there's running time, there's memory use, there's lots of different things that you can optimize for. Code size is another one that some people care about in the embedded space. Is this like the thinking into the future, or has somebody actually been crazy enough to try to have machine learning based parameter tuning for the optimization of compilers? So this is something that is, I would say, research right now. There are a lot of research systems that have been applying search in various forms. And using reinforcement learning is one form, but also brute force search has been tried for quite a while. And usually, these are in small problem spaces. So find the optimal way to code generate a matrix multiply for a GPU, something like that, where you say, there, there's a lot of design space of, do you unroll loops a lot? Do you execute multiple things in parallel? And there's many different confounding factors here because graphics cards have different numbers of threads and registers and execution ports and memory bandwidth and many different constraints that interact in nonlinear ways. And so search is very powerful for that. And it gets used in certain ways, but it's not very structured. This is something that we need, we as an industry need to fix. So you said 80s, but like, so have there been like big jumps in improvement and optimization? Yeah. Yeah, since then, what's the coolest thing? It's largely been driven by hardware. So, well, it's hardware and software. So in the mid nineties, Java totally changed the world, right? And I'm still amazed by how much change was introduced by the way or in a good way. So like reflecting back, Java introduced things like, all at once introduced things like JIT compilation. None of these were novel, but it pulled it together and made it mainstream and made people invest in it. JIT compilation, garbage collection, portable code, safe code, like memory safe code, like a very dynamic dispatch execution model. Like many of these things, which had been done in research systems and had been done in small ways in various places, really came to the forefront, really changed how things worked and therefore changed the way people thought about the problem. 
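As a toy illustration of the search over code generation choices mentioned a little earlier, unroll factors, tile sizes and so on, here is a brute force sketch in Swift with a completely made up cost model. A real system would measure or predict runtime on the target hardware, which is where the nonlinear interactions and the opportunity for learned models come in.

```swift
// Candidate code generation choices for a toy matrix multiply kernel.
struct CodegenChoice { var unrollFactor: Int; var tileSize: Int }

// A stand in cost model; real systems measure or predict runtime on hardware.
func estimatedCost(_ c: CodegenChoice) -> Double {
    let loopOverhead = 100.0 / Double(c.unrollFactor)
    let cachePenalty = c.tileSize > 64 ? Double(c.tileSize) * 0.5 : 0.0
    let codeSize = Double(c.unrollFactor * c.tileSize) * 0.01
    return loopOverhead + cachePenalty + codeSize
}

// Brute force search over the (small) design space.
let candidates = [1, 2, 4, 8].flatMap { unroll in
    [16, 32, 64, 128].map { tile in CodegenChoice(unrollFactor: unroll, tileSize: tile) }
}
let best = candidates.min { estimatedCost($0) < estimatedCost($1) }!
print("best unroll:", best.unrollFactor, "best tile:", best.tileSize)
```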
JavaScript was another major world change based on the way it works. But also on the hardware side of things, multi core and vector instructions really change the problem space and are very, they don't remove any of the problems that compilers faced in the past, but they add new kinds of problems of how do you find enough work to keep a four wide vector busy, right? Or if you're doing a matrix multiplication, how do you do different columns out of that matrix at the same time? And how do you maximally utilize the arithmetic compute that one core has? And then how do you take it to multiple cores? How did the whole virtual machine thing change the compilation pipeline? Yeah, so what the Java virtual machine does is it splits, just like I was talking about before, where you have a front end that parses the code, and then you have an intermediate representation that gets transformed. What Java did was they said, we will parse the code and then compile to what's known as Java byte code. And that byte code is now a portable code representation that is industry standard and locked down and can't change. And then the back part of the compiler that does optimization and code generation can now be built by different vendors. Okay. And Java byte code can be shipped around across the wire. It's memory safe and relatively trusted. And because of that, it can run in the browser. And that's why it runs in the browser, right? And so that way you can be in, again, back in the day, you would write a Java applet and as a web developer, you'd build this mini app that would run on a webpage. Well, a user of that is running a web browser on their computer. You download that Java byte code, which can be trusted, and then you do all the compiler stuff on your machine so that you know that you trust that. Now, is that a good idea or a bad idea? It's a great idea. I mean, it's a great idea for certain problems. And I'm very much a believer that technology is itself neither good nor bad. It's how you apply it. You know, this would be a very, very bad thing for very low levels of the software stack. But in terms of solving some of these software portability and transparency, or portability problems, I think it's been really good. Now, Java ultimately didn't win out on the desktop. And like, there are good reasons for that. But it's been very successful on servers and in many places, it's been a very successful thing over decades. So what has been LLVMs and C langs improvements and optimization that throughout its history, what are some moments we had set back and really proud of what's been accomplished? Yeah, I think that the interesting thing about LLVM is not the innovations and compiler research. It has very good implementations of various important algorithms, no doubt. And a lot of really smart people have worked on it. But I think that the thing that's most profound about LLVM is that through standardization, it made things possible that otherwise wouldn't have happened, okay? And so interesting things that have happened with LLVM, for example, Sony has picked up LLVM and used it to do all the graphics compilation in their movie production pipeline. And so now they're able to have better special effects because of LLVM. That's kind of cool. That's not what it was designed for, right? But that's the sign of good infrastructure when it can be used in ways it was never designed for because it has good layering and software engineering and it's composable and things like that. 
Which is where, as you said, it differs from GCC. Yes, GCC is also great in various ways, but it's not as good as infrastructure technology. It's really a C compiler, or it's a Fortran compiler. It's not infrastructure in the same way. Now you can tell I don't know what I'm talking about because I keep saying C lang. You can always tell when a person has clues, by the way, to pronounce something. I don't think, have I ever used C lang? Entirely possible, have you? Well, so you've used code, it's generated probably. So C lang and LLVM are used to compile all the apps on the iPhone effectively and the OSs. It compiles Google's production server applications. It's used to build GameCube games and PlayStation 4 and things like that. So as a user, I have, but just everything I've done that I experienced with Linux has been, I believe, always GCC. Yeah, I think Linux still defaults to GCC. And is there a reason for that? Or is it because, I mean, is there a reason for that? It's a combination of technical and social reasons. Many Linux developers do use C lang, but the distributions, for lots of reasons, use GCC historically, and they've not switched, yeah. Because it's just anecdotally online, it seems that LLVM has either reached the level of GCC or superseded on different features or whatever. The way I would say it is that they're so close, it doesn't matter. Yeah, exactly. Like, they're slightly better in some ways, slightly worse than otherwise, but it doesn't actually really matter anymore, that level. So in terms of optimization breakthroughs, it's just been solid incremental work. Yeah, yeah, which describes a lot of compilers. The hard thing about compilers, in my experience, is the engineering, the software engineering, making it so that you can have hundreds of people collaborating on really detailed, low level work and scaling that. And that's really hard. And that's one of the things I think LLVM has done well. And that kind of goes back to the original design goals with it to be modular and things like that. And incidentally, I don't want to take all the credit for this, right? I mean, some of the best parts about LLVM is that it was designed to be modular. And when I started, I would write, for example, a register allocator, and then somebody much smarter than me would come in and pull it out and replace it with something else that they would come up with. And because it's modular, they were able to do that. And that's one of the challenges with GCC, for example, is replacing subsystems is incredibly difficult. It can be done, but it wasn't designed for that. And that's one of the reasons that LLVM's been very successful in the research world as well. But in a community sense, Guido van Rossum, right, from Python, just retired from, what is it? Benevolent Dictator for Life, right? So in managing this community of brilliant compiler folks, is there, did it, for a time at least, fall on you to approve things? Oh yeah, so I mean, I still have something like an order of magnitude more patches in LLVM than anybody else, and many of those I wrote myself. But you still write, I mean, you're still close to the, to the, I don't know what the expression is, to the metal, you still write code. Yeah, I still write code. Not as much as I was able to in grad school, but that's an important part of my identity. 
But the way that LLVM has worked over time is that when I was a grad student, I could do all the work and steer everything and review every patch and make sure everything was done exactly the way my opinionated sense felt like it should be done, and that was fine. But as things scale, you can't do that, right? And so what ends up happening is LLVM has a hierarchical system of what's called code owners. These code owners are given the responsibility not to do all the work, not necessarily to review all the patches, but to make sure that the patches do get reviewed and make sure that the right thing's happening architecturally in their area. And so what you'll see is you'll see that, for example, hardware manufacturers end up owning the hardware specific parts of their hardware. That's very common. Leaders in the community that have done really good work naturally become the de facto owner of something. And then usually somebody else is like, how about we make them the official code owner? And then we'll have somebody to make sure that all the patches get reviewed in a timely manner. And then everybody's like, yes, that's obvious. And then it happens, right? And usually this is a very organic thing, which is great. And so I'm nominally the top of that stack still, but I don't spend a lot of time reviewing patches. What I do is I help negotiate a lot of the technical disagreements that end up happening and making sure that the community as a whole makes progress and is moving in the right direction and doing that. So we also started a nonprofit six years ago, seven years ago, time's gone away. And the LLVM Foundation nonprofit helps oversee all the business sides of things and make sure that the events that the LLVM community has are funded and set up and run correctly and stuff like that. But the foundation is very much stays out of the technical side of where the project is going. Right, so it sounds like a lot of it is just organic. Yeah, well, LLVM is almost 20 years old, which is hard to believe. Somebody pointed out to me recently that LLVM is now older than GCC was when LLVM started, right? So time has a way of getting away from you. But the good thing about that is it has a really robust, really amazing community of people that are in their professional lives, spread across lots of different companies, but it's a community of people that are interested in similar kinds of problems and have been working together effectively for years and have a lot of trust and respect for each other. And even if they don't always agree that we're able to find a path forward. So then in a slightly different flavor of effort, you started at Apple in 2005 with the task of making, I guess, LLVM production ready. And then eventually 2013 through 2017, leading the entire developer tools department. We're talking about LLVM, Xcode, Objective C to Swift. So in a quick overview of your time there, what were the challenges? First of all, leading such a huge group of developers, what was the big motivator, dream, mission behind creating Swift, the early birth of it from Objective C and so on, and Xcode, what are some challenges? So these are different questions. Yeah, I know, but I wanna talk about the other stuff too. I'll stay on the technical side, then we can talk about the big team pieces, if that's okay. So it's to really oversimplify many years of hard work. LLVM started, joined Apple, became a thing, became successful and became deployed. 
But then there's a question about how do we actually parse the source code? So LLVM is that back part, the optimizer and the code generator. And LLVM was really good for Apple as it went through a couple of harder transitions. I joined right at the time of the Intel transition, for example, and 64 bit transitions, and then the transition to ARM with the iPhone. And so LLVM was very useful for some of these kinds of things. But at the same time, there's a lot of questions around developer experience. And so if you're a programmer pounding out at the time Objective C code, the error message you get, the compile time, the turnaround cycle, the tooling and the IDE, were not great, were not as good as they could be. And so, as I occasionally do, I'm like, well, okay, how hard is it to write a C compiler? And so I'm not gonna commit to anybody, I'm not gonna tell anybody, I'm just gonna just do it nights and weekends and start working on it. And then I built up in C, there's this thing called the preprocessor, which people don't like, but it's actually really hard and complicated and includes a bunch of really weird things like trigraphs and other stuff like that that are really nasty, and it's the crux of a bunch of the performance issues in the compiler. Started working on the parser and kind of got to the point where I'm like, ah, you know what, we could actually do this. Everybody's saying that this is impossible to do, but it's actually just hard, it's not impossible. And eventually told my manager about it, and he's like, oh, wow, this is great, we do need to solve this problem. Oh, this is great, we can get you one other person to work with you on this, you know? And slowly a team is formed and it starts taking off. And C++, for example, huge, complicated language. People always assume that it's impossible to implement and it's very nearly impossible, but it's just really, really hard. And the way to get there is to build it one piece at a time incrementally. And that was only possible because we were lucky to hire some really exceptional engineers that knew various parts of it very well and could do great things. Swift was kind of a similar thing. So Swift came from, we were just finishing off the first version of C++ support in Clang. And C++ is a very formidable and very important language, but it's also ugly in lots of ways. And you can't influence C++ without thinking there has to be a better thing, right? And so I started working on Swift, again, with no hope or ambition that would go anywhere, just let's see what could be done, let's play around with this thing. It was me in my spare time, not telling anybody about it, kind of a thing, and it made some good progress. I'm like, actually, it would make sense to do this. At the same time, I started talking with the senior VP of software at the time, a guy named Bertrand Serlet. And Bertrand was very encouraging. He was like, well, let's have fun, let's talk about this. And he was a little bit of a language guy, and so he helped guide some of the early work and encouraged me and got things off the ground. And eventually told my manager and told other people, and it started making progress. The complicating thing with Swift was that the idea of doing a new language was not obvious to anybody, including myself. And the tone at the time was that the iPhone was successful because of Objective C. Oh, interesting. Not despite of or just because of. 
And you have to understand that at the time, Apple was hiring software people that loved Objective C. And it wasn't that they came despite Objective C. They loved Objective C, and that's why they got hired. And so you had a software team that the leadership, in many cases, went all the way back to Next, where Objective C really became real. And so they, quote unquote, grew up writing Objective C. And many of the individual engineers all were hired because they loved Objective C. And so this notion of, OK, let's do new language was kind of heretical in many ways. Meanwhile, my sense was that the outside community wasn't really in love with Objective C. Some people were, and some of the most outspoken people were. But other people were hitting challenges because it has very sharp corners and it's difficult to learn. And so one of the challenges of making Swift happen that was totally non technical is the social part of what do we do? If we do a new language, which at Apple, many things happen that don't ship. So if we ship it, what is the metrics of success? Why would we do this? Why wouldn't we make Objective C better? If Objective C has problems, let's file off those rough corners and edges. And one of the major things that became the reason to do this was this notion of safety, memory safety. And the way Objective C works is that a lot of the object system and everything else is built on top of pointers in C. Objective C is an extension on top of C. And so pointers are unsafe. And if you get rid of the pointers, it's not Objective C anymore. And so fundamentally, that was an issue that you could not fix safety or memory safety without fundamentally changing the language. And so once we got through that part of the mental process and the thought process, it became a design process of saying, OK, well, if we're going to do something new, what is good? How do we think about this? And what do we like? And what are we looking for? And that was a very different phase of it. So what are some design choices early on in Swift? Like we're talking about braces, are you making a typed language or not, all those kinds of things. Yeah, so some of those were obvious given the context. So a typed language, for example, Objective C is a typed language. And going with an untyped language wasn't really seriously considered. We wanted the performance, and we wanted refactoring tools and other things like that that go with typed languages. Quick, dumb question. Was it obvious, I think this would be a dumb question, but was it obvious that the language has to be a compiled language? Yes, that's not a dumb question. Earlier, I think late 90s, Apple had seriously considered moving its development experience to Java. But Swift started in 2010, which was several years after the iPhone. It was when the iPhone was definitely on an upward trajectory. And the iPhone was still extremely, and is still a bit memory constrained. And so being able to compile the code and then ship it and then having standalone code that is not JIT compiled is a very big deal and is very much part of the Apple value system. Now, JavaScript's also a thing. I mean, it's not that this is exclusive, and technologies are good depending on how they're applied. But in the design of Swift, saying, how can we make Objective C better? Objective C is statically compiled, and that was the contiguous, natural thing to do. Just skip ahead a little bit, and we'll go right back. 
Just as a question, as you think about today in 2019 in your work at Google, TensorFlow and so on, is, again, compilations, static compilation still the right thing? Yeah, so the funny thing after working on compilers for a really long time is that, and this is one of the things that LLVM has helped with, is that I don't look at compilations being static or dynamic or interpreted or not. This is a spectrum. And one of the cool things about Swift is that Swift is not just statically compiled. It's actually dynamically compiled as well, and it can also be interpreted. Though, nobody's actually done that. And so what ends up happening when you use Swift in a workbook, for example in Colab or in Jupyter, is it's actually dynamically compiling the statements as you execute them. And so this gets back to the software engineering problems, where if you layer the stack properly, you can actually completely change how and when things get compiled because you have the right abstractions there. And so the way that a Colab workbook works with Swift is that when you start typing into it, it creates a process, a Unix process. And then each line of code you type in, it compiles it through the Swift compiler, the front end part, and then sends it through the optimizer, JIT compiles machine code, and then injects it into that process. And so as you're typing new stuff, it's like squirting in new code and overwriting and replacing and updating code in place. And the fact that it can do this is not an accident. Swift was designed for this. But it's an important part of how the language was set up and how it's layered, and this is a nonobvious piece. And one of the things with Swift that was, for me, a very strong design point is to make it so that you can learn it very quickly. And so from a language design perspective, the thing that I always come back to is this UI principle of progressive disclosure of complexity. And so in Swift, you can start by saying print, quote, hello world, quote. And there's no slash n, just like Python, one line of code, no main, no header files, no public static class void, blah, blah, blah, string like Java has, one line of code. And you can teach that, and it works great. Then you can say, well, let's introduce variables. And so you can declare a variable with var. So var x equals 4. What is a variable? You can use x, x plus 1. This is what it means. Then you can say, well, how about control flow? Well, this is what an if statement is. This is what a for statement is. This is what a while statement is. Then you can say, let's introduce functions. And many languages like Python have had this kind of notion of let's introduce small things, and then you can add complexity. Then you can introduce classes. And then you can add generics, in the case of Swift. And then you can build in modules and build out in terms of the things that you're expressing. But this is not very typical for compiled languages. And so this was a very strong design point, and one of the reasons that Swift, in general, is designed with this factoring of complexity in mind so that the language can express powerful things. You can write firmware in Swift if you want to. But it has a very high level feel, which is really this perfect blend, because often you have very advanced library writers that want to be able to use the nitty gritty details. But then other people just want to use the libraries and work at a higher abstraction level. It's kind of cool that I saw that you can just interoperability. 
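The progressive disclosure progression described above looks roughly like this in plain Swift, each step adding one concept on top of the last; this is just the sequence from the conversation written out, not any official teaching curriculum.

```swift
// Step 1: one line, no main, no headers, no boilerplate.
print("Hello, world!")

// Step 2: variables.
var x = 4
x = x + 1

// Step 3: control flow.
if x > 4 {
    print("bigger")
}
for i in 0..<x {
    print(i)
}

// Step 4: functions.
func square(_ n: Int) -> Int {
    return n * n
}

// Step 5: classes, and then generics.
class Counter {
    var count = 0
    func increment() { count += 1 }
}

func firstElement<T>(of array: [T]) -> T? {
    return array.first
}
```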
I don't think I pronounced that word enough. But you can just drag in Python. It's just strange. You can import, like I saw this in the demo. How do you make that happen? What's up with that? Is that as easy as it looks, or is it? Yes, as easy as it looks. That's not a stage magic hack or anything like that. I don't mean from the user perspective. I mean from the implementation perspective to make it happen. So it's easy once all the pieces are in place. The way it works, so if you think about a dynamically typed language like Python, you can think about it in two different ways. You can say it has no types, which is what most people would say. Or you can say it has one type. And you can say it has one type, and it's the Python object. And the Python object gets passed around. And because there's only one type, it's implicit. And so what happens with Swift and Python talking to each other, Swift has lots of types. It has arrays, and it has strings, and all classes, and that kind of stuff. But it now has a Python object type. So there is one Python object type. And so when you say import NumPy, what you get is a Python object, which is the NumPy module. And then you say np.array. It says, OK, hey, Python object, I have no idea what you are. Give me your array member. OK, cool. And it just uses dynamic stuff, talks to the Python interpreter, and says, hey, Python, what's the.array member in that Python object? It gives you back another Python object. And now you say parentheses for the call and the arguments you're going to pass. And so then it says, hey, a Python object that is the result of np.array, call with these arguments. Again, calling into the Python interpreter to do that work. And so right now, this is all really simple. And if you dive into the code, what you'll see is that the Python module in Swift is something like 1,200 lines of code or something. It's written in pure Swift. It's super simple. And it's built on top of the C interoperability because it just talks to the Python interpreter. But making that possible required us to add two major language features to Swift to be able to express these dynamic calls and the dynamic member lookups. And so what we've done over the last year is we've proposed, implement, standardized, and contributed new language features to the Swift language in order to make it so it is really trivial. And this is one of the things about Swift that is critical to the Swift for TensorFlow work, which is that we can actually add new language features. And the bar for adding those is high, but it's what makes it possible. So you're now at Google doing incredible work on several things, including TensorFlow. So TensorFlow 2.0 or whatever leading up to 2.0 has, by default, in 2.0, has eager execution. And yet, in order to make code optimized for GPU or TPU or some of these systems, computation needs to be converted to a graph. So what's that process like? What are the challenges there? Yeah, so I am tangentially involved in this. But the way that it works with Autograph is that you mark your function with a decorator. And when Python calls it, that decorator is invoked. And then it says, before I call this function, you can transform it. And so the way Autograph works is, as far as I understand, is it actually uses the Python parser to go parse that, turn it into a syntax tree, and now apply compiler techniques to, again, transform this down into TensorFlow graphs. And so you can think of it as saying, hey, I have an if statement. 
I'm going to create an if node in the graph, like you say tf.cond. You have a multiply. Well, I'll turn that into a multiply node in the graph. And it becomes this tree transformation. So where does the Swift for TensorFlow come in, which is parallels? For one, Swift is an interface. Like, Python is an interface to TensorFlow. But it seems like there's a lot more going on in just a different language interface. There's optimization methodology. So the TensorFlow world has a couple of different what I'd call front end technologies. And so Swift and Python and Go and Rust and Julia and all these things share the TensorFlow graphs and all the runtime and everything that's later. And so Swift for TensorFlow is merely another front end for TensorFlow, just like any of these other systems are. There's a major difference between, I would say, three camps of technologies here. There's Python, which is a special case, because the vast majority of the community effort is going to the Python interface. And Python has its own approaches for automatic differentiation. It has its own APIs and all this kind of stuff. There's Swift, which I'll talk about in a second. And then there's kind of everything else. And so the everything else are effectively language bindings. So they call into the TensorFlow runtime, but they usually don't have automatic differentiation or they usually don't provide anything other than APIs that call the C APIs in TensorFlow. And so they're kind of wrappers for that. Swift is really kind of special. And it's a very different approach. Swift for TensorFlow, that is, is a very different approach. Because there we're saying, let's look at all the problems that need to be solved in the full stack of the TensorFlow compilation process, if you think about it that way. Because TensorFlow is fundamentally a compiler. It takes models, and then it makes them go fast on hardware. That's what a compiler does. And it has a front end, it has an optimizer, and it has many back ends. And so if you think about it the right way, or if you look at it in a particular way, it is a compiler. And so Swift is merely another front end. But it's saying, and the design principle is saying, let's look at all the problems that we face as machine learning practitioners and what is the best possible way we can do that, given the fact that we can change literally anything in this entire stack. And Python, for example, where the vast majority of the engineering and effort has gone into, is constrained by being the best possible thing you can do with a Python library. There are no Python language features that are added because of machine learning that I'm aware of. They added a matrix multiplication operator with that, but that's as close as you get. And so with Swift, it's hard, but you can add language features to the language. And there's a community process for that. And so we look at these things and say, well, what is the right division of labor between the human programmer and the compiler? And Swift has a number of things that shift that balance. So because it has a type system, for example, that makes certain things possible for analysis of the code, and the compiler can automatically build graphs for you without you thinking about them. That's a big deal for a programmer. You just get free performance. You get clustering and fusion and optimization, things like that, without you as a programmer having to manually do it because the compiler can do it for you. 
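Two of the ideas described above, treating every value coming back from Python as a single PythonObject type and writing ordinary looking Swift that the compiler can turn into TensorFlow graphs, looked roughly like this in the Swift for TensorFlow project. The module and API names here (Python.import, PythonObject, Tensor, matmul) are recalled from that project and should be treated as assumptions rather than a definitive reference.

```swift
import TensorFlow   // Swift for TensorFlow toolchain, assumed available
import Python       // later toolchains call this module PythonKit

// Python interop: everything coming back from Python is one type, PythonObject.
let np = Python.import("numpy")      // a PythonObject wrapping the numpy module
let a = np.array([1.0, 2.0, 3.0])    // dynamic member lookup plus a dynamic call
print(a.shape)                       // still just a PythonObject

// Graph extraction: plain Swift over Tensors; the compiler can partition this
// and build graphs without the programmer writing any graph code by hand.
let x: Tensor<Float> = [[1, 2], [3, 4]]
let y = matmul(x, x) + x
print(y)
```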
Automatic differentiation is another big deal. And I think one of the key contributions of the Swift TensorFlow project is that there's this entire body of work on automatic differentiation that dates back to the Fortran days. People doing a tremendous amount of numerical computing in Fortran used to write these what they call source to source translators, where you take a bunch of code, shove it into a mini compiler, and it would push out more Fortran code. But it would generate the backwards passes for your functions for you, the derivatives. And so in that work in the 70s, a tremendous number of optimizations, a tremendous number of techniques for fixing numerical instability, and other kinds of problems were developed. But they're very difficult to port into a world where, in eager execution, you get an op by op at a time. You need to be able to look at an entire function and be able to reason about what's going on. And so when you have a language integrated automatic differentiation, which is one of the things that the Swift project is focusing on, you can open all these techniques and reuse them in familiar ways. But the language integration piece has a bunch of design room in it, and it's also complicated. The other piece of the puzzle here that's kind of interesting is TPUs at Google. So we're in a new world with deep learning. It constantly is changing, and I imagine, without disclosing anything, I imagine you're still innovating on the TPU front, too. Indeed. So how much interplay is there between software and hardware in trying to figure out how to together move towards an optimized solution? There's an incredible amount. So we're on our third generation of TPUs, which are now 100 petaflops in a very large liquid cooled box, virtual box with no cover. And as you might imagine, we're not out of ideas yet. The great thing about TPUs is that they're a perfect example of hardware software co design. And so it's about saying, what hardware do we build to solve certain classes of machine learning problems? Well, the algorithms are changing. The hardware takes some cases years to produce. And so you have to make bets and decide what is going to happen and what is the best way to spend the transistors to get the maximum performance per watt or area per cost or whatever it is that you're optimizing for. And so one of the amazing things about TPUs is this numeric format called bfloat16. bfloat16 is a compressed 16 bit floating point format, but it puts the bits in different places. And in numeric terms, it has a smaller mantissa and a larger exponent. That means that it's less precise, but it can represent larger ranges of values, which in the machine learning context is really important and useful because sometimes you have very small gradients you want to accumulate and very, very small numbers that are important to move things as you're learning. But sometimes you have very large magnitude numbers as well. And bfloat16 is not as precise. The mantissa is small. But it turns out the machine learning algorithms actually want to generalize. And so there's theories that this actually increases the ability for the network to generalize across data sets. And regardless of whether it's good or bad, it's much cheaper at the hardware level to implement because the area and time of a multiplier is n squared in the number of bits in the mantissa, but it's linear with size of the exponent. And you're connected to both efforts here both on the hardware and the software side? 
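For reference on the bfloat16 discussion above: a float32 has 1 sign bit, 8 exponent bits, and 23 mantissa bits, while bfloat16 keeps the sign bit and the full 8 bit exponent but only 7 mantissa bits, so it spans the same range of magnitudes with less precision. That is also why a quick, truncating (not correctly rounded) conversion is just keeping the top half of the float32 bit pattern, as in this small Swift sketch:

```swift
// bfloat16: same 8 bit exponent as Float (float32), only 7 mantissa bits.
// Truncating conversion: keep the top 16 bits of the float32 bit pattern.
func bfloat16Bits(of x: Float) -> UInt16 {
    UInt16(truncatingIfNeeded: x.bitPattern >> 16)
}

// And back: put the 16 bits in the high half and zero out the low half.
func float(fromBFloat16 bits: UInt16) -> Float {
    Float(bitPattern: UInt32(bits) << 16)
}

let pi: Float = 3.14159265
let roundTripped = float(fromBFloat16: bfloat16Bits(of: pi))
print(pi, roundTripped)   // same order of magnitude, fewer significant digits
```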
Yeah, and so that was a breakthrough coming from the research side and people working on optimizing network transport of weights across the network originally and trying to find ways to compress that. But then it got burned into silicon. And it's a key part of what makes TPU performance so amazing and great. Now, TPUs have many different aspects that are important. But the co design between the low level compiler bits and the software bits and the algorithms is all super important. And it's this amazing trifecta that only Google can do. Yeah, that's super exciting. So can you tell me about MLIR project, previously the secretive one? Yeah, so MLIR is a project that we announced at a compiler conference three weeks ago or something at the Compilers for Machine Learning conference. Basically, again, if you look at TensorFlow as a compiler stack, it has a number of compiler algorithms within it. It also has a number of compilers that get embedded into it. And they're made by different vendors. For example, Google has XLA, which is a great compiler system. NVIDIA has TensorRT. Intel has NGRAPH. There's a number of these different compiler systems. And they're very hardware specific. And they're trying to solve different parts of the problems. But they're all kind of similar in a sense of they want to integrate with TensorFlow. Now, TensorFlow has an optimizer. And it has these different code generation technologies built in. The idea of MLIR is to build a common infrastructure to support all these different subsystems. And initially, it's to be able to make it so that they all plug in together and they can share a lot more code and can be reusable. But over time, we hope that the industry will start collaborating and sharing code. And instead of reinventing the same things over and over again, that we can actually foster some of that working together to solve common problem energy that has been useful in the compiler field before. Beyond that, MLIR is some people have joked that it's kind of LLVM too. It learns a lot about what LLVM has been good and what LLVM has done wrong. And it's a chance to fix that. And also, there are challenges in the LLVM ecosystem as well, where LLVM is very good at the thing it was designed to do. But 20 years later, the world has changed. And people are trying to solve higher level problems. And we need some new technology. And what's the future of open source in this context? Very soon. So it is not yet open source. But it will be hopefully in the next couple months. So you still believe in the value of open source in these kinds of contexts? Oh, yeah. Absolutely. And I think that the TensorFlow community at large fully believes in open source. So I mean, there is a difference between Apple, where you were previously, and Google now, in spirit and culture. And I would say the open source in TensorFlow was a seminal moment in the history of software, because here's this large company releasing a very large code base that's open sourcing. What are your thoughts on that? Happy or not, were you to see that kind of degree of open sourcing? So between the two, I prefer the Google approach, if that's what you're saying. The Apple approach makes sense, given the historical context that Apple came from. But that's been 35 years ago. And I think that Apple is definitely adapting. And the way I look at it is that there's different kinds of concerns in the space. It is very rational for a business to care about making money. That fundamentally is what a business is about. 
But I think it's also incredibly realistic to say, it's not your string library that's the thing that's going to make you money. It's going to be the amazing UI product differentiating features and other things like that that you built on top of your string library. And so keeping your string library proprietary and secret and things like that is maybe not the important thing anymore. Where before, platforms were different. And even 15 years ago, things were a little bit different. But the world is changing. So Google strikes a very good balance, I think. And I think that TensorFlow being open source really changed the entire machine learning field and caused a revolution in its own right. And so I think it's amazingly forward looking because I could have imagined, and I wasn't at Google at the time, but I could imagine a different context and different world where a company says, machine learning is critical to what we're doing. We're not going to give it to other people. And so that decision is a profoundly brilliant insight that I think has really led to the world being better and better for Google as well. And has all kinds of ripple effects. I think it is really, I mean, you can't overstate how profound that decision by Google is for software. It's awesome. Well, and again, I can understand the concern about if we release our machine learning software, our competitors could go faster. But on the other hand, I think that open sourcing TensorFlow has been fantastic for Google. And I'm sure that decision was very nonobvious at the time, but I think it's worked out very well. So let's try this real quick. You were at Tesla for five months as the VP of autopilot software. You led the team during the transition from hardware one to hardware two. I have a couple of questions. So one, first of all, to me, that's one of the bravest engineering decisions ever undertaken in the automotive industry, software wise, starting from scratch. It's a really brave engineering decision. So my one question there is, what was that like? What was the challenge of that? Do you mean the career decision of jumping from a comfortable good job into the unknown, or? That combined, so at the individual level, you making that decision. And then when you show up, it's a really hard engineering problem. So you could have just stayed, maybe slowed down, stayed on hardware one, or those kinds of decisions. Just taking it full on, let's do this from scratch. What was that like? Well, so I mean, I don't think Tesla has a culture of taking things slow and seeing how it goes. And one of the things that attracted me about Tesla is it's very much a gung ho, let's change the world, let's figure it out kind of a place. And so I have a huge amount of respect for that. Tesla has done very smart things with hardware one in particular. And the hardware one design was originally designed for very simple automation features in the car, for like traffic aware cruise control and things like that. And the fact that they were able to effectively feature creep it into lane holding and a very useful driver assistance feature is pretty astounding, particularly given the details of the hardware. Hardware two built on that in a lot of ways. And the challenge there was that they were transitioning from a third party provided vision stack to an in house built vision stack. And so the first step, which I mostly helped with, was getting onto that new vision stack. And that was very challenging.
And it was time critical for various reasons, and it was a big leap. But it was fortunate that it built on a lot of the knowledge and expertise and the team that had built hardware one's driver assistance features. So you spoke in a collected and kind way about your time at Tesla, but it was ultimately not a good fit. Elon Musk, as we've talked about on this podcast with several guests, of course, continues to do some of the most bold and innovative engineering work in the world, at times at the cost of some of the members of the Tesla team. What did you learn about working in this chaotic world with Elon? Yeah, so I guess I would say that when I was at Tesla, I experienced and saw the highest degree of turnover I'd ever seen in a company, which was a bit of a shock. But one of the things I learned and I came to respect is that Elon's able to attract amazing talent because he has a very clear vision of the future, and he can get people to buy into it because they want that future to happen. And the power of vision is something that I have a tremendous amount of respect for. And I think that Elon is fairly singular in the world in terms of the things he's able to get people to believe in. And there are many people that stand on the street corner and say, ah, we're going to go to Mars, right? But then there are a few people that can get others to buy into it and believe and build the path and make it happen. And so I respect that. I don't respect all of his methods, but I have a huge amount of respect for that. You've mentioned in a few places, including in this context, working hard. What does it mean to work hard? And when you look back at your life, what were some of the most brutal periods of having to really put everything you have into something? Yeah, good question. So working hard can be defined a lot of different ways, so a lot of hours, and so that is true. The thing to me that's the hardest is both being short term focused on delivering and executing and making a thing happen while also thinking about the longer term and trying to balance that. Because if you are myopically focused on solving a task and getting that done and only think about that incremental next step, you will miss the next big hill you should jump over to. And so I've been really fortunate that I've been able to kind of oscillate between the two. And historically at Apple, for example, that was made possible because I was able to work with some really amazing people and build up teams and leadership structures and allow them to grow in their careers and take on responsibility, thereby freeing me up to be a little bit crazy and think about the next thing. And so it's a lot of that. But it's also about with experience, you make connections that other people don't necessarily make. And so I think that's a big part as well. But the bedrock is just a lot of hours. And that's OK with me. There's different theories on work life balance. And my theory for myself, which I do not project onto the team, but my theory for myself is that I want to love what I'm doing and work really hard. And my purpose, I feel like, and my goal is to change the world and make it a better place. And that's what I'm really motivated to do. So last question, the LLVM logo is a dragon. You explain that this is because dragons have connotations of power, speed, intelligence. They can also be sleek, elegant, and modular, though you removed the modular part. What is your favorite dragon related character from fiction, video, or movies?
So those are all very kind ways of explaining it. Do you want to know the real reason it's a dragon? Yeah. Is that better? So there is a seminal book on compiler design called The Dragon Book. And so this is a really old now book on compilers. And so the dragon logo for LLVM came about because at Apple, we kept talking about LLVM related technologies and there's no logo to put on a slide. And so we're like, what do we do? And somebody's like, well, what kind of logo should a compiler technology have? And I'm like, I don't know. I mean, the dragon is the best thing that we've got. And Apple somehow magically came up with the logo. And it was a great thing. And the whole community rallied around it. And then it got better as other graphic designers got involved. But that's originally where it came from. The story. Is there dragons from fiction that you connect with, that Game of Thrones, Lord of the Rings, that kind of thing? Lord of the Rings is great. I also like role playing games and things like computer role playing games. And so dragons often show up in there. But really, it comes back to the book. Oh, no, we need a thing. And hilariously, one of the funny things about LLVM is that my wife, who's amazing, runs the LLVM Foundation. And she goes to Grace Hopper and is trying to get more women involved in the. She's also a compiler engineer. So she's trying to get other women to get interested in compilers and things like this. And so she hands out the stickers. And people like the LLVM sticker because of Game of Thrones. And so sometimes culture has this helpful effect to get the next generation of compiler engineers engaged with the cause. OK, awesome. Chris, thanks so much for talking with us. It's been great talking with you.
Chris Lattner: Compilers, LLVM, Swift, TPU, and ML Accelerators | Lex Fridman Podcast #21
The following is a conversation with Rajat Monga. He's an engineer and director at Google, leading the TensorFlow team. TensorFlow is an open source library at the center of much of the work going on in the world in deep learning, both the cutting edge research and the large scale application of learning based approaches. But it's quickly becoming much more than a software library. It's now an ecosystem of tools for the deployment of machine learning in the cloud, on the phone, in the browser, on both generic and specialized hardware. TPU, GPU, and so on. Plus, there's a big emphasis on growing a passionate community of developers. Rajat, Jeff Dean, and a large team of engineers at Google Brain are working to define the future of machine learning with TensorFlow 2.0, which is now in alpha. I think the decision to open source TensorFlow is a definitive moment in the tech industry. It showed that open innovation can be successful and inspired many companies to open source their code, to publish, and in general engage in the open exchange of ideas. This conversation is part of the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F R I D. And now, here's my conversation with Rajat Monga. You were involved with Google Brain since its start in 2011 with Jeff Dean. It started with DistBelief, the proprietary machine learning library, and turned into TensorFlow in 2014, the open source library. So what were the early days of Google Brain like? What were the goals, the missions? How do you even proceed forward once there are so many possibilities before you? It was interesting back then when I started, or when we were even just talking about it, the idea of deep learning was interesting and intriguing in some ways. It hadn't yet taken off, but it held some promise. It had shown some very promising and early results. I think the idea where Andrew and Jeff had started was, what if we can take this work people are doing in research and scale it to what Google has in terms of the compute power, and also put that kind of data together? What does it mean? And so far, the results had been, if you scale the compute, scale the data, it does better. And would that work? And so that was the first year or two, can we prove that out? And with this belief, when we started the first year, we got some early wins, which is always great. What were the wins like? What were the wins where you thought, okay, there's something to this, this is going to be good? I think there are two early wins where one was speech, that we collaborated very closely with the speech research team, who was also getting interested in this. And the other one was on images, where the cat paper, as we call it, was covered by a lot of folks. And the birth of Google Brain was around neural networks. So it was deep learning from the very beginning. That was the whole mission. So what would, in terms of scale, what was the sort of dream of what this could become? Were there echoes of this open source TensorFlow community that might be brought in? Was there a sense of TPUs? Was there a sense that machine learning is now going to be at the core of the entire company, that it's going to grow in that direction? Yeah, I think, so that was interesting. And if I think back to 2012 or 2011, the first question was, can we scale it? And in a year or so, we had started scaling it to hundreds and thousands of machines. In fact, we had some runs even going to 10,000 machines.
And all of those showed great promise. In terms of machine learning at Google, the good thing was Google's been doing machine learning for a long time. Deep learning was new, but as we scaled this up, we showed that, yes, that was possible. And it was going to impact lots of things. Like we started seeing real products wanting to use this. Again, speech was the first, there were image things that photos came out of and then many other products as well. So that was exciting. As we went into that for a couple of years, externally also, academia started to push; there was lots of push on, okay, deep learning is interesting, we should be doing more and so on. And so by 2014, we were looking at, okay, this is a big thing, it's going to grow. And not just internally, externally as well. Yes, maybe Google's ahead of where everybody is, but there's a lot to do. So a lot of this started to make sense and come together. So the decision to open source, I was just chatting with Chris Lattner about this. The decision to go open source with TensorFlow, I would say sort of for me personally, seems to be one of the big seminal moments in all of software engineering ever. I think that's when a large company like Google decides to take a large project that many lawyers might argue has a lot of IP, just decides to go open source with it, and in so doing leads the entire world in saying, you know what, open innovation is a pretty powerful thing, and it's okay to do. That was, I mean, that's an incredible moment in time. So do you remember those discussions happening? Whether open source should be happening? What was that like? I would say, I think, so the initial idea came from Jeff, who was a big proponent of this. I think it came off of two big things. One was research wise, we were a research group. We were putting all our research out there. If you wanted to, we were building on others' research and we wanted to push the state of the art forward. And part of that was to share the research. That's how I think deep learning and machine learning have really grown so fast. So the next step was, okay, now, would software help with that? And it seemed like there were a few existing libraries out there, Theano being one, Torch being another, and a few others, but they were all done by academia and so the level was significantly different. The other one was from a software perspective, Google had done lots of software that we used internally, you know, and we published papers. Often there was an open source project that came out of that, where somebody else picked up that paper and implemented it, and they were very successful. Back then it was like, okay, there's Hadoop, which has come off of tech that we've built. We know the tech we've built is way better for a number of different reasons. We've invested a lot of effort in that. And turns out we have Google Cloud and we are now not really providing our tech, but we are saying, okay, we have Bigtable, which is the original thing. We are going to now provide HBase APIs on top of that, which isn't as good, but that's what everybody's used to. So it's like, can we make something that is better and really just helps the community in lots of ways, but also helps push a good standard forward? So how does Cloud fit into that? There's a TensorFlow open source library and how does the fact that you can use so many of the resources that Google provides and the Cloud fit into that strategy? So TensorFlow itself is open and you can use it anywhere, right?
And we want to make sure that continues to be the case. On Google Cloud, we do make sure that there's lots of integrations with everything else and we want to make sure that it works really, really well there. You're leading the TensorFlow effort. Can you tell me the history and the timeline of the TensorFlow project in terms of major design decisions, so like the open source decision, but really what to include and not? There's this incredible ecosystem that I'd like to talk about. There's all these parts, but what are just some sample moments that defined what TensorFlow eventually became through its, I don't know if you're allowed to say history when it's this young, but in deep learning, everything moves so fast and just a few years is already history. Yes, yes, so looking back, we were building TensorFlow. I guess we open sourced it in 2015, November 2015. We started on it in summer of 2014, I guess. And somewhere like three to six months in, late 2014, by then we had decided that, okay, there's a high likelihood we'll open source it. So we started thinking about that and making sure we're heading down that path. At that point, by that point, we had seen a few, lots of different use cases at Google. So there were things like, okay, yes, you wanna run it at large scale in the data center. Yes, we need to support different kinds of hardware. We had GPUs at that point. We had our first TPU at that point, or it was about to come out roughly around that time. So the design sort of included those. We had started to push on mobile. So we were running models on mobile. At that point, people were customizing code. So we wanted to make sure TensorFlow could support that as well. So that sort of became part of that overall design. When you say mobile, you mean like pretty complicated algorithms running on the phone? That's correct. So when you have a model that you deploy on the phone and run it there, right? So already at that time, there were ideas of running machine learning on the phone. That's correct. We already had a couple of products that were doing that by then. And in those cases, we had basically customized handcrafted code or some internal libraries that we were using. So I was actually at Google during this time in a parallel, I guess, universe, but we were using Theano and Caffe. Was there some degree to which you were bouncing, like trying to see what Caffe was offering people, trying to see what Theano was offering that you want to make sure you're delivering on whatever that is? Perhaps the Python part of things, maybe did that influence any design decisions? Totally. So when we built DistBelief, some of that was in parallel with some of these libraries coming up. I mean, Theano itself is older, but we were building DistBelief focused on our internal thing because our systems were very different. By the time we got to this, we looked at a number of libraries that were out there. Theano, there were folks in the group who had experience with Torch, with Lua. There were folks here who had seen Caffe. I mean, actually, Yangqing Jia was here as well. And what other libraries? I think we looked at a number of things. Might even have looked at JNR back then. I'm trying to remember if it was there. In fact, yeah, we did discuss ideas around, okay, should we have a graph or not? So putting all of these together, there were definitely key decisions that we wanted to make. We had seen limitations in our prior DistBelief things. A few of them were just in terms of, research was moving so fast, we wanted the flexibility.
The hardware was changing fast. We expected that to change, so those probably were the two things. And yeah, I think the flexibility in terms of being able to express all kinds of crazy things was definitely a big one then. So what about the graph decision though? With moving towards TensorFlow 2.0, there's more, by default, there'll be eager execution. So sort of hiding the graph a little bit because it's less intuitive in terms of the way people develop and so on. What was that discussion like in terms of using graphs? It seemed, it's kind of the Theano way. Did it seem the obvious choice? So I think where it came from was, our DistBelief had a graph like thing as well. A much more simple, it wasn't a general graph, it was more like a straight line thing. More like what you might think of Caffe, I guess, in that sense. But the graph was, and we always cared about the production stuff. Like even with DistBelief, we were deploying a whole bunch of stuff in production. So graph did come from that when we thought of, okay, should we do that in Python? And we experimented with some ideas where it looked a lot simpler to use, but not having a graph meant, okay, how do you deploy now? So that was probably what tilted the balance for us and eventually we ended up with a graph. And I guess the question there is, did you, I mean, so production seems to be the really good thing to focus on, but did you even anticipate the other side of it where there could be, what is it? What are the numbers? It's been crazy, 41 million downloads. Yep. I mean, was that even like a possibility in your mind that it would be as popular as it became? So I think we did see a need for this a lot from the research perspective and like early days of deep learning in some ways. 41 million, no, I don't think I imagined this number. But it seemed like there was a potential future where lots more people would be doing this and how do we enable that? I would say this kind of growth, I probably started seeing somewhat after the open sourcing where it was like, okay, deep learning is actually growing way faster for a lot of different reasons. And we are in just the right place to push on that and leverage that and deliver on lots of things that people want. So what changed once you open sourced? With this incredible amount of attention from a global population of developers, how did the project start changing? I don't even actually remember during those times. I know looking now, there's really good documentation, there's an ecosystem of tools, there's a community, there's a blog, there's a YouTube channel now, right? Yeah. It's very community driven. Back then, I guess, 0.1 was the version? I think we called it 0.6 or 0.5, something like that, I forget. What changed leading into 1.0? It's interesting. I think we've gone through a few things there. When we started out, when we first came out, people loved the documentation we had because it was just a huge step up from everything else, because all of those were academic projects, done by people who don't think about documentation. I think what that changed was, instead of deep learning being a research thing, some people who were just developers could now suddenly take this out and do some interesting things with it, right? Who had no clue what machine learning was before then. And that I think really changed how things started to scale up in some ways and pushed on it.
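To make the graph versus eager tradeoff discussed above concrete in today's terms, here is a minimal sketch assuming TensorFlow 2.x; it is illustrative only, and the variable names are placeholders. Eager code runs op by op like ordinary Python, while wrapping the same computation in tf.function traces it into a graph, which is what enables the deployment and optimization story described here.

```python
import tensorflow as tf  # assumes TensorFlow 2.x

w = tf.Variable([[0.5], [0.25]])
x = tf.constant([[1.0, 2.0]])

# Eager execution (the 2.0 default): each op runs immediately, like plain Python,
# which is easy to write and debug but hands the runtime only one op at a time.
print(tf.matmul(x, w))

# Wrapping the same computation in tf.function traces it into a graph, which is
# what makes whole-program optimization and deployment (serving, mobile) possible.
@tf.function
def predict(inputs):
    return tf.matmul(inputs, w)

print(predict(x))  # first call traces the graph; later calls reuse it
```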
Over the next few months as we looked at how do we stabilize things, as we look at not just researchers, now we want stability, people want to deploy things. That's how we started planning for 1.0 and there are certain needs for that perspective. And so again, documentation comes up, designs, more kinds of things to put that together. And so that was exciting to get that to a stage where more and more enterprises wanted to buy in and really get behind that. And I think post 1.0 and over the next few releases, that enterprise adoption also started to take off. I would say between the initial release and 1.0, it was, okay, researchers of course, then a lot of hobbies and early interest, people excited about this who started to get on board and then over the 1.x thing, lots of enterprises. I imagine anything that's below 1.0 gives pressure to be, the enterprise probably wants something that's stable. Exactly. And do you have a sense now that TensorFlow is stable? Like it feels like deep learning in general is extremely dynamic field, so much is changing. And TensorFlow has been growing incredibly. Do you have a sense of stability at the helm of it? I mean, I know you're in the midst of it, but. Yeah, I think in the midst of it, it's often easy to forget what an enterprise wants and what some of the people on that side want. There are still people running models that are three years old, four years old. So Inception is still used by tons of people. Even ResNet 50 is what, couple of years old now or more, but there are tons of people who use that and they're fine. They don't need the last couple of bits of performance or quality, they want some stability in things that just work. And so there is value in providing that with that kind of stability and making it really simpler because that allows a lot more people to access it. And then there's the research crowd which wants, okay, they wanna do these crazy things exactly like you're saying, right? Not just deep learning in the straight up models that used to be there, they want RNNs and even RNNs are maybe old, they are transformers now. And now it needs to combine with RL and GANs and so on. So there's definitely that area that like the boundary that's shifting and pushing the state of the art. But I think there's more and more of the past that's much more stable and even stuff that was two, three years old is very, very usable by lots of people. So that part makes it a lot easier. So I imagine, maybe you can correct me if I'm wrong, one of the biggest use cases is essentially taking something like ResNet 50 and doing some kind of transfer learning on a very particular problem that you have. It's basically probably what majority of the world does. And you wanna make that as easy as possible. So I would say for the hobbyist perspective, that's the most common case, right? In fact, the apps and phones and stuff that you'll see, the early ones, that's the most common case. I would say there are a couple of reasons for that. One is that everybody talks about that. It looks great on slides. That's a presentation, yeah, exactly. What enterprises want is that is part of it, but that's not the big thing. Enterprises really have data that they wanna make predictions on. This is often what they used to do with the people who were doing ML was just regression models, linear regression, logistic regression, linear models, or maybe gradient booster trees and so on. 
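For reference, the ResNet 50 transfer learning use case mentioned above looks roughly like this in the Keras API. This is a hedged sketch rather than an official recipe; the ten class head, image size, and training data are placeholder assumptions.

```python
import tensorflow as tf  # assumes TensorFlow 2.x

num_classes = 10  # placeholder: set this to your own problem

# Reuse a ResNet-50 backbone pretrained on ImageNet, without its classifier head.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone; only the new head gets trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # on your own labeled data
```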
Some of them still benefit from deep learning, but that's their bread and butter, like the structured data and so on. So depending on the audience you look at, they're a little bit different. And they just have, I mean, the best case enterprise probably just has a very large data set where deep learning can really shine. That's correct, that's right. And then I think the other piece that they wanted, again, with 2.0 and the developer summit we put together, is the whole TensorFlow Extended piece, which is the entire pipeline. They care about stability across doing their entire thing. They want simplicity across the entire thing. I don't need to just train a model. I need to do that every day again, over and over again. I wonder to which degree you have a role in, I don't know, so I teach a course on deep learning. I have people like lawyers come up to me and say, when is machine learning gonna enter legal, the legal realm? The same thing in all kinds of disciplines, immigration, insurance, often when I see what it boils down to is these companies are often a little bit old school in the way they organize the data. So the data is just not ready yet, it's not digitized. Do you also find yourself being in the role of an evangelist for, like, let's organize your data, folks, and then you'll get the big benefit of TensorFlow? Do you have those conversations? Yeah, yeah, you know, I get all kinds of questions there from, okay, what do I need to make this work, right? Do we really need deep learning? I mean, there are all these things, I already use this linear model, why would this help? I don't have enough data, let's say, or I wanna use machine learning, but I have no clue where to start. So it varies, from that all the way to the experts asking why we support very specific things. It's interesting. Is there a good answer? It boils down to oftentimes digitizing data. So whatever you want automated, whatever data you want to make predictions based on, you have to make sure that it's in an organized form. Like within the TensorFlow ecosystem, there's now, you're providing more and more data sets and more and more pre trained models. Are you finding yourself also the organizer of data sets? Yes, I think the TensorFlow Datasets that we just released, that's definitely come up where people want these data sets, can we organize them and can we make that easier? So that's definitely one important thing. The other related thing I would say is I often tell people, you know what, don't think of the fanciest thing, the newest model that you see; make something very basic work and then you can improve it. There's just lots of things you can do with it. Yeah, start with the basics, true. One of the big things that made TensorFlow even more accessible was the appearance, whenever that happened, of Keras, the Keras standard, sort of outside of TensorFlow. I think it was Keras on top of Theano at first only and then Keras became on top of TensorFlow. Do you know when Keras chose to also add TensorFlow as a backend, who was the, was it just the community that drove that initially? Do you know if there was discussions, conversations? Yeah, so Francois started the Keras project before he was at Google and the first backend was Theano. I don't remember if that was after TensorFlow was created or way before. And then at some point, when TensorFlow started becoming popular, there were enough similarities that he decided to create this interface and put TensorFlow as a backend.
I believe that might still have been before he joined Google. So we weren't really talking about that. He decided on his own and thought that was interesting and relevant to the community. In fact, I didn't find out about him being at Google until a few months after he was here. He was working on some research ideas and doing Keras on his nights and weekends project. Oh, interesting. He wasn't like part of the TensorFlow. He didn't join initially. He joined research and he was doing some amazing research. He has some papers on that and research, so he's a great researcher as well. And at some point we realized, oh, he's doing this good stuff. People seem to like the API and he's right here. So we talked to him and he said, okay, why don't I come over to your team and work with you for a quarter and let's make that integration happen. And we talked to his manager and he said, sure, quarter's fine. And that quarter's been something like two years now. And so he's fully on this. So Keras got integrated into TensorFlow in a deep way. And now with 2.0, TensorFlow 2.0, sort of Keras is kind of the recommended way for a beginner to interact with TensorFlow. Which makes that initial sort of transfer learning or the basic use cases, even for an enterprise, super simple, right? That's correct, that's right. So what was that decision like? That seems like it's kind of a bold decision as well. We did spend a lot of time thinking about that one. We had a bunch of APIs, some built by us. There was a parallel layers API that we were building. And when we decided to do Keras in parallel, so there were like, okay, two things that we are looking at. And the first thing we was trying to do is just have them look similar, like be as integrated as possible, share all of that stuff. There were also like three other APIs that others had built over time because we didn't have a standard one. But one of the messages that we kept hearing from the community, okay, which one do we use? And they kept seeing like, okay, here's a model in this one and here's a model in this one, which should I pick? So that's sort of like, okay, we had to address that straight on with 2.0. The whole idea was we need to simplify. We had to pick one. Based on where we were, we were like, okay, let's see what are the people like? And Keras was clearly one that lots of people loved. There were lots of great things about it. So we settled on that. Organically, that's kind of the best way to do it. It was great. It was surprising, nevertheless, to sort of bring in an outside. I mean, there was a feeling like Keras might be almost like a competitor in a certain kind of, to TensorFlow. And in a sense, it became an empowering element of TensorFlow. That's right. Yeah, it's interesting how you can put two things together, which can align. In this case, I think Francois, the team, and a bunch of us have chatted, and I think we all want to see the same kind of things. We all care about making it easier for the huge set of developers out there, and that makes a difference. So Python has Guido van Rossum, who until recently held the position of benevolent dictator for life. All right, so there's a huge successful open source project like TensorFlow need one person who makes a final decision. So you've did a pretty successful TensorFlow Dev Summit just now, last couple of days. There's clearly a lot of different new features being incorporated, an amazing ecosystem, so on. Who's, how are those design decisions made? 
Is there a BDFL in TensorFlow, or is it more distributed and organic? I think it's somewhat different, I would say. I've always been involved in the key design directions, but there are lots of things that are distributed where there are a number of people, Martin Wicke being one, who has really driven a lot of our open source stuff, a lot of the APIs, and there are a number of other people who've, you know, pushed and been responsible for different parts of it. We do have regular design reviews. Over the last year, we've really spent a lot of time opening up to the community and adding transparency. We're setting more processes in place, so RFCs, special interest groups, to really grow that community and scale that. At the kind of scale this ecosystem is at, I don't think we could scale with having me as the lone point of decision maker. I got it. So, yeah, the growth of that ecosystem, maybe you can talk about it a little bit. First of all, it started with Andrej Karpathy when he first did ConvNetJS. The fact that you could train a neural network in the browser, in JavaScript, was incredible. So now TensorFlow.js is really making that a serious, like a legit thing, a way to operate, whether it's in the backend or the front end. Then there's the TensorFlow Extended, like you mentioned. There's TensorFlow Lite for mobile. And all of it, as far as I can tell, it's really converging towards being able to save models in the same kind of way. You can move around, you can train on the desktop and then move it to mobile and so on. That's right. So there's that cohesiveness. So can you maybe give me, whatever I missed, a bigger overview of the mission of the ecosystem that's trying to be built and where is it moving forward? Yeah. So in short, the way I like to think of this is, our goal is to enable machine learning. And in a couple of ways, you know, one is we have lots of exciting things going on in ML today. We started with deep learning, but we now support a bunch of other algorithms too. So one is to, on the research side, keep pushing on the state of the art. Can we, you know, how do we enable researchers to build the next amazing thing? So BERT came out recently, you know, it's great that people are able to do new kinds of research. And there's lots of amazing research that happens across the world. So that's one direction. The other is how do you take that across all the people outside who want to take that research and do some great things with it and integrate it to build real products, to have a real impact on people. And so that's the other axis in some ways, you know, at a high level, one way I think about it is there are a crazy number of compute devices across the world. And we often used to think of ML and training and all of this as, okay, something you do either in the workstation or the data center or cloud. But we see things running on the phones. We see things running on really tiny chips. I mean, we had some demos at the developer summit. And so the way I think about this ecosystem is how do we help get machine learning on every device that has a compute capability? And that continues to grow, and so in some ways this ecosystem has looked at, you know, various aspects of that and grown over time to cover more of those. And we continue to push the boundaries. In some areas we've built more tooling and things around that to help you. I mean, the first tool we started was TensorBoard.
That was if you wanted to see just the training piece. Then TFX, or TensorFlow Extended, to really do your entire ML pipelines, if you, you know, care about all that production stuff. But then going to the edge, going to different kinds of things. And it's not just us now. We are at a place where there are lots of libraries being built on top. So there are some for research, maybe things like TensorFlow Agents or TensorFlow Probability, that started as research things, or for researchers focusing on certain kinds of algorithms, but they're also being deployed or used by, you know, production folks. And some have come from within Google, just teams across Google who wanted to build these things. Others have come from just the community because there are different pieces that different parts of the community care about. And I see our goal as enabling even that, right? It's not, we cannot and won't build every single thing. That just doesn't make sense. But if we can enable others to build the things that they care about, and there's a broader community that cares about that, and we can help encourage that, that's great. That really helps the entire ecosystem, not just those. One of the big things about 2.0 that we're pushing on is, okay, we have so many different pieces, right? How do we help make all of them work well together? So there are a few key pieces there that we're pushing on, one being the core format in there and how we share the models themselves through SavedModel and TensorFlow Hub and so on. And a few other pieces that really put this together. I was very skeptical, you know, when TensorFlow.js came out, or deeplearn.js as it was earlier. Yeah, that was the first. It seems like a technically very difficult project. As a standalone, it's not as difficult, but as a thing that integrates into the ecosystem, it seems very difficult. So, I mean, there's a lot of aspects of this you're making look easy, but on the technical side, how many challenges had to be overcome here? A lot. And still have to be overcome. That's the question here too. There are lots of steps to it, right? And we've iterated over the last few years, so there's a lot we've learned. Yeah, and often when things come together well, things look easy and that's exactly the point. It should be easy for the end user, but there are lots of things that go behind that. If I think about still challenges ahead, there are, you know, we have a lot more devices coming on board, for example, from the hardware perspective. How do we make it really easy for these vendors to integrate with something like TensorFlow, right? So there's a lot of compiler stuff that others are working on. There are things we can do in terms of our APIs and so on. You know, TensorFlow started as a very monolithic system and to some extent it still is. There are lots of tools around it, but the core is still pretty large and monolithic. One of the key challenges for us to scale that out is how do we break that apart with clearer interfaces? It's, you know, in some ways it's software engineering 101, but for a system that's now four years old, I guess, or more, and that's still rapidly evolving and that we're not slowing down with, it's hard to change and modify and really break apart. It's sort of like, as people say, right, it's like changing the engine with the car running or trying to fix that. That's exactly what we're trying to do.
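To ground the SavedModel and cross device points above, here is a minimal sketch assuming TensorFlow 2.x; the toy model and file paths are placeholders. The same saved artifact can be reloaded in Python or converted for TensorFlow Lite on mobile, which is the kind of cohesiveness across the ecosystem being described.

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# A toy model standing in for whatever you actually trained.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
_ = model(tf.zeros([1, 4]))  # make sure the model is built before exporting

# SavedModel is the shared exchange format across the ecosystem.
tf.saved_model.save(model, "/tmp/demo_model")
restored = tf.saved_model.load("/tmp/demo_model")

# The same artifact can be converted for TensorFlow Lite on mobile devices.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/demo_model")
with open("/tmp/demo_model.tflite", "wb") as f:
    f.write(converter.convert())
```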
So there's a challenge here because the downside of so many people being excited about TensorFlow and coming to rely on it in many of their applications is that you're kind of responsible, like it's the technical debt. You're responsible for previous versions to some degree still working. So when you're trying to innovate, I mean, it's probably easier to just start from scratch every few months. Absolutely. So do you feel the pain of that? 2.0 does break some back compatibility, but not too much. It seems like the conversion is pretty straightforward. Do you think that's still important given how quickly deep learning is changing? Can you just, the things that you've learned, can you just start over or is there pressure to not? It's a tricky balance. So if it was just a researcher writing a paper who a year later will not look at that code again, sure, it doesn't matter. There are a lot of production systems that rely on TensorFlow, both at Google and across the world. And people worry about this. I mean, these systems run for a long time. So it is important to keep that compatibility and so on. And yes, it does come with a huge cost. There's, we have to think about a lot of things as we do new things and make new changes. I think it's a trade off, right? You can, you might slow certain kinds of things down, but the overall value you're bringing because of that is much bigger because it's not just about breaking the person yesterday. It's also about telling the person tomorrow that, you know what, this is how we do things. We're not gonna break you when you come on board because there are lots of new people who are also gonna come on board. And, you know, one way I like to think about this, and I always push the team to think about it as well, when you wanna do new things, you wanna start with a clean slate. Design with a clean slate in mind, and then we'll figure out how to make sure all the other things work. And yes, we do make compromises occasionally, but unless you design with the clean slate and not worry about that, you'll never get to a good place. Oh, that's brilliant, so even if you are responsible when you're in the idea stage, when you're thinking of new, just put all that behind you. Okay, that's really, really well put. So I have to ask this because a lot of students, developers ask me how I feel about PyTorch versus TensorFlow. So I've recently completely switched my research group to TensorFlow. I wish everybody would just use the same thing, and TensorFlow is as close to that, I believe, as we have. But do you enjoy competition? So TensorFlow is leading in many ways, on many dimensions in terms of ecosystem, in terms of number of users, momentum, power, production levels, so on, but a lot of researchers are now also using PyTorch. Do you enjoy that kind of competition or do you just ignore it and focus on making TensorFlow the best that it can be? So just like research or anything people are doing, it's great to get different kinds of ideas. And when we started with TensorFlow, like I was saying earlier, one, it was very important for us to also have production in mind. We didn't want just research, right? And that's why we chose certain things. Now PyTorch came along and said, you know what, I only care about research. This is what I'm trying to do. What's the best thing I can do for this? And it started iterating and said, okay, I don't need to worry about graphs. Let me just run things. 
And I don't care if it's not as fast as it can be, but let me just make this part easy. And there are things you can learn from that, right? They, again, had the benefit of seeing what had come before, but also exploring certain different kinds of spaces. And they had some good things there, building on say things like JNR and so on before that. So competition is definitely interesting. It made us, you know, this is an area that we had thought about, like I said, way early on. Over time we had revisited this a couple of times, should we add this again? At some point we said, you know what, it seems like this can be done well, so let's try it again. And that's how we started pushing on eager execution. How do we combine those two together? Which has finally come very well together in 2.0, but it took us a while to get all the things together and so on. So let me ask, put another way, I think eager execution is a really powerful thing that was added. Do you think it wouldn't have been, you know, Muhammad Ali versus Frasier, right? Do you think it wouldn't have been added as quickly if PyTorch wasn't there? It might have taken longer. No longer? Yeah, it was, I mean, we had tried some variants of that before, so I'm sure it would have happened, but it might have taken longer. I'm grateful that TensorFlow is finally in the way they did. It's doing some incredible work last couple years. What other things that we didn't talk about are you looking forward in 2.0? That comes to mind. So we talked about some of the ecosystem stuff, making it easily accessible to Keras, eager execution. Is there other things that we missed? Yeah, so I would say one is just where 2.0 is, and you know, with all the things that we've talked about, I think as we think beyond that, there are lots of other things that it enables us to do and that we're excited about. So what it's setting us up for, okay, here are these really clean APIs. We've cleaned up the surface for what the users want. What it also allows us to do a whole bunch of stuff behind the scenes once we are ready with 2.0. So for example, in TensorFlow with graphs and all the things you could do, you could always get a lot of good performance if you spent the time to tune it, right? And we've clearly shown that, lots of people do that. With 2.0, with these APIs, where we are, we can give you a lot of performance just with whatever you do. You know, because we see these, it's much cleaner. We know most people are gonna do things this way. We can really optimize for that and get a lot of those things out of the box. And it really allows us, you know, both for single machine and distributed and so on, to really explore other spaces behind the scenes after 2.0 in the future versions as well. So right now the team's really excited about that, that over time I think we'll see that. The other piece that I was talking about in terms of just restructuring the monolithic thing into more pieces and making it more modular, I think that's gonna be really important for a lot of the other people in the ecosystem, other organizations and so on that wanted to build things. Can you elaborate a little bit what you mean by making TensorFlow ecosystem more modular? So the way it's organized today is there's one, there are lots of repositories in the TensorFlow organization at GitHub. The core one where we have TensorFlow, it has the execution engine, it has the key backends for CPUs and GPUs, it has the work to do distributed stuff. 
And all of these just work together in a single library or binary. There's no way to split them apart easily. I mean, there are some interfaces, but they're not very clean. In a perfect world, you would have clean interfaces where, okay, I wanna run it on my fancy cluster with some custom networking, just implement this and do that. I mean, we kind of support that, but it's hard for people today. I think as we are starting to see more interesting things in some of these spaces, having that clean separation will really start to help. And again, going to the large size of the ecosystem and the different groups involved there, enabling people to evolve and push on things more independently just allows it to scale better. And by people, you mean individual developers and? And organizations. And organizations. That's right. So the hope is that everybody sort of major, I don't know, Pepsi or something uses, like major corporations go to TensorFlow to this kind of. Yeah, if you look at enterprises like Pepsi or these, I mean, a lot of them are already using TensorFlow. They are not the ones that do the development or changes in the core. Some of them do, but a lot of them don't. I mean, they touch small pieces. There are lots of these, some of them being, let's say, hardware vendors who are building their custom hardware and they want their own pieces. Or some of them being bigger companies, say, IBM. I mean, they're involved in some of our special interest groups, and they see a lot of users who want certain things and they want to optimize for that. So folks like that often. Autonomous vehicle companies, perhaps. Exactly, yes. So, yeah, like I mentioned, TensorFlow has been downloaded 41 million times, 50,000 commits, almost 10,000 pull requests, and 1,800 contributors. So I'm not sure if you can explain it, but what does it take to build a community like that? In retrospect, what do you think, what is the critical thing that allowed for this growth to happen, and how does that growth continue? Yeah, yeah, that's an interesting question. I wish I had all the answers there, I guess, so you could replicate it. I think there are a number of things that need to come together, right? One, just like any new thing, it is about, there's a sweet spot of timing, what's needed, does it grow with, what's needed, so in this case, for example, TensorFlow's not just grown because it was a good tool, it's also grown with the growth of deep learning itself. So those factors come into play. Other than that, though, I think just hearing, listening to the community, what they do, what they need, being open to, like in terms of external contributions, we've spent a lot of time in making sure we can accept those contributions well, we can help the contributors in adding those, putting the right process in place, getting the right kind of community, welcoming them and so on. Like over the last year, we've really pushed on transparency, that's important for an open source project. People wanna know where things are going, and we're like, okay, here's a process where you can do that, here are our RFCs and so on. So thinking through, there are lots of community aspects that come into that you can really work on. As a small project, it's maybe easy to do because there's like two developers and you can do those. 
As you grow, putting more of these processes in place, thinking about the documentation, thinking about what two developers care about, what kind of tools would they want to use, all of these come into play, I think. So one of the big things I think that feeds the TensorFlow fire is people building something on TensorFlow, and implement a particular architecture that does something cool and useful, and they put that on GitHub. And so it just feeds this growth. Do you have a sense that with 2.0 and 1.0 that there may be a little bit of a partitioning like there is with Python 2 and 3, that there'll be a code base and in the older versions of TensorFlow, they will not be as compatible easily? Or are you pretty confident that this kind of conversion is pretty natural and easy to do? So we're definitely working hard to make that very easy to do. There's lots of tooling that we talked about at the developer summit this week, and we'll continue to invest in that tooling. It's, you know, when you think of these significant version changes, that's always a risk, and we are really pushing hard to make that transition very, very smooth. So I think, so at some level, people wanna move and they see the value in the new thing. They don't wanna move just because it's a new thing, and some people do, but most people want a really good thing. And I think over the next few months, as people start to see the value, we'll definitely see that shift happening. So I'm pretty excited and confident that we will see people moving. As you said earlier, this field is also moving rapidly, so that'll help because we can do more things and all the new things will clearly happen in 2.x, so people will have lots of good reasons to move. So what do you think TensorFlow 3.0 looks like? Is there, are things happening so crazily that even at the end of this year seems impossible to plan for? Or is it possible to plan for the next five years? I think it's tricky. There are some things that we can expect in terms of, okay, change, yes, change is gonna happen. Are there some things gonna stick around and some things not gonna stick around? I would say the basics of deep learning, the, you know, say convolution models or the basic kind of things, they'll probably be around in some form still in five years. Will RL and GAN stay? Very likely, based on where they are. Will we have new things? Probably, but those are hard to predict. And some directionally, some things that we can see is, you know, in things that we're starting to do, right, with some of our projects right now is just 2.0 combining eager execution and graphs where we're starting to make it more like just your natural programming language. You're not trying to program something else. Similarly, with Swift for TensorFlow, we're taking that approach. Can you do something ground up, right? So some of those ideas seem like, okay, that's the right direction. In five years, we expect to see more in that area. Other things we don't know is, will hardware accelerators be the same? Will we be able to train with four bits instead of 32 bits? And I think the TPU side of things is exploring that. I mean, TPU is already on version three. It seems that the evolution of TPU and TensorFlow are sort of, they're coevolving almost in terms of both are learning from each other and from the community and from the applications where the biggest benefit is achieved. That's right. 
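One concrete piece of the 1.x to 2.x migration story discussed above, as a hedged sketch assuming TensorFlow 2.x: the 1.x API remains available under tf.compat.v1, so existing graph and session code can keep running while new code is written in the 2.0 style.

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# Opt back into 1.x-style graph building for legacy code paths.
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, shape=(None, 3))
y = tf.reduce_sum(x, axis=1)

with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # prints [6.]
```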
You've been trying to sort of, with Eager, with Keras, to make TensorFlow as accessible and easy to use as possible. What do you think, for beginners, is the biggest thing they struggle with? Have you encountered that? Or is basically what Keras is solving is that Eager, like we talked about? Yeah, for some of them, like you said, right, the beginners want to just be able to take some image model, they don't care if it's Inception or ResNet or something else, and do some training or transfer learning on their kind of model. Being able to make that easy is important. So in some ways, if you do that by providing them simple models with say, in hub or so on, they don't care about what's inside that box, but they want to be able to use it. So we're pushing on, I think, different levels. If you look at just a component that you get, which has the layers already smooshed in, the beginners probably just want that. Then the next step is, okay, look at building layers with Keras. If you go out to research, then they are probably writing custom layers themselves or doing their own loops. So there's a whole spectrum there. And then providing the pre trained models seems to really decrease the time from you trying to start. You could basically in a Colab notebook achieve what you need. So I'm basically answering my own question because I think what TensorFlow delivered on recently is trivial for beginners. So I was just wondering if there was other pain points you're trying to ease, but I'm not sure there would. No, those are probably the big ones. I see high schoolers doing a whole bunch of things now, which is pretty amazing. It's both amazing and terrifying. Yes. In a sense that when they grow up, it's some incredible ideas will be coming from them. So there's certainly a technical aspect to your work, but you also have a management aspect to your role with TensorFlow leading the project, a large number of developers and people. So what do you look for in a good team? What do you think? Google has been at the forefront of exploring what it takes to build a good team and TensorFlow is one of the most cutting edge technologies in the world. So in this context, what do you think makes for a good team? It's definitely something I think a favorite about. I think in terms of the team being able to deliver something well, one of the things that's important is a cohesion across the team. So being able to execute together in doing things that's not an end, like at this scale, an individual engineer can only do so much. There's a lot more that they can do together, even though we have some amazing superstars across Google and in the team, but there's, you know, often the way I see it as the product of what the team generates is way larger than the whole or the individual put together. And so how do we have all of them work together, the culture of the team itself, hiring good people is important. But part of that is it's not just that, okay, we hire a bunch of smart people and throw them together and let them do things. It's also people have to care about what they're building, people have to be motivated for the right kind of things. That's often an important factor. And, you know, finally, how do you put that together with a somewhat unified vision of where we wanna go? So are we all looking in the same direction or each of us going all over? And sometimes it's a mix. Google's a very bottom up organization in some sense, also research even more so, and that's how we started. 
But as we've become this larger product and ecosystem, I think it's also important to combine that well with a mix of, okay, here's the direction we wanna go in. There is exploration we'll do around that, but let's keep staying in that direction, not just go all over the place. And is there a way you monitor the health of the team? Sort of like, is there a way you know you did a good job? The team is good? Like, I mean, you're saying nice things, but it's sometimes difficult to determine how aligned things are. Yes. Because it's not binary, there's tensions and complexities and so on. And the other element is the mention of superstars: so much, even at Google, such a large percentage of work is done by individual superstars too. And sometimes those superstars can be against the dynamic of a team, and there are those tensions. I mean, I'm sure in TensorFlow it might be a little bit easier because the mission of the project is so sort of beautiful. You're at the cutting edge, so it's exciting. But have you struggled with that? Have there been challenges? There are always people challenges in different kinds of ways. That said, I think we've been good about getting people who care and, you know, have the same kind of culture, and that's Google in general to a large extent. But also, like you said, given that the project has had so many exciting things to do, there's been room for lots of people to do different kinds of things and grow, which does make the problem a bit easier, I guess. And it allows people, depending on what they're doing, if there's room around them, then that's fine. But yes, we do care that, whether a superstar or not, they need to work well with the team, across Google. That's interesting to hear. So it's like, superstar or not, the productivity broadly is about the team. Yeah, yeah. I mean, they might add a lot of value, but if they're hurting the team, then that's a problem. So in hiring engineers, it's so interesting, right, the hiring process. What do you look for? How do you determine a good developer or a good member of a team from just a few minutes or hours together? Again, no magic answers, I'm sure. Yeah, I mean, Google has a hiring process that we've refined over the last 20 years, I guess, and that you've probably heard and seen a lot about. So we do work with the same hiring process, and that's really helped. For me in particular, I would say, in addition to the core technical skills, what does matter is their motivation in what they wanna do. Because if that doesn't align well with where we wanna go, that's not gonna lead to long term success for either them or the team. And I think that becomes more important the more senior the person is, but it's important at every level. Like even the junior most engineer, if they're not motivated to do well at what they're trying to do, however smart they are, it's gonna be hard for them to succeed. Does the Google hiring process touch on that passion? So like trying to determine, because I think as far as I understand, maybe you can speak to it, the Google hiring process sort of helps in the initial part, it determines the skill set: is your puzzle solving ability, problem solving ability good? But I'm not sure, but it seems that determining whether the person has, like, fire inside them, that burns to do anything really, it doesn't really matter what, it's just, some cool stuff, I'm gonna do it.
Is that something that ultimately ends up when they have a conversation with you or once it gets closer to the team? So one of the things we do have as part of the process is just a culture fit, like part of the interview process itself, in addition to just the technical skills and each engineer or whoever the interviewer is, is supposed to rate the person on the culture and the culture fit with Google and so on. So that is definitely part of the process. Now, there are various kinds of projects and different kinds of things. So there might be variants and of the kind of culture you want there and so on. And yes, that does vary. So for example, TensorFlow has always been a fast moving project and we want people who are comfortable with that. But at the same time now, for example, we are at a place where we are also very full fledged product and we wanna make sure things that work really, really work, right? You can't cut corners all the time. So balancing that out and finding the people who are the right fit for those is important. And I think those kinds of things do vary a bit across projects and teams and product areas across Google. And so you'll see some differences there in the final checklist. But a lot of the core culture, it comes along with just the engineering excellence and so on. What is the hardest part of your job? I'll take your pick, I guess. It's fun, I would say, right? Hard, yes. I mean, lots of things at different times. I think that does vary. So let me clarify that difficult things are fun when you solve them, right? So it's fun in that sense. I think the key to a successful thing across the board and in this case, it's a large ecosystem now, but even a small product, is striking that fine balance across different aspects of it. Sometimes it's how fast do you go versus how perfect it is. Sometimes it's how do you involve this huge community? Who do you involve or do you decide, okay, now is not a good time to involve them because it's not the right fit. Sometimes it's saying no to certain kinds of things. Those are often the hard decisions. Some of them you make quickly because you don't have the time. Some of them you get time to think about them, but they're always hard. So both choices are pretty good, those decisions. What about deadlines? Is this, do you find TensorFlow, to be driven by deadlines to a degree that a product might? Or is there still a balance to where it's less deadline? You had the Dev Summit today that came together incredibly. Looked like there's a lot of moving pieces and so on. So did that deadline make people rise to the occasion releasing TensorFlow 2.0 alpha? I'm sure that was done last minute as well. I mean, up to the last point. Again, it's one of those things that you need to strike the good balance. There's some value that deadlines bring that does bring a sense of urgency to get the right things together. Instead of getting the perfect thing out, you need something that's good and works well. And the team definitely did a great job in putting that together. So I was very amazed and excited by everything how that came together. That said, across the year, we try not to put out official deadlines. We focus on key things that are important, figure out how much of it's important. And we are developing in the open, both internally and externally, everything's available to everybody. So you can pick and look at where things are. We do releases at a regular cadence. 
So fine, if something doesn't necessarily end up this month, it'll end up in the next release in a month or two. And that's okay, but we want to keep moving as fast as we can in these different areas. Because we can iterate and improve on things, sometimes it's okay to put things out that aren't fully ready. We'll make sure it's clear that, okay, this is experimental, but it's out there if you want to try and give feedback. That's very, very useful. I think that quick cycle and quick iteration is important. That's what we often focus on rather than, here's a deadline by which you get everything. With 2.0, is there pressure to make that stable? Or like, for example, WordPress 5.0 just came out, and there was no pressure to make it perfect; it had a lot of build up and was delivered quite late, but they said, okay, well, we're gonna release a lot of updates really quickly to improve it. Do you see TensorFlow 2.0 in that same kind of way, or is there this pressure that once it hits 2.0, once you get to the release candidate and then you get to the final, that's gonna be the stable thing? So it's gonna be stable, just like 1.x was, where every API that's there is gonna remain and work. It doesn't mean we can't change things under the covers. It doesn't mean we can't add things. So there's still a lot more for us to do, and we'll continue to have more releases. So in that sense, there's still, I don't think we'll be done in like two months when we release this. I don't know if you can say, but there's not external deadlines for TensorFlow 2.0, but are there internal deadlines, artificial or otherwise, that you're trying to set for yourself, or is it whenever it's ready? So we want it to be a great product, right? And that's a big important piece for us. TensorFlow's already out there. We have 41 million downloads for 1.x. So it's not like we have to have this. Yeah, exactly. So it's not like, a lot of the features that we're really polishing and putting together are there. We don't have to rush that just because. So in that sense, we wanna get it right and really focus on that. That said, we have said that we are looking to get this out in the next few months, in the next quarter. And as far as possible, we'll definitely try to make that happen. Yeah, my favorite line was, spring is a relative concept. I love it. Yes. Spoken like a true developer. So something I'm really interested in, and your previous line of work is, before TensorFlow, you led a team at Google on search ads. I think this is a very interesting topic on every level, on a technical level, because at their best, ads connect people to the things they want and need. And at their worst, they're just these things that annoy the heck out of you to the point of ruining the entire user experience of whatever you're actually doing. So they have a bad rep, I guess. And on the other end, this connecting of users to the thing they need and want is a beautiful opportunity for machine learning to shine: huge amounts of data that's personalized, and you kind of map it to the thing they actually want and they won't get annoyed. So, from Google that's leading the world in this aspect, what have you learned from that experience, and what do you think is the future of ads? To take you back to that. Yeah, yes, it's been a while, but I totally agree with what you said. I think search ads, the way it was always looked at, and I believe it still is, is that it's an extension of what search is trying to do.
And the goal is to make the world's information accessible. It's not just information, but maybe products or other things that people care about. And so it's really important for them to align with what the users need. And in search ads, there's a minimum quality level before that ad would be shown. If you don't have an ad that hits that quality level, it will not be shown, even if we have it, and okay, maybe we lose some money there, that's fine. That is really, really important, and I think that that is something I really liked about being there. Advertising is a key part. I mean, as a model, it's been around for ages, right? It's not a new model. It's been adapted to the web and became a core part of search and many other search engines across the world. And I do hope we get the balance right, because like you said, there are aspects of ads that are annoying, and if I go to a website and it just keeps popping an ad in my face, not letting me read, that's gonna be annoying, clearly. So I hope we can strike that balance between showing a good ad where it's valuable to the user and provides the monetization to the service. And this might be search, this might be a website; all of these, they do need the monetization for them to provide that service. But it has to be done with a good balance between showing just some random stuff that's distracting versus showing something that's actually valuable. So do you see it moving forward, continuing to be a model that funds businesses like Google, as a significant revenue stream? Because that's one of the most exciting things, but also limiting things, on the internet: nobody wants to pay for anything. And advertisements, again, at their best, are actually really useful and not annoying. Do you see that continuing and growing and improving, or do you see sort of more Netflix type models where you have to start to pay for content? I think it's a mix. I think it's gonna take a long while for everything to be paid on the internet, if at all, probably not. I mean, I think there's always gonna be things that are sort of monetized with things like ads. But over the last few years, I would say we've definitely seen that transition towards more paid services across the web, and people are willing to pay for them because they do see the value. I mean, Netflix is a great example. I mean, we have YouTube doing things. People pay for the apps they buy. More people, I find, are willing to pay for newspaper content, for the good news websites across the web. That wasn't the case even a few years ago, I would say. And I just see that change in myself as well, and in just lots of people around me. So I'm definitely hopeful that we'll transition to that mixed model, where maybe you get to try something out for free, maybe with ads, but then there's a more clear revenue model that sort of helps go beyond that. So speaking of revenue, how is it that a person can use a TPU in Google Colab for free? So what's the, I guess the question is, what's the future of TensorFlow in terms of empowering, say, a class of 300 students? And I'm asked by MIT, what is going to be the future of them being able to do their homework in TensorFlow? Like, where are they going to train these networks, right? What does that future look like with TPUs, with cloud services, and so on? I think a number of things there.
I mean, TensorFlow is open source, you can run it wherever; you can run it on your desktop, and desktops always keep getting more powerful, so maybe you can do more. My phone is like, I don't know how many times more powerful than my first desktop. You probably won't train it on your phone, though. Yeah, that's true. Right, so in that sense, the power you have in your hands is a lot more. Clouds are actually very interesting from, say, a student's or a course's perspective, because they make it very easy to get started. I mean, Colab, the great thing about it is, you go to a website and it just works. No installation needed, nothing to set up, you're just there and things are working. That's really the power of cloud as well. And so I do expect that to grow. Again, Colab is a free service. It's great to get started, to play with things, to explore things. That said, with free, you can only get so much. Yeah. So just like we were talking about, free versus paid, yeah, there are services you can pay for and get a lot more. Great, so if I'm a complete beginner interested in machine learning and TensorFlow, what should I do? Probably start with going to our website and playing there. So just go to tensorflow.org and start clicking on things. Yep, check out tutorials and guides. There's stuff you can just click on there and go to a Colab and do things. No installation needed, you can get started right there. Okay, awesome. Rajat, thank you so much for talking today. Thank you, Lex, it was great.
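As a footnote to the getting started advice just above (tensorflow.org, the tutorials and guides, and Colab), here is roughly what a first notebook cell might look like; this is a generic beginner example of the kind those tutorials walk through, not an excerpt from any official guide.

```python
import tensorflow as tf

# Load a small built-in dataset and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A tiny fully connected classifier, enough to see training work end to end.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1)
print(model.evaluate(x_test, y_test))
```

In a Colab notebook this runs with no installation at all, which is exactly the low barrier to entry discussed above.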
Rajat Monga: TensorFlow | Lex Fridman Podcast #22
The following is a conversation with Gavin Miller, he's the head of Adobe Research. Adobe has empowered artists, designers, and creative minds from all professions working in the digital medium for over 30 years with software such as Photoshop, Illustrator, Premiere, After Effects, InDesign, Audition, software that works with images, video, and audio. Adobe Research is working to define the future evolution of these products in a way that makes the life of creatives easier, automates the tedious tasks, and gives more and more time to operate in the idea space instead of pixel space. This is where the cutting edge, deep learning methods of the past decade can really shine, more than in perhaps any other application. Gavin is the embodiment of combining tech and creativity. Outside of Adobe Research, he writes poetry and builds robots, both things that are near and dear to my heart as well. This conversation is part of the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F R I D. And now, here's my conversation with Gavin Miller. You're head of Adobe Research, leading a lot of innovative efforts and applications of AI, creating images, video, audio, language, but you're also yourself an artist, a poet, a writer, and even a roboticist. So, while I promised to everyone listening that I will not spend the entire time we have together reading your poetry, which I love, I have to sprinkle it in at least a little bit. Some of them are pretty deep and profound and some are light and silly. Let's start with a few lines from the silly variety. You write in Je Ne Vinaigrette Rien, a poem that beautifully parodies both Edith Piaf's Je Ne Regrette Rien and My Way by Frank Sinatra. So, it opens with, and now dessert is near. It's time to pay the final total. I've tried to slim all year, but my diets have been anecdotal. So, where does that love for poetry come from for you? And if we dissect your mind, how does it all fit together in the bigger puzzle of Dr. Gavin Miller? Oh, well, interesting you chose that one. That was a poem I wrote when I'd been to my doctor and he said you really need to lose some weight and go on a diet. And whilst the rational part of my brain wanted to do that, the irrational part of my brain was protesting and sort of embraced the opposite idea. I regret nothing, hence. Yes, exactly. Taken to an extreme, I thought it would be funny. Obviously, it's a serious topic for some people. But I think for me, I've always been interested in writing since I was in high school, as well as doing technology and invention. And sometimes there are parallel strands in your life that carry on. And one is more about your private life and one's more about your technological career. And then at sort of happy moments along the way, sometimes the two things touch. One idea informs the other. And we can talk about that as we go. Do you think your writing, the art, the poetry contribute indirectly or directly to your research, to your work at Adobe? Well, sometimes it does if I say, imagine a future in a science fiction kind of way. And then once it exists on paper, I think, well, why shouldn't I just build that? There was an example where when realistic voice synthesis first started in the 90s at Apple, where I worked in research, it was done by a friend of mine.
I sort of sat down and started writing a poem, which each line I would enter into the voice synthesizer and see how it sounded, and sort of wrote it for that voice. And at the time, the agents weren't very sophisticated. So they'd sort of add random intonation. And I kind of made up the poem to sort of match the tone of the voice. And it sounded slightly sad and depressed. So I pretended it was a poem written by an intelligent agent, sort of telling the user to go home and leave them alone. But at the same time, they were lonely and wanted to have company and learn from what the user was saying. And at the time, it was way beyond anything that AI could possibly do. But since then, it's becoming more within the bounds of possibility. And then at the same time, I had a project at home where I did sort of a smart home. This was probably 93, 94. And I had the talking voice who'd remind me when I walked in the door of what things I had to do. I had buttons on my washing machine because I was a bachelor and I'd leave the clothes in there for three days and they got moldy. So as I got up in the morning, it would say, don't forget the washing and so on. I made photo albums that use light sensors to know which page you were looking at would send that over wireless radio to the agent who would then play sounds that match the image you were looking at in the book. So I was kind of in love with this idea of magical realism and whether it was possible to do that with technology. So that was a case where the sort of the agent sort of intrigued me from a literary point of view and became a personality. I think more recently, I've also written plays and when plays you write dialogue and obviously you write a fixed set of dialogue that follows a linear narrative. But with modern agents, as you design a personality or a capability for conversation, you're sort of thinking of, I kind of have imaginary dialogue in my head. And then I think, what would it take not only to have that be real, but for it to really know what it's talking about. So it's easy to fall into the uncanny valley with AI where it says something it doesn't really understand, but it sounds good to the person. But you rapidly realize that it's kind of just stimulus response. It doesn't really have real world knowledge about the thing it's describing. And so when you get to that point, it really needs to have multiple ways of talking about the same concept. So it sounds as though it really understands it. Now, what really understanding means is in the eye of the beholder, right? But if it only has one way of referring to something, it feels like it's a canned response. But if it can reason about it, or you can go at it from multiple angles and give a similar kind of response that people would, then it starts to seem more like there's something there that's sentient. You can say the same thing, multiple things from different perspectives. I mean, with the automatic image captioning that I've seen the work that you're doing, there's elements of that, right? Being able to generate different kinds of statements about the same picture. Right. So in my team, there's a lot of work on turning a medium from one form to another, whether it's auto tagging imagery or making up full sentences about what's in the image, then changing the sentence, finding another image that matches the new sentence or vice versa. And in the modern world of GANs, you sort of give it a description and it synthesizes an asset that matches the description. 
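To make the text-to-image matching idea above a little more concrete, here is a deliberately toy sketch that matches a sentence to pre-tagged images by simple tag overlap; the file names and tags are invented for illustration, and systems like the ones described use learned models rather than anything this crude.

```python
# A toy index of images and the tags they were (hypothetically) auto-tagged with.
image_index = {
    "beach_sunset.jpg": {"beach", "sunset", "ocean", "sky"},
    "city_night.jpg": {"city", "night", "lights", "street"},
    "dog_park.jpg": {"dog", "park", "grass", "person"},
}

def best_match(sentence: str) -> str:
    """Return the indexed image whose tags overlap most with the sentence."""
    words = set(sentence.lower().split())
    return max(image_index, key=lambda name: len(image_index[name] & words))

print(best_match("a dog running through the park"))  # -> dog_park.jpg
```

Real systems replace tag overlap with models that embed sentences and images into a shared space, but the retrieval loop has the same shape: change the sentence, re-query, get a different matching asset back.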
So I've sort of gone on a journey. My early days in my career were about 3D computer graphics, the sort of pioneering work, sort of before movies had special effects done with 3D graphics, and sort of rode that revolution. And that was very much like the Renaissance where people would model light and color and shape and everything. And now we're kind of in another wave where it's more impressionistic and it's sort of the idea of something can be used to generate an image directly, which is sort of the new frontier in computer image generation using AI algorithms. So the creative process is more in the space of ideas or becoming more in the space of ideas versus in the raw pixels? Well, it's interesting. It depends. I think at Adobe, we really want to span the entire range from really, really good, what you might call low level tools by low level as close to say, analog workflows as possible. So what we do there is we make up systems that do really realistic oil paint and watercolor simulations. So if you want every bristle to behave as it would in the real world and leave a beautiful analog trail of water and then flow after you've made the brushstroke, you can do that. And that's really important for people who want to create something really expressive or really novel because they have complete control. And then as certain other tasks become automated, it frees the artists up to focus on the inspiration and less of the perspiration. So thinking about different ideas, obviously. Once you finish the design, there's a lot of work to, say, do it for all the different aspect ratio of phones or websites and so on. And that used to take up an awful lot of time for artists. It still does for many what we call content velocity. And one of the targets of AI is actually to reason about from the first example of what are the likely intent for these other formats? Maybe if you change the language to German and the words are longer, how do you reflow everything so that it looks nicely artistic in that way? And so the person can focus on the really creative bit in the middle, which is what is the look and style and feel and what's the message and what's the story and the human element? So I think creativity is changing. So that's one way in which we're trying to just make it easier and faster and cheaper to do so that there can be more of it, more demand because it's less expensive. So everyone wants beautiful artwork for everything from a school website to Hollywood movie. On the other side, as some of these things have automatic versions of them, people will possibly change role from being the hands on artisan to being either the art director or the conceptual artist. And then the computer will be a partner to help create polished examples of the idea that they're exploring. Let's talk about Adobe products, AI and Adobe products. Just so you know where I'm coming from, I'm a huge fan of Photoshop for images, Premiere for video, Audition for audio. I'll probably use Photoshop to create the thumbnail for this video, Premiere to edit the video, Audition to do the audio. That said, everything I do is really manually and I set up, I use this old school Kinesis keyboard and I have auto hotkey that just, it's really about optimizing the flow. Of just making sure there's as few clicks as possible, so just being extremely efficient, something you started to speak to. 
So before we get into the fun sort of awesome deep learning things, where do you see AI, if you could speak a little more to it, AI or just automation in general, in the coming months and years, or as it already did in 2018, fitting into making the low level pixel workflow easier? Yeah, that's a great question. So we have a very rich array of algorithms already in Photoshop, just classical procedural algorithms as well as ones based on data. In some cases, they end up with a large number of sliders and degrees of freedom. So one way in which AI can help is just an auto button, which comes up with default settings based on the content itself rather than default values for the tool. At that point, you then start tweaking. So that's a very kind of, make life easier for people, whilst making use of common sense from other example images. So like smart defaults. Smart defaults, absolutely. Another one is something we've spent a lot of work on over the last 20 years I've been at Adobe, or 19, thinking about: selection, for instance, where, you know, with quick select, you would look at color boundaries and figure out how to sort of flood fill into regions that you thought were physically connected in the real world. But that algorithm had no visual common sense about what a cat looks like or a dog. It would just do it based on rules of thumb, which were applied to graph theory. And it was a big improvement over the previous work, where you had to sort of almost click everything by hand. Or if it just did similar colors, it would do little tiny regions that wouldn't be connected. But in the future, using neural nets to actually do a great job with, say, a single click, or even, in the case of well known categories like people or animals, no click, where you just say select the object and it just knows the dominant object is a person in the middle of the photograph. Those kinds of things are really valuable if they can be robust enough to give you good quality results, or they can be a great start for, like, tweaking it. So, for example, background removal. Correct. Like one thing I'll do, in a thumbnail, I'll take a picture of you right now and essentially remove the background behind you. And I want to make that as easy as possible. You don't have flowing, rich hair at the moment. I had it in the past. It may come again in the future. So that sometimes makes it a little more challenging to remove the background. How difficult do you think that problem is for AI, for basically making the quick selection tool smarter and smarter and smarter? Well, we have a lot of research on that already. If you want a sort of quick, cheap and cheerful, look, I'm pretending I'm in Hawaii, but it's sort of a joke, then you don't need perfect boundaries. And you can do that today with a single click with the algorithms we have. We have other algorithms where, with a little bit more guidance on the boundaries, you might need to touch it up a little bit. We have other algorithms that can pull a nice matte from a crude selection. So we have combinations of tools that can do all of that. And at our recent Max conference, Adobe Max, we demonstrated how very quickly, just by drawing a simple polygon around the object of interest, we could not only do it for a single still, but we could pull a matte, well, pull at least a selection mask, from a moving target, like a person dancing in front of a brick wall or something.
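Since the discussion above is about the jump from rule-of-thumb, graph-based selection to learned, one-click selection, here is a small sketch of the classical side of that spectrum using OpenCV's GrabCut, a graph-cut method initialized from a rough rectangle; this is an off-the-shelf stand-in for the same family of techniques, not Photoshop's Quick Select, and the file name and rectangle are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("portrait.jpg")               # placeholder input image
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)      # internal model state for GrabCut
fgd_model = np.zeros((1, 65), np.float64)

# A rough rectangle around the subject stands in for the user's crude selection.
rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep pixels labeled definite or probable foreground as the selection.
selection = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
cutout = img * selection[:, :, np.newaxis].astype(np.uint8)
cv2.imwrite("cutout.png", cutout)
```

The neural approaches described above effectively replace the hand-drawn rectangle and color statistics with learned visual common sense about what a person or a cat looks like.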
And so it's going from hours to a few seconds for workflows that are really nice, and then you might go in and touch up a little. So that's a really interesting question. You mentioned the word robust. You know, there's like a journey for an idea, right? And what you presented probably at Max has elements of just sort of, it inspires the concept, it can work pretty well in a majority of cases. But how do you make something that works, well, in majority of cases, how do you make something that works, maybe in all cases, or it becomes a robust tool that can... Well, there are a couple of things. So that really touches on the difference between academic research and industrial research. So in academic research, it's really about who's the person to have the great new idea that shows promise. And we certainly love to be those people too. But we have sort of two forms of publishing. One is academic peer review, which we do a lot of, and we have great success there as much as some universities. But then we also have shipping, which is a different type of... And then we get customer review, as well as, you know, product critics. And that might be a case where it's not about being perfect every single time, but perfect enough of the time, plus a mechanism to intervene and recover where you do have mistakes. So we have the luxury of very talented customers. We don't want them to be overly taxed doing it every time. But if they can go in and just take it from 99 to 100 with the touch of a mouse or something, then for the professional end, that's something that we definitely want to support as well. And for them, it went from having to do that tedious task all the time to much less often. So I think that gives us an out. If it had to be 100% automatic all the time, then that would delay the time at which we could get to market. So on that thread, maybe you can untangle something. Again, I'm sort of just speaking to my own experience. Maybe that is the most useful. Absolutely. So I think Photoshop, as an example, or Premiere, has a lot of amazing features that I haven't touched. And so in terms of AI helping make my life or the life of creatives easier, this collaboration between human and machine, how do you learn to collaborate better? How do you learn the new algorithms? Is it something where you have to watch tutorials and you have to watch videos and so on? Or do you think about the experience itself through exploration, being the teacher? We absolutely do. So I'm glad that you brought this up. We sort of think about two things. One is helping the person in the moment to do the task that they need to do, but the other is thinking more holistically about their journey learning a tool. And when it's like, think of it as Adobe University, where you use the tool long enough, you become an expert. And not necessarily an expert in everything. It's like living in a city. You don't necessarily know every street, but you know the important ones you need to get to. So we have projects in research, which actually look at the thousands of hours of tutorials online and try to understand what's being taught in them. And then we had one publication at CHI where it was looking at, given the last three or four actions you did, what did other people in tutorials do next? So if you want some inspiration for what you might do next, or you just want to watch the tutorial and see, learn from people who are doing similar workflows to you, you can without having to go and search on keywords and everything. 
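The tutorial-mining idea just described, suggesting what to do next based on what others in tutorials did after the same actions, can be illustrated with a toy next-action model; the action names and sessions below are invented, and Adobe's actual work presumably uses far richer context than bigram counts.

```python
from collections import Counter, defaultdict

# Hypothetical logged action sequences, e.g. mined from tutorial transcripts.
sessions = [
    ["open", "crop", "levels", "sharpen", "export"],
    ["open", "crop", "levels", "saturation", "export"],
    ["open", "select_subject", "mask", "levels", "export"],
]

# Count which action tends to follow each action across sessions.
next_action = defaultdict(Counter)
for actions in sessions:
    for current, following in zip(actions, actions[1:]):
        next_action[current][following] += 1

def suggest(current: str, k: int = 2):
    """Return the k actions that most often followed `current` in the logs."""
    return [action for action, _ in next_action[current].most_common(k)]

print(suggest("levels"))  # e.g. ['sharpen', 'saturation']
```

Even this crude version captures the shape of the feature: given your last action, surface what people doing similar workflows tried next, without making you search for keywords.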
So really trying to use the context of your use of the app to make intelligent suggestions, either about choices that you might make, or in a more assistive way, where it could say, if you did this next, we could show you. And that's basically the frontier that we're exploring now, which is, if we really deeply understand the domain in which designers and creative people work, can we combine that with AI and pattern matching of behavior to make intelligent suggestions, either through, you know, verbal possibilities, or just showing the results of, if you try this. And that's really the sort of, you know, I was in a meeting today thinking about these things. Well, it's still a grand challenge. You know, we'd all love an artist over one shoulder and a teacher over the other, right? And we hope to get there. And the right thing to do is to give enough at each stage that it's useful in itself, but it builds a foundation for the next stage, the next level of expectation. Are you aware of this gigantic medium of YouTube that's creating just a bunch of creative people, both artists and teachers of different kinds? Absolutely. And the more we can understand those media types, both visually and in terms of transcripts and words, the more we can bring the wisdom that they embody into the guidance that's embedded in the tool. That would be brilliant, to remove the barrier of having to yourself type in the keyword search and so on. Absolutely. And then in the longer term, an interesting discussion is, does it ultimately not just assist with learning the interface we have, but does it modify the interface to be simpler? Or do you fragment into a variety of tools, each of which has a different level of visibility of the functionality? I like to say that if you add a feature to a GUI, you have to have yet more visual complexity confronting the new user. Whereas if you have an assistant with a new skill, if you know they have it, so you know to ask for it, then it's sort of additive without being more intimidating. So we definitely think about new users and how to onboard them. Many actually value the idea of being able to master that complex interface and keyboard shortcuts, like you were talking about earlier, because with great familiarity, it becomes a musical instrument for expressing your visual ideas. And other people just want to get something done quickly in the simplest way possible. And that's where a more assistive version of the same technology might be useful, maybe on a different class of device, which is more in context for capture, say. Whereas somebody who's in a deep post production workflow maybe wants to be on a laptop or a big screen desktop and have more knobs and dials to really express the subtlety of what they want to do. So there are so many exciting applications of computer vision and machine learning that Adobe is working on, like scene stitching, sky replacement, foreground and background removal, spatial object based image search, automatic image captioning, like we mentioned, Project Cloak, Project Deep Fill, filling in parts of the images, Project Scribbler, style transfer for video, style transfer for faces in video with Project Puppetron, best name ever. Can you talk through a favorite, or some of them, or examples that pop to mind? I'm sure I'll be able to provide links to other ones we don't talk about, because there's visual elements to all of them that are exciting.
Why they're interesting for different reasons might be a good way to go. So I think sky replace is interesting because we talked about selection being sort of an atomic operation. It's almost like, if you think of an assembly language, it's like a single instruction. Whereas sky replace is a compound action where you automatically select the sky, you look for stock content that matches the geometry of the scene. You try to have variety in your choices so that you get coverage of different moods. It then mattes in the sky behind the foreground. But then importantly, it uses the foreground of the other image that you just searched on to recolor the foreground of the image that you're editing. So if you say go from a midday sky to an evening sky, it will actually add sort of an orange glow to the foreground objects as well. I was a big fan in college of Magritte, and he has a number of paintings where it's surrealism because he'll do a composite, but the foreground building will be at night and the sky will be during the day. There's one called The Empire of Light, which was on my wall in college. And we're trying not to do surrealism. It can be a choice, but we'd rather have it be natural by default rather than it looking fake, and then you have to do a whole bunch of post production to fix it. So that's a case where we're kind of capturing an entire workflow into a single action and doing it in about a second rather than a minute or two. And when you do that, you can not just do it once, but you can do it for, say, like 10 different backgrounds. And then you're almost back to this inspiration idea of, I don't know quite what I want, but I'll know it when I see it. And you can just explore the design space as close to final production value as possible. And then when you really pick one, you might go back and slightly tweak the selection mask just to make it perfect and do that kind of polish that professionals like to bring to their work. So then there's this idea of, you mentioned the sky, replacing it with different stock images of the sky. But in general, you have this idea. Or it could be on your disc or whatever. Disc, right. But making even more intelligent choices about ways to search stock images, which is really interesting. It's kind of spatial. Absolutely. Right. So that was something we called Concept Canvas. So normally when you do, say, an image search, I'm assuming it's just based on text, you would give the keywords of the things you want to be in the image, and it would find the nearest one that had those tags. For many tasks, you really want, you know, to be able to say, I want a big person in the middle, or a dog to the right and an umbrella above to the left, because you want to leave space for the text or whatever. And so Concept Canvas lets you assign spatial regions to the keywords. And then we've already pre indexed the images to know where the important concepts are in the picture. So we then go through that index, matching to assets. And even though it's just another form of search, because you're doing spatial design or layout, it starts to feel like design; you sort of feel oddly responsible for the image that comes back, as if you invented it. Yeah. So it's a good example where giving enough control starts to make people have a sense of ownership over the outcome of the event. And then we also have technologies in Photoshop where we physically can move the dog in post as well.
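To ground the compound sky replace action just described, here is a crude sketch in numpy and OpenCV: given a precomputed sky mask, it mattes in a new sky and nudges the foreground toward the new sky's average color as a stand-in for the orange-glow recoloring step. The file names are placeholders, the mask is assumed to come from a selection step like the ones discussed earlier, and this is nowhere near the production feature.

```python
import cv2
import numpy as np

img = cv2.imread("scene.jpg").astype(np.float32)
new_sky = cv2.imread("evening_sky.jpg").astype(np.float32)
new_sky = cv2.resize(new_sky, (img.shape[1], img.shape[0]))

# sky_mask: 1.0 where the sky is, 0.0 for foreground (assumed precomputed).
sky_mask = (cv2.imread("sky_mask.png", cv2.IMREAD_GRAYSCALE) / 255.0)[..., None]

# Matte the new sky in behind the foreground.
out = new_sky * sky_mask + img * (1.0 - sky_mask)

# Nudge foreground colors toward the new sky's average color, a very rough
# stand-in for the sunset glow that the real feature derives from the scene.
sky_mean = new_sky.reshape(-1, 3).mean(axis=0)
fg_region = sky_mask[..., 0] < 0.5
out[fg_region] = 0.85 * out[fg_region] + 0.15 * sky_mean

cv2.imwrite("scene_new_sky.png", np.clip(out, 0, 255).astype(np.uint8))
```

The interesting parts of the real feature are everything this sketch skips: selecting the sky automatically, searching stock for skies whose geometry and mood fit the scene, and doing the recoloring in a physically plausible way.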
But for Concept Canvas, it was just a very fast way to sort of loop through and be able to lay things out. And in terms of being able to remove objects from a scene and fill in the background, right, automatically, so that's extremely exciting. And so neural networks are stepping in there. I just talked this week with Ian Goodfellow, so GANs for doing that is definitely one approach. So is that a really difficult problem? Is it as difficult as it looks, again, to take it to a robust product level? Well, there are certain classes of image for which the traditional algorithms, like content aware fill, work really well, like if you have a naturalistic texture, like a gravel path or something, because it's patch based, it will make up a very plausible looking intermediate thing and fill in the hole. And then we use some algorithms to sort of smooth out the lighting so you don't see any brightness contrast in that region, or you gradually ramp from dark to light if it straddles the boundary. Where it gets complicated is if you have to infer invisible structure behind the person in front. And that really requires a common sense knowledge of the world to know, you know, if I see three quarters of a house, do I have a rough sense of what the rest of the house looks like? If you just fill it in with patches, it can end up sort of doing things that make sense locally, but you look at the global structure and it looks like it's just sort of crumpled or messed up. And so what GANs and neural nets bring to the table is this common sense learned from the training set. And the challenge right now is that the generative methods that can make up missing holes using that kind of technology are still only stable at low resolutions. And so you either need to then go from a low resolution to a high resolution using some other algorithm, or we need to push the state of the art, and it's still in research, to get to that point. Of course, if you show it something, say it's trained on houses, and then you show it an octopus, it's not going to do a very good job of showing common sense about octopuses. So again, you're asking about how you know that it's ready for primetime. You really need a very diverse training set of images. And ultimately, that may be a case where you put it out there with some guardrails, where you might do a detector which looks at the image and sort of estimates its own competence, of how good a job this algorithm could do. So eventually, there may be this idea of what we call an ensemble of experts, where any particular expert is specialized in certain things. And then there's sort of, either they vote to say how confident they are about what to do, this is sort of more future looking, or there's some dispatcher which says, you're good at houses, you're good at trees. So I mean, all this adds up to a lot of work, because each of those models will be a whole bunch of work. But I think over time, you'd gradually fill out the set and initially focus on certain workflows and then sort of branch out as you get more capable. You mentioned workflows, and have you considered maybe looking far into the future?
First of all, using the fact that there is a huge amount of people that use Photoshop, for example, and have certain workflows, being able to, you know, basically collect information about their workflows, about what they need, the ways to help them, whether it is houses or octopuses that people work on more, you know, like basically getting a beat on what kind of data needs to be annotated and collected for people to build tools that actually work well for people. Right, absolutely. And this is a big topic in the whole world of AI: what data can you gather and why? Right. At one level, a way to think about it is, we not only want to train our customers in how to use our products, but we want them to teach us what's important and what's useful. At the same time, we want to respect their privacy. And obviously, we wouldn't do things without their explicit permission. And I think the modern spirit of the age around this is, you have to demonstrate to somebody how they're benefiting from sharing their data with the tool. Either it's helping in the short term to understand their intent, so you can make better recommendations, or if they're friendly to your cause, or your tool, or they want to help you evolve quickly, because they depend on you for their livelihood, they may be willing to share some of their workflows or choices with the data set to then be trained on. There are technologies for looking at learning without necessarily storing all the information permanently, so that you can sort of learn on the fly but not keep a record of what somebody did. So we're definitely exploring all of those possibilities. And I think Adobe exists in a space where, with Photoshop, like if I look at the data I've created and own, you know, I'm less comfortable sharing data with social networks than I am with Adobe, because, just exactly as you said, there's an obvious benefit for sharing the data that I use to create in Photoshop, because it's helping improve the workflow in the future, as opposed to it's not clear what the benefit is in social networks. It's nice for you to say that. I mean, I think there are some professional workflows where people might be very protective of what they're doing, such as, if I was preparing evidence for a legal case, I wouldn't want any of that, you know, phoning home to help train the algorithm or anything. There may be other cases where people are, say, using a trial version, or they're doing some, I'm not saying we're doing this today, but there's a future scenario where somebody has a more permissive relationship with Adobe, where they explicitly say, I'm fine, I'm only doing hobby projects, or things which are non confidential. And in exchange for some benefit, tangible or otherwise, I'm willing to share very fine grained data. So another possible scenario is to capture relatively crude, high level things from more people, and then more detailed knowledge from people who are willing to participate. We do that today with explicit customer studies where, you know, we go and visit somebody and ask them to try the tool, and a human observes what they're doing. In the future, to be able to do that enough to be able to train an algorithm, we'd need a more systematic process. But we'd have to do it very consciously, because one of the things people treasure about Adobe is a sense of trust. And we don't want to endanger that through overly aggressive data collection. So we have a chief privacy officer.
And it's definitely front and center of thinking about AI rather than an afterthought. Well, when you start that program, sign me up. Okay, happy to. Are there other projects that you wanted to mention, that I didn't perhaps, that pop into mind? Well, you covered a number. I think you mentioned Project Puppetron; I think that one is interesting because, you might think of Adobe as only thinking in 2D, and that's a good example where we're actually thinking more three dimensionally about how to assign features to faces. So what Puppetron does is it takes either a still or a video of a person talking, and then it can take a painting of somebody else and apply the style of the painting to the person who's talking in the video. And unlike a sort of screen door post filter effect that you sometimes see online, it really looks as though it's sort of somehow attached to or reflecting the motion of the face. And so that's a case where, even to do a 2D workflow, like stylization, you really need to infer more about the 3D structure of the world. And I think, as 3D computer vision algorithms get better, initially they'll focus on particular domains, like faces, where you have a lot of prior knowledge about structure and you can maybe have a parameterized template that you fit to the image. But over time, this should be possible for more general content. And it might even be invisible to the user that you're doing 3D reconstruction under the hood, but it might then let you do edits much more reliably or correctly than you would otherwise. And, you know, the face is a very important application, right? Absolutely. So making things work. And a very sensitive one. If you do something uncanny, it's very disturbing. That's right. You have to get it right. So in the space of augmented reality and virtual reality, what do you think is the role of AR and VR in the content we consume as people, as consumers, and the content we create as creators? Now, that's a great question. We think about this a lot, too. So I think VR and AR serve slightly different purposes. So VR can really transport you to an entire immersive world, no matter what your personal situation is. To that extent, it's a bit like a really, really widescreen television, where it sort of snaps you out of your context and puts you in a new one. And I think it's still evolving in terms of the hardware. I actually worked on VR in the 90s, trying to solve the latency and sort of nausea problem, which we did, but it was very expensive and a bit early. There's a new wave of that now, I think. And increasingly, those devices are becoming all in one rather than something that's tethered to a box. I think the market seems to be bifurcating into things for consumers and things for professional use cases, like for architects and people designing, where your product is a building and you really want to experience it better than looking at a scale model or a drawing, I think, or even than a video. So I think for that, where you need a sense of scale and spatial relationships, it's great. I think AR holds the promise of sort of taking digital assets off the screen and putting them in context in the real world, on the table in front of you, on the wall behind you. And that has the corresponding need that the assets need to adapt to the physical context in which they're being placed. I mean, it's a bit like having a live theater troupe come to your house and put on Hamlet.
My mother had a friend who used to do this at Stately Homes in England for the National Trust. And they would adapt the scenes and even they'd walk the audience through the rooms to see the action based on the country house they found themselves in for two days. And I think AR will have the same issue that, you know, if you have a tiny table and a big living room or something, it'll try to figure out what can you change and what's fixed. And there's a little bit of a tension between fidelity where if you captured, say, Nureyev doing a fantastic ballet, you'd want it to be sort of exactly reproduced. And maybe all you could do is scale it down. Whereas somebody telling you a story might be walking around the room doing some gestures and that could adapt to the room in which they were telling the story. And do you think fidelity is that important in that space or is it more about the storytelling? I think it may depend on the characteristic of the media. If it's a famous celebrity, then it may be that you want to catch every nuance and they don't want to be reanimated by some algorithm. It could be that if it's really, you know, a lovable frog telling you a story and it's about a princess and a frog, then it doesn't matter if the frog moves in a different way. I think a lot of the ideas that have sort of grown up in the game world will now come into the broader commercial sphere once they're needing adaptive characters in AR. Are you thinking of engineering tools that allow creators to create in the augmented world, basically making a Photoshop for the augmented world? Well, we have shown a few demos of sort of taking a Photoshop layer stack and then expanding it into 3D. That's actually been shown publicly as one example in AR. Where we're particularly excited at the moment is in 3D. 3D design is still a very challenging space. And we believe that it's a worthwhile experiment to try to figure out if AR or immersive makes 3D design more spontaneous. Can you give me an example of 3D design, just like applications? Literally, a simple one would be laying out objects, right? So on a conventional screen, you'd sort of have a plan view and a side view and a perspective view, and you'd sort of be dragging it around with a mouse. And if you're not careful, it would go through the wall and all that. Whereas if you were really laying out objects, say, in a VR headset, you could literally move your head to see a different viewpoint. They'd be in stereo. So you'd have a sense of depth because you're already wearing the depth glasses, right? So it would be those sort of big gross motor move things around kind of skills seem much more spontaneous, just like they are in the real world. The frontier for us, I think, is whether that same medium can be used to do fine grained design tasks, like very accurate constraints on, say, a CAD model or something that may be better done on a desktop, but it may just be a matter of inventing the right UI. So we're hopeful that because there will be this potential explosion of demand for 3D assets driven by AR and more real time animation on conventional screens, that those tools will also help with, or those devices will help with designing the content as well. You've mentioned quite a few interesting sort of new ideas. And at the same time, there's old timers like me that are stuck in their old ways and are... Well, I think I'm the old timer. Okay. All right. All right. But the opposed all change at all costs. Yes. 
When you're thinking about creating new interfaces, do you feel the burden of just this giant user base that loves the current product? So anything new you do, any new idea comes at a cost that you'll be resisted? Well, I think if you have to trade off control for convenience, then our existing user base would definitely be offended by that. I think if there are some things where you have more convenience and just as much control, that may be more welcome. We do think about not breaking well known metaphors for things. So things should sort of make sense. Photoshop has never been a static target. It's always been evolving and growing. And to some extent, there's been a lot of brilliant thought along the way of how it works today. So we don't want to just throw all that out. If there's a fundamental breakthrough, like a single click is good enough to select an object rather than having to do lots of strokes, that actually fits in quite nicely to the existing toolset, either as an optional mode or as a starting point. I think where we're looking at radical simplicity, where you could encapsulate an entire workflow with a much simpler UI, then sometimes that's easier to do in the context of either a different device, like a mobile device, where the affordances are naturally different. Or in a tool that's targeted at a different workflow, where it's about spontaneity and velocity rather than precision. And we have projects like Rush, which can let you do professional quality video editing for a certain class of media output that is targeted very differently in terms of users and the experience. And ideally, people would go, if I'm feeling like doing Premiere, big project, I'm doing a four part television series, that's definitely a Premiere thing. But if I want to do something to show my recent vacation, maybe I'll just use Rush because I can do it in the half an hour I have free at home rather than the four hours I need to do it at work. And for the use cases, which we can do well, it really is much faster to get the same output. But the more professional tools obviously have a much richer toolkit and more flexibility in what they can do. And then at the same time with the flexibility and control, I like this idea of smart defaults, of using AI to coach you to like what Google has, I'm feeling lucky button. Or one button kind of gives you a pretty good set of settings. And then that's almost an educational tool to show. Because sometimes when you have all this control, you're not sure about the correlation between the different bars that control different elements of the image and so on. And sometimes there's a degree of, you don't know what the optimal is. And then some things are sort of on demand, like help, right? Where I'm stuck, I need to know what to look for. I'm not quite sure what it's called. And something that was proactively making helpful suggestions or, you could imagine a make a suggestion button where you'd use all of that knowledge of workflows and everything to maybe suggest something to go and learn about or just to try or show the answer. And maybe it's not one intelligent default, but it's like a variety of defaults. And then you go, I like that one. Yeah. Yeah. Several options. So back to poetry. Ah, yes. We're going to interleave. So first few lines of a recent poem of yours before I ask the next question. This is about the smartphone. Today I left my phone at home and went down to the sea. The sand was soft, the ocean glass, but I was still just me. 
This is a poem about you leaving your phone behind and feeling quite liberated because of it. So this is kind of a difficult topic and let's see if we can talk about it, figure it out. But so with the help of AI more and more, we can create sort of versions of ourselves, versions of reality that are in some ways more beautiful than actual reality. And some of the creative ways that we can do that, some of the creative effort there is part of creating this illusion. So of course this is inevitable, but how do you think we should adjust as human beings to live in this digital world that's partly artificial, that's better than the world that we lived in a hundred years ago when you didn't have Instagram and Facebook versions of ourselves and the online Oh, this is sort of showing off better versions of ourselves. We're using the tooling of modifying the images or even with artificial intelligence ideas of deep fakes and creating adjusted or fake versions of ourselves and reality. I think it's an interesting question. You're all sort of historical bent on this. So I actually wonder if 18th century aristocrats who commissioned famous painters to paint portraits of them had portraits that were slightly nicer than they actually looked in practice. So human desire to put your best foot forward has always been true. I think it's interesting. You sort of framed it in two ways. One is if we can imagine alternate realities and visualize them, is that a good or bad thing? In the old days, you do it with storytelling and words and poetry, which still resides sometimes on websites, but we've become a very visual culture in particular. In the 19th century, we're very much a text based culture. People would read long tracks, political speeches were very long. Nowadays, everything's very kind of quick and visual and snappy. I think it depends on how harmless your intent. A lot of it's about intent. So if you have a somewhat flattering photo that you pick out of the photos that you have in your inbox to say, this is what I look like, it's probably fine. If someone's going to judge you by how you look, then they'll decide soon enough when they meet you whether the reality, you know. Yeah, right. I think where it can be harmful is if people hold themselves up to an impossible standard, which they then feel bad about themselves for not meeting. I think that definitely can be an issue. But I think the ability to imagine and visualize an alternate reality, which sometimes you then go off and build later, can be a wonderful thing too. People can imagine architectural styles, which they then, you know, have a startup, make a fortune, and then build a house that looks like their favorite video game. Is that a terrible thing? I think I used to worry about exploration, actually, that part of the joy of going to the moon. When I was a tiny child, I remember it in grainy black and white, was to know what it would look like when you got there. And I think now we have such good graphics for visualizing the experience before it happens, that I slightly worry that it may take the edge off actually wanting to go, you know what I mean? Because we've seen it on TV. We kind of, oh, you know, by the time we finally get to Mars, we'll go, yeah, yeah, so it's Mars. That's what it looks like. But then, you know, the outer exploration, I mean, I think Pluto was a fantastic recent discovery where nobody had any idea what it looked like. And it was just breathtakingly varied and beautiful. 
So I think expanding the ability of the human toolkit to imagine and communicate on balance is a good thing. I think there are abuses, we definitely take them seriously and try to discourage them. I think there's a parallel side where the public needs to know what's possible through events like this, right? So that you don't believe everything you read in print anymore. And it may over time become true of images as well. Or you need multiple sets of evidence to really believe something rather than a single media asset. So I think it's a constantly evolving thing. It's been true forever. There's a famous story about Anne of Cleves and Henry VIII where luckily for Anne, they didn't get married, right? So, or they got married and broke up in it. What's the story? Oh, so Holbein went and painted a picture and then Henry VIII wasn't pleased and, you know, history doesn't record whether Anne was pleased, but I think she was pleased not to be married more than a day or something. So, I mean, this has gone on for a long time, but I think it's just a part of the magnification of human capability. You've kind of built up an amazing research environment here, research culture, research lab, and you've written that the secret to a thriving research lab is interns. Can you unpack that a little bit? Oh, absolutely. So a couple of reasons. As you see looking at my personal history, there are certain ideas you bond with at a certain stage of your career and you tend to keep revisiting them through time. If you're lucky, you pick one that doesn't just get solved in the next five years and then you're sort of out of luck. So I think a constant influx of new people brings new ideas with it. From the point of view of industrial research, because a big part of what we do is really taking those ideas to the point where they can ship as very robust features, you end up investing a lot in a particular idea. And if you're not careful, people can get too conservative in what they choose to do next, knowing that the product teams will want it. And interns let you explore the more fanciful or unproven ideas in a relatively lightweight way, ideally leading to new publications for the intern and for the researcher. And it gives you then a portfolio from which to draw which idea am I going to then try to take all the way through to being robust in the next year or two to ship. So it sort of becomes part of the funnel. It's also a great way for us to identify future full time researchers. Many of our greatest researchers were former interns. It builds a bridge to university departments so we can get to know and build an enduring relationship with the professors whom we often do academic give funds to as well as an acknowledgement of the value the interns add in their own collaborations. So it's sort of a virtuous cycle. And then the long term legacy of a great research lab hopefully will be not only the people who stay, but the ones who move through and then go off and carry that same model to other companies. And so we believe strongly in industrial research and how it can complement academia. And we hope that this model will continue to propagate and be invested in by other companies, which makes it harder for us to recruit, of course, but that's a sign of success. And a rising tide lifts all ships in that sense. And where's the idea born with the interns? Is there brainstorming? Is there discussions about, you know, like what? Where do the ideas come from? Yeah. 
As I'm asking the question, I realize how dumb it is, but I'm hoping you have a better answer. A question I ask at the beginning of every summer. So what will happen is we'll send out a call for interns. They'll, we'll have a number of resumes come in. People will contact the candidates, talk to them about their interests. They'll usually try to find some, somebody who has a reasonably good match to what they're already doing, or just has a really interesting domain that they've been pursuing in their PhD. And we think we'd love to do one of those projects too. And then the intern stays in touch with the mentor, as we call them. And then they come and at the end of two weeks, they have to decide. So they'll often have a general sense by the time they arrive. And we'll have internal discussions about what are all the general ideas that we're wanting to pursue to see whether two people have the same idea, and maybe they should talk and all that. But then once the intern actually arrives, sometimes the idea goes linearly. And sometimes it takes a giant left turn. And we go, that sounded good. But when we thought about it, there's this other project, or it's already been done. And we found this paper, we were scooped. But we have this other great idea. So it's pretty, pretty flexible at the beginning. One of the questions for research labs is who's deciding what to do? And then who's to blame if it goes wrong? Who gets the credit if it goes right? And so in Adobe, we push the needle very much towards freedom of choice of projects by the researchers and the interns. But then we reward people based on impact. So if the projects ultimately end up impacting the products and having papers and so on. And so your alternative model, just to be clear, is that you have one lab director who thinks he's a genius and tells everybody what to do, takes all the credit if it goes well, blames everybody else if it goes badly. So we don't want that model. And this helps new ideas percolate up. The art of running such a lab is that there are strategic priorities for the company. And there are areas where we do want to invest and pressing problems. And so it's a little bit of a trickle down and filter up meets in the middle. And so you don't tell people you have to do X, but you say X would be particularly appreciated this year. And then people reinterpret X through the filter of things they want to do and they're interested in. And miraculously, it usually comes together very well. One thing that really helps is Adobe has a really broad portfolio of products. So if we have a good idea, there's usually a product team that is intrigued or interested. So it means we don't have to qualify things too much ahead of time. Once in a while, the product teams sponsor extra intern, because they have a particular problem that they really care about, in which case it's a little bit more, we really need one of these. And then we sort of say, great, I get an extra intern, we find an intern who thinks that's a great problem. But that's not the typical model. That's sort of the icing on the cake as far as the budget is concerned. And all of the above end up being important. It's really hard to predict at the beginning of the summer, which we all have high hopes of all of the intern projects, but ultimately, some of them pay off and some of them sort of are a nice paper, but don't turn into a feature. Others turn out not to be as novel as we thought, but they'd be a great feature, but not a paper. 
And then others, we make a little bit of progress and we realize how much we don't know. And maybe we revisit that problem several years in a row until, finally, we have a breakthrough and then it becomes more on track to impact a product. Jumping back to a big overall view of Adobe research, what are you looking forward to in 2019 and beyond? You mentioned there's a giant suite of products, a giant suite of ideas, new interns, a large team of researchers. What do you think the future holds? In terms of the technological breakthroughs? Technological breakthroughs, especially ones that will make it into product, will get to impact the world. So I think the creative or the analytics assistants that we talked about, where they're constantly trying to figure out what you're trying to do and how they can be helpful and make useful suggestions, is a really hot topic. And it's very unpredictable as to when it'll be ready, but I'm really looking forward to seeing how much progress we make against that. I think some of the core technologies like generative adversarial networks are immensely promising, and seeing how quickly those become practical for mainstream use cases at high resolution with really good quality is also exciting. And they also have this sort of strange quality where even the things they do oddly are odd in an interesting way. So it can look like dreaming or something. So that's fascinating. I think internally, we have a Sensei platform, which is a way in which we're pulling our neural nets and other intelligence models into a central platform, which can then be leveraged by multiple product teams at once. So we're in the middle of transitioning from, once you have a good idea, you pick a product team to work with and they sort of hand design it for that use case, to a more Henry Ford approach of standing it up in a standard way, which can be accessed in a standard way, which should mean that the time between a good idea and impacting our products will be greatly shortened. And when one product has a good idea, many of the other products can just leverage it too. So it's sort of an economy of scale. So that's more about the how than the what. But there's that combination of this sort of renaissance in AI, and there's a comparable one in graphics with real time ray tracing and other really exciting emerging technologies. And when these all come together, you'll sort of basically be dancing with light, right, where you'll have real time shadows and reflections, as if it's a real world in front of you, but then with all these magical properties brought by AI, where it sort of anticipates or modifies itself in ways that make sense based on how it understands the creative task you're trying to do. That's a really exciting future, both for myself and for other creators. So first of all, I work in autonomous vehicles. I'm a roboticist. I love robots. And I think you have a fascination with snakes, both natural ones and artificial robots. I share your fascination. I mean, their movement is beautiful, adaptable. The adaptability is fascinating. There are, I looked it up, 2,900 species of snakes in the world. Wow. 875 venomous. Some are tiny, some are huge. I saw that there's one that's 25 feet in some cases. So what's the most interesting thing that you connect with in terms of snakes, both natural and artificial? What was the connection with robotics, AI, and this particular form of a robot?
Well, it actually came out of my work in the 80s on computer animation, where I started doing things like cloth simulation and other kinds of soft body simulation. And you'd sort of drop it and it would bounce and then it would just sort of stop moving. And I thought, well, what if you animate the spring lengths and simulate muscles? And the simplest object I could do that for was an earthworm. So I actually did a paper in 1988 called The Motion Dynamics of Snakes and Worms. And I read the physiology literature on both how snakes and worms move and then did some of the early computer animation examples of that. And so your interest in robotics came out of simulation and graphics. When I moved from Alias to Apple, we actually did a movie called Her Majesty's Secret Serpent, which is about a secret agent snake that parachutes in and captures a film canister from a satellite, which tells you how old fashioned we were thinking back then. Sort of classic 1950s or 60s Bond movie kind of thing. And at the same time, I'd always made radio controlled ships from scratch when I was a child. And I thought, well, how hard can it be to build a real one? And so then started what turned out to be like a 15 year obsession with trying to build better snake robots. And the first one that I built just sort of slithered sideways, but didn't actually go forward. Then I added wheels, and building things in real life makes you honest about the friction. The thing that appeals to me is I love creating the illusion of life, which is what drove me to animation. And if you have a robot with enough degrees of coordinated freedom that move in a kind of biological way, then it starts to cross the Uncanny Valley and to seem like a creature rather than a thing. And I certainly got that with the early snakes. By S3, I had it able to sidewind as well as go directly forward. My wife to be suggested that it would be the ring bearer at our wedding. So it actually went down the aisle carrying the rings and got in the local paper for that, which was really fun. And this was all done as a hobby. And then, at the time, the onboard compute was incredibly limited. It was sort of... Yeah. So you should explain that these things, the whole idea is that you're trying to run it autonomously. Autonomously, on board, right. And so for the very first one, I actually built the controller from discrete logic, because I used to do LSI, you know, circuits and things when I was a teenager. And then for the second and third one, the eight bit microprocessors were available with like the whole 256 bytes of RAM, which you could just about squeeze in. So they were radio controlled rather than autonomous and really were more about the physicality and coordinated motion. I've occasionally taken a sidestep into, if only I could make it cheaply enough, it would make a great toy, which has been a lesson in how clockwork is its own magical realm that you venture into and learn things about backlash and other things you don't take into account as a computer scientist, which is why what seemed like a good idea doesn't work. So it was quite humbling. And then more recently I've been building S9, which is a much better engineered version of S3, where the motors wore out and it doesn't work anymore, and you can't buy replacements, which is sad given that it was such a meaningful one. S5 was about twice as long and looked much more biologically inspired. Unlike the typical roboticist, I taper my snakes.
There are good mechanical reasons to do that, but it also makes them look more biological, although it means every segment's unique rather than a repetition, which is why most engineers don't do it. It actually saves weight and leverage and everything. And that one is currently on display at the International Spy Museum in Washington, DC. Not that it's done any spying. It was on YouTube and it got its own conspiracy theory where people thought that it wasn't real, because I work at Adobe, it must be fake graphics. And people would write to me, and I'd tell them it's real. You know, they'd say the background doesn't move, and it's like, it's on a tripod, you know? So that one, you can see the real thing, so it really is true. And then the latest one is the first one where I could put in a Raspberry Pi, which leads to all sorts of terrible jokes about Pythons and things. But this one can have on board compute. And then where my hobby work and my work work are converging is you can now add vision accelerator chips, which can evaluate neural nets and do object recognition and everything. So both for the snakes and more recently for the spider that I've been working on, having, you know, desktop level compute is now opening up a whole world of true autonomy with onboard compute, onboard batteries, and still having that sort of biomimetic quality that appeals to children in particular. They are really drawn to them, and adults think they look creepy, but children actually think they look charming. And I gave a series of lectures at Girls Who Code to encourage people to take an interest in technology. And at the moment, I'd say they're still more expensive than the value that they add, which is why they're a great hobby for me, but they're not really a great product. It makes me think about doing that very early thing I did at Alias with changing the muscle rest lengths. If I could do that with a real artificial muscle material, then the next snake ideally would use that rather than motors and gearboxes and everything. It would be lighter, much stronger, and more continuous and smooth. So, I like to say being in research is a license to be curious. And I have the same feeling with my hobby. It forced me to read biology and be curious about things that otherwise would have just been, you know, a National Geographic special. Suddenly I'm thinking, how does that snake move? Can I copy it? I look at the trails that sidewinding snakes leave in sand and see if my snake robots would do the same thing. So out of something inanimate, I like the way you put it, you try to bring life into it and beauty. Absolutely. And then ultimately give it a personality, which is where the intelligent agent research will converge with the vision and voice synthesis to give it a sense of having, not necessarily human level intelligence. I think the Turing test is such a high bar, it's a little bit self defeating, but having one that you can have a meaningful conversation with, especially if you have a reasonably good sense of what you can say. So not trying to have it so a stranger could walk up and have one, but so as a pet owner or a robot pet owner, you could know what it thinks about and what it can reason about. Or sometimes just the meaningful interaction. If you have the kind of interaction you have with a dog, sometimes you might have a conversation, but it's usually one way. Absolutely. And nevertheless, it feels like a meaningful connection.
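As a side note on the technique Gavin describes above, animating spring rest lengths so that they act like muscles, here is a minimal sketch in Python of that general idea. It is not his 1988 formulation, just an illustration under simple assumptions: a 1D chain of point masses joined by damped springs whose rest lengths follow a traveling wave, with direction dependent ground friction so the contraction wave can ratchet the body forward. All the constants are made up for illustration.

    import numpy as np

    # Minimal 1D "earthworm": point masses joined by springs whose rest lengths
    # are driven by a traveling wave (the "muscles"). Anisotropic friction lets
    # the wave ratchet the body forward. Illustrative sketch only.
    N = 10                      # number of point masses
    m = 0.1                     # mass per segment (kg)
    k_spring = 50.0             # spring stiffness (N/m)
    c_damp = 0.5                # spring damping (N s/m)
    L0 = 0.05                   # nominal rest length (m)
    A = 0.4                     # muscle contraction amplitude (fraction of L0)
    omega = 2 * np.pi           # wave frequency (rad/s)
    phase = 2 * np.pi / N       # phase lag per segment
    mu_fwd, mu_back = 0.1, 1.0  # friction: easy to slide forward, hard backward
    dt, T = 1e-3, 5.0

    x = np.arange(N) * L0       # positions along the ground
    v = np.zeros(N)             # velocities

    for step in range(int(T / dt)):
        t = step * dt
        f = np.zeros(N)
        for i in range(N - 1):
            # animated rest length: the "muscle" between mass i and i+1
            rest = L0 * (1 + A * np.sin(omega * t - phase * i))
            stretch = (x[i + 1] - x[i]) - rest
            rel_v = v[i + 1] - v[i]
            fs = k_spring * stretch + c_damp * rel_v
            f[i] += fs
            f[i + 1] -= fs
        # direction dependent (anisotropic) viscous ground friction
        f -= np.where(v > 0, mu_fwd, mu_back) * v
        v += dt * f / m         # semi-implicit Euler integration
        x += dt * v

    print("net displacement of body centre: %.3f m" % (x.mean() - (N - 1) * L0 / 2))

In a real snake or worm model you would work in two or three dimensions and add segment geometry, but the core trick, keyframing the rest lengths rather than the positions, is the same.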
And one of the things that I'm trying to do in the sample audio that will play you is beginning to get towards the point where the reasoning system can explain why it knows something or why it thinks something. And that again, creates the sense that it really does know what it's talking about, but also for debugging as you get more and more elaborate behavior, it's like, why did you decide to do that? You know, how do you know that? I think the robot's really my muse for helping me think about the future of AI and what to invent next. So even at Adobe, that's mostly operating in digital world. Correct. Do you ever, do you see a future where Adobe even expands into the more physical world perhaps? So bringing life not into animations, but bringing life into physical objects with, whether it's, well, I'd have to say at the moment, it's a twinkle in my eye. I think the more likely thing is that we will bring virtual objects into the physical world through augmented reality and many of the ideas that might take five years to build a robot to do, you can do in a few weeks with digital assets. So I think when really intelligent robots finally become commonplace, they won't be that surprising because we'll have been living with those personalities for in the virtual sphere for a long time. And then they'll just say, Oh, it's, you know, Siri with legs or Alexa, Alexa on hooves or something. So I can see that world coming. And for now, it's still an adventure, still an adventure. And we don't know quite what the experience will be like. And it's really exciting to sort of see all of these different strands of my career converge. Yeah. In interesting ways. And it is definitely a fun adventure. So let me end with my favorite poem, the last few lines of my favorite poem of yours that ponders mortality and in some sense, immortality, you know, as our ideas live through the ideas of others, through the work of others, it ends with do not weep or mourn. It was enough. The little enemies permitted just a single dance, scattered them as deep as your eyes can see. I'm content. They'll have another chance sweeping more centered parts along to join a jostling lifting throng as others danced in me. Beautiful poem. Beautiful way to end it. Gavin, thank you so much for talking today. And thank you for inspiring and empowering millions of people like myself for creating amazing stuff. Oh, thank you. Great conversation.
Gavin Miller: Adobe Research | Lex Fridman Podcast #23
The following is a conversation with Rosalind Picard. She's a professor at MIT, director of the Affective Computing Research Group at the MIT Media Lab, and cofounder of two companies, Affectiva and Empatica. Over two decades ago, she launched the field of affective computing with her book of the same name. This book described the importance of emotion in artificial and natural intelligence, and the vital role that emotional communication has in the relationship between people in general and in human robot interaction. I really enjoyed talking with Roz about so many topics, including emotion, ethics, privacy, wearable computing, and her recent research in epilepsy, and even love and meaning. This conversation is part of the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F R I D. And now, here's my conversation with Rosalind Picard. More than 20 years ago, you coined the term affective computing and led a lot of research in this area since then. As I understand, the goal is to make the machine detect and interpret the emotional state of a human being and adapt the behavior of the machine based on the emotional state. So how has your understanding of the problem space defined by affective computing changed in the past 24 years? So the scope, the applications, the challenges, what's involved, how has that evolved over the years? Yeah, actually, originally, when I defined the term affective computing, it was a bit broader than just recognizing and responding intelligently to human emotion, although those are probably the two pieces that we've worked on the hardest. The original concept also encompassed machines that would have mechanisms that functioned like human emotion does inside them. It would be any computing that relates to, arises from, or deliberately influences human emotion. So the human computer interaction part is the part that people tend to see, like if I'm really ticked off at my computer and I'm scowling at it and I'm cursing at it and it just keeps acting smiling and happy like that little paperclip used to do, dancing, winking, that kind of thing just makes you even more frustrated, right? And I thought, that stupid thing needs to see my affect. And if it's gonna be intelligent, which Microsoft researchers had worked really hard on, it actually had some of the most sophisticated AI in it at the time, that thing's gonna actually be smart. It needs to respond to me and you, and we can send it very different signals. So by the way, just a quick interruption, Clippy, maybe it's in Word 95 or 98, I don't remember when it was born, but many people, do you find with that reference that people recognize what you're talking about still to this point? I don't expect the newest students to know it these days, but I've mentioned it to a lot of audiences, like how many of you know this Clippy thing? And still the majority of people seem to know it. So Clippy kind of looks at maybe natural language processing where you were typing and tries to help you complete, I think. I don't even remember what Clippy was, except annoying. Yeah, some people actually liked it. I would hear those stories. You miss it? Well, I miss the annoyance. It felt like there's an element, someone was there. Somebody was there and we were in it together and they were annoying. It's like a puppy that just doesn't get it. They keep ripping up the couch kind of thing. And in fact, they could have done it smarter, like a puppy.
If they had, like, when you yelled at it or cursed at it, if it had put its little ears back and its tail down and slunk off, probably people would have wanted it back, right? But instead, when you yelled at it, what did it do? It smiled, it winked, it danced, right? If somebody comes to my office and I yell at them and they start smiling, winking and dancing, I'm like, I never want to see you again. So Bill Gates got a standing ovation when he said it was going away, because people were so ticked. It was so emotionally unintelligent, right? It was intelligent about whether you were writing a letter, what kind of help you needed for that context. It was completely unintelligent about, hey, if you're annoying your customer, don't smile in their face when you do it. So that kind of mismatch was something the developers just didn't think about. And intelligence at the time was really all about math and language and chess and games, problems that could be pretty well defined. Social emotional interaction is much more complex than chess or Go or any of the games that people are trying to solve. And understanding that required skills that most people in computer science actually were lacking personally. Well, let's talk about computer science. Have things gotten better since the work, since the message, since you've really launched the field with a lot of research work in this space? I still find, as a person who, like yourself, is deeply passionate about human beings and yet is in computer science, that there still seems to be a lack of, sorry to say, empathy among computer scientists. Yeah, well. Or it hasn't gotten better. Let's just say there's a lot more variety among computer scientists these days. Computer scientists are a much more diverse group today than they were 25 years ago. And that's good. We need all kinds of people to become computer scientists so that computer science reflects more what society needs. And there's brilliance among every personality type. So it need not be limited to people who prefer computers to other people. How hard do you think it is? Your view of how difficult it is to recognize emotion or to create a deeply emotionally intelligent interaction. Has it gotten easier or harder as you've explored it further? And how far away are we from cracking this, if you think of the Turing test for solving intelligence, looking at a Turing test for emotional intelligence? I think it is as difficult as I thought it was gonna be. I think my prediction of its difficulty is spot on. I think the time estimates are always hard because they're always a function of society's love and hate of a particular topic. If society gets excited and you get thousands of researchers working on it for a certain application, that application gets solved really quickly. The general intelligence, the computer's complete lack of ability to have awareness of what it's doing, the fact that it's not conscious, the fact that there's no signs of it becoming conscious, the fact that it doesn't read between the lines, those kinds of things that we have to teach it explicitly, which other people pick up implicitly. We don't see that changing yet. There aren't breakthroughs yet that lead us to believe that that's gonna go any faster, which means that it's still gonna be kind of stuck with a lot of limitations where it's probably only gonna do the right thing in very limited, narrow, prespecified contexts where we can prescribe pretty much what's gonna happen there.
So I don't see the, it's hard to predict a date because when people don't work on it, it's infinite. When everybody works on it, you get a nice piece of it well solved in a short amount of time. I actually think there's a more important issue right now than the difficulty of it. And that's causing some of us to put the brakes on a little bit. Usually we're all just like step on the gas, let's go faster. This is causing us to pull back and put the brakes on. And that's the way that some of this technology is being used in places like China right now. And that worries me so deeply that it's causing me to pull back myself on a lot of the things that we could be doing. And try to get the community to think a little bit more about, okay, if we're gonna go forward with that, how can we do it in a way that puts in place safeguards that protects people? So the technology we're referring to is just when a computer senses the human being, like the human face, right? So there's a lot of exciting things there, like forming a deep connection with the human being. So what are your worries, how that could go wrong? Is it in terms of privacy? Is it in terms of other kinds of more subtle things? But let's dig into privacy. So here in the US, if I'm watching a video of say a political leader, and in the US we're quite free as we all know to even criticize the president of the United States, right? Here that's not a shocking thing. It happens about every five seconds, right? But in China, what happens if you criticize the leader of the government, right? And so people are very careful not to do that. However, what happens if you're simply watching a video and you make a facial expression that shows a little bit of skepticism, right? Well, and here we're completely free to do that. In fact, we're free to fly off the handle and say anything we want, usually. I mean, there are some restrictions when the athlete does this as part of the national broadcast. Maybe the teams get a little unhappy about picking that forum to do it, right? But that's more a question of judgment. We have these freedoms, and in places that don't have those freedoms, what if our technology can read your underlying affective state? What if our technology can read it even noncontact? What if our technology can read it without your prior consent? And here in the US, in my first company we started, Affectiva, we have worked super hard to turn away money and opportunities that try to read people's affect without their prior informed consent. And even the software that is licensable, you have to sign things saying you will only use it in certain ways, which essentially is get people's buy in, right? Don't do this without people agreeing to it. There are other countries where they're not interested in people's buy in. They're just gonna use it. They're gonna inflict it on you. And if you don't like it, you better not scowl in the direction of any censors. So one, let me just comment on a small tangent. Do you know with the idea of adversarial examples and deep fakes and so on, what you bring up is actually, in that one sense, deep fakes provide a comforting protection that you can no longer really trust that the video of your face was legitimate. And therefore you always have an escape clause if a government is trying, if a stable, balanced, ethical government is trying to accuse you of something, at least you have protection. You can say it was fake news, as is a popular term now. Yeah, that's the general thinking of it. 
We know how to go into the video and see, for example, your heart rate and respiration and whether or not they've been tampered with. And we also can put like fake heart rate and respiration in your video now too. We decided we needed to do that. After we developed a way to extract it, we decided we also needed a way to jam it. And so the fact that we took time to do that other step too, that was time that I wasn't spending making the machine more affectively intelligent. And there's a choice in how we spend our time, which is now being swayed a little bit less by this goal and a little bit more like by concern about what's happening in society and what kind of future do we wanna build. And as we step back and say, okay, we don't just build AI to build AI to make Elon Musk more money or to make Amazon Jeff Bezos more money. Good gosh, you know, that's the wrong ethic. Why are we building it? What is the point of building AI? It used to be, it was driven by researchers in academia to get papers published and to make a career for themselves and to do something cool, right? Like, cause maybe it could be done. Now we realize that this is enabling rich people to get vastly richer, the poor are, the divide is even larger. And is that the kind of future that we want? Maybe we wanna think about, maybe we wanna rethink AI. Maybe we wanna rethink the problems in society that are causing the greatest inequity and rethink how to build AI that's not about a general intelligence, but that's about extending the intelligence and capability of the have nots so that we close these gaps in society. Do you hope that kind of stepping on the brake happens organically? Because I think still majority of the force behind AI is the desire to publish papers, is to make money without thinking about the why. Do you hope it happens organically? Is there room for regulation? Yeah, yeah, yeah, great questions. I prefer the, you know, they talk about the carrot versus the stick. I definitely prefer the carrot to the stick. And, you know, in our free world, we, there's only so much stick, right? You're gonna find a way around it. I generally think less regulation is better. That said, even though my position is classically carrot, no stick, no regulation, I think we do need some regulations in this space. I do think we need regulations around protecting people with their data, that you own your data, not Amazon, not Google. I would like to see people own their own data. I would also like to see the regulations that we have right now around lie detection being extended to emotion recognition in general, that right now you can't use a lie detector on an employee when you're, on a candidate when you're interviewing them for a job. I think similarly, we need to put in place protection around reading people's emotions without their consent and in certain cases, like characterizing them for a job and other opportunities. So I'm also, I also think that when we're reading emotion that's predictive around mental health, that that should, even though it's not medical data, that that should get the kinds of protections that our medical data gets. 
What most people don't know yet is right now with your smartphone use, and if you're wearing a sensor and you wanna learn about your stress and your sleep and your physical activity and how much you're using your phone and your social interaction, all of that nonmedical data, when we put it together with machine learning, now called AI, even though the founders of AI wouldn't have called it that, that capability can not only tell that you're calm right now or that you're getting a little stressed, but it can also predict how you're likely to be tomorrow. If you're likely to be sick or healthy, happy or sad, stressed or calm. Especially when you're tracking data over time. Especially when we're tracking a week of your data or more. Do you have an optimism towards, you know, a lot of people on our phones are worried about this camera that's looking at us. For the most part, on balance, are you optimistic about the benefits that can be brought from that camera that's looking at billions of us? Or should we be more worried? I think we should be a little bit more worried about who's looking at us and listening to us. The device sitting on your countertop in your kitchen, whether it's, you know, Alexa or Google Home or Apple, Siri, these devices want to listen while they say ostensibly to help us. And I think there are great people in these companies who do want to help people. Let me not brand them all bad. I'm a user of products from all of these companies I'm naming all the A companies, Alphabet, Apple, Amazon. They are awfully big companies, right? They have incredible power. And you know, what if China were to buy them, right? And suddenly all of that data were not part of free America, but all of that data were part of somebody who just wants to take over the world and you submit to them. And guess what happens if you so much as smirk the wrong way when they say something that you don't like? Well, they have reeducation camps, right? That's a nice word for them. By the way, they have a surplus of organs for people who have surgery these days. They don't have an organ donation problem because they take your blood and they know you're a match. And the doctors are on record of taking organs from people who are perfectly healthy and not prisoners. They're just simply not the favored ones of the government. And you know, that's a pretty freaky evil society. And we can use the word evil there. I was born in the Soviet Union. I can certainly connect to the worry that you're expressing. At the same time, probably both you and I and you very much so, you know, there's an exciting possibility that you can have a deep connection with a machine. Yeah, yeah. Right, so. Those of us, I've admitted students who say that they, you know, when you list like, who do you most wish you could have lunch with or dinner with, right? And they'll write like, I don't like people. I just like computers. And one of them said to me once when I had this party at my house, I want you to know, this is my only social event of the year, my one social event of the year. Like, okay, now this is a brilliant machine learning person, right? And we need that kind of brilliance in machine learning. And I love that computer science welcomes people who love people and people who are very awkward around people. I love that this is a field that anybody could join. We need all kinds of people and you don't need to be a social person. I'm not trying to force people who don't like people to suddenly become social. 
At the same time, if most of the people building the AIs of the future are the kind of people who don't like people, we've got a little bit of a problem. Well, hold on a second. So let me push back on that. So don't you think a large percentage of the world can, you know, there's loneliness. There is a huge problem with loneliness that's growing. And so there's a longing for connection. Do you... If you're lonely, you're part of a big and growing group. Yes. So we're in it together, I guess. If you're lonely, join the group. You're not alone. You're not alone. That's a good line. But do you think there's... You talked about some worry, but do you think there's an exciting possibility that something like Alexa and these kinds of tools can alleviate that loneliness in a way that other humans can't? Yeah, yeah, definitely. I mean, a great book can kind of alleviate loneliness because you just get sucked into this amazing story and you can't wait to go spend time with that character. And they're not a human character. There is a human behind it. But yeah, it can be an incredibly delightful way to pass the hours and it can meet needs. Even, you know, I don't read those trashy romance books, but somebody does, right? And what are they getting from this? Well, probably some of that feeling of being there, right? Being there in that social moment, that romantic moment or connecting with somebody. I've had a similar experience reading some science fiction books, right? And connecting with the character. Orson Scott Card, you know, just amazing writing and Ender's Game and Speaker for the Dead, terrible title. But those kind of books that pull you into a character and you feel like you're, you feel very social. It's very connected, even though it's not responding to you. And a computer, of course, can respond to you. So it can deepen it, right? You can have a very deep connection, much more than the movie Her, you know, plays up, right? Well, much more. I mean, movie Her is already a pretty deep connection, right? Well, but it's just a movie, right? It's scripted. It's just, you know, but I mean, like there can be a real interaction where the character can learn and you can learn. You could imagine it not just being you and one character. You could imagine a group of characters. You can imagine a group of people and characters, human and AI connecting, where maybe a few people can't sort of be friends with everybody, but the few people and their AIs can befriend more people. There can be an extended human intelligence in there where each human can connect with more people that way. But it's still very limited, but there are just, what I mean is there are many more possibilities than what's in that movie. So there's a tension here. So one, you expressed a really serious concern about privacy, about how governments can misuse the information, and there's the possibility of this connection. So let's look at Alexa. So personal assistance. For the most part, as far as I'm aware, they ignore your emotion. They ignore even the context or the existence of you, the intricate, beautiful, complex aspects of who you are, except maybe aspects of your voice that help it recognize for speech recognition. Do you think they should move towards trying to understand your emotion? All of these companies are very interested in understanding human emotion. They want, more people are telling Siri every day they want to kill themselves. 
Apple wants to know the difference between if a person is really suicidal versus if a person is just kind of fooling around with Siri, right? The words may be the same, the tone of voice and what surrounds those words is pivotal to understand if they should respond in a very serious way, bring help to that person, or if they should kind of jokingly tease back, ah, you just want to sell me for something else, right? Like, how do you respond when somebody says that? Well, you do want to err on the side of being careful and taking it seriously. People want to know if the person is happy or stressed in part, well, so let me give you an altruistic reason and a business profit motivated reason. And there are people in companies that operate on both principles. The altruistic people really care about their customers and really care about helping you feel a little better at the end of the day. And it would just make those people happy if they knew that they made your life better. If you came home stressed and after talking with their product, you felt better. There are other people who maybe have studied the way affect affects decision making and prices people pay. And they know, I don't know if I should tell you, like the work of Jen Lerner on heartstrings and purse strings, you know, if we manipulate you into a slightly sadder mood, you'll pay more, right? You'll pay more to change your situation. You'll pay more for something you don't even need to make yourself feel better. So, you know, if they sound a little sad, maybe I don't want to cheer them up. Maybe first I want to help them get something, a little shopping therapy, right? That helps them. Which is really difficult for a company that's primarily funded on advertisement. So they're encouraged to get you to offer you products or Amazon that's primarily funded on you buying things from their store. So I think we should be, you know, maybe we need regulation in the future to put a little bit of a wall between these agents that have access to our emotion and agents that want to sell us stuff. Maybe there needs to be a little bit more of a firewall in between those. So maybe digging in a little bit on the interaction with Alexa, you mentioned, of course, a really serious concern about like recognizing emotion, if somebody is speaking of suicide or depression and so on, but what about the actual interaction itself? Do you think, so if I, you know, you mentioned Clippy and being annoying, what is the objective function we're trying to optimize? Is it minimize annoyingness or minimize or maximize happiness? Or if we look at human to human relations, I think that push and pull, the tension, the dance, you know, the annoying, the flaws, that's what makes it fun. So is there a room for, like what is the objective function? There are times when you want to have a little push and pull, I think of kids sparring, right? You know, I see my sons and they, one of them wants to provoke the other to be upset and that's fun. And it's actually healthy to learn where your limits are, to learn how to self regulate. You can imagine a game where it's trying to make you mad and you're trying to show self control. And so if we're doing a AI human interaction that's helping build resilience and self control, whether it's to learn how to not be a bully or how to turn the other cheek or how to deal with an abusive person in your life, then you might need an AI that pushes your buttons, right? But in general, do you want an AI that pushes your buttons? 
Probably depends on your personality. I don't, I want one that's respectful, that is there to serve me and that is there to extend my ability to do things. I'm not looking for a rival, I'm looking for a helper. And that's the kind of AI I'd put my money on. Your sense is for the majority of people in the world, in order to have a rich experience, that's what they're looking for as well. So they're not looking, if you look at the movie Her, spoiler alert, I believe the program that the woman in the movie Her leaves the person for somebody else, says they don't wanna be dating anymore, right? Like, do you, your sense is if Alexa said, you know what, I'm actually had enough of you for a while, so I'm gonna shut myself off. You don't see that as... I'd say you're trash, cause I paid for you, right? You, we've got to remember, and this is where this blending human AI as if we're equals is really deceptive because AI is something at the end of the day that my students and I are making in the lab. And we're choosing what it's allowed to say, when it's allowed to speak, what it's allowed to listen to, what it's allowed to act on given the inputs that we choose to expose it to, what outputs it's allowed to have. It's all something made by a human. And if we wanna make something that makes our lives miserable, fine. I wouldn't invest in it as a business, unless it's just there for self regulation training. But I think we need to think about what kind of future we want. And actually your question, I really like the, what is the objective function? Is it to calm people down? Sometimes. Is it to always make people happy and calm them down? Well, there was a book about that, right? The brave new world, make everybody happy, take your Soma if you're unhappy, take your happy pill. And if you refuse to take your happy pill, well, we'll threaten you by sending you to Iceland to live there. I lived in Iceland three years. It's a great place. Don't take your Soma, then go to Iceland. A little TV commercial there. Now I was a child there for a few years. It's a wonderful place. So that part of the book never scared me. But really like, do we want AI to manipulate us into submission, into making us happy? Well, if you are a, you know, like a power obsessed sick dictator individual who only wants to control other people to get your jollies in life, then yeah, you wanna use AI to extend your power and your scale to force people into submission. If you believe that the human race is better off being given freedom and the opportunity to do things that might surprise you, then you wanna use AI to extend people's ability to build, you wanna build AI that extends human intelligence, that empowers the weak and helps balance the power between the weak and the strong, not that gives more power to the strong. So in this process of empowering people and sensing people, what is your sense on emotion in terms of recognizing emotion? The difference between emotion that is shown and emotion that is felt. So yeah, emotion that is expressed on the surface through your face, your body, and various other things, and what's actually going on deep inside on the biological level, on the neuroscience level, or some kind of cognitive level. Yeah, yeah. Whoa, no easy questions here. Well, yeah, I'm sure there's no definitive answer, but what's your sense? How far can we get by just looking at the face? We're very limited when we just look at the face, but we can get further than most people think we can get. 
People think, hey, I have a great poker face, therefore all you're ever gonna get from me is neutral. Well, that's naive. We can read with the ordinary camera on your laptop or on your phone. We can read from a neutral face if your heart is racing. We can read from a neutral face if your breathing is becoming irregular and showing signs of stress. We can read under some conditions that maybe I won't give you details on, how your heart rate variability power is changing. That could be a sign of stress, even when your heart rate is not necessarily accelerating. So... Sorry, from physio sensors or from the face? From the color changes that you cannot even see, but the camera can see. That's amazing. So you can get a lot of signal, but... So we get things people can't see using a regular camera. And from that, we can tell things about your stress. So if you were just sitting there with a blank face thinking nobody can read my emotion, well, you're wrong. Right, so that's really interesting, but that's from sort of visual information from the face. That's almost like cheating your way to the physiological state of the body, by being very clever with what you can do with vision. With signal processing. With signal processing. So that's really impressive. But if you just look at the stuff we humans can see, the poker, the smile, the smirks, the subtle, all the facial actions. So then you can hide that on your face for a limited amount of time. Now, if you're just going in for a brief interview and you're hiding it, that's pretty easy for most people. If you are, however, surveilled constantly everywhere you go, then it's gonna say, gee, you know, Lex used to smile a lot and now I'm not seeing so many smiles. And Roz used to laugh a lot and smile a lot very spontaneously. And now I'm only seeing these not so spontaneous looking smiles. And only when she's asked these questions. You know, that's something's changed here. Probably not getting enough sleep. We could look at that too. So now I have to be a little careful too. When I say we, you think we can't read your emotion and we can, it's not that binary. What we're reading is more some physiological changes that relate to your activation. Now, that doesn't mean that we know everything about how you feel. In fact, we still know very little about how you feel. Your thoughts are still private. Your nuanced feelings are still completely private. We can't read any of that. So there's some relief that we can't read that. Even brain imaging can't read that. Wearables can't read that. However, as we read your body state changes and we know what's going on in your environment and we look at patterns of those over time, we can start to make some inferences about what you might be feeling. And that is where it's not just the momentary feeling but it's more your stance toward things. And that could actually be a little bit more scary with certain kinds of governmental control freak people who want to know more about are you on their team or are you not? And getting that information through over time. So you're saying there's a lot of signal by looking at the change over time. Yeah. So you've done a lot of exciting work both in computer vision and physiological sense like wearables. What do you think is the best modality for, what's the best window into the emotional soul? Is it the face? Is it the voice? Depends what you want to know. It depends what you want to know. It depends what you want to know. Everything is informative. 
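A rough sketch of how an ordinary camera can pick up a racing heart from color changes the eye cannot see. This is the generic remote photoplethysmography idea rather than the specific pipeline Picard's group or Affectiva uses: average the green channel over the face region in each frame, band-pass the resulting signal to the plausible heart rate range, and take the dominant frequency. The function name and its parameters are illustrative assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def estimate_heart_rate(green_means, fps):
        # green_means: mean green-channel value of the face region, one value per frame
        sig = green_means - np.mean(green_means)
        # band-pass 0.7-4.0 Hz, roughly 42-240 beats per minute
        b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
        filtered = filtfilt(b, a, sig)
        spectrum = np.abs(np.fft.rfft(filtered))
        freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 4.0)
        peak_hz = freqs[band][np.argmax(spectrum[band])]
        return 60.0 * peak_hz  # beats per minute

    # Synthetic check: a 72 bpm pulse buried in noise, 30 fps for 20 seconds
    fps = 30
    t = np.arange(0, 20, 1.0 / fps)
    fake_signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.randn(len(t))
    print(estimate_heart_rate(fake_signal, fps))  # should print close to 72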
Everything we do is informative. So for health and wellbeing and things like that, do you find the wearable, measuring physiological signals, is the best for health based stuff? So here I'm going to answer empirically with data and studies we've been doing. We've been doing studies. Now these are currently running with lots of different kinds of people, but where we've published data and I can speak publicly to it, the data are limited right now to New England college students. So that's a small group. Among New England college students, when they are wearing a wearable like the Empatica Embrace here that's measuring skin conductance, movement, temperature, and when they are using a smartphone that is collecting the time of day when they're texting, who they're texting, their movement, their GPS, the weather information based upon their location, and when it's using machine learning and putting all of that together and looking not just at right now, but looking at your rhythm of behaviors over about a week. When we look at that, we are very accurate at forecasting tomorrow's stress, happy versus sad mood, and health. And when we look at which pieces of that are most useful, first of all, if you have all the pieces, you get the best results. If you have only the wearable, you get the next best results. And that's still better than 80% accurate at forecasting tomorrow's levels. Isn't that exciting, because the wearable stuff with physiological information, it feels like it violates privacy less than the noncontact face based methods. Yeah, it's interesting. I think what people sometimes don't realize, it's funny, in the early days people would say, oh, wearing something or giving blood is invasive, right? Whereas a camera is less invasive because it's not touching you. I think on the contrary, the things that are not touching you are maybe the scariest because you don't know when they're on or off. And you don't know who's behind it, right? A wearable, depending upon what's happening to the data on it, if it's just stored locally or if it's streaming and what it is being attached to, in a sense, you have the most control over it because it's also very easy to just take it off, right? Now it's not sensing me. So if I'm uncomfortable with what it's sensing, now I'm free, right? If I'm comfortable with what it's sensing, and I happen to know everything about this one and what it's doing with it, so I'm quite comfortable with it, then I have control, I'm comfortable. Control is one of the biggest factors for an individual in reducing their stress. If I have control over it, if I know all there is to know about it, then my stress is a lot lower and I'm making an informed choice about whether to wear it or not, or when to wear it or not. I wanna wear it sometimes, maybe not others. Right, so that control, yeah, I'm with you. That control, even if, yeah, the ability to turn it off, that is a really important thing. It's huge. And maybe, if there are regulations, maybe that's number one to protect: people's ability to opt out as easily as to opt in. Right, so you've studied a bit of neuroscience as well. How has looking at our own minds, sort of the biological stuff, the neurobiological, the neuroscience of the signals in our brain, helped you understand the problem and the approach of affective computing? Originally, I was a computer architect and I was building hardware and computer designs and I wanted to build ones that worked like the brain.
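To make the forecasting study Picard describes a moment earlier concrete, here is a toy sketch, not the actual MIT pipeline: daily summaries from a wearable and a phone are stacked over a trailing week and fed to a standard classifier that predicts tomorrow's mood. The synthetic data, the feature set, and the choice of logistic regression are all assumptions for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Toy data: one row per participant-day. Columns stand in for daily summaries
    # such as mean skin conductance, skin temperature, step count, minutes of
    # phone use, number of texts, and distance traveled.
    n_days, n_features, window = 2000, 6, 7
    daily = rng.normal(size=(n_days, n_features))
    weights = rng.normal(size=window * n_features)  # fake "ground truth" dependence

    X, y = [], []
    for t in range(window, n_days - 1):
        week = daily[t - window:t].ravel()                # trailing 7 days of features
        X.append(week)
        y.append(int(week @ weights + rng.normal() > 0))  # 1 = good mood tomorrow
    X, y = np.array(X), np.array(y)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("forecast accuracy on held-out days:", model.score(X_test, y_test))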
So I've been studying the brain as long as I've been studying how to build computers. Have you figured out anything yet? Very little. It's so amazing. You know, they used to think like, oh, if you remove this chunk of the brain and you find this function goes away, well, that's the part of the brain that did it. And then later they realized if you remove this other chunk of the brain, that function comes back and, oh no, we really don't understand it. Brains are so interesting and changing all the time and able to change in ways that will probably continue to surprise us. When we were measuring stress, you may know the story where we found an unusually big skin conductance pattern on one wrist in one of our kids with autism. And in trying to figure out how on earth you could be stressed on one wrist and not the other, like how can you get sweaty on one wrist, right? When you get stressed with that sympathetic fight or flight response, like you kind of should like sweat more in some places than others, but not more on one wrist than the other. That didn't make any sense. We learned that what had actually happened was a part of his brain had unusual electrical activity and that caused an unusually large sweat response on one wrist and not the other. And since then we've learned that seizures cause this unusual electrical activity. And depending where the seizure is, if it's in one place and it's staying there, you can have a big electrical response we can pick up with a wearable at one part of the body. You can also have a seizure that spreads over the whole brain, a generalized grand mal seizure. And that response spreads and we can pick it up pretty much anywhere. As we learned this and then later built Embrace that's now FDA cleared for seizure detection, we have also built relationships with some of the most amazing doctors in the world who not only help people with unusual brain activity or epilepsy, but some of them are also surgeons and they're going in and they're implanting electrodes, not just to momentarily read the strange patterns of brain activity that we'd like to see return to normal, but also to read out continuously what's happening in some of these deep regions of the brain during most of life when these patients are not seizing. Most of the time they're not seizing, most of the time they're fine. And so we are now working on mapping those deep brain regions that you can't even usually get with EEG scalp electrodes because the changes deep inside don't reach the surface. But interestingly, when some of those regions are activated, we see a big skin conductance response. Who would have thunk it, right? Like nothing here, but something here. In fact, right after seizures that we think are the most dangerous ones that precede what's called SUDEP, Sudden Unexpected Death in Epilepsy, there's a period where the brainwaves go flat and it looks like the person's brain has stopped, but it hasn't. The activity has gone deep into a region that can make the cortical activity look flat, like a quick shutdown signal here. It can unfortunately cause breathing to stop if it progresses long enough. Before that happens, we see a big skin conductance response in the data that we have. The longer this flattening, the bigger our response here. So we have been trying to learn, you know, initially, like why are we getting a big response here when there's nothing here? Well, it turns out there's something much deeper.
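As a rough illustration of the wrist-sensor side of this, the sketch below flags moments when skin conductance rises far above its recent baseline. It is only a toy: the sampling rate, window length, and threshold are made-up values, and the actual Embrace seizure-detection system is a trained model combining electrodermal activity with motion data, not a simple threshold like this.

```python
# A toy sketch of flagging an unusually large skin conductance response from a wrist
# sensor. Illustration only: the Embrace seizure-detection algorithm is a trained model
# combining electrodermal activity and accelerometer data, not this kind of threshold.
import numpy as np

def large_eda_responses(skin_conductance, fs_hz=4.0, baseline_s=60.0, rise_threshold_uS=5.0):
    """Return sample indices where conductance rises far above its recent baseline.

    skin_conductance: 1-D array in microsiemens, sampled at fs_hz.
    baseline_s: length of the trailing window used as the resting level.
    rise_threshold_uS: how far above baseline counts as an unusually large response
                       (an arbitrary illustrative value, not a clinical threshold).
    """
    window = int(baseline_s * fs_hz)
    flagged = []
    for i in range(window, len(skin_conductance)):
        baseline = np.median(skin_conductance[i - window:i])   # recent resting level
        if skin_conductance[i] - baseline > rise_threshold_uS:
            flagged.append(i)
    return flagged
```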
So we can now go inside the brains of some of these individuals, fabulous people who usually aren't seizing, and get this data and start to map it. So that's the active research that we're doing right now with top medical partners. So this wearable sensor that's looking at skin conductance can capture sort of the ripples of the complexity of what's going on in our brain. So this little device, you have a hope that you can start to get the signal from the interesting things happening in the brain. Yeah, we've already published the strong correlations between the size of this response and the flattening that happens afterwards. And unfortunately, also in a real SUDEP case where the patient died, and, well, we don't know why. We don't know that, if somebody had been there, it would definitely have been prevented. But we know that most SUDEPs happen when the person's alone. And in case you haven't heard of it, SUDEP is an acronym, S U D E P, Sudden Unexpected Death in Epilepsy. It's actually the number two cause of years of life lost among all neurological disorders. Stroke is number one, SUDEP is number two, but most people haven't heard of it. Actually, I'll plug my TED talk, it's on the front page of TED right now that talks about this. And we hope to change that. I hope everybody who's heard of SIDS and stroke will now hear of SUDEP because we think in most cases it's preventable if people take their meds and aren't alone when they have a seizure. Not guaranteed to be preventable. There are some exceptions, but we think most cases probably are. So you have this Embrace now in the version two wristband, right, for epilepsy management. That's the one that's FDA approved? Yes. Well, FDA cleared, they say. Sorry. No, it's okay. It essentially means it's approved for marketing. Got it. Just a side note, how difficult is that to do? It's essentially getting FDA approval for computer science technology. It's so agonizing. It's much harder than publishing multiple papers in top medical journals. Yeah, we've published peer reviewed, top medical journal, Neurology, best results, and that's not good enough for the FDA. Is that system, so if we look at the peer review of medical journals, there's flaws, there's strengths, is the FDA approval process, how does it compare to the peer review process? Does it have the same strengths? I'll take peer review over FDA any day. But is that a good thing? Is that a good thing for FDA? You're saying, does it stop some amazing technology from getting through? Yeah, it does. The FDA performs a very important good role in keeping people safe. They keep things safe, they put you through tons of safety testing and that's wonderful and that's great. I'm all in favor of the safety testing. But sometimes they put you through additional testing that they don't have to explain why they put you through it and you don't understand why you're going through it and it doesn't make sense. And that's very frustrating. And maybe they have really good reasons; it would just do people a service to articulate those reasons. Be more transparent. Be more transparent. So as part of Empatica, you have sensors. So what kind of problems can we crack? What kind of things from seizures to autism to I think I've heard you mention depression. What kind of things can we alleviate? Can we detect? What's your hope of what, how we can make the world a better place with this wearable tech?
I would really like to see my fellow brilliant researchers step back and say, what are the really hard problems that we don't know how to solve that come from people maybe we don't even see in our normal life because they're living in the poor places. They're stuck on the bus. They can't even afford the Uber or the Lyft or the data plan or all these other wonderful things we have that we keep improving on. Meanwhile, there's all these folks left behind in the world and they're struggling with horrible diseases with depression, with epilepsy, with diabetes, with just awful stuff that maybe a little more time and attention hanging out with them and learning what are their challenges in life? What are their needs? How do we help them have job skills? How do we help them have a hope and a future and a chance to have the great life that so many of us building technology have? And then how would that reshape the kinds of AI that we build? How would that reshape the new apps that we build or the maybe we need to focus on how to make things more low cost and green instead of thousand dollar phones? I mean, come on, why can't we be thinking more about things that do more with less for these folks? Quality of life is not related to the cost of your phone. It's not something that, it's been shown that what about $75,000 of income and happiness is the same, okay? However, I can tell you, you get a lot of happiness from helping other people. You get a lot more than $75,000 buys. So how do we connect up the people who have real needs with the people who have the ability to build the future and build the kind of future that truly improves the lives of all the people that are currently being left behind? So let me return just briefly on a point, maybe in the movie, Her. So do you think if we look farther into the future, you said so much of the benefit from making our technology more empathetic to us human beings would make them better tools, empower us, make our lives better. Well, if we look farther into the future, do you think we'll ever create an AI system that we can fall in love with? That we can fall in love with and loves us back on a level that is similar to human to human interaction, like in the movie Her or beyond? I think we can simulate it in ways that could, you know, sustain engagement for a while. Would it be as good as another person? I don't think so, if you're used to like good people. Now, if you've just grown up with nothing but abuse and you can't stand human beings, can we do something that helps you there that gives you something through a machine? Yeah, but that's pretty low bar, right? If you've only encountered pretty awful people. If you've encountered wonderful, amazing people, we're nowhere near building anything like that. And I would not bet on building it. I would bet instead on building the kinds of AI that helps kind of raise all boats, that helps all people be better people, helps all people figure out if they're getting sick tomorrow and helps give them what they need to stay well tomorrow. That's the kind of AI I wanna build that improves human lives, not the kind of AI that just walks on The Tonight Show and people go, wow, look how smart that is. Really? And then it goes back in a box, you know? So on that point, if we continue looking a little bit into the future, do you think an AI that's empathetic and does improve our lives need to have a physical presence, a body? And even let me cautiously say the C word consciousness and even fear of mortality. 
So some of those human characteristics, do you think it needs to have those aspects or can it remain simply a machine learning tool that learns from data of behavior that learns to make us, based on previous patterns, feel better? Or does it need those elements of consciousness? It depends on your goals. If you're making a movie, it needs a body. It needs a gorgeous body. It needs to act like it has consciousness. It needs to act like it has emotion, right? Because that's what sells. That's what's gonna get me to show up and enjoy the movie. Okay. In real life, does it need all that? Well, if you've read Orson Scott Card, Ender's Game, Speaker of the Dead, it could just be like a little voice in your earring, right? And you could have an intimate relationship and it could get to know you. And it doesn't need to be a robot. But that doesn't make this compelling of a movie, right? I mean, we already think it's kind of weird when a guy looks like he's talking to himself on the train, even though it's earbuds. So we have these, embodied is more powerful. Embodied, when you compare interactions with an embodied robot versus a video of a robot versus no robot, the robot is more engaging. The robot gets our attention more. The robot, when you walk in your house, is more likely to get you to remember to do the things that you asked it to do, because it's kind of got a physical presence. You can avoid it if you don't like it. It could see you're avoiding it. There's a lot of power to being embodied. There will be embodied AIs. They have great power and opportunity and potential. There will also be AIs that aren't embodied, that just are little software assistants that help us with different things that may get to know things about us. Will they be conscious? There will be attempts to program them to make them appear to be conscious. We can already write programs that make it look like, oh, what do you mean? Of course I'm aware that you're there, right? I mean, it's trivial to say stuff like that. It's easy to fool people, but does it actually have conscious experience like we do? Nobody has a clue how to do that yet. That seems to be something that is beyond what any of us knows how to build now. Will it have to have that? I think you can get pretty far with a lot of stuff without it. But will we accord it rights? Well, that's more a political game than it is a question of real consciousness. Yeah, can you go to jail for turning off Alexa is the question for an election maybe a few decades from now. Well, Sophia Robot's already been given rights as a citizen in Saudi Arabia, right? Even before women have full rights. Then the robot was still put back in the box to be shipped to the next place where it would get a paid appearance, right? Yeah, it's dark and almost comedic, if not absurd. So I've heard you speak about your journey in finding faith. Sure. And how you discovered some wisdoms about life and beyond from reading the Bible. And I've also heard you say that, you said scientists too often assume that nothing exists beyond what can be currently measured. Yeah, materialism. Materialism. And scientism, yeah. So in some sense, this assumption enables the near term scientific method, assuming that we can uncover the mysteries of this world by the mechanisms of measurement that we currently have. But we easily forget that we've made this assumption. So what do you think we miss out on by making that assumption? 
It's fine to limit the scientific method to things we can measure and reason about and reproduce. That's fine. I think we have to recognize that sometimes we scientists also believe in things that happen historically. Like I believe the Holocaust happened. I can't prove events from past history scientifically. You prove them with historical evidence, right? With the impact they had on people, with eyewitness testimony and things like that. So a good thinker recognizes that science is one of many ways to get knowledge. It's not the only way. And there's been some really bad philosophy and bad thinking recently, you can call it scientism, where people say science is the only way to get to truth. And it's not, it just isn't. There are other ways that work also. Like knowledge of love with someone. You don't prove your love through science, right? So history, philosophy, love, a lot of other things in life show us that there's more ways to gain knowledge and truth if you're willing to believe there is such a thing, and I believe there is, than science. I do, I am a scientist, however. And in my science, I do limit my science to the things that the scientific method can do. But I recognize that it's myopic to say that that's all there is. Right, there's, just like you listed, there's all the why questions. And really we know, if we're being honest with ourselves, the percent of what we really know is basically zero relative to the full mystery of the... Measure theory, a set of measure zero, if I have a finite amount of knowledge, which I do. So you said that you believe in truth. So let me ask that old question. What do you think this thing is all about? What's the life on earth? Life, the universe, and everything? And everything, what's the meaning? I can't quote Douglas Adams 42. It's my favorite number. By the way, that's my street address. My husband and I guessed the exact same number for our house, we got to pick it. And there's a reason we picked 42, yeah. So is it just 42 or is there, do you have other words that you can put around it? Well, I think there's a grand adventure and I think this life is a part of it. I think there's a lot more to it than meets the eye and the heart and the mind and the soul here. I think we see but through a glass dimly in this life. We see only a part of all there is to know. If people haven't read the Bible, they should, if they consider themselves educated and you could read Proverbs and find tremendous wisdom in there that cannot be scientifically proven. But when you read it, there's something in you, like a musician knows when the instruments played right and it's beautiful. There's something in you that comes alive and knows that there's a truth there that it's like your strings are being plucked by the master instead of by me, right, playing when I pluck it. But probably when you play, it sounds spectacular, right? And when you encounter those truths, there's something in you that sings and knows that there is more than what I can prove mathematically or program a computer to do. Don't get me wrong, the math is gorgeous. The computer programming can be brilliant. It's inspiring, right? We wanna do more. None of this squashes my desire to do science or to get knowledge through science. I'm not dissing the science at all. I grow even more in awe of what the science can do because I'm more in awe of all there is we don't know. 
And really at the heart of science, you have to have a belief that there's truth, that there's something greater to be discovered. And some scientists may not wanna use the faith word, but it's faith that drives us to do science. It's faith that there is truth, that there's something to know that we don't know, that it's worth knowing, that it's worth working hard, and that there is meaning, that there is such a thing as meaning, which by the way, science can't prove either. We have to kind of start with some assumptions that there's things like truth and meaning. And these are really questions philosophers own, right? This is their space, of philosophers and theologians at some level. So these are things science, when people claim that science will tell you all truth, there's a name for that. It's its own kind of faith. It's scientism and it's very myopic. Yeah, there's a much bigger world out there to be explored in ways that science may not, at least for now, allow us to explore. Yeah, and there's meaning and purpose and hope and joy and love and all these awesome things that make it all worthwhile too. I don't think there's a better way to end it, Roz. Thank you so much for talking today. Thanks Lex, what a pleasure. Great questions.
Rosalind Picard: Affective Computing, Emotion, Privacy, and Health | Lex Fridman Podcast #24
The following is a conversation with Jeff Hawkins. He's the founder of the Redwood Center for Theoretical Neuroscience in 2002, and Numenta in 2005. In his 2004 book, titled On Intelligence, and in the research before and after, he and his team have worked to reverse engineer the neocortex, and propose artificial intelligence architectures, approaches, and ideas that are inspired by the human brain. These ideas include Hierarchical Temporal Memory, HTM, from 2004, and new work, the Thousand Brains Theory of Intelligence from 2017, 18, and 19. Jeff's ideas have been an inspiration to many who have looked for progress beyond the current machine learning approaches, but they have also received criticism for lacking a body of empirical evidence supporting the models. This is always a challenge when seeking more than small incremental steps forward in AI. Jeff is a brilliant mind, and many of the ideas he has developed and aggregated from neuroscience are worth understanding and thinking about. There are limits to deep learning, as it is currently defined. Forward progress in AI is shrouded in mystery. My hope is that conversations like this can help provide an inspiring spark for new ideas. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F R I D. And now, here's my conversation with Jeff Hawkins. Are you more interested in understanding the human brain or in creating artificial systems that have many of the same qualities but don't necessarily require that you actually understand the underpinning workings of our mind? So there's a clear answer to that question. My primary interest is understanding the human brain. No question about it. But I also firmly believe that we will not be able to create fully intelligent machines until we understand how the human brain works. So I don't see those as separate problems. I think there's limits to what can be done with machine intelligence if you don't understand the principles by which the brain works. And so I actually believe that studying the brain is actually the fastest way to get to machine intelligence. And within that, let me ask the impossible question, how do you, not define, but at least think about what it means to be intelligent? So I didn't try to answer that question first. We said, let's just talk about how the brain works and let's figure out how certain parts of the brain, mostly the neocortex, but some other parts too. The parts of the brain most associated with intelligence. And let's discover the principles by how they work. Because intelligence isn't just like some mechanism and it's not just some capabilities. It's like, okay, we don't even know where to begin on this stuff. And so now that we've made a lot of progress on this, after we've made a lot of progress on how the neocortex works, and we can talk about that, I now have a very good idea what's gonna be required to make intelligent machines. I can tell you today, some of the things are gonna be necessary, I believe, to create intelligent machines. Well, so we'll get there. We'll get to the neocortex and some of the theories of how the whole thing works. And you're saying, as we understand more and more about the neocortex, about our own human mind, we'll be able to start to more specifically define what it means to be intelligent. It's not useful to really talk about that until. I don't know if it's not useful. Look, there's a long history of AI, as you know.
And there's been different approaches taken to it. And who knows, maybe they're all useful. So the good old fashioned AI, the expert systems, the current convolutional neural networks, they all have their utility. They all have a value in the world. But I would think almost everyone would agree that none of them are really intelligent in a sort of a deep way that humans are. And so it's just the question of how do you get from where those systems were or are today to where a lot of people think we're gonna go. And there's a big, big gap there, a huge gap. And I think the quickest way of bridging that gap is to figure out how the brain does that. And then we can sit back and look and say, oh, which of these principles that the brain works on are necessary and which ones are not? Clearly, we don't have to build this in, and intelligent machines aren't gonna be built out of organic living cells. But there's a lot of stuff that goes on in the brain that's gonna be necessary. So let me ask maybe, before we get into the fun details, let me ask maybe a depressing or a difficult question. Do you think it's possible that we will never be able to understand how our brain works, that maybe there's aspects to the human mind, like we ourselves cannot introspectively get to the core, that there's a wall you eventually hit? Yeah, I don't believe that's the case. I have never believed that's the case. There's not been a single thing humans have ever put their minds to that we've said, oh, we reached the wall, we can't go any further. It's just, people keep saying that. People used to believe that about life. Élan vital, right, there's like, what's the difference between living matter and nonliving matter, something special that we'd never understand. We no longer think that. So there's no historical evidence that suggests this is the case, and I just never even consider that as a possibility. I would also say, today, we understand so much about the neocortex. We've made tremendous progress in the last few years that I no longer think of it as an open question. The answers are very clear to me. The pieces we don't know are clear to me, but the framework is all there, and it's like, oh, okay, we're gonna be able to do this. This is not a problem anymore, just takes time and effort, but there's no mystery, a big mystery anymore. So then let's get into it for people like myself who are not very well versed in the human brain, except my own. Can you describe to me, at the highest level, what are the different parts of the human brain, and then zooming in on the neocortex, the parts of the neocortex, and so on, a quick overview. Yeah, sure. The human brain, we can divide it roughly into two parts. There's the old parts, lots of pieces, and then there's the new part. The new part is the neocortex. It's new because it didn't exist before mammals. Only mammals have a neocortex, and in humans, in primates, it's very large. In the human brain, the neocortex occupies about 70 to 75% of the volume of the brain. It's huge. And the old parts of the brain are, there's lots of pieces there. There's the spinal cord, and there's the brain stem, and the cerebellum, and the different parts of the basal ganglia, and so on. In the old parts of the brain, you have the autonomic regulation, like breathing and heart rate. You have basic behaviors, so like walking and running are controlled by the old parts of the brain.
All the emotional centers of the brain are in the old part of the brain, so when you feel anger or hunger, lust, or things like that, those are all in the old parts of the brain. And we associate with the neocortex all the things we think about as sort of high level perception and cognitive functions, anything from seeing and hearing and touching things to language to mathematics and engineering and science and so on. Those are all associated with the neocortex, and they're certainly correlated. Our abilities in those regards are correlated with the relative size of our neocortex compared to other mammals. So that's like the rough division, and you obviously can't understand the neocortex completely isolated, but you can understand a lot of it with just a few interfaces to the old parts of the brain, and so it gives you a system to study. The other remarkable thing about the neocortex, compared to the old parts of the brain, is the neocortex is extremely uniform. Visibly and anatomically, it's very uniform. I always like to say it's like the size of a dinner napkin, about two and a half millimeters thick, and it looks remarkably the same everywhere. Everywhere you look in that two and a half millimeters is this detailed architecture, and it looks remarkably the same everywhere, and that's across species. A mouse versus a cat and a dog and a human. Where if you look at the old parts of the brain, there's lots of little pieces that do specific things. So it's like the old parts of our brain evolved, like this is the part that controls heart rate, and this is the part that controls this, and this is this kind of thing, and that's this kind of thing, and these evolved for eons a long, long time, and they have their specific functions, and all of a sudden mammals come along, and they got this thing called the neocortex, and it got large by just replicating the same thing over and over and over again. This is like, wow, this is incredible. So all the evidence we have, and this is an idea that was first articulated in a very cogent and beautiful argument by a guy named Vernon Mountcastle in 1978, I think it was, that the neocortex all works on the same principle. So language, hearing, touch, vision, engineering, all these things are basically, underneath, all built on the same computational substrate. They're really all the same problem. So the low level of the building blocks all look similar. Yeah, and they're not even that low level. We're not talking about like neurons. We're talking about this very complex circuit that exists throughout the neocortex. It's remarkably similar. It's like, yes, you see variations of it here and there, more of one cell type, less of another, and so on. But what Mountcastle argued was, he says, you know, if you take a section of neocortex, why is one a visual area and one is an auditory area? Or why is, and his answer was, it's because one is connected to eyes and one is connected to ears. Literally, you mean it's just closest in terms of number of connections to the sensor? Literally, literally, if you took the optic nerve and attached it to a different part of the neocortex, that part would become a visual region. This experiment was actually done by Mriganka Sur in development, I think it was ferrets, I can't remember what it was, some animal. And there's a lot of evidence to this. You know, if you take a blind person, a person who's born blind at birth, they're born with a visual neocortex.
It may not get any input from the eyes because of some congenital defect or something. And that region then does something else. It picks up another task. So, and it's, so it's this very complex thing. It's not like, oh, they're all built on neurons. No, they're all built in this very complex circuit and somehow that circuit underlies everything. And so this is the, it's called the common cortical algorithm, if you will. Some scientists just find it hard to believe and they just say, I can't believe that's true, but the evidence is overwhelming in this case. And so a large part of what it means to figure out how the brain creates intelligence and what is intelligence in the brain is to understand what that circuit does. If you can figure out what that circuit does, as amazing as it is, then you can, then you understand what all these other cognitive functions are. So if you were to sort of put the neocortex, outside of your book On Intelligence, if you wrote a giant tome, a textbook on the neocortex, and you looked at it maybe a couple of centuries from now, how much of what we know now would still be accurate? So how close are we in terms of understanding? I have to speak from my own particular experience here. So I run a small research lab here. It's like any other research lab. I'm sort of the principal investigator. There's actually two of us and there's a bunch of other people. And this is what we do. We study the neocortex and we publish our results and so on. So about three years ago, we had a real breakthrough in this field. Just tremendous breakthrough. We've now published, I think, three papers on it. And so I have a pretty good understanding of all the pieces and what we're missing. I would say that almost all the empirical data we've collected about the brain, which is enormous. If you don't know the neuroscience literature, it's just incredibly big. And it's, for the most part, all correct. It's facts and experimental results and measurements and all kinds of stuff. But none of that has been really assimilated into a theoretical framework. It's data that, in the language of Thomas Kuhn, the historian, would be a sort of pre paradigm science. Lots of data, but no way to fit it together. I think almost all of that's correct. There's just gonna be some mistakes in there. And for the most part, there aren't really good cogent theories about it, how to put it together. It's not like we have two or three competing good theories, which ones are right and which ones are wrong. It's like, nah, people are just scratching their heads. Some people have given up on trying to figure out what the whole thing does. In fact, there's very, very few labs that do what we do, that really focus on theory and all this unassimilated data and trying to explain it. So it's not like we've got it wrong. It's just that we haven't got it at all. So it's really, I would say, pretty early days in terms of understanding the fundamental theories of the way our mind works. I don't think so. I would have said that's true five years ago. So as I said, we had some really big breakthroughs on this recently and we started publishing papers on this. So we'll get to that. But so I don't think it's, I'm an optimist and from where I sit today, most people would disagree with this, but from where I sit today, from what I know, it's not super early days anymore. We are, the way these things go is it's not a linear path, right?
You don't just start accumulating and get better and better and better. No, all this stuff you've collected, none of it makes sense. All these different things are just sort of around. And then you're gonna have some breaking points where all of a sudden, oh my God, now we got it right. That's how it goes in science. And I personally feel like we passed that point, that big thing, a couple of years ago. So we can talk about that. Time will tell if I'm right, but I feel very confident about it. That's why I'm willing to say it on tape like this. At least very optimistic. So let's, before those few years ago, let's take a step back to HTM, the hierarchical temporal memory theory, which you first proposed in On Intelligence and went through a few different generations. Can you describe what it is, how it evolved through the three generations since you first put it on paper? Yeah, so one of the things that neuroscientists just sort of missed for many, many years, and especially people who were thinking about theory, was the nature of time in the brain. Brains process information through time. The information coming into the brain is constantly changing. The patterns from my speech right now, if you were listening to it at normal speed, would be changing on your ears about every 10 milliseconds or so, you'd have a change. This constant flow, when you look at the world, your eyes are moving constantly, three to five times a second, and the input's completely changing. If I were to touch something like a coffee cup, as I move my fingers, the input changes. So this idea that the brain works on time changing patterns is almost completely, or was almost completely missing from a lot of the basic theories, like theories of vision and so on. It's like, oh no, we're gonna put this image in front of you and flash it and say, what is it? Convolutional neural networks work that way today, right? Classify this picture. But that's not what vision is like. Vision is this sort of crazy time based pattern that's going all over the place, and so is touch and so is hearing. So the first part of hierarchical temporal memory was the temporal part. It's to say, you won't understand the brain, nor will you understand intelligent machines unless you're dealing with time based patterns. The second thing was, the memory component of it was, is to say that we aren't just processing input, we learn a model of the world. And the memory stands for that model. The point of the brain, of the neocortex, is that it learns a model of the world. We have to store things, our experiences, in a form that leads to a model of the world. So we can move around the world, we can pick things up and do things and navigate and know what's going on. So that's what the memory referred to. And many people just, they were thinking about like certain processes without memory at all. They're just like processing things. And then finally, the hierarchical component was a reflection of the fact that the neocortex, although it's this uniform sheet of cells, different parts of it project to other parts, which project to other parts. And there is a sort of rough hierarchy in terms of that. So the hierarchical temporal memory is just saying, look, we should be thinking about the brain as time based, memory and model based, and hierarchical processing. And that was a placeholder for a bunch of components that we would then plug into that.
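As a toy illustration of the "temporal" and "memory" pieces just described, the sketch below learns time-based sequences and predicts the next input from its recent context. It is not Numenta's HTM implementation, which uses sparse distributed representations, mini-columns, and dendritic segments; this is just a transition table, written to capture the flavor of learning from time-changing patterns rather than from static snapshots.

```python
# A toy sketch of the "temporal memory" idea: learn time-based patterns and predict
# what comes next. Illustration only; the actual HTM algorithms are very different.
from collections import defaultdict

class ToySequenceMemory:
    def __init__(self, context_length=2):
        self.context_length = context_length                  # how much recent history to condition on
        self.transitions = defaultdict(lambda: defaultdict(int))

    def learn(self, sequence):
        """Store which inputs tend to follow which recent contexts."""
        for i in range(self.context_length, len(sequence)):
            context = tuple(sequence[i - self.context_length:i])
            self.transitions[context][sequence[i]] += 1

    def predict(self, recent_inputs):
        """Predict the most likely next input given the recent context, or None if unseen."""
        context = tuple(recent_inputs[-self.context_length:])
        candidates = self.transitions.get(context)
        if not candidates:
            return None
        return max(candidates, key=candidates.get)

# Example: after hearing A-B-C-D repeatedly, the memory predicts D after seeing B, C.
memory = ToySequenceMemory()
memory.learn(list("ABCDABCDABCD"))
print(memory.predict(list("BC")))   # -> 'D'
```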
We still believe all those things I just said, but we now know so much more that I'm stopping using the term hierarchical temporal memory now because it's insufficient to capture the stuff we know. So again, it's not incorrect, but it's, I now know more and I would rather describe it more accurately. Yeah, so you're basically, we could think of HTM as emphasizing that there's three aspects of intelligence that are important to think about whatever the eventual theory is that it converges to. So in terms of time, how do you think of the nature of time across different time scales? So you mentioned things changing, sensory inputs changing every 10, 20 milliseconds. What about every few minutes, every few months and years? Well, if you think about a neuroscience problem, the brain problem, neurons themselves can stay active for certain periods of time, parts of the brain where they stay active for minutes. You could hold a certain perception or an activity for a certain period of time, but most of them don't last that long. And so if you think about your thoughts are the activity of neurons, if you're gonna wanna invoke something that happened a long time ago, even just this morning, for example, the neurons haven't been active throughout that time. So you have to store that. So if I ask you, what did you have for breakfast today? That is memory, that is, you've built it into your model of the world now, you remember that. And that memory is in the synapses, is basically in the formation of synapses. And so you're sliding into, you know, the different timescales. There's the timescale at which we're, like, understanding my language and moving about and seeing things rapidly, and over time, that's the timescale of the activities of neurons. But if you wanna get at longer timescales, then it's more memory. And we have to invoke those memories to say, oh yes, well now I can remember what I had for breakfast because I stored that someplace. I may forget it tomorrow, but I'd store it for now. So does memory also need to have, so the hierarchical aspect of reality is not just about concepts, it's also about time? Do you think of it that way? Yeah, time is infused in everything. It's like you really can't separate it out. If I ask you, what is your, you know, how's the brain learn a model of this coffee cup here? I have a coffee cup and I'm looking at the coffee cup. I say, well, time is not an inherent property of the model I have of this cup, whether it's a visual model or a tactile model. I can sense it through time, but the model itself doesn't really have much time. If I asked you, if I said, well, what is the model of my cell phone? My brain has learned a model of the cell phone. So if you have a smartphone like this, and I said, well, this has time aspects to it. I have expectations when I turn it on, what's gonna happen, or how long it's gonna take to do certain things, if I bring up an app, what sequences there are. And so I have, it's like melodies in the world, you know? Melody has a sense of time. So many things in the world move and act, and there's a sense of time related to them. Some don't, but most things do actually. So it's sort of infused throughout the models of the world. You build a model of the world, you're learning the structure of the objects in the world, and you're also learning how those things change through time. Okay, so it really is just a fourth dimension that's infused deeply, and you have to make sure that your models of intelligence incorporate it.
So, like you mentioned, the state of neuroscience is deeply empirical, a lot of data collection. It's, you know, that's where it is. You mentioned Thomas Kuhn, right? Yeah. And then you're proposing a theory of intelligence, and which is really the next step, the really important step to take, but why is HTM, or what we'll talk about soon, the right theory? So is it more in the, is it backed by intuition? Is it backed by evidence? Is it backed by a mixture of both? Is it kind of closer to where string theory is in physics, where there's mathematical components which show that, you know what, it seems that this, it fits together too well for it not to be true, which is where string theory is. Is that where you kind of see it? It's a mixture of all those things, although definitely where we are right now is definitely much more on the empirical side than, let's say, string theory. The way this goes about, we're theorists, right? So we look at all this data, and we're trying to come up with some sort of model that explains it, basically, and, unlike string theory, there's vastly more empirical data here, I think, than most physicists deal with. And so our challenge is to sort through that and figure out what kind of constructs would explain this. And when we have an idea, you come up with a theory of some sort, you have lots of ways of testing it. First of all, there are 100 years of unassimilated empirical data from neuroscience. So we go back and read papers, and we say, oh, did someone find this already? We can predict X, Y, and Z, and maybe no one's even talked about it since 1972 or something, but we go back and find that, and we say, oh, either it can support the theory or it can invalidate the theory. And we say, okay, we have to start over again, or, oh, it's supportive, let's keep going with that one. So the way I kind of view it, when we do our work, we look at all this empirical data, and what I call it is a set of constraints. We're not interested in something that's biologically inspired. We're trying to figure out how the actual brain works. So every piece of empirical data is a constraint on a theory. In theory, if you have the correct theory, it needs to explain every one of them, right? So we have this huge number of constraints on the problem, which initially makes it very, very difficult. If you don't have many constraints, you can make up stuff all day long. You can say, oh, here's an answer on how you can do this, you can do that, you can do this. But if you consider all biology as a set of constraints, all neuroscience as a set of constraints, and even if you're working in one little part of the neocortex, for example, there are hundreds and hundreds of constraints. These are empirical constraints, and it's very, very difficult initially to come up with a theoretical framework for that. But when you do, and it solves all those constraints at once, you have a high confidence that you got something close to correct. It's just mathematically almost impossible not to be. So that's the curse and the advantage of what we have. The curse is we have to solve, we have to meet all these constraints, which is really hard. But when you do meet them, then you have a great confidence that you've discovered something. In addition, then we work with scientific labs. So we'll say, oh, there's something we can't find, we can predict something, but we can't find it anywhere in the literature.
So we will then, we have people we've collaborated with, we'll say, sometimes they'll say, you know what? I have some collected data, which I didn't publish, but we can go back and look at it and see if we can find that, which is much easier than designing a new experiment. You know, neuroscience experiments take a long time, years. So, although some people are doing that now too. So, but between all of these things, I think it's a reasonable, actually a very, very good approach. We are blessed with the fact that we can test our theories out the yin yang here because there's so much unassimilar data and we can also falsify our theories very easily, which we do often. So it's kind of reminiscent to whenever that was with Copernicus, you know, when you figure out that the sun's at the center of the solar system as opposed to earth, the pieces just fall into place. Yeah, I think that's the general nature of aha moments is, and it's Copernicus, it could be, you could say the same thing about Darwin, you could say the same thing about, you know, about the double helix, that people have been working on a problem for so long and have all this data and they can't make sense of it, they can't make sense of it. But when the answer comes to you and everything falls into place, it's like, oh my gosh, that's it. That's got to be right. I asked both Jim Watson and Francis Crick about this. I asked them, you know, when you were working on trying to discover the structure of the double helix, and when you came up with the sort of the structure that ended up being correct, but it was sort of a guess, you know, it wasn't really verified yet. I said, did you know that it was right? And they both said, absolutely. So we absolutely knew it was right. And it doesn't matter if other people didn't believe it or not, we knew it was right. They'd get around to thinking it and agree with it eventually anyway. And that's the kind of thing you hear a lot with scientists who really are studying a difficult problem. And I feel that way too about our work. Have you talked to Crick or Watson about the problem you're trying to solve, the, of finding the DNA of the brain? Yeah, in fact, Francis Crick was very interested in this in the latter part of his life. And in fact, I got interested in brains by reading an essay he wrote in 1979 called Thinking About the Brain. And that was when I decided I'm gonna leave my profession of computers and engineering and become a neuroscientist. Just reading that one essay from Francis Crick. I got to meet him later in life. I spoke at the Salk Institute and he was in the audience. And then I had a tea with him afterwards. He was interested in a different problem. He was focused on consciousness. The easy problem, right? Well, I think it's the red herring. And so we weren't really overlapping a lot there. Jim Watson, who's still alive, is also interested in this problem. And he was, when he was director of the Cold Spring Harbor Laboratories, he was really sort of behind moving in the direction of neuroscience there. And so he had a personal interest in this field. And I have met with him numerous times. And in fact, the last time was a little bit over a year ago, I gave a talk at Cold Spring Harbor Labs about the progress we were making in our work. And it was a lot of fun because he said, well, you wouldn't be coming here unless you had something important to say. So I'm gonna go attend your talk. So he sat in the very front row. Next to him was the director of the lab, Bruce Stillman. 
So these guys are in the front row of this auditorium. Nobody else in the auditorium wants to sit in the front row because there's Jim Watson and there's the director. And I gave a talk and then I had dinner with him afterwards. But there's a great picture taken by my colleague Subutai Ahmad where I'm up there sort of like screaming the basics of this new framework we have. And Jim Watson's on the edge of his chair. He's literally on the edge of his chair, like intently staring up at the screen. And when he discovered the structure of DNA, the first public talk he gave was at Cold Spring Harbor Labs. And there's a picture, there's a famous picture of Jim Watson standing at the whiteboard, pointing at the double helix with his pointer. And it actually looks a lot like the picture of me. So it was sort of funny, there I am talking about the brain and there's Jim Watson staring intently at it. And of course, whatever it was, 60 years earlier, he was standing pointing at the double helix. That's one of the great discoveries in all of, whatever, biology, science, all science, and DNA. So it's funny that there's echoes of that in your presentation. Do you think, in terms of evolutionary timeline and history, the development of the neocortex was a big leap? Or is it just a small step? So like, if we ran the whole thing over again, from the birth of life on Earth, how likely would we develop the mechanism of the neocortex? Okay, well those are two separate questions. One is, was it a big leap? And one was how likely it is, okay? They're not necessarily related. Maybe correlated. Maybe correlated, maybe not. And we don't really have enough data to make a judgment about that. I would say definitely it was a big leap. And I can tell you why. I don't think it was just another incremental step. I'll get to that in a moment. I don't really have any idea how likely it is. If we look at evolution, we have one data point, which is Earth, right? Life formed on Earth billions of years ago, whether it was introduced here or it was created here, or someone introduced it, we don't really know, but it was here early. It took a long, long time to get to multicellular life. And then for multicellular life, it took a long, long time to get the neocortex. And we've only had the neocortex for a few hundred thousand years. So that's like nothing, okay? So is it likely? Well, it certainly isn't something that happened right away on Earth. And there were multiple steps to get there. So I would say it's probably not gonna be something that would happen instantaneously on other planets that might have life. It might take several billion years on average. Is it likely? I don't know, but you'd have to survive for several billion years to find out. Probably. Is it a big leap? Yeah, I think it is a qualitative difference from all other evolutionary steps. I can try to describe that if you'd like. Sure, in which way? Yeah, I can tell you how. Pretty much, let's start with a little preface. Many of the things that humans are able to do do not have obvious survival advantages. We create music, is there really a survival advantage to that? Maybe, maybe not. What about mathematics? Is there a real survival advantage to mathematics? Well, you could stretch it. You can try to figure these things out, right? But most of evolutionary history, everything had immediate survival advantages to it.
So, I'll tell you a story, which I like, may or may not be true, but the story goes as follows. Organisms have been evolving for, since the beginning of life here on Earth, and adding this sort of complexity onto that, and this sort of complexity onto that, and the brain itself is evolved this way. In fact, there's old parts, and older parts, and older, older parts of the brain that kind of just keeps calming on new things, and we keep adding capabilities. When we got to the neocortex, initially it had a very clear survival advantage in that it produced better vision, and better hearing, and better touch, and maybe, and so on. But what I think happens is that evolution discovered, it took a mechanism, and this is in our recent theories, but it took a mechanism evolved a long time ago for navigating in the world, for knowing where you are. These are the so called grid cells and place cells of an old part of the brain. And it took that mechanism for building maps of the world, and knowing where you are on those maps, and how to navigate those maps, and turns it into a sort of a slimmed down, idealized version of it. And that idealized version could now apply to building maps of other things. Maps of coffee cups, and maps of phones, maps of mathematics. Concepts almost. Concepts, yes, and not just almost, exactly. And so, and it just started replicating this stuff, right? You just think more, and more, and more. So we went from being sort of dedicated purpose neural hardware to solve certain problems that are important to survival, to a general purpose neural hardware that could be applied to all problems. And now it's escaped the orbit of survival. We are now able to apply it to things which we find enjoyment, but aren't really clearly survival characteristics. And that it seems to only have happened in humans, to the large extent. And so that's what's going on, where we sort of have, we've sort of escaped the gravity of evolutionary pressure, in some sense, in the neocortex. And it now does things which are not, that are really interesting, discovering models of the universe, which may not really help us. Does it matter? How does it help us surviving, knowing that there might be multiverses, or that there might be the age of the universe, or how do various stellar things occur? It doesn't really help us survive at all. But we enjoy it, and that's what happened. Or at least not in the obvious way, perhaps. It is required, if you look at the entire universe in an evolutionary way, it's required for us to do interplanetary travel, and therefore survive past our own sun. But you know, let's not get too. Yeah, but evolution works at one time frame, it's survival, if you think of survival of the phenotype, survival of the individual. What you're talking about there is spans well beyond that. So there's no genetic, I'm not transferring any genetic traits to my children that are gonna help them survive better on Mars. Totally different mechanism, that's right. So let's get into the new, as you've mentioned, this idea of the, I don't know if you have a nice name, thousand. We call it the thousand brain theory of intelligence. I like it. Can you talk about this idea of a spatial view of concepts and so on? Yeah, so can I just describe sort of the, there's an underlying core discovery, which then everything comes from that. That's a very simple, this is really what happened. 
We were deep into problems about understanding how we build models of stuff in the world and how we make predictions about things. And I was holding a coffee cup just like this in my hand. And my finger was touching the side, my index finger. And then I moved it to the top and I was gonna feel the rim at the top of the cup. And I asked myself a very simple question. I said, well, first of all, I say, I know that my brain predicts what it's gonna feel before it touches it. You can just think about it and imagine it. And so we know that the brain's making predictions all the time. So the question is, what does it take to predict that? And there's a very interesting answer. First of all, it says the brain has to know it's touching a coffee cup. It has to have a model of a coffee cup. It needs to know where the finger currently is on the cup relative to the cup. Because when I make a movement, it needs to know where it's going to be on the cup after the movement is completed relative to the cup. And then it can make a prediction about what it's gonna sense. So this told me that the neocortex, which is making this prediction, needs to know that it's sensing it's touching a cup. And it needs to know the location of my finger relative to that cup in a reference frame of the cup. It doesn't matter where the cup is relative to my body. It doesn't matter its orientation. None of that matters. It's where my finger is relative to the cup, which tells me then that the neocortex has a reference frame that's anchored to the cup. Because otherwise I wouldn't be able to say the location and I wouldn't be able to predict my new location. And then we quickly, very instantly can say, well, every part of my skin could touch this cup. And therefore every part of my skin is making predictions and every part of my skin must have a reference frame that it's using to make predictions. So the big idea is that throughout the neocortex, there are, everything is being stored and referenced in reference frames. You can think of them like XYZ reference frames, but they're not like that. We know a lot about the neural mechanisms for this, but the brain thinks in reference frames. And as an engineer, if you're an engineer, this is not surprising. You'd say, if I wanted to build a CAD model of the coffee cup, well, I would bring it up and some CAD software, and I would assign some reference frame and say this features at this locations and so on. But the fact that this, the idea that this is occurring throughout the neocortex everywhere, it was a novel idea. And then a zillion things fell into place after that, a zillion. So now we think about the neocortex as processing information quite differently than we used to do it. We used to think about the neocortex as processing sensory data and extracting features from that sensory data and then extracting features from the features, very much like a deep learning network does today. But that's not how the brain works at all. The brain works by assigning everything, every input, everything to reference frames. And there are thousands, hundreds of thousands of them active at once in your neocortex. It's a surprising thing to think about, but once you sort of internalize this, you understand that it explains almost every, almost all the mysteries we've had about this structure. So one of the consequences of that is that every small part of the neocortex, say a millimeter square, and there's 150,000 of those. So it's about 150,000 square millimeters. 
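A minimal sketch of that reference-frame idea: store features at locations in a coordinate frame anchored to the object, so that predicting the next sensation only needs the object model, the current location on the object, and the intended movement. The coordinates, feature names, and dictionary representation below are all illustrative assumptions, not the neural mechanism the theory actually proposes (which it ties to grid-cell-like codes).

```python
# A minimal sketch of the reference-frame idea: features are stored at locations in a
# frame anchored to the object (not to the body), so a prediction only requires the
# object model, the current location on the object, and the movement. Made-up
# coordinates and features, for illustration only.

# An object model: object-centered (x, y, z) locations -> the feature sensed there.
coffee_cup = {
    (0.0, 0.0, 0.0): "smooth side",
    (0.0, 0.0, 8.0): "rounded rim",
    (4.0, 0.0, 4.0): "handle edge",
}

def predict_sensation(object_model, current_location, movement):
    """Predict what the finger will feel after a movement, all in the object's frame."""
    new_location = tuple(c + m for c, m in zip(current_location, movement))
    return new_location, object_model.get(new_location, "unknown - need to learn this spot")

# Finger is on the side of the cup and moves up toward the top: the model predicts the rim.
location, predicted = predict_sensation(coffee_cup, (0.0, 0.0, 0.0), (0.0, 0.0, 8.0))
print(location, predicted)   # (0.0, 0.0, 8.0) 'rounded rim'
```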
If you take every little square millimeter of the cortex, it's got some input coming into it and it's gonna have reference frames where it's assigned that input to. And each square millimeter can learn complete models of objects. So what do I mean by that? If I'm touching the coffee cup, well, if I just touch it in one place, I can't learn what this coffee cup is because I'm just feeling one part. But if I move it around the cup and touch it at different areas, I can build up a complete model of the cup because I'm now filling in that three dimensional map, which is the coffee cup. I can say, oh, what am I feeling at all these different locations? That's the basic idea, it's more complicated than that. But so through time, and we talked about time earlier, through time, even a single column, which is only looking at, or a single part of the cortex, which is only looking at a small part of the world, can build up a complete model of an object. And so if you think about the part of the brain, which is getting input from all my fingers, so they're spread across the top of your head here. This is the somatosensory cortex. There's columns associated with all the different areas of my skin. And what we believe is happening is that all of them are building models of this cup, every one of them, or things. They're not all building, not every column or every part of the cortex builds models of everything, but they're all building models of something. And so you have, so when I touch this cup with my hand, there are multiple models of the cup being invoked. If I look at it with my eyes, there are, again, many models of the cup being invoked, because each part of the visual system, the brain doesn't process an image. That's a misleading idea. It's just like your fingers touching the cup, so different parts of my retina are looking at different parts of the cup. And thousands and thousands of models of the cup are being invoked at once. And they're all voting with each other, trying to figure out what's going on. So that's why we call it the thousand brains theory of intelligence, because there isn't one model of a cup. There are thousands of models of this cup. There are thousands of models of your cellphone and about cameras and microphones and so on. It's a distributed modeling system, which is very different than the way people have thought about it. And so that's a really compelling and interesting idea. I have two first questions. So one, on the ensemble part of everything coming together, you have these thousand brains. How do you know which one has done the best job of forming the... Great question. Let me try to explain it. There's a problem that's known in neuroscience called the sensor fusion problem. Yes. And so the idea is there's something like, oh, the image comes from the eye. There's a picture on the retina and then it gets projected to the neocortex. Oh, by now it's all spread out all over the place and it's kind of squirrely and distorted and pieces are all over the... It doesn't look like a picture anymore. When does it all come back together again? Or you might say, well, yes, but I also have sounds or touches associated with the cup. So I'm seeing the cup and touching the cup. How do they get combined together again? So it's called the sensor fusion problem. As if all these disparate parts have to be brought together into one model someplace. That's the wrong idea. The right idea is that you've got all these guys voting. There's auditory models of the cup. 
There's visual models of the cup. There's tactile models of the cup. In the vision system, there might be ones that are more focused on black and white and ones focusing on color. It doesn't really matter. There's just thousands and thousands of models of this cup. And they vote. They don't actually come together in one spot. Just literally think of it this way. Imagine you have these columns that are like about the size of a little piece of spaghetti. Like a two and a half millimeters tall and about a millimeter in wide. They're not physical, but you could think of them that way. And each one's trying to guess what this thing is or touching. Now, they can do a pretty good job if they're allowed to move over time. So I can reach my hand into a black box and move my finger around an object. And if I touch enough spaces, I go, okay, now I know what it is. But often we don't do that. Often I can just reach and grab something with my hand all at once and I get it. Or if I had to look through the world through a straw, so I'm only invoking one little column, I can only see part of something because I have to move the straw around. But if I open my eyes, I see the whole thing at once. So what we think is going on is all these little pieces of spaghetti, if you will, all these little columns in the cortex, are all trying to guess what it is that they're sensing. They'll do a better guess if they have time and can move over time. So if I move my eyes, I move my fingers. But if they don't, they have a poor guess. It's a probabilistic guess of what they might be touching. Now, imagine they can post their probability at the top of a little piece of spaghetti. Each one of them says, I think, and it's not really a probability distribution. It's more like a set of possibilities. In the brain, it doesn't work as a probability distribution. It works as more like what we call a union. So you could say, and one column says, I think it could be a coffee cup, a soda can, or a water bottle. And another column says, I think it could be a coffee cup or a telephone or a camera or whatever, right? And all these guys are saying what they think it might be. And there's these long range connections in certain layers in the cortex. So there's in some layers in some cells types in each column, send the projections across the brain. And that's the voting occurs. And so there's a simple associative memory mechanism. We've described this in a recent paper and we've modeled this that says, they can all quickly settle on the only or the one best answer for all of them. If there is a single best answer, they all vote and say, yep, it's gotta be the coffee cup. And at that point, they all know it's a coffee cup. And at that point, everyone acts as if it's a coffee cup. They're like, yep, we know it's a coffee, even though I've only seen one little piece of this world, I know it's a coffee cup I'm touching or I'm seeing or whatever. And so you can think of all these columns are looking at different parts in different places, different sensory input, different locations, they're all different. But this layer that's doing the voting, it solidifies. It's just like it crystallizes and says, oh, we all know what we're doing. And so you don't bring these models together in one model, you just vote and there's a crystallization of the vote. Great, that's at least a compelling way to think about the way you form a model of the world. Now, you talk about a coffee cup. 
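A minimal sketch of the voting idea described above, with each column posting a union of candidate objects and the long range connections reduced, purely for illustration, to a plain set intersection. The object names are made up, and real columns settle on an answer through an associative memory mechanism rather than a literal intersection.

```python
# Toy sketch of thousand-brains-style voting: each column posts a set of
# candidate objects consistent with its own partial input, and the vote
# keeps whatever candidate every column still considers possible.
columns = [
    {"coffee cup", "soda can", "water bottle"},   # a touch column on the rim
    {"coffee cup", "telephone", "camera"},        # a touch column on a flat face
    {"coffee cup", "soda can"},                   # a vision column seeing a cylinder
]

def vote(candidate_sets):
    """Crude stand-in for the long-range voting layer."""
    return set.intersection(*candidate_sets)

print(vote(columns))   # {'coffee cup'}: every column now acts as if it is the cup
```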
Do you see this, as far as I understand, you're proposing this as well, that this extends to much more than coffee cups? Yeah. It does. Or at least beyond the physical world, it extends to the world of concepts. Yeah, it does. And the primary evidence for that is that the regions of the neocortex that are associated with language or high level thought or mathematics or things like that look like the regions of the neocortex that process vision, hearing, and touch. They don't look any different, or they look only marginally different. And so one would say, well, if Vernon Mountcastle, who proposed that all the parts of the neocortex do the same thing, if he's right, then the parts that are doing language or mathematics or physics are working on the same principle. They must be working on the principle of reference frames. So that's a little odd thought. But of course, we had no prior idea how these things happened, so let's go with that. And in our recent paper, we talked a little bit about that. I've been working on it more since. I have better ideas about it now. I'm sitting here very confident that that's what's happening, and I can give you some examples that help you think about that. It's not that we understand it completely, but I understand it better than I've described it in any paper so far. But we did put that idea out there. It says, okay, this is a good place to start, you know? And the evidence would suggest it's how it's happening. And then we can start tackling that problem one piece at a time. Like, what does it mean to do high level thought? What does it mean to do language? How would that fit into a reference frame framework? Yeah, so there's a, I don't know if you can tell me if there's a connection, but there's an app called Anki that helps you remember different concepts. And they talk about a memory palace that helps you remember completely random concepts by trying to put them in a physical space in your mind and putting them next to each other. It's called the method of loci. Loci, yeah. For some reason, that seems to work really well. Now, that's a very narrow kind of application of just remembering some facts. But that's a very, very telling one. Yes, exactly. So this seems like you're describing a mechanism for why that works. So basically what we think is going on is that all the things you know, all concepts, all ideas, words, everything you know, are stored in reference frames. And so if you want to remember something, you have to basically navigate through a reference frame, the same way a rat navigates through a maze and the same way my finger navigates to this coffee cup. You are moving through some space. And so if you have a random list of things you were asked to remember, you assign them to a reference frame you already know very well, say your house, right? And the idea of the method of loci is you can say, okay, in my lobby, I'm going to put this thing. And then in the bedroom, I put this one. I go down the hall, I put this thing. And when you want to recall those facts, or recall those things, you just walk mentally, you walk through your house. You're mentally moving through a reference frame that you already had. And that tells you, there's two things that are really important about that. It tells us the brain prefers to store things in reference frames. And that the method of recalling things, or thinking, if you will, is to move mentally through those reference frames.
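A small sketch of the method of loci as just described, assuming a route through a house you already know and an arbitrary list of items to remember; both are invented here. Storing is pinning items to locations in that familiar reference frame, and recall is walking the route again.

```python
# Toy sketch of the method of loci: pin arbitrary items to locations in a
# reference frame you already know well, then recall by walking that route.
house_route = ["lobby", "bedroom", "hallway", "kitchen"]   # a route you know

def memorize(items):
    """Assign each item to the next location along the familiar route."""
    return dict(zip(house_route, items))

def recall(palace):
    """Mentally walk the same route and read the items back off."""
    return [palace[room] for room in house_route if room in palace]

palace = memorize(["passport", "dentist appointment", "birthday gift"])
print(recall(palace))   # items come back in route order
```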
You could move physically through some reference frames, like I could physically move through the reference frame of this coffee cup. I can also mentally move through the reference frame of the coffee cup, imagining me touching it. But I can also mentally move my house. And so now we can ask yourself, or are all concepts stored this way? There was some recent research using human subjects in fMRI, and I'm going to apologize for not knowing the name of the scientists who did this. But what they did is they put humans in this fMRI machine, which is one of these imaging machines. And they gave the humans tasks to think about birds. So they had different types of birds, and birds that look big and small, and long necks and long legs, things like that. And what they could tell from the fMRI was a very clever experiment. You get to tell when humans were thinking about the birds, that the birds, the knowledge of birds was arranged in a reference frame, similar to the ones that are used when you navigate in a room. That these are called grid cells, and there are grid cell like patterns of activity in the neocortex when they do this. So it's a very clever experiment. And what it basically says, that even when you're thinking about something abstract, and you're not really thinking about it as a reference frame, it tells us the brain is actually using a reference frame. And it's using the same neural mechanisms. These grid cells are the basic same neural mechanism that we propose that grid cells, which exist in the old part of the brain, the entorhinal cortex, that that mechanism is now similar mechanism is used throughout the neocortex. It's the same nature to preserve this interesting way of creating reference frames. And so now they have empirical evidence that when you think about concepts like birds, that you're using reference frames that are built on grid cells. So that's similar to the method of loci, but in this case, the birds are related. So they create their own reference frame, which is consistent with bird space. And when you think about something, you go through that. You can make the same example, let's take mathematics. Let's say you wanna prove a conjecture. What is a conjecture? A conjecture is a statement you believe to be true, but you haven't proven it. And so it might be an equation. I wanna show that this is equal to that. And you have some places you start with. You say, well, I know this is true, and I know this is true. And I think that maybe to get to the final proof, I need to go through some intermediate results. What I believe is happening is literally these equations or these points are assigned to a reference frame, a mathematical reference frame. And when you do mathematical operations, a simple one might be multiply or divide, but you might be a little plus transform or something else. That is like a movement in the reference frame of the math. And so you're literally trying to discover a path from one location to another location in a space of mathematics. And if you can get to these intermediate results, then you know your map is pretty good, and you know you're using the right operations. Much of what we think about is solving hard problems is designing the correct reference frame for that problem, figuring out how to organize the information and what behaviors I wanna use in that space to get me there. Yeah, so if you dig in an idea of this reference frame, whether it's the math, you start a set of axioms to try to get to proving the conjecture. 
Can you try to describe, maybe take a step back, how you think of the reference frame in that context? Is it the reference frame that the axioms are happy in? Is it the reference frame that might contain everything? Is it a changing thing as you go? You have many, many reference frames. I mean, in fact, the thousand brains theory of intelligence says that every single thing in the world has its own reference frame. Every word has its own reference frames. And we can talk about this. The mathematics works out, this is no problem for neurons to do. But how many reference frames does a coffee cup have? Well, it's on a table. Let's say you ask how many reference frames a column in my finger that's touching the coffee cup could have. Because there are many, many models of the coffee cup. There is no one model of a coffee cup, there are many models of a coffee cup. And you could say, well, how many different things can my finger learn? Is that the question you want to ask? Imagine that every concept, every idea, everything you've ever known about, you can say, I know that thing has a reference frame associated with it. And what we do when we build composite objects is we assign reference frames to points in another reference frame. So my coffee cup has multiple components to it. It's got a lip, it's got a cylinder, it's got a handle. And those things have their own reference frames, and they're assigned to a master reference frame, which is this cup. And now I have this Numenta logo on it. Well, that's something that exists elsewhere in the world. It's its own thing, so it has its own reference frame. So we now have to say, how can I assign the Numenta logo's reference frame onto the cylinder, or onto the coffee cup? We talked about this in the paper that came out in December of this last year, the idea of how you can assign reference frames to reference frames, and how neurons could do this. So, well, my question is, even though you mention reference frames a lot, I almost feel it's really useful to dig into how you think of what a reference frame is. I mean, it was already helpful for me to understand that you think of reference frames as something there is a lot of. Okay, so let's just say that we're gonna have some neurons in the brain, not many actually, 10,000, 20,000, that are gonna create a whole bunch of reference frames. What does that mean? What is a reference frame? First of all, these reference frames are different than the ones you might be used to. We know lots of reference frames. For example, we know Cartesian coordinates, X, Y, Z, that's a type of reference frame. We know longitude and latitude, that's a different type of reference frame. If I look at a printed map, you might have columns A through M and rows one through 20, that's a different type of reference frame. It's kind of a Cartesian coordinate reference frame. The interesting thing about the reference frames in the brain, and we know this because they've been established through neuroscience studying the entorhinal cortex, so I'm not speculating here, okay, this is known neuroscience in an old part of the brain, is that the way these cells create reference frames, they have no origin. It's more like, you have a point, a point in some space, and given a particular movement, you can tell what the next point should be. And you can then tell what the next point after that would be, and so on.
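Here is a toy sketch of that last point, an origin free reference frame where all you can do is take a point and a movement and get the next point. It loosely mimics grid cell style modules with made up periods; it is only an illustration of the idea, not a model of the actual cells.

```python
# Toy sketch of an origin-free, grid-cell-like location code: the location
# is a phase within each of several periodic 2D modules, and a movement
# updates every module by the same displacement. Periods are made up.
MODULE_PERIODS = [3.0, 5.0, 7.0]

def start_anywhere():
    # There is no origin; we just call the current phases "here".
    return [(0.0, 0.0) for _ in MODULE_PERIODS]

def move(location, dx, dy):
    """Given the current point and a movement, produce the next point."""
    return [((x + dx) % p, (y + dy) % p)
            for (x, y), p in zip(location, MODULE_PERIODS)]

here = start_anywhere()
there = move(here, 1.0, 2.0)      # one step
back = move(there, -1.0, -2.0)    # reversing the movement returns the phases
print(here == back)               # True
```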
You can use this to calculate how to get from one point to another. So how do I get from my house to some other place, or how do I get my finger from the side of my cup to the top of the cup? How do I get from the axioms to the conjecture? So it's a different type of reference frame, and if you want, I can describe it in more detail, I can paint a picture of how you might want to think about that. It's really helpful to think of it as something you can move through, but is it helpful to think of it as spatial in some sense, or is there something more? No, it's definitely spatial. It's spatial in a mathematical sense. How many dimensions? Can it be a crazy number of dimensions? Well, that's an interesting question. In the old part of the brain, the entorhinal cortex, they studied rats, and initially it looks like, oh, this is just two dimensional. The rat is in some box or in a maze or whatever, and they know where the rat is using these two dimensional reference frames to know where it is in the maze. We said, well, okay, but what about bats? That's a mammal, and they fly in three dimensional space. How do they do that? They seem to know where they are, right? So this is a current area of active research, and it seems like somehow the neurons in the entorhinal cortex can learn three dimensional space. Two members of our team, along with Ila Fiete from MIT, just released a paper literally last week. It's on bioRxiv, and they show that, and I won't get into the details unless you want me to, grid cells can represent any n dimensional space. It's not inherently limited. You can think of it this way. The way these things work is that you have a bunch of two dimensional slices, a whole bunch of two dimensional models, and you can slice up any n dimensional space with two dimensional projections. And you could have one dimensional models too. So there's nothing inherent in the mathematics of the way the neurons do this that constrains the dimensionality of the space, which I think is important. So obviously I have a three dimensional map of this cup. Maybe it's even more than that, I don't know, but it's clearly a three dimensional map of the cup. I don't just have a projection of the cup. But when I think about birds, or when I think about mathematics, perhaps it's more than three dimensions. Who knows? So in terms of each individual column building up more and more information over time, do you think that mechanism is well understood? In your mind, you've proposed a lot of architectures there. Is that a key piece, or is the big piece the thousand brains theory of intelligence, the ensemble of it all? Well, I think they're both big. I mean, clearly the concept, as a theorist, the concept is most exciting, right? The high level concept. The high level concept. This is a totally new way of thinking about how the neocortex works. So that is appealing. It has all these ramifications, and with that as a framework for how the brain works, you can make all kinds of predictions and solve all kinds of problems. Now we're trying to work through many of these details right now. Okay, how do the neurons actually do this? Well, it turns out, if you think about grid cells and place cells in the old parts of the brain, there's a lot that's known about them, but there are still some mysteries.
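A rough sketch of the two dimensional slicing idea mentioned above, loosely inspired by the result described; the actual model in that paper is different, and the dimensions, random projections, and numbers below are assumptions for illustration only.

```python
import numpy as np

# Toy sketch: track a point in a higher dimensional "concept space" using a
# handful of 2D slices. Each slice is a random 2D projection, and a movement
# in n dimensions updates every slice by its projected displacement.
rng = np.random.default_rng(0)
n_dims, n_slices = 6, 5
slices = [rng.standard_normal((2, n_dims)) for _ in range(n_slices)]

def project(point):
    """What each 2D module 'sees' of the n-dimensional location."""
    return [P @ point for P in slices]

def move(codes, displacement):
    """Update every 2D code directly from the n-dimensional movement."""
    return [c + P @ displacement for c, P in zip(codes, slices)]

x = rng.standard_normal(n_dims)
d = rng.standard_normal(n_dims)
# Path-integrating the slices agrees with projecting the true new location.
print(np.allclose(np.array(move(project(x), d)), np.array(project(x + d))))  # True
```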
There's a lot of debate about exactly the details, how these work and what are the signs. And we have that still, that same level of detail, that same level of concern. What we spend here most of our time doing is trying to make a very good list of the things we don't understand yet. That's the key part here. What are the constraints? It's not like, oh, this thing seems to work, we're done. No, it's like, okay, it kind of works, but these are other things we know it has to do and it's not doing those yet. I would say we're well on the way here. We're not done yet. There's a lot of trickiness to this system, but the basic principles about how different layers in the neocortex are doing much of this, we understand. But there's some fundamental parts that we don't understand as well. So what would you say is one of the harder open problems or one of the ones that have been bothering you, keeping you up at night the most? Oh, well, right now, this is a detailed thing that wouldn't apply to most people, okay? Sure. But you want me to answer that question? Yeah, please. We've talked about as if, oh, to predict what you're going to sense on this coffee cup, I need to know where my finger is gonna be on the coffee cup. That is true, but it's insufficient. Think about my finger touches the edge of the coffee cup. My finger can touch it at different orientations. I can rotate my finger around here and that doesn't change. I can make that prediction and somehow, so it's not just the location. There's an orientation component of this as well. This is known in the old parts of the brain too. There's things called head direction cells, which way the rat is facing. It's the same kind of basic idea. So if my finger were a rat, you know, in three dimensions, I have a three dimensional orientation and I have a three dimensional location. If I was a rat, I would have a, you might think of it as a two dimensional location, a two dimensional orientation, a one dimensional orientation, like just which way is it facing? So how the two components work together, how it is that I combine orientation, the orientation of my sensor, as well as the location is a tricky problem. And I think I've made progress on it. So at a bigger version of that, so perspective is super interesting, but super specific. Yeah, I warned you. No, no, no, that's really good, but there's a more general version of that. Do you think context matters, the fact that we're in a building in North America, that we, in the day and age where we have mugs? I mean, there's all this extra information that you bring to the table about everything else in the room that's outside of just the coffee cup. How does it get connected, do you think? Yeah, and that is another really interesting question. I'm gonna throw that under the rubric or the name of attentional problems. First of all, we have this model, I have many, many models. And also the question, does it matter? Well, it matters for certain things, of course it does. Maybe what we think of that as a coffee cup in another part of the world is viewed as something completely different. Or maybe our logo, which is very benign in this part of the world, it means something very different in another part of the world. So those things do matter. I think the way to think about it is the following, one way to think about it, is we have all these models of the world, okay? And we model everything. And as I said earlier, I kind of snuck it in there, our models are actually, we build composite structure. 
So every object is composed of other objects, which are composed of other objects, and they become members of other objects. So this room has chairs and a table and walls and so on. Now we can just arrange these things in a certain way and go, oh, that's the Numenta conference room. And what we do is, when we go around the world and we experience the world, walking into a room, for example, the first thing I do is say, oh, I'm in this room, do I recognize the room? Then I can say, oh, look, there's a table here. And by attending to the table, I'm then assigning the table to the context of the room. Then I can say, oh, on the table there's a coffee cup. Oh, and on the cup there's a logo. And in the logo there's the word Numenta. Oh, and look, in the logo there's the letter E. Oh, and look, it has an unusual serif. It doesn't actually, but pretend it does. So the point is, your attention is kind of drilling deep in and out of these nested structures, and I can pop back up and I can pop back down. So when I attend to the coffee cup, I haven't lost the context of everything else; there's this sort of nested structure. So the attention filters the reference frame information for that particular period of time? Yes. Basically, moment to moment, you attend to sub components, and then you can attend to the sub components of the sub components. And you can move up and down. You can move up and down. We do that all the time. Now that I'm aware of it, I'm very conscious of it, but most people don't even think about this. You just walk into a room and you don't say, oh, I looked at the chair, and I looked at the board, and I looked at that word on the board, and I looked over here, what's going on, right? So what percent of your day are you deeply aware of this, and what part can you actually relax and just be Jeff? Me personally, like my personal day? Yeah. Unfortunately, I'm afflicted with too much of the former. Well, fortunately or unfortunately. Yeah. You don't think it's useful? Oh, it is useful, totally useful. I think about this stuff almost all the time. And one of my primary ways of thinking is when I'm asleep at night. I always wake up in the middle of the night, and then I stay awake for at least an hour with my eyes shut, in sort of a half sleep state, thinking about these things. I come up with answers to problems very often in that half sleeping state. I think about it on my bike ride, I think about it on walks. I'm just constantly thinking about this. I have to almost schedule time to not think about this stuff, because it's very mentally taxing. When you're thinking about this stuff, are you thinking introspectively, like almost taking a step outside of yourself and trying to figure out what your mind is doing right now? I do that all the time, but that's not all I do. I'm constantly observing myself. So as soon as I started thinking about grid cells, for example, and getting into that, I started saying, oh, well, grid cells give me my sense of place in the world. That's how you know where you are. And it's interesting, we always have a sense of where we are unless we're lost. And so I started, at night when I got up to go to the bathroom, trying to do it completely with my eyes closed the whole time. I would test my sense of grid cells. I would walk five feet and say, okay, I think I'm here. Am I really there?
What's my error? And then I would calculate my error again and see how the errors could accumulate. So even something as simple as getting up in the middle of the night to go to the bathroom, I'm testing these theories out. It's kind of fun. I mean, the coffee cup is an example of that too. So I find that these sort of everyday introspections are actually quite helpful. It doesn't mean you can ignore the science. I mean, I spend hours every day reading ridiculously complex papers. That's not nearly as much fun, but you have to sort of build up those constraints and the knowledge about the field and who's doing what and what exactly they think is happening here. And then you can sit back and say, okay, let's try to piece this all together. Let's come up with some, I'm very, in this group here, people, they know they do, I do this all the time. I come in with these introspective ideas and say, well, have you ever thought about this? Now watch, well, let's all do this together. And it's helpful. It's not, as long as you don't, all you did was that, then you're just making up stuff. But if you're constraining it by the reality of the neuroscience, then it's really helpful. So let's talk a little bit about deep learning and the successes in the applied space of neural networks, ideas of training model on data and these simple computational units, artificial neurons that with backpropagation, statistical ways of being able to generalize from the training set onto data that's similar to that training set. So where do you think are the limitations of those approaches? What do you think are its strengths relative to your major efforts of constructing a theory of human intelligence? Well, I'm not an expert in this field. I'm somewhat knowledgeable. So, but I'm not. Some of it is in just your intuition. What are your? Well, I have a little bit more than intuition, but I just want to say like, you know, one of the things that you asked me, do I spend all my time thinking about neuroscience? I do. That's to the exclusion of thinking about things like convolutional neural networks. But I try to stay current. So look, I think it's great, the progress they've made. It's fantastic. And as I mentioned earlier, it's very highly useful for many things. The models that we have today are actually derived from a lot of neuroscience principles. There are distributed processing systems and distributed memory systems, and that's how the brain works. They use things that we might call them neurons, but they're really not neurons at all. So we can just, they're not really neurons. So they're distributed processing systems. And that nature of hierarchy, that came also from neuroscience. And so there's a lot of things, the learning rules, basically, not back prop, but other, you know, sort of heavy on top of that. I'd be curious to say they're not neurons at all. Can you describe in which way? I mean, some of it is obvious, but I'd be curious if you have specific ways in which you think are the biggest differences. Yeah, we had a paper in 2016 called Why Neurons Have Thousands of Synapses. And if you read that paper, you'll know what I'm talking about here. A real neuron in the brain is a complex thing. And let's just start with the synapses on it, which is a connection between neurons. Real neurons can have everywhere from five to 30,000 synapses on them. The ones near the cell body, the ones that are close to the soma of the cell body, those are like the ones that people model in artificial neurons. 
There are a few hundred of those, and they can affect the cell. They can make the cell become active. The other 95% of the synapses can't do that. They're too far away. So if you activate one of those synapses, it just doesn't affect the cell body enough to make any difference. Any one of them individually. Any one of them individually, or even if you do a mass of them. What real neurons do is the following. If you get 10 to 20 of them active at the same time, meaning they're all receiving an input at the same time, and those 10 to 20 synapses are within a very short distance on the dendrite, like 40 microns, a very small area, so if you activate a bunch of these right next to each other at some distal place, what happens is it creates what's called a dendritic spike. And the dendritic spike travels through the dendrites and can reach the soma, or the cell body. Now, when it gets there, it changes the voltage, which is sort of going to make the cell fire, but never enough to actually make it fire. It's what we call depolarizing the cell: you raise the voltage a little bit, but not enough to do anything. It's like, well, what good is that? And then it goes back down again. So we proposed a theory, which I'm very confident in the basics of, that what's happening there is that those 95% of the synapses are recognizing dozens to hundreds of unique patterns. They can do that with about 10 to 20 synapses at a time, and they're acting like predictions. So the neuron actually is a predictive engine on its own. It can fire when it gets enough of what they call proximal input from those synapses near the cell body, but it can get ready to fire from dozens to hundreds of patterns that it recognizes from the other ones. And the advantage of this to the neuron is that when it actually does produce a spike, an action potential, it does so slightly sooner than it would have otherwise. And what good is slightly sooner? Well, all the excitatory neurons in the brain are surrounded by these inhibitory neurons, and the inhibitory neurons, these basket cells, are very fast. And if I get my spike out a little bit sooner than someone else, I inhibit all my neighbors around me, right? And what you end up with is a different representation. You end up with a representation that matches your prediction. It's a sparser representation, meaning fewer neurons are active, but it's much more specific. And so we showed how networks of these neurons can do very sophisticated temporal prediction, basically. So to summarize this: real neurons in the brain are time based prediction engines, and there's no concept of this at all in artificial, what we call point, neurons. I don't think you can build a brain without them. I don't think you can build intelligence without them, because it's where a large part of the time comes from. These are predictive models, and the time is inherent: there's a prior and a prediction and an action, and it's inherent in every neuron in the neocortex. So I would say that point neurons sort of model a piece of that, and not very well at that either. For example, synapses are very unreliable, and you cannot assign any precision to them. So even one digit of precision is not possible. So the way real neurons work is they don't change these weights accurately like artificial neural networks do.
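A toy sketch of the neuron just described, heavily simplified: distal segments each watch a small set of other cells, a segment with enough coincident input puts the cell into a predictive, depolarized state, and a predicted cell that then receives proximal input fires first and, through fast inhibition, silences its unpredicted neighbors. The threshold of 3 below is a stand in for the 10 to 20 coincident synapses mentioned, and none of this is Numenta's actual code.

```python
# Toy sketch of a dendritic-prediction neuron, heavily simplified.
SEGMENT_THRESHOLD = 3   # stand-in for the 10 to 20 coincident synapses mentioned

class Neuron:
    def __init__(self, segments):
        # Each distal segment is just a set of presynaptic cell ids it watches.
        self.segments = segments
        self.predictive = False

    def update_prediction(self, active_cells):
        """A segment with enough active inputs depolarizes the cell: not
        enough to fire, but enough to mark it as expecting input."""
        self.predictive = any(
            len(seg & active_cells) >= SEGMENT_THRESHOLD for seg in self.segments
        )

def fire(neurons, proximal_input):
    """Cells that were predicted and then get proximal input spike a little
    sooner; fast inhibition then silences the unpredicted ones, so the
    resulting activity is sparser and more specific."""
    predicted = [i for i, n in enumerate(neurons) if n.predictive and i in proximal_input]
    return predicted if predicted else sorted(proximal_input)   # no prediction: burst

n0 = Neuron([{"a", "b", "c"}])
n1 = Neuron([{"x", "y", "z"}])
for n in (n0, n1):
    n.update_prediction({"a", "b", "c"})      # prior activity matched only n0's segment
print(fire([n0, n1], proximal_input={0, 1}))  # [0]: only the predicted cell wins
```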
They basically form new synapses, and so what you're always trying to do is detect the presence of some 10 to 20 active synapses at the same time, as opposed to adjusting weights, and they're almost binary, because you can't really represent anything much finer than that. And I think that's actually another essential component, because the brain works on sparse patterns, and all that mechanism is based on sparse patterns, and I don't actually think you could build real brains or machine intelligence without incorporating some of those ideas. It's hard to even think about the complexity that emerges from the fact that the timing of the firing matters in the brain, the fact that you form new synapses, and, I mean, everything you just mentioned in the past couple minutes. Trust me, if you spend time on it, you can get your mind around it. It's no longer a mystery to me. No, but sorry, as a function, in a mathematical way, can you start getting an intuition about what gets it excited and what doesn't, and what kind of representations come out? Yeah, it's not as easy as with many other types of neural networks that are more amenable to pure analysis, especially very simple networks: oh, I have four neurons, and they're doing this, can we describe mathematically what they're doing, that type of thing. Even the complexity of convolutional neural networks today is sort of a mystery; they can't really describe the whole system. And so it's different. My colleague Subutai Ahmad did a nice paper on this. You can get all this stuff on our website if you're interested, talking about the mathematical properties of sparse representations. And so what we can do is we can show mathematically, for example, why 10 to 20 synapses to recognize a pattern is the right number you'd want to use. And by the way, that matches biology. We can show mathematically some of these concepts about why the brain is so robust to noise and error and fallout and so on. We can show that mathematically as well as empirically in simulations. But the system can't be analyzed completely. Any complex system can't, and so that's out of the realm. But there are mathematical benefits and intuitions that can be derived from mathematics, and we try to do that as well. Most of our papers have a section about that. So I think it's refreshing and useful for me to be talking to you about deep neural networks, because your intuition basically says that we can't achieve anything like intelligence with artificial neural networks. Well, not in the current form. Not in the current form. I'm sure we can do it in the ultimate form, sure. So let me dig into it and see what your thoughts are there a little bit. I'm not sure if you read this little blog post called Bitter Lesson by Rich Sutton recently. He's a reinforcement learning pioneer. I'm not sure if you're familiar with him. His basic idea is that, looking at all the stuff we've done in AI in the past 70 years, he's one of the old school guys, the biggest lesson learned is that all the tricky things we've done benefit in the short term, but in the long term what wins out is a simple general method that just relies on Moore's law, on computation getting faster and faster. This is what he's saying. This is what has worked up to now. This is what has worked up to now. If you're trying to build a system, and we're talking about, he's not concerned about intelligence.
He's concerned about a system that works in terms of making predictions on applied narrow AI problems, right? That's what this discussion is about. That you just try to go as general as possible and wait years or decades for the computation to make it actually work. Is he saying that as a criticism, or is he saying this is a prescription of what we ought to be doing? Well, it's very difficult. He's saying this is what has worked, and yes, it's a prescription, but it's a difficult prescription because it says all the fun things you guys are trying to do, we are trying to do, he's part of the community, he's saying it's only going to be short term gains. So this all leads up to a question, I guess, on artificial neural networks and maybe our own biological neural networks: do you think, if we just scale things up significantly, so take these dumb artificial neurons, the point neurons, I like that term, if we just have a lot more of them, do you think some of the elements that we see in the brain may start emerging? No, I don't think so. We can do bigger problems of the same type. I mean, it's been pointed out by many people that today's convolutional neural networks aren't really much different than the ones we had quite a while ago. They're bigger, they're trained more, and we have more labeled data and so on. But I don't think you can get to the kind of things I know the brain can do, and that we think about as intelligence, by just scaling it up. So it may be a good description of what's happened in the past, what's happened recently with the reemergence of artificial neural networks. It may be a good prescription for what's gonna happen in the short term. But I don't think that's the path. I've said that earlier. There's an alternate path. I should mention to you, by the way, that we've made sufficient progress on the whole cortical theory in the last few years that last year we decided to start actively pursuing how we get these ideas embedded into machine learning. That's, again, being led by my colleague Subutai Ahmad, and he's more of a machine learning guy; I'm more of a neuroscience guy. So this is now, I wouldn't say our focus, but it is now an equal focus here, because we need to proselytize what we've learned and we need to show how it's beneficial to the machine learning world. So we have a plan in place right now. In fact, we just did our first paper on this. I can tell you about that. But one of the reasons I want to talk to you is because I'm trying to get more people in the machine learning community to say, I need to learn about this stuff, and maybe we should just think a bit more about what we've learned about the brain, and what has that team at Numenta done? Is that useful for us? Yeah, so are there elements of the cortical theory, of the things we've been talking about, that may be useful in the short term? Yes, in the short term, yes. This is the, sorry to interrupt, but the open question is, it certainly feels from my perspective that in the long term some of the ideas we've been talking about will be extremely useful. The question is whether they are in the short term. Well, this is always what I would call the entrepreneur's dilemma. So you have this long term vision, oh, we're all gonna be driving electric cars, or we're all gonna have computers, or whatever. And you're at some point in time and you say, I can see that long term vision, I'm sure it's gonna happen. How do I get there without killing myself?
Without going out of business, right? That's the challenge. That's the dilemma. That's the really difficult thing to do. So we're facing that right now. Ideally what you'd want to do is find some steps along the way, so that you can get there incrementally. You don't have to throw it all out and start over again. The first thing that we've done is focus on sparse representations. So just in case you don't know what that means, or some of the listeners don't: in the brain, if I have like 10,000 neurons, what you would see is maybe 2% of them active at a time. You don't see 50%, you don't see 30%, you might see 2%. And it's always like that. For any set of sensory inputs? It doesn't matter what the input is, and it doesn't matter what part of the brain. But which neurons differ? Which neurons are active? Yeah, so let's say I take 10,000 neurons that are representing something. They're sitting there in a little block together. It's a teeny little block of neurons, 10,000 neurons. And they're representing a location, they're representing a cup, they're representing the input from my sensors, I don't know, it doesn't matter. It's representing something. The way the representations occur, it's always a sparse representation. Meaning it's a population code. So which 200 cells are active tells me what's going on. Individual cells aren't that important at all. It's the population code that matters. And when you have sparse population codes, then all kinds of beautiful properties come out of them. So the brain uses sparse population codes. We've written about and described these benefits in some of our papers. They give tremendous robustness to the systems. Brains are incredibly robust. Neurons are dying all the time and spasming, and synapses are falling apart all the time, and it keeps working. So what Subutai and Luiz, one of our other engineers here, have done is show that you can introduce sparseness into convolutional neural networks. Now other people are thinking along these lines, but we're going about it in a more principled way, I think. And we're showing that if you enforce sparseness throughout these convolutional neural networks, in both the activations, which neurons are active, and the connections between them, you get some very desirable properties. So one of the current hot topics in deep learning right now is these adversarial examples. You know, you give me any deep learning network, and I can give you a picture that looks perfect, and the network is going to say, you know, the monkey is an airplane. So that's a problem. And DARPA just announced some big thing, they're trying to have some contest for this. But if you enforce sparse representations here, many of these problems go away. They're much more robust, and they're not easy to fool. So we've already shown some of those results, just literally in January or February, just last month we did that. And I think it's on bioRxiv right now, or on arXiv, you can read about it. So that's like a baby step, okay? That's taking something from the brain. We know about sparseness, we know why it's important, we know what it gives the brain, so let's try to enforce that onto this. What's your intuition for why sparsity leads to robustness? Because it feels like it would be less robust. Why would it feel less robust to you? It just feels like the fewer neurons that are involved, the more fragile the representation.
But I didn't say there were only a few neurons. I said, let's say 200. That's a lot. It's still a lot, it's just... So here's an intuition for it. This is a bit technical, so for engineers and machine learning people this will be easy, but for other listeners maybe not. If you're trying to classify something, you're trying to divide some very high dimensional space into different pieces, A and B. And you're trying to create some boundary where you say, all these points in this high dimensional space are A, and all these points in this high dimensional space are B. And if you have points that are close to that line, it's not very robust. It works for all the points you know about, but it's not very robust, because you can just move a little bit and you've crossed over the line. When you have sparse representations, imagine I pick 200 cells to be active out of 10,000, okay? So I have 200 cells active. Now let's say I randomly pick another, different representation of 200. The overlap between those is going to be very small, just a few. I can pick millions of samples randomly of 200 neurons, and not one of them will overlap more than just a few. So one way to think about it is, if I want to fool one of these representations into looking like one of those other representations, I can't move just one cell, or two cells, or three cells, or four cells. I have to move 100 cells. And that makes them robust. In terms of further steps, so you mentioned sparsity. What would be the next thing? Yeah, okay, so we have, we picked one. We don't know if it's gonna work well yet. So again, we're trying to come up with incremental ways of moving from brain theory to adding pieces to the current machine learning world, one step at a time. So the next thing we're gonna try to do is incorporate some of the ideas of the thousand brains theory, that you have many, many models that are voting. Now that idea is not new. Mixtures of models have been around for a long time. But the way the brain does it is a little different, and the way it votes is different, and the way it represents uncertainty is different. So we're just starting this work, but we're gonna try to see if we can incorporate some of the principles of voting, or principles of the thousand brains theory: lots of simple models that talk to each other in a certain way. And can we build systems that learn faster and are, well, mostly multimodal and robust to multimodal types of issues? So one of the challenges there is that the machine learning computer vision community has certain sets of benchmarks, sets of tests, based on which they compete. And I would argue, especially from your perspective, that those benchmarks aren't that useful for testing the aspects that the brain is good at, or intelligence. They're not really testing intelligence. They're fine, and they've been extremely useful for developing specific mathematical models, but they're not useful in the long term for creating intelligence. So you think you also have a role in proposing better tests? Yeah, you've identified a very serious problem. First of all, the tests that they have are the tests that they want, not tests of the other things that we're trying to do, right? The second thing is that to be competitive in these tests, you sometimes have to have huge data sets and huge computing power. And so, you know, we don't have that here.
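A small numerical sketch of the overlap argument above, assuming uniformly random 200-of-10,000 codes (real cortical codes are not uniform, so treat the numbers as illustrative only):

```python
import random

# Toy sketch: how much do two random sparse codes (200 active cells out of
# 10,000) overlap? Very little, which is why you would have to move on the
# order of a hundred cells, not one or two, to make one code look like another.
N_CELLS, N_ACTIVE, N_TRIALS = 10_000, 200, 1_000
random.seed(0)

def random_code():
    return set(random.sample(range(N_CELLS), N_ACTIVE))

reference = random_code()
overlaps = [len(reference & random_code()) for _ in range(N_TRIALS)]
# The expected overlap is 200 * 200 / 10,000 = 4 cells; the maximum over a
# thousand random draws is typically still only in the low teens.
print(max(overlaps), sum(overlaps) / N_TRIALS)
```

On the enforcement side, sparseness of the kind described a few paragraphs up is often approximated in networks with a k-winners-take-all step that keeps only the top k activations in a layer; whether that matches the exact mechanism in the paper mentioned is not something the conversation specifies.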
We don't have it as well as the big teams at big companies do. So there are numerous issues there. You know, our approach to this is all based on, in some sense, you might argue, elegance. We're coming at it from a theoretical base that we think, oh my God, this is so clearly elegant, this is how brains work, this is what intelligence is. But the machine learning world has gotten into this phase where they think it doesn't matter. It doesn't matter what you think, as long as you do 0.1% better on this benchmark, that's all that matters. And that's a problem. We have to figure out how to get around that. That's a challenge for us, one of the challenges that we have to deal with. So I agree, you've identified a big issue. It's difficult for those reasons. But, you know, part of the reason I'm talking to you here today is I hope I'm gonna get some machine learning people to say, I'm gonna read those papers, those might be some interesting ideas, I'm tired of doing this 0.1% improvement stuff, you know? Well, that's why I'm here as well, because I think machine learning now as a community is at a place where the next step needs to be orthogonal to what has received success in the past. Well, you see other leaders saying this, machine learning leaders, you know, Geoff Hinton with his capsules idea. Many people have gotten up and said, you know, we're gonna hit a roadblock, maybe we should look at the brain, things like that. So hopefully that thinking will occur organically, and then we're in a nice position for people to come and look at our work and say, well, what can we learn from these guys? Yeah, MIT is launching a billion dollar computing college that's centered around this idea, so. Is it on this idea of what? Well, the idea that, you know, the humanities, psychology, and neuroscience have to all work together to get to, to build intelligent systems. Yeah, I mean, Stanford just did this Human Centered AI Center. I'm a little disappointed in these initiatives because, you know, they're focusing on sort of the human side of it, and it could very easily slip into how humans interact with intelligent machines. There's nothing wrong with that, but that is orthogonal to what we're trying to do. We're trying to say, what is the essence of intelligence? I don't care. In fact, I want to build intelligent machines that aren't emotional, that don't smile at you, that aren't trying to tuck you in at night. Yeah, there is that pattern where, when you talk about understanding humans as important for understanding intelligence, you start slipping into topics of ethics or, like you said, the interactive elements, as opposed to, no, no, no, we have to zoom in on the brain, study what the human brain, the baby, the... Let's study what a brain does. Does. And then we can decide which parts of that we want to recreate in some system. But until you have a theory about what the brain does, what's the point? You're just gonna be wasting time, I think. Right. Just to break it down on the artificial neural network side, maybe you could speak to this on the biological neural network side: the process of learning versus the process of inference. Maybe you can explain, is there a difference between, you know, in artificial neural networks there's a difference between the learning stage and the inference stage; do you see the brain as something different?
One of the big distinctions that people often say, I don't know how correct it is, is artificial neural networks need a lot of data. They're very inefficient learning. Do you see that as a correct distinction from the biology of the human brain, that the human brain is very efficient, or is that just something we deceive ourselves? No, it is efficient, obviously. We can learn new things almost instantly. And so what elements do you think are useful? Yeah, I can talk about that. You brought up two issues there. So remember I talked early about the constraints we always feel, well, one of those constraints is the fact that brains are continually learning. That's not something we said, oh, we can add that later. That's something that was upfront, had to be there from the start, made our problems harder. But we showed, going back to the 2016 paper on sequence memory, we showed how that happens, how the brains infer and learn at the same time. And our models do that. And they're not two separate phases, or two separate sets of time. I think that's a big, big problem in AI, at least for many applications, not for all. So I can talk about that. There are some, it gets detailed, there are some parts of the neocortex in the brain where actually what's going on, there's these cycles of activity in the brain. And there's very strong evidence that you're doing more of inference on one part of the phase, and more of learning on the other part of the phase. So the brain can actually sort of separate different populations of cells or going back and forth like this. But in general, I would say that's an important problem. We have all of our networks that we've come up with do both. And they're continuous learning networks. And you mentioned benchmarks earlier. Well, there are no benchmarks about that. So we have to, we get in our little soapbox, and hey, by the way, this is important, and here's a mechanism for doing that. But until you can prove it to someone in some commercial system or something, it's a little harder. So yeah, one of the things I had to linger on that is in some ways to learn the concept of a coffee cup, you only need this one coffee cup and maybe some time alone in a room with it. Well, the first thing is, imagine I reach my hand into a black box and I'm reaching, I'm trying to touch something. I don't know upfront if it's something I already know or if it's a new thing. And I have to, I'm doing both at the same time. I don't say, oh, let's see if it's a new thing. Oh, let's see if it's an old thing. I don't do that. As I go, my brain says, oh, it's new or it's not new. And if it's new, I start learning what it is. And by the way, it starts learning from the get go, even if it's gonna recognize it. So they're not separate problems. And so that's the thing there. The other thing you mentioned was the fast learning. So I was just talking about continuous learning, but there's also fast learning. Literally, I can show you this coffee cup and I say, here's a new coffee cup. It's got the logo on it. Take a look at it, done, you're done. You can predict what it's gonna look like, you know, in different positions. So I can talk about that too. In the brain, the way learning occurs, I mentioned this earlier, but I'll mention it again. The way learning occurs, imagine I am a section of a dendrite of a neuron, and I'm gonna learn something new. Doesn't matter what it is. I'm just gonna learn something new. I need to recognize a new pattern. So what I'm gonna do is I'm gonna form new synapses. 
New synapses, we're gonna rewire the brain onto that section of the dendrite. Once I've done that, everything else that neuron has learned is not affected by it. That's because it's isolated to that small section of the dendrite. They're not all being added together, like a point neuron. So if I learn something new on this segment here, it doesn't change any of the learning that occur anywhere else in that neuron. So I can add something without affecting previous learning. And I can do it quickly. Now let's talk, we can talk about the quickness, how it's done in real neurons. You might say, well, doesn't it take time to form synapses? Yes, it can take maybe an hour to form a new synapse. We can form memories quicker than that, and I can explain that how it happens too, if you want. But it's getting a bit neurosciencey. That's great, but is there an understanding of these mechanisms at every level? Yeah. So from the short term memories and the forming. So this idea of synaptogenesis, the growth of new synapses, that's well described, it's well understood. And that's an essential part of learning. That is learning. That is learning. Okay. Going back many, many years, people, you know, it was, what's his name, the psychologist who proposed, Hebb, Donald Hebb. He proposed that learning was the modification of the strength of a connection between two neurons. People interpreted that as the modification of the strength of a synapse. He didn't say that. He just said there's a modification between the effect of one neuron and another. So synaptogenesis is totally consistent with what Donald Hebb said. But anyway, there's these mechanisms, the growth of new synapses. You can go online, you can watch a video of a synapse growing in real time. It's literally, you can see this little thing going boop. It's pretty impressive. So those mechanisms are known. Now there's another thing that we've speculated and we've written about, which is consistent with known neuroscience, but it's less proven. And this is the idea, how do I form a memory really, really quickly? Like instantaneous. If it takes an hour to grow a synapse, like that's not instantaneous. So there are types of synapses called silent synapses. They look like a synapse, but they don't do anything. They're just sitting there. It's like if an action potential comes in, it doesn't release any neurotransmitter. Some parts of the brain have more of these than others. For example, the hippocampus has a lot of them, which is where we associate most short term memory with. So what we speculated, again, in that 2016 paper, we proposed that the way we form very quick memories, very short term memories, or quick memories, is that we convert silent synapses into active synapses. It's like saying a synapse has a zero weight and a one weight, but the longterm memory has to be formed by synaptogenesis. So you can remember something really quickly by just flipping a bunch of these guys from silent to active. It's not from 0.1 to 0.15. It's like, it doesn't do anything till it releases transmitter. And if I do that over a bunch of these, I've got a very quick short term memory. So I guess the lesson behind this is that most neural networks today are fully connected. Every neuron connects every other neuron from layer to layer. That's not correct in the brain. We don't want that. We actually don't want that. It's bad. You want a very sparse connectivity so that any neuron connects to some subset of the neurons in the other layer. 
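A toy sketch of the two learning mechanisms just described: slow learning that grows a brand new segment wired to whatever cells are active right now, leaving every other segment untouched, and fast learning that flips already existing silent, zero weight synapses to active. The cell names and the 0/1 weights are invented simplifications, not Numenta's actual model.

```python
# Toy sketch: two ways a dendrite can learn without disturbing old learning.
class Dendrite:
    def __init__(self):
        self.segments = []                     # each segment: {presynaptic cell: weight}

    def grow_segment(self, active_cells):
        """Synaptogenesis-style learning: wire a brand-new segment to the cells
        active right now. Existing segments are untouched, so nothing learned
        before gets overwritten."""
        self.segments.append({cell: 1 for cell in active_cells})

    def add_silent_synapses(self, cells):
        """Synapses that exist but have weight 0: they release nothing yet."""
        self.segments.append({cell: 0 for cell in cells})

    def quick_memory(self, segment_index):
        """Fast learning: flip silent synapses (0) to active (1). No growth is
        needed, so the memory forms almost instantly."""
        seg = self.segments[segment_index]
        for cell in seg:
            seg[cell] = 1

d = Dendrite()
d.grow_segment({"c1", "c2", "c3"})     # slow, permanent learning of one pattern
d.add_silent_synapses({"c7", "c8"})    # pre-existing but silent connections
d.quick_memory(1)                      # instant short-term memory on segment 1
print(d.segments[0], d.segments[1])    # the first segment is unchanged by the second
```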
And it does so on a dendrite by dendrite segment basis. So it's a very sort of parcellated out type of thing. And so learning is not adjusting all these weights, but learning is just saying, okay, connect to these 10 cells here right now. In that process, you know, with artificial neural networks, it's a very simple process of backpropagation that adjusts the weights. The process of synaptogenesis? Synaptogenesis, it's even easier. It's even easier. Backpropagation requires something that really can't happen in brains. This backpropagation of this error signal, that really can't happen. People are trying to make it happen in brains, but it doesn't happen in brains. This is pure Hebbian learning. Well, synaptogenesis is pure Hebbian learning. It's basically saying, there's a population of cells over here that are active right now, and there's a population of cells over here active right now. How do I form connections between those active cells? And it's literally saying, this guy became active, these 100 neurons here became active before this neuron became active, so form connections to those ones. That's it. There's no propagation of error, nothing. All the networks we do, all the models we have, work almost completely on Hebbian learning, but on dendritic segments and multiple synapses at the same time. So now let's sort of turn to the question that you already answered, and maybe you can answer it again. If you look at the history of artificial intelligence, where do you think we stand? How far are we from solving intelligence? You said you were very optimistic. Can you elaborate on that? Yeah, it's always the crazy question to ask because no one can predict the future. Absolutely. So I'll tell you a story. I used to run a different neuroscience institute called the Redwood Neuroscience Institute, and we would hold these symposiums and we'd get like 35 scientists from around the world to come together. And I used to ask them all the same question. I would say, well, how long do you think it'll be before we understand how the neocortex works? And everyone went around the room, they would introduce themselves, and they had to answer that question. The typical answer I got was 50 to 100 years. Some people would say 500 years. Some people said never. I said, why are you a neuroscientist if it's never gonna happen? Well, it's good pay, it's interesting. So, you know, but it doesn't work like that. As I mentioned earlier, these are step functions. Things happen and then bingo, they happen. You can't predict that. I feel I've already passed a step function. So if I can do my job correctly over the next five years, meaning I can proselytize these ideas, I can convince other people they're right, we can show that machine learning people should pay attention to these ideas, then we're definitely in an under 20 year timeframe. If I can do those things. If I'm not successful in that, and this is the last time anyone talks to me and no one reads our papers and, you know, I'm wrong or something like that, then I don't know. But it's not 50 years. Think about electric cars. How quickly are they gonna populate the world? It probably takes about a 20 year span. It'll be something like that. But I think if I can do what I said, we're starting it. And of course there could be other, you said, step functions. It could be everybody gives up on your ideas for 20 years and then all of a sudden somebody picks it up again.
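(As a rough sketch of the mechanism described in this exchange, Hebbian learning on an isolated dendritic segment with binary, silent-or-active synapses and no backpropagated error, something like the following toy code captures the idea. The class, constants, and sparsity numbers are illustrative assumptions for this sketch, not Numenta's actual code or parameters.)

```python
# Toy sketch of Hebbian, dendritic-segment learning as described above.
# All names and numbers are illustrative assumptions, not Numenta's code.
import numpy as np

rng = np.random.default_rng(0)

N_CELLS = 1000        # cells in the presynaptic population
ACTIVE = 20           # roughly 2% of cells active at any moment
THRESHOLD = 15        # matching synapses needed for the segment to recognize a pattern
MAX_SYNAPSES = 30     # synapses one dendritic segment will grow

class DendriteSegment:
    """One dendritic segment; it learns one pattern independently of all others."""
    def __init__(self):
        self.synapses = np.array([], dtype=int)   # presynaptic cells it connects to

    def recognizes(self, active_cells):
        # The segment fires if enough of its synapses land on currently active cells.
        return np.isin(self.synapses, active_cells).sum() >= THRESHOLD

    def learn(self, active_cells):
        # Hebbian synaptogenesis: grow connections to a subset of the cells
        # that are active right now. No error signal, no weight adjustment,
        # and nothing learned elsewhere on the neuron is disturbed.
        new = rng.choice(active_cells, min(MAX_SYNAPSES, len(active_cells)),
                         replace=False)
        self.synapses = np.union1d(self.synapses, new)

# One-shot learning of a new pattern:
pattern = rng.choice(N_CELLS, ACTIVE, replace=False)
segment = DendriteSegment()
print(segment.recognizes(pattern))   # False: nothing learned yet
segment.learn(pattern)
print(segment.recognizes(pattern))   # True: learned in a single exposure
```

The only point of the sketch is that learning here means growing connections to cells that are active right now, on one segment, leaving everything else that neuron has learned untouched; there is no dense weight matrix and no error signal flowing backwards.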
Wait, that guy was onto something. Yeah, so that would be a failure on my part, right? Think about Charles Babbage. Charles Babbage, he's the guy who invented the computer back in the 18 something, 1800s. And everyone forgot about it until 100 years later. And say, hey, this guy figured this stuff out a long time ago. But he was ahead of his time. I don't think, as I said, I recognize this is part of any entrepreneur's challenge. I use entrepreneur broadly in this case. I'm not meaning like I'm building a business or trying to sell something. I mean, I'm trying to sell ideas. And this is the challenge as to how you get people to pay attention to you, how do you get them to give you positive or negative feedback, how do you get the people to act differently based on your ideas. So we'll see how well we do on that. So you know that there's a lot of hype behind artificial intelligence currently. Do you, as you look to spread the ideas that are of neocortical theory, the things you're working on, do you think there's some possibility we'll hit an AI winter once again? Yeah, it's certainly a possibility. No question about it. Is that something you worry about? Yeah, well, I guess, do I worry about it? I haven't decided yet if that's good or bad for my mission. That's true, that's very true. Because it's almost like you need the winter to refresh the palette. Yeah, it's like, I want, here's what you wanna have it is. You want, like to the extent that everyone is so thrilled about the current state of machine learning and AI and they don't imagine they need anything else, it makes my job harder. If everything crashed completely and every student left the field and there was no money for anybody to do anything and it became an embarrassment to talk about machine intelligence and AI, that wouldn't be good for us either. You want sort of the soft landing approach, right? You want enough people, the senior people in AI and machine learning to say, you know, we need other approaches. We really need other approaches. Damn, we need other approaches. Maybe we should look to the brain. Okay, let's look to the brain. Who's got some brain ideas? Okay, let's start a little project on the side here trying to do brain idea related stuff. That's the ideal outcome we would want. So I don't want a total winter and yet I don't want it to be sunny all the time either. So what do you think it takes to build a system with human level intelligence where once demonstrated you would be very impressed? So does it have to have a body? Does it have to have the C word we used before, consciousness as an entirety in a holistic sense? First of all, I don't think the goal is to create a machine that is human level intelligence. I think it's a false goal. Back to Turing, I think it was a false statement. We want to understand what intelligence is and then we can build intelligent machines of all different scales, all different capabilities. A dog is intelligent. I don't need, that'd be pretty good to have a dog. But what about something that doesn't look like an animal at all, in different spaces? So my thinking about this is that we want to define what intelligence is, agree upon what makes an intelligent system. We can then say, okay, we're now gonna build systems that work on those principles or some subset of them and we can apply them to all different types of problems. And the kind, the idea, it's not computing. 
We don't ask, if I take a little one chip computer, I don't say, well, that's not a computer because it's not as powerful as this big server over here. No, no, because we know what the principles of computing are, and I can apply those principles to a small problem or to a big problem. And intelligence needs to get to the same place. We have to say, these are the principles. I can make a small one, a big one. I can make them distributed. I can put them on different sensors. They don't have to be human like at all. Now, you did bring up a very interesting question about embodiment. Does it have to have a body? It has to have some concept of movement. It has to be able to move through these reference frames I talked about earlier. Whether it's physically moving, like if I'm gonna have an AI that understands coffee cups, it's gonna have to pick up the coffee cup and touch it and look at it with its eyes and hands or something equivalent to that. If I have a mathematical AI, maybe it needs to move through mathematical spaces. I could have a virtual AI that lives in the internet and its movements are traversing links and digging into files, but it's got a location and it's traveling through some space. You can't have an AI that just takes some flash input, we call it flash inference. Here's a pattern, done. No, it's movement pattern, movement pattern, movement pattern, attention, digging, building structure, figuring out the model of the world. So some sort of embodiment, whether it's physical or not, has to be part of it. So self awareness, in the sense of being able to answer, where am I? Well, you're bringing up self, that's a different topic, self awareness. No, the very narrow definition of self, meaning having a sense of self enough to know where am I in the space it's actually in. Yeah, basically the system needs to know its location, or each component of the system needs to know where it is in the world at that point in time. So self awareness and consciousness. Do you think, from the perspective of neuroscience and the neocortex, these are interesting topics, solvable topics? Do you have any ideas of why the heck it is that we have a subjective experience at all? Yeah, I have a lot of thoughts on that. And is it useful or is it just a side effect of us? It's interesting to think about. I don't think it's useful as a means to figure out how to build intelligent machines. It's something that systems do, and we can talk about what it is that makes a system self aware, like, well, if I build a system like this, then it would be self aware. Or if I build it like this, it wouldn't be self aware. So that's a choice I can have. It's not like, oh my God, it's self aware, I can't turn it off. I heard an interview recently with this philosopher from Yale, I can't remember his name, I apologize for that. But he was talking about, well, if these computers are self aware, then it would be a crime to unplug them. And I'm like, oh, come on, that's not, I unplug myself every night, I go to sleep. Is that a crime? I plug myself in again in the morning and there I am. So people get kind of bent out of shape about this. I have very definite, very detailed understanding, or opinions, about what it means to be conscious and what it means to be self aware. I don't think it's that interesting a problem. You've talked to Christof Koch. He thinks that's the only problem. I didn't actually listen to your interview with him, but I know him and I know that's the thing he cares about.
He also thinks intelligence and consciousness are disjoint. So I mean, it's not, you don't have to have one or the other. So he is. I disagree with that. I just totally disagree with that. So what are your thoughts on consciousness, where does it emerge from? Because it is... So then we have to break it down into two parts, okay? Because consciousness isn't one thing. That's part of the problem with that term, it means different things to different people, and there's different components of it. There is a concept of self awareness, okay? That can be very easily explained. You have a model of your own body. The neocortex models things in the world, and it also models your own body. And then it has a memory. It can remember what you've done, okay? So it can remember what you did this morning, can remember what you had for breakfast, and so on. And so I can say to you, okay, Lex, were you conscious this morning when you had your bagel? And you'd say, yes, I was conscious. Now what if I could take your brain and revert all the synapses back to the state they were in this morning? And then I said to you, Lex, were you conscious when you ate the bagel? And you'd say, no, I wasn't conscious. I'd say, here's a video of you eating the bagel. And you'd say, I wasn't there, that's not possible, because I must've been unconscious at that time. So we can just make this one to one correlation: a memory of your body's trajectory through the world over some period of time, and the ability to recall that memory, is what you would call being conscious. I was conscious of that, it's a self awareness. And any system that can recall, memorize what it's done recently, and bring that back and invoke it again would say, yeah, I'm aware. I remember what I did. All right, I got it. That's an easy one. Although some people think that's a hard one. The more challenging part of consciousness is the one that sometimes goes by the word qualia, which is, why does an object seem red? Or what is pain? And why does pain feel like something? Why do I feel redness? Or why do I feel painness? And then I could say, well, why does sight seem different than hearing? It's the same problem. It's really, these are all just neurons. And so how is it that, why does looking at you feel different than hearing you? It feels different, but there's just neurons in my head. They're all doing the same thing. So that's an interesting question. The best treatise I've read about this is by a guy named O'Regan. He wrote a book called Why Red Doesn't Sound Like a Bell. It's not an easy to read trade book, but it's an interesting question. Take something like color. Color really doesn't exist in the world. It's not a property of the world. The property of the world that exists is light frequency. And that gets turned into, we have certain cells in the retina that respond to different frequencies differently than others. And so when they enter the brain, you just have a bunch of axons that are firing at different rates. And from that, we perceive color. But there is no color in the brain. I mean, there's no color coming in on those synapses. It's just a correlation between some axons and some property of frequency. And that isn't even color itself. Frequency doesn't have a color. It's just what it is. So then the question is, well, why does it even appear to have a color at all? Just as you're describing it, there seems to be a connection to those ideas of reference frames.
I mean, it just feels like consciousness, having the subjective experience of assigning the feeling of red to the actual color or to the wavelength, is useful for intelligence. Yeah, I think that's a good way of putting it. It's useful as a predictive mechanism, or useful as a generalization idea. It's a way of grouping things together to say, it's useful to have a model like this. So think about the well known syndrome that people who've lost a limb experience, called phantom limbs. And what they claim is that their arm is removed, but they still feel their arm. They not only feel it, they know it's there. It's there, I know it's there. They'll swear to you that it's there. And then they can feel pain in their arm, and they'll feel pain in their finger. And if they move their non existent arm behind their back, then they feel the pain behind their back. So this whole idea that your arm exists is a model in your brain. It may or may not really exist. And just like that, it's useful to have a model of something that sort of correlates to things in the world, so you can make predictions about what would happen when those things occur. It's a little bit fuzzy, but I think you're getting quite close to the answer there. It's useful for the model to express things in certain ways so that we can then map them into these reference frames and make predictions about them. I need to spend more time on this topic. It doesn't bother me. Do you really need to spend more time? Yeah, I know. It does feel special that we have subjective experience, but I'm yet to know why. I'm just personally curious. It's not necessary for the work we're doing here. I don't think I need to solve that problem to build intelligent machines at all, not at all. But there is sort of the silly notion that you described briefly, that doesn't seem so silly to us humans, which is, if you're successful building intelligent machines, it feels wrong to then turn them off. Because if you're able to build a lot of them, it feels wrong to then be able to turn off the... Well, why? Let's break that down a bit. As humans, why do we fear death? There's two reasons we fear death. Well, first of all, I'll say, when you're dead, it doesn't matter at all. Who cares? You're dead. So why do we fear death? We fear death for two reasons. One is because we are programmed genetically to fear death. That's a survival and propagation of the genes thing. And we also are programmed to feel sad when people we know die. We don't feel sad when someone we don't know dies. There's people dying right now, and I'll just say, I don't feel bad about them, because I don't know them. But if I knew them, I'd feel really bad. So again, these are old brain, genetically embedded things, that we fear death. But outside of those uncomfortable feelings, there's nothing else to worry about. Well, wait, hold on a second. Do you know The Denial of Death by Becker? No. There's a thought that death is, our whole conception of our world model kind of assumes immortality. And then death is this terror that underlies it all. So like... Some people's world model, not mine. But, okay, so what Becker would say is that you're just living in an illusion. You've constructed an illusion for yourself because it's such a terrible terror, the fact that this... What's the illusion? The illusion that death doesn't matter. You're still not coming to grips with... The illusion of what? That death is... Going to happen. Oh, like it's not gonna happen? You're actually operating.
You haven't, even though you said you've accepted it, you haven't really accepted the notion that you're gonna die, is what he would say. So it sounds like you disagree with that notion. Yeah, yeah, totally. I literally, every night I go to bed, it's like dying. Like little deaths. It's little deaths. And if I didn't wake up, it wouldn't matter to me. Only if I knew that was gonna happen would it be bothersome. If I didn't know it was gonna happen, how would I know? Then I would worry about my wife. So imagine I was a loner and I lived in Alaska, and I lived out there and there were no animals. Nobody knew I existed. I was just eating these roots all the time. And nobody knew I was there. And one day I didn't wake up. What pain would there exist in the world? Well, so most people that think about this problem would say that you're just deeply enlightened or completely delusional. One of the two. But I would say that's a very enlightened way to see the world. That's the rational one as well. It's rational, that's right. But the fact is we don't, I mean, we really don't have an understanding of why the heck it is we're born and why we die and what happens after we die. Well, maybe there isn't a reason, maybe there is. So I'm interested in those big problems too, right? You interviewed Max Tegmark, and there's people like that, right? I'm interested in those big problems as well. And in fact, when I was young, I made a list of the biggest problems I could think of. First, why does anything exist? Second, why do we have the laws of physics that we have? Third, is life inevitable? And why is it here? Fourth, is intelligence inevitable? And why is it here? I stopped there because I figured if you can make a truly intelligent system, that will be the quickest way to answer the first three questions. I'm serious. And so I said, my mission, you asked me earlier, my first mission is to understand the brain, but I felt that is the shortest way to get to true machine intelligence. And I wanna get to true machine intelligence because even if it doesn't occur in my lifetime, other people will benefit from it. I think it will occur in my lifetime, but 20 years out, you never know. But that will be the quickest way for us to, we can make super mathematicians, we can make super space explorers, we can make super physicist brains that do these things, and that can run experiments that we can't run. We don't have the abilities to manipulate things and so on, but we can build intelligent machines that do all those things, with the ultimate goal of finding out the answers to the other questions. Let me ask you another depressing and difficult question, which is, once we achieve that goal of creating, no, of understanding intelligence, do you think we would be happier, more fulfilled as a species? Understanding intelligence, or understanding the answers to the big questions? Understanding intelligence. Oh, totally, totally. It would be a far more fun place to live. You think so? Oh yeah, why not? I mean, just put aside this Terminator nonsense and just think about, you can think about, we can talk about the risks of AI if you want. I'd love to, so let's talk about it. But I think the world would be far better knowing things. We're always better off knowing things. Do you think it's better, is it a better place to live in, knowing that our planet is one of many in the solar system and the solar system's one of many in the galaxy?
I think it's a more, I dread, I sometimes think like, God, what would it be like to live 300 years ago? I'd be looking up at the sky, I can't understand anything. Oh my God, I'd be like going to bed every night going, what's going on here? Well, I mean, in some sense I agree with you, but I'm not exactly sure. So I'm also a scientist, so I share your views, but I'm not, we're like rolling down the hill together. What's down the hill? I feel like we're climbing a hill. Whatever. We're getting closer to enlightenment and you're going down the hill. We're climbing, we're getting pulled up a hill by our curiosity. Our curiosity is, we're pulling ourselves up the hill by our curiosity. Yeah, Sisyphus was doing the same thing with the rock. Yeah, yeah, yeah, yeah. But okay, our happiness aside, do you have concerns about, you talk about Sam Harris, Elon Musk, of existential threats of intelligent systems? No, I'm not worried about existential threats at all. There are some things we really do need to worry about. Even today's AI, we have things we have to worry about. We have to worry about privacy and about how it impacts false beliefs in the world. And we have real problems and things to worry about with today's AI. And that will continue as we create more intelligent systems. There's no question, the whole issue about making intelligent armaments and weapons is something that really we have to think about carefully. I don't think of those as existential threats. I think those are the kind of threats we always face and we'll have to face them here and we'll have to deal with them. We could talk about what people think are the existential threats, but when I hear people talking about them, they all sound hollow to me. They're based on ideas, they're based on people who really have no idea what intelligence is. And if they knew what intelligence was, they wouldn't say those things. So those are not experts in the field. Yeah, so there's two, right? So one is like super intelligence. So a system that becomes far, far superior in reasoning ability than us humans. How is that an existential threat? Then, so there's a lot of ways in which it could be. One way is us humans are actually irrational, inefficient and get in the way of, not happiness, but whatever the objective function is of maximizing that objective function. Super intelligent. The paperclip problem and things like that. So the paperclip problem but with the super intelligent. Yeah, yeah, yeah, yeah. So we already face this threat in some sense. They're called bacteria. These are organisms in the world that would like to turn everything into bacteria. And they're constantly morphing, they're constantly changing to evade our protections. And in the past, they have killed huge swaths of populations of humans on this planet. So if you wanna worry about something that's gonna multiply endlessly, we have it. And I'm far more worried in that regard. I'm far more worried that some scientists in the laboratory will create a super virus or a super bacteria that we cannot control. That is a more of an existential threat. Putting an intelligence thing on top of it actually seems to make it less existential to me. It's like, it limits its power. It limits where it can go. It limits the number of things it can do in many ways. A bacteria is something you can't even see. So that's only one of those problems. Yes, exactly. 
So the other one, just in your intuition about intelligence, when you think about intelligence of us humans, do you think of that as something, if you look at intelligence on a spectrum from zero to us humans, do you think you can scale that to something far, far superior to all the mechanisms we've been talking about? I wanna make another point here, Lex, before I get there. Intelligence is the neocortex. It is not the entire brain. The goal is not to make a human. The goal is not to make an emotional system. The goal is not to make a system that wants to have sex and reproduce. Why would I build that? If I wanna have a system that wants to reproduce and have sex, make bacteria, make computer viruses. Those are bad things, don't do that. Those are really bad, don't do those things. Regulate those. But if I just say I want an intelligent system, why does it have to have any of the human like emotions? Why does it even care if it lives? Why does it even care if it has food? It doesn't care about those things. It's just, you know, it's just in a trance thinking about mathematics or it's out there just trying to build the space for it on Mars. That's a choice we make. Don't make human like things, don't make replicating things, don't make things that have emotions, just stick to the neocortex. So that's a view actually that I share but not everybody shares in the sense that you have faith and optimism about us as engineers of systems, humans as builders of systems to not put in stupid, not. So this is why I mentioned the bacteria one. Because you might say, well, some person's gonna do that. Well, some person today could create a bacteria that's resistant to all the known antibacterial agents. So we already have that threat. We already know this is going on. It's not a new threat. So just accept that and then we have to deal with it, right? Yeah, so my point is nothing to do with intelligence. Intelligence is a separate component that you might apply to a system that wants to reproduce and do stupid things. Let's not do that. Yeah, in fact, it is a mystery why people haven't done that yet. My dad is a physicist, believes that the reason, he says, for example, nuclear weapons haven't proliferated amongst evil people. So one belief that I share is that there's not that many evil people in the world that would use, whether it's bacteria or nuclear weapons or maybe the future AI systems to do bad. So the fraction is small. And the second is that it's actually really hard, technically, so the intersection between evil and competent is small in terms of, and that's the. And by the way, to really annihilate humanity, you'd have to have sort of the nuclear winter phenomenon, which is not one person shooting or even 10 bombs. You'd have to have some automated system that detonates a million bombs or whatever many thousands we have. So extreme evil combined with extreme competence. And to start with building some stupid system that would automatically, Dr. Strangelove type of thing, you know, I mean, look, we could have some nuclear bomb go off in some major city in the world. I think that's actually quite likely, even in my lifetime. I don't think that's an unlikely thing. And it'd be a tragedy. But it won't be an existential threat. And it's the same as, you know, the virus of 1917, whatever it was, you know, the influenza. These bad things can happen and the plague and so on. We can't always prevent them. We always try, but we can't. 
But they're not existential threats until we combine all those crazy things together. So on the spectrum of intelligence from zero to human, do you have a sense of whether it's possible to create something several orders of magnitude beyond, or at least double, human intelligence? Talking about the neocortex. I think it's the wrong thing to say double the intelligence. Break it down into different components. Can I make something that's a million times faster than a human brain? Yes, I can do that. Could I make something that has a lot more storage than the human brain? Yes, I could do that. More columns, more copies of columns. Can I make something that attaches to different sensors than a human brain? Yes, I can do that. Could I make something that's distributed? So, yeah, we talked earlier about the different parts of the neocortex voting. They don't have to be co located. Like, you know, they can be all around the place. I could do that too. Those are the levers I have, but is it more intelligent? Well, it depends what I train it on. What is it doing? If it's... Well, so here's the thing. So let's say a larger neocortex, or whatever size allows for higher and higher hierarchies to form, we're talking about reference frames and concepts. Could I have something that's a super physicist or a super mathematician? Yes. And the question is, once you have a super physicist, will they be able to understand things we never could? Do you have a sense that it will be orders of magnitude beyond us, like us compared to ants? Could we ever understand it? Yeah. Most people cannot understand general relativity. It's a really hard thing to get. I mean, yeah, you can paint it in a fuzzy picture, stretchy space, you know? But the field equations to do that and the deep intuitions are really, really hard. And I've tried, I'm unable to do it. Like, it's easy to get special relativity, but general relativity, man, that's too much. And so we already live with this to some extent. The vast majority of people can't understand, actually, what the vast majority of other people actually know. Either we don't put in the effort, or we can't, or we don't have time, or we're just not smart enough, whatever. But we have ways of communicating. Einstein has spoken in a way that I can understand. He's given me analogies that are useful. I can use those analogies in my own work and think about concepts that are similar. It's not stupid. It's not like he's existing on some other plane and there's no connection with my plane in the world here. So that will occur. It already has occurred. That's the point of my story. It already has occurred. We live it every day. One could argue that when we create machine intelligences that think a million times faster than us, it'll be so far beyond us that we can't make the connections. But you know, at the moment, everything that seems really, really hard to figure out in the world, when you actually figure it out, it's not that hard. You know, almost everyone can understand the multiverse. Almost everyone can understand quantum physics. Almost everyone can understand these basic things, even though hardly any people could figure those things out. Yeah, but really understand? You don't need to really understand. Only a few people really understand. You only need to understand the projections, the sprinkles of useful insights from that. That was my example of Einstein, right? His general theory of relativity is one thing that very, very, very few people can get.
And what if we just said those other few people are also artificial intelligences? How bad is that? In some sense they are, right? Yeah, in some sense they are already. I mean, Einstein wasn't a really normal person. He had a lot of weird quirks. And so did the other people who worked with him. So, you know, maybe they already were sort of on this astral plane of intelligence. We live with it already. It's not a problem. It's still useful and, you know. So do you think we are the only intelligent life out there in the universe? I would say that intelligent life has existed and will exist elsewhere in the universe. I'll say that. There's a question about contemporaneous intelligent life, which is hard to even answer when we think about relativity and the nature of space time. You can't say what exactly this time is someplace else in the universe. But I think it's, you know, I do worry a lot about the filter idea, which is that perhaps intelligent species don't last very long. And so we haven't been around very long. And as a technological species, we've been around for almost nothing, you know. What, 200 years, something like that. And we don't have any data, a good data point, on whether it's likely that we'll survive or not. So do I think that there has been intelligent life elsewhere in the universe? Almost certainly, of course. In the past, in the future, yes. Does it survive for a long time? I don't know. This is another reason I'm excited about our work, our work meaning the general world of AI. I think we can build intelligent machines that outlast us. You know, they don't have to be tied to Earth. They don't have to, you know, I'm not saying they're recreating, you know, aliens, I'm just saying, if I asked myself, and this might be a good point to end on here. If I asked myself, you know, what's special about our species? We're not particularly interesting physically. We don't fly, we're not good swimmers, we're not very fast, we're not very strong, you know. It's our brain, that's the only thing. And we are the only species on this planet that's built a model of the world that extends beyond what we can actually sense. We're the only people who know about the far side of the moon, and other universes, I mean, other galaxies, and other stars, and about what happens in the atom. That knowledge doesn't exist anywhere else. It's only in our heads. Cats don't do it, dogs don't do it, monkeys don't do it, it's just us. And that is what we've created that's unique. Not our genes, it's knowledge. And if you asked me, what is the legacy of humanity? What should our legacy be? It should be knowledge. We should preserve our knowledge in a way that it can exist beyond us. And I think the best way of doing that, in fact you have to do it, is it has to go along with intelligent machines that understand that knowledge. It's a very broad idea, but we should be thinking, I call it estate planning for humanity. We should be thinking about what we wanna leave behind when, as a species, we're no longer here. And that'll happen sometime. Sooner or later it's gonna happen. And understanding intelligence and creating intelligence gives us a better chance to prolong... It does give us a better chance to prolong life, yes. It gives us a chance to live on other planets. But even beyond that, I mean, our solar system will disappear one day, just given enough time. So I don't know, I doubt we'll ever be able to travel to other star systems ourselves, but we could send intelligent machines to do that.
So you have an optimistic, a hopeful view of our knowledge, of the echoes of human civilization, living on through the intelligent systems we create? Oh, totally. Well, I think the intelligent systems we create are, in some sense, the vessel for bringing it beyond Earth or making it last beyond humans themselves. How do you feel about that? That they won't be human, quote unquote? Who cares? Human, what is human? Our species is changing all the time. Human today is not the same as human just 50 years ago. What is human? Do we care about our genetics? Why is that important? As I point out, our genetics are no more interesting than a bacterium's genetics. They're no more interesting than a monkey's genetics. What we have, what's unique and what's valuable, is our knowledge, what we've learned about the world. And that is the rare thing. That's the thing we wanna preserve. Who cares about our genes? That's not it. It's the knowledge. It's the knowledge. That's a really good place to end. Thank you so much for talking to me. No, it was fun.
Jeff Hawkins: Thousand Brains Theory of Intelligence | Lex Fridman Podcast #25
The following is a conversation with Sean Carroll. He's a theoretical physicist at Caltech specializing in quantum mechanics, gravity, and cosmology. He's the author of several popular books, one on the arrow of time called From Eternity to Here, one on the Higgs boson called Particle at the End of the Universe, and one on science and philosophy called The Big Picture on the Origins of Life, Meaning, and the Universe Itself. He has an upcoming book on quantum mechanics that you can preorder now called Something Deeply Hidden. He writes one of my favorite blogs on his website, preposterousuniverse.com. I recommend clicking on the Greatest Hits link that lists accessible, interesting posts on the arrow of time, dark matter, dark energy, the Big Bang, general relativity, string theory, quantum mechanics, and the big meta questions about the philosophy of science, God, ethics, politics, academia, and much, much more. Finally, and perhaps most famously, he's the host of a podcast called Mindscape that you should subscribe to and support on Patreon. Along with the Joe Rogan experience, Sam Harris's Making Sense, and Dan Carlin's Hardcore History, Sean's Mindscape podcast is one of my favorite ways to learn new ideas or explore different perspectives and ideas that I thought I understood. It was truly an honor to meet and spend a couple hours with Sean. It's a bit heartbreaking to say that for the first time ever, the audio recorder for this podcast died in the middle of our conversation. There's technical reasons for this, having to do with phantom power that I now understand and will avoid. It took me one hour to notice and fix the problem. So, much like the universe is 68% dark energy, roughly the same amount from this conversation was lost, except in the memories of the two people involved and in my notes. I'm sure we'll talk again and continue this conversation on this podcast or on Sean's. And of course, I look forward to it. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Friedman. And now, here's my conversation with Sean Carroll. What do you think is more interesting and impactful, understanding how the universe works at a fundamental level or understanding how the human mind works? You know, of course this is a crazy, meaningless, unanswerable question in some sense, because they're both very interesting and there's no absolute scale of interestingness that we can rate them on. There's a glib answer that says the human brain is part of the universe, right? And therefore, understanding the universe is more fundamental than understanding the human brain. But do you really believe that once we understand the fundamental way the universe works at the particle level, the forces, we would be able to understand how the mind works? No, certainly not. We cannot understand how ice cream works just from understanding how particles work, right? So I'm a big believer in emergence. I'm a big believer that there are different ways of talking about the world beyond just the most fundamental microscopic one. You know, when we talk about tables and chairs and planets and people, we're not talking the language of particle physics and cosmology. So, but understanding the universe, you didn't say just at the most fundamental level, right? So understanding the universe at all levels is part of that. 
I do think, you know, to be a little bit more fair to the question, there probably are general principles of complexity, biology, information processing, memory, knowledge, creativity that go beyond just the human brain, right? And maybe one could count understanding those as part of understanding the universe. The human brain, as far as we know, is the most complex thing in the universe. So there's, it's certainly absurd to think that by understanding the fundamental laws of particle physics, you get any direct insight on how the brain works. But then there's this step from the fundamentals of particle physics to information processing, which a lot of physicists and philosophers may be a little bit carelessly take when they talk about artificial intelligence. Do you think of the universe as a kind of a computational device? No, to be like, the honest answer there is no. There's a sense in which the universe processes information, clearly. There's a sense in which the universe is like a computer, clearly. But in some sense, I think, I tried to say this once on my blog and no one agreed with me, but the universe is more like a computation than a computer because the universe happens once. A computer is a general purpose machine, right? That you can ask it different questions, even a pocket calculator, right? And it's set up to answer certain kinds of questions. The universe isn't that. So information processing happens in the universe, but it's not what the universe is. And I know your MIT colleague, Seth Lloyd, feels very differently about this, right? Well, you're thinking of the universe as a closed system. I am. So what makes a computer more like a PC, like a computing machine is that there's a human that every once comes up to it and moves the mouse around. So input. Gives it input. Gives it input. And that's why you're saying it's just a computation, a deterministic thing that's just unrolling. But the immense complexity of it is nevertheless like processing. There's a state and then it changes with good rules. And there's a sense for a lot of people that if the brain operates, the human brain operates within that world, then it's simply just a small subset of that. And so there's no reason we can't build arbitrarily great intelligences. Yeah. Do you think of intelligence in this way? Intelligence is tricky. I don't have a definition of it offhand. So I remember this panel discussion that I saw on YouTube. I wasn't there, but Seth Lloyd was on the panel. And so was Martin Rees, the famous astrophysicist. And Seth gave his shtick for why the universe is a computer and explained this. And Martin Rees said, so what is not a computer? And Seth was like, oh, that's a good question. I'm not sure. Because if you have a sufficiently broad definition of what a computer is, then everything is, right? And the simile or the analogy gains force when it excludes some things. You know, is the moon going around the earth performing a computation? I can come up with definitions in which the answer is yes, but it's not a very useful computation. I think that it's absolutely helpful to think about the universe in certain situations, certain contexts, as an information processing device. I'm even guilty of writing a paper called Quantum Circuit Cosmology, where we modeled the whole universe as a quantum circuit. As a circuit. As a circuit, yeah. With qubits kind of thing? With qubits basically, right, yeah. So, and qubits becoming more and more entangled. So do we wanna digress a little bit? Let's do it. 
It's kind of fun. So here's a mystery about the universe that is so deep and profound that nobody talks about it. Space expands, right? And we talk about, in a certain region of space, a certain number of degrees of freedom, a certain number of ways that the quantum fields and the particles in that region can arrange themselves. That number of degrees of freedom in a region of space is arguably finite. We actually don't know how many there are, but there's a very good argument that says it's a finite number. So as the universe expands and space gets bigger, are there more degrees of freedom? If it's an infinite number, it doesn't really matter. Infinity times two is still infinity. But if it's a finite number, then there's more space, so there's more degrees of freedom. So where did they come from? That would mean the universe is not a closed system. There's more degrees of freedom popping into existence. So what we suggested was that there are more degrees of freedom, and it's not that they're not there to start, but they're not entangled to start. So the universe that you and I know of, the three dimensions around us that we see, we said those are the entangled degrees of freedom making up space time. And as the universe expands, there are a whole bunch of qubits in their zero state that become entangled with the rest of space time through the action of these quantum circuits. So what does it mean that there's now more degrees of freedom as they become more entangled? Yeah, so. As the universe expands. That's right, so there's more and more degrees of freedom that are entangled, that are playing the role of part of the entangled space time structure. So the basic, the underlying philosophy is that space time itself arises from the entanglement of some fundamental quantum degrees of freedom. Wow, okay, so at which point is most of the entanglement happening? Are we talking about close to the Big Bang? Are we talking about throughout the lifetime of the universe? Throughout history, yeah. So the idea is that at the Big Bang, almost all the degrees of freedom that the universe could have were there, but they were unentangled with anything else. And that's a reflection of the fact that the Big Bang had a low entropy. It was a very simple, very small place. And as space expands, more and more degrees of freedom become entangled with the rest of the world. Well, I have to ask, Sean Carroll, what do you think of the thought experiment from Nick Bostrom that we're living in a simulation? So I think, let me contextualize that a little bit more. I think people don't actually take this thought experiment seriously. I think it's quite interesting. It's not very useful, but it's quite interesting. From the perspective of AI, a lot of the learning that can be done usually happens in simulation from artificial examples. And so it's a constructive question to ask, how difficult is our real world to simulate? Right. Which is kind of the dual part of it: if we're living in a simulation and somebody built that simulation, if you were to try to do it yourself, how hard would it be? So obviously we could be living in a simulation. If you just want the physical possibility, then I completely agree that it's physically possible. I don't think that we actually are. So take this one piece of data into consideration. You know, we live in a big universe, okay? There's two trillion galaxies in our observable universe with 200 billion stars in each galaxy, et cetera.
It would seem to be a waste of resources to have a universe that big going on just to do a simulation. So in other words, I want to be a good Bayesian. I want to ask under this hypothesis, what do I expect to see? So the first thing I would say is I wouldn't expect to see a universe that was that big, okay? The second thing is I wouldn't expect the resolution of the universe to be as good as it is. So it's always possible that if our superhuman simulators only have finite resources, that they don't render the entire universe, right? That the part that is out there, the two trillion galaxies, isn't actually being simulated fully, okay? But then the obvious extrapolation of that is that only I am being simulated fully. Like the rest of you are just non player characters, right? I'm the only thing that is real. The rest of you are just chat bots. Beyond this wall, I see the wall, but there is literally nothing on the other side of the wall. That is sort of the Bayesian prediction. That's what it would be like to do an efficient simulation of me. So like none of that seems quite realistic. I don't see, I hear the argument that it's just possible and easy to simulate lots of things. I don't see any evidence from what we know about our universe that we look like a simulated universe. Now, maybe you can say, well, we don't know what it would look like, but that's just abandoning your Bayesian responsibilities. Like your job is to say under this theory, here's what you would expect to see. Yeah, so certainly if you think about simulation as a thing that's like a video game where only a small subset is being rendered. But say the entire, all the laws of physics, the entire closed system of the quote unquote universe, it had a creator. Yeah, it's always possible. Right, so that's not useful to think about when you're thinking about physics. The way Nick Bostrom phrases it, if it's possible to simulate a universe, eventually we'll do it. Right. You can use that by the way for a lot of things. Well, yeah. But I guess the question is, how hard is it to create a universe? I wrote a little blog post about this and maybe I'm missing something, but there's an argument that says not only that it might be possible to simulate a universe, but probably if you imagine that you actually attribute consciousness and agency to the little things that we're simulating, to our little artificial beings, there's probably a lot more of them than there are ordinary organic beings in the universe or there will be in the future, right? So there's an argument that not only is being a simulation possible, it's probable because in the space of all living consciousnesses, most of them are being simulated, right? Most of them are not at the top level. I think that argument must be wrong because it follows from that argument that, if we're simulated, but we can also simulate other things, well, but if we can simulate other things, they can simulate other things, right? If we give them enough power and resolution and ultimately we'll reach a bottom because the laws of physics in our universe have a bottom, we're made of atoms and so forth, so there will be the cheapest possible simulations. And if you believe the original argument, you should conclude that we should be in the cheapest possible simulation because that's where most people are. But we don't look like that. It doesn't look at all like we're at the edge of resolution, that we're 16 bit things. It seems much easier to make much lower level things than we are. 
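(Carroll's "good Bayesian" reasoning here can be written out schematically. With H_sim the hypothesis that we are simulated, H_real the alternative, and D the data he points to, a huge universe rendered at uniformly high resolution with no visible cut corners, Bayes' rule gives, in notation chosen just for this note:)

```latex
P(H_{\mathrm{sim}} \mid D) =
  \frac{P(D \mid H_{\mathrm{sim}})\,P(H_{\mathrm{sim}})}
       {P(D \mid H_{\mathrm{sim}})\,P(H_{\mathrm{sim}}) + P(D \mid H_{\mathrm{real}})\,P(H_{\mathrm{real}})}
```

The argument, as stated in the conversation, is that a resource-limited simulator would be unlikely to produce D, so P(D | H_sim) is much smaller than P(D | H_real), which pushes the posterior down even if the prior P(H_sim) is not small. This is only a sketch of the reasoning, not a claim about actual numerical values.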
And also, I questioned the whole approach to the anthropic principle that says we are typical observers in the universe. I think that that's not actually, I think that there's a lot of selection that we can do that we're typical within things we already know, but not typical within all of the universe. So do you think there's intelligent life, however you would like to define intelligent life, out there in the universe? My guess is that there is not intelligent life in the observable universe other than us, simply on the basis of the fact that the likely number of other intelligent species in the observable universe, there's two likely numbers, zero or billions. And if there had been billions, you would have noticed already. For there to be literally like a small number, like, you know, Star Trek, there's a dozen intelligent civilizations in our galaxy, but not a billion, that's weird. That's sort of bizarre to me. It's easy for me to imagine that there are zero others because there's just a big bottleneck to making multicellular life or technological life or whatever. It's very hard for me to imagine that there's a whole bunch out there that have somehow remained hidden from us. The question I'd like to ask is what would intelligent life look like? What I mean by that question and where it's going is what if intelligent life is just in some very big ways different than the one that has on Earth? That there's all kinds of intelligent life that operates at different scales of both size and temporal. Right, that's a great possibility because I think we should be humble about what intelligence is, what life is. We don't even agree on what life is, much less what intelligent life is, right? So that's an argument for humility, saying there could be intelligent life of a very different character, right? Like you could imagine the dolphins are intelligent but never invent space travel because they live in the ocean and they don't have thumbs, right? So they never invent technology, they never invent smelting. Maybe the universe is full of intelligent species that just don't make technology, right? That's compatible with the data, I think. And I think maybe what you're pointing at is even more out there versions of intelligence, intelligence in intermolecular clouds or on the surface of a neutron star or in between the galaxies in giant things where the equivalent of a heartbeat is 100 million years. On the one hand, yes, we should be very open minded about those things. On the other hand, all of us share the same laws of physics. There might be something about the laws of physics, even though we don't currently know exactly what that thing would be, that makes meters and years the right length and timescales for intelligent life. Maybe not, but we're made of atoms, atoms have a certain size, we orbit stars or stars have a certain lifetime. It's not impossible to me that there's a sweet spot for intelligent life that we find ourselves in. So I'm open minded either way, I'm open minded either being humble and there's all sorts of different kinds of life or no, there's a reason we just don't know it yet why life like ours is the kind of life that's out there. Yeah, I'm of two minds too, but I often wonder if our brains is just designed to quite obviously to operate and see the world in these timescales and we're almost blind and the tools we've created for detecting things are blind to the kind of observation needed to see intelligent life at other scales. 
Well, I'm totally open to that, but so here's another argument I would make, we have looked for intelligent life, but we've looked at for it in the dumbest way we can, by turning radio telescopes to the sky. And why in the world would a super advanced civilization randomly beam out radio signals wastefully in all directions into the universe? That just doesn't make any sense, especially because in order to think that you would actually contact another civilization, you would have to do it forever, you have to keep doing it for millions of years, that sounds like a waste of resources. If you thought that there were other solar systems with planets around them, where maybe intelligent life didn't yet exist, but might someday, you wouldn't try to talk to it with radio waves, you would send a spacecraft out there and you would park it around there and it would be like, from our point of view, it'd be like 2001, where there was a monolith. Monolith. There could be an artifact, in fact, the other way works also, right? There could be artifacts in our solar system that have been put there by other technologically advanced civilizations and that's how we will eventually contact them. We just haven't explored the solar system well enough yet to find them. The reason why we don't think about that is because we're young and impatient, right? Like, it would take more than my lifetime to actually send something to another star system and wait for it and then come back. So, but if we start thinking on hundreds of thousands of years or million year time scales, that's clearly the right thing to do. Are you excited by the thing that Elon Musk is doing with SpaceX in general? Space, but the idea of space exploration, even though your, or your species is young and impatient? Yeah. No, I do think that space travel is crucially important, long term. Even to other star systems. And I think that many people overestimate the difficulty because they say, look, if you travel 1% the speed of light to another star system, we'll be dead before we get there, right? And I think that it's much easier. And therefore, when they write their science fiction stories, they imagine we'd go faster than the speed of light because otherwise they're too impatient, right? We're not gonna go faster than the speed of light, but we could easily imagine that the human lifespan gets extended to thousands of years. And once you do that, then the stars are much closer effectively, right? And then what's a hundred year trip, right? So I think that that's gonna be the future, the far future, not my lifetime once again, but baby steps. Unless your lifetime gets extended. Well, it's in a race against time, right? A friend of mine who actually thinks about these things said, you know, you and I are gonna die, but I don't know about our grandchildren. That's, I don't know, predicting the future is hard, but that's at least a plausible scenario. And so, yeah, no, I think that as we discussed earlier, there are threats to the earth, known and unknown, right? Having spread humanity and biology elsewhere is a really important longterm goal. What kind of questions can science not currently answer, but might soon? When you think about the problems and the mysteries before us that may be within reach of science. I think an obvious one is the origin of life. We don't know how that happened. 
There's a difficulty in knowing how it happened historically actually, you know, literally on earth, but starting life from non life is something I kind of think we're close to, right? We're really. You really think so? Like how difficult is it to start life? Well, I've talked to people, including on the podcast about this. You know, life requires three things. Life as we know it. So there's a difference with life, which who knows what it is, and life as we know it, which we can talk about with some intelligence. So life as we know it requires compartmentalization. You need like a little membrane around your cell. Metabolism, you need to take in food and eat it and let that make you do things. And then replication, okay? So you need to have some information about who you are that you pass down to future generations. In the lab, compartmentalization seems pretty easy. Not hard to make lipid bilayers that come into little cellular walls pretty easily. Metabolism and replication are hard, but replication we're close to. People have made RNA like molecules in the lab that I think the state of the art is, they're not able to make one molecule that reproduces itself, but they're able to make two molecules that reproduce each other. So that's okay. That's pretty close. Metabolism is harder, believe it or not, even though it's sort of the most obvious thing, but you want some sort of controlled metabolism and the actual cellular machinery in our bodies is quite complicated. It's hard to see it just popping into existence all by itself. It probably took a while, but we're making progress. And in fact, I don't think we're spending nearly enough money on it. If I were the NSF, I would flood this area with money because it would change our view of the world if we could actually make life in the lab and understand how it was made originally here on earth. And I'm sure it'd have some ripple effects that help cure disease and so on. I mean, just that understanding. So synthetic biology is a wonderful big frontier where we're making cells. Right now, the best way to do that is to borrow heavily from existing biology, right? Well, Craig Venter several years ago created an artificial cell, but all he did was, not all he did, it was a tremendous accomplishment, but all he did was take out the DNA from a cell and put in entirely new DNA and let it boot up and go. What about the leap to creating intelligent life on earth? Yeah. Again, we define intelligence, of course, but let's just even say Homo sapiens, the modern intelligence in our human brain. Do you have a sense of what's involved in that leap and how big of a leap that is? So AI would count in this, or do you really want life? Do you want really an organism in some sense? AI would count, I think. Okay. Yeah, of course, of course AI would count. Well, let's say artificial consciousness, right? So I do not think we are on the threshold of creating artificial consciousness. I think it's possible. I'm not, again, very educated about how close we are, but my impression is not that we're really close because we understand how little we understand of consciousness and what it is. So if we don't have any idea what it is, it's hard to imagine we're on the threshold of making it ourselves. But it's doable, it's possible. I don't see any obstacles in principle. So yeah, I would hold out some interest in that happening eventually. I think in general, consciousness, I think we would be just surprised how easy consciousness is once we create intelligence. 
I think consciousness is a thing that's just something we all fake. Well, good. No, actually, I like this idea that in fact, consciousness is way less mysterious than we think because we're all at every time, at every moment, less conscious than we think we are, right? We can fool things. And I think that plus the idea that you not only have artificial intelligent systems, but you put them in a body, right, give them a robot body, that will help the faking a lot. Yeah, I think creating consciousness in artificial consciousness is as simple as asking a Roomba to say, I'm conscious, and refusing to be talked out of it. Could be, it could be. And I mean, I'm almost being silly, but that's what we do. That's what we do with each other. This is the kind of, that consciousness is also a social construct. And a lot of our ideas of intelligence is a social construct. And so reaching that bar involves something that's beyond, that doesn't necessarily involve the fundamental understanding of how you go from electrons to neurons to cognition. No, actually, I think that is an extremely good point. And in fact, what it suggests is, so yeah, you referred to Kate Darling, who I had on the podcast, and who does these experiments with very simple robots, but they look like animals, and they can look like they're experiencing pain, and we human beings react very negatively to these little robots looking like they're experiencing pain. And what you wanna say is, yeah, but they're just robots. It's not really pain, right? It's just some electrons going around. But then you realize, you and I are just electrons going around, and that's what pain is also. And so what I would have an easy time imagining is that there is a spectrum between these simple little robots that Kate works with and a human being, where there are things that sort of by some strict definition, Turing test level thing are not conscious, but nevertheless walk and talk like they're conscious. And it could be that the future is, I mean, Siri is close, right? And so it might be the future has a lot more agents like that. And in fact, rather than someday going, aha, we have consciousness, we'll just creep up on it with more and more accurate reflections of what we expect. And in the future, maybe the present, for example, we haven't met before, and you're basically assuming that I'm human as it's a high probability at this time because the yeah, but in the future, there might be question marks around that, right? Yeah, no, absolutely. Certainly videos are almost to the point where you shouldn't trust them already. Photos you can't trust, right? Videos is easier to trust, but we're getting worse that, we're getting better at faking them, right? Yeah, so physical embodied people, what's so hard about faking that? So this is very depressing, this conversation we're having right now. So I mean, To me, it's exciting. To me, you're doing it. So it's exciting to you, but it's a sobering thought. We're very bad, right? At imagining what the next 50 years are gonna be like when we're in the middle of a phase transition as we are right now. Yeah, and I, in general, I'm not blind to all the threats. I am excited by the power of technology to solve, to protect us against the threats as they evolve. I'm not as much as Steven Pinker optimistic about the world, but in everything I've seen, all of the brilliant people in the world that I've met are good people. So the army of the good in terms of the development of technology is large. 
Okay, you're way more optimistic than I am. I think that goodness and badness are equally distributed among intelligent and unintelligent people. I don't see much of a correlation there. Interesting. Neither of us have proof. Yeah, exactly. Again, opinions are free, right? Nor definitions of good and evil. We come, without definitions or without data, to opinions. So what kind of questions can science not currently answer and may never be able to answer in your view? Well, the obvious one is what is good and bad? What is right and wrong? I think there are questions like that. Science tells us what happens, what the world is and what it does. It doesn't say what the world should do or what we should do, because we're part of the world. But we are part of the world and we have the ability to feel like something's right, something's wrong. And to make a very long story very short, I think that the idea of moral philosophy is systematizing our intuitions of what is right and what is wrong. And science might be able to predict ahead of time what we will do, but it won't ever be able to judge whether we should have done it or not. So, you're kind of unique in terms of scientists. It doesn't just have to do with podcasts, but even just reaching out, I think you referred to it as sort of doing interdisciplinary science. So you reach out and talk to people that are outside of your discipline, which I always hoped is what science was for. In fact, I was a little disillusioned when I realized that academia is very siloed. Yeah. And so the question is, how, at your own level, how do you prepare for these conversations? How do you think about these conversations? How do you open your mind enough to have these conversations? And maybe a little bit broader, how can you advise other scientists to have these kinds of conversations? Not just on the podcast, the fact that you're doing a podcast is awesome, other people get to hear them, but it's also good to have them without mics in general. It's a good question, but a tough one to answer. I think about a guy I know who's a personal trainer, and he was asked on a podcast, how do we psych ourselves up to do a workout? How do we find the discipline to go and work out? And he's like, why are you asking me? I can't stop working out. I don't need to psych myself up. And likewise, when you ask me, how do you get to have interdisciplinary conversations on all sorts of different things, with all sorts of different people? I'm like, that's what makes me go, right? Like that's, I couldn't stop doing that. I did that long before any of them were recorded. In fact, a lot of the motivation for starting recording it was making sure I would read all these books that I had purchased, right? Like all these books I wanted to read, not enough time to read them. And now I have the motivation, cause I'm gonna interview Pat Churchland, I'm gonna finally read her book. You know, and it's absolutely true that academia is extraordinarily siloed, right? We don't talk to people. We rarely do. And in fact, when we do, it's punished. You know, like the people who do it successfully generally first became very successful within their little siloed discipline. And only then did they start expanding out. If you're a young person, you know, I have graduate students. I try to be very, very candid with them about this, that it's, you know, most graduate students are not going to become faculty members, right? It's a tough road.
And so live the life you wanna live, but do it with your eyes open about what it does to your job chances. And the more broad you are and the less time you spend hyper specializing in your field, the lower your job chances are. That's just an academic reality. It's terrible, I don't like it, but it's a reality. And for some people, that's fine. Like there's plenty of people who are wonderful scientists who have zero interest in branching out and talking to things, to anyone outside their field. But it is disillusioning to me. Some of the, you know, romantic notion I had of the intellectual academic life is belied by the reality of it. The idea that we should reach out beyond our discipline and that is a positive good is just so rare in universities that it may as well not exist at all. But that said, even though you're saying you're doing it like the personal trainer, because you just can't help it, you're also an inspiration to others. Like I could speak for myself. You know, I also have a career I'm thinking about, right? And without your podcast, I may have not have been doing this at all, right? So it makes me realize that these kinds of conversations is kind of what science is about in many ways. The reason we write papers, this exchange of ideas, is it's much harder to do interdisciplinary papers, I would say. And conversations are easier. So conversations is the beginning. And in the field of AI, it's obvious that we should think outside of pure computer vision competitions on a particular data sets. We should think about the broader impact of how this can be, you know, reaching out to physics, to psychology, to neuroscience and having these conversations so that you're an inspiration. And so never know how the world changes. I mean, the fact that this stuff is out there and I've a huge number of people come up to me, grad students, really loving the podcast, inspired by it. And they will probably have that, they'll be ripple effects when they become faculty and so on and so on. We can end on a balance between pessimism and optimism. And Sean, thank you so much for talking to me, it was awesome. No, Lex, thank you very much for this conversation. It was great.
Sean Carroll: The Nature of the Universe, Life, and Intelligence | Lex Fridman Podcast #26
The following is a conversation with Kai-Fu Lee. He's the chairman and CEO of Sinovation Ventures, which manages a $2 billion dual currency investment fund with a focus on developing the next generation of Chinese high tech companies. He's the former president of Google China and the founder of what is now called Microsoft Research Asia, an institute that trained many of the artificial intelligence leaders in China, including CTOs or AI execs at Baidu, Tencent, Alibaba, Lenovo, and Huawei. He was named one of the 100 most influential people in the world by Time Magazine. He's the author of seven bestselling books in Chinese and most recently, the New York Times bestseller called AI Superpowers: China, Silicon Valley, and the New World Order. He has unparalleled experience in working across major tech companies and governments on applications of AI, and so he has a unique perspective on global innovation and the future of AI that I think is important to listen to and think about. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube and iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman. And now, here's my conversation with Kai-Fu Lee. I immigrated from Russia to the US when I was 13. You immigrated to the US at about the same age. The Russian people, the American people, the Chinese people each have a certain soul, a spirit that permeates throughout the generations. So maybe it's a little bit of a poetic question, but could you describe your sense of what defines the Chinese soul? I think the Chinese soul of people today, right, we're talking about people who have had centuries of burden because of the poverty that the country has gone through, and suddenly shined with hope of prosperity in the past 40 years as China opened up and embraced the market economy. And undoubtedly, there are two sets of pressures on the people, that of the tradition, that of facing difficult situations, and that of hope of wanting to be the first to become successful and wealthy. So that's a very strong hunger and a strong desire and strong work ethic that drives China forward. And are there roots to that, not just in this generation but before, that are deeper than just the new economic developments? Is there something that's unique to China that you could speak to that's in the people? Yeah, well, the Chinese tradition is about excellence, dedication, and results. And the Chinese exams and study subjects in schools have traditionally started from memorizing 10,000 characters, not an easy task to start with, and further by memorizing the historic philosophers' literature and poetry. So it really is probably the strongest rote learning mechanism created to make sure people had good memory and remember things extremely well. That, I think, at the same time suppresses breakthrough innovation and also enhances the speed of execution to get results. And that I think characterizes the historic basis of China. That's interesting, because there are echoes of that in Russian education as well, the rote memorization. You have to memorize a lot of poetry. I mean, there's just an emphasis on perfection in all forms that's not conducive to perhaps what you're speaking to, which is creativity. But you think that kind of education holds back the innovative spirit that you might see in the United States?
Well, it holds back the breakthrough innovative spirits that we see in the United States, but it does not hold back the valuable execution oriented, result oriented value creating engines, which we see China being very successful. So is there a difference between a Chinese AI engineer today and an American AI engineer, perhaps rooted in the culture that we just talked about or the education or the very soul of the people or no? And what would your advice be to each if there's a difference? Well, there's a lot that's similar because AI is about mastering sciences, about using known technologies and trying new things, but it's also about picking from many parts of possible networks to use and different types of parameters to tune. And that part is somewhat rote. And it is also, as anyone who's built AI products can tell you a lot about cleansing the data because AI runs better with more data and data is generally unstructured, errorful and unclean. And the effort to clean the data is immense. So I think the better part of American engineering, AI engineering process is to try new things, to do things people haven't done before and to use technology to solve most if not all problems. So to make the algorithm work despite not so great data, find error tolerant ways to deal with the data. The Chinese way would be to basically enumerate to the fullest extent all the possible ways by a lot of machines, try lots of different ways to get it to work and spend a lot of resources and money and time cleaning up data. That means the AI engineer may be writing data cleansing algorithms, working with thousands of people who label or correct or do things with the data. That is the incredible hard work that might lead to better results. So the Chinese engineer would rely on and ask for more and more and more data and find ways to cleanse them and make them work in the system and probably less time thinking about new algorithms that can overcome data or other issues. So where's your intuition? Where do you think the biggest impact in the next 10 years lies? Is it in some breakthrough algorithms or is it in just this at scale rigor, a rigorous approach to data, cleaning data, organizing data onto the same algorithms? What do you think the big impact in the applied world is? Well, if you're really in the company and you have to deliver results, using known techniques and enhancing data seems like the more expedient approach that's very low risk and likely to generate better and better results. And that's why the Chinese approach has done quite well. Now, there are a lot of more challenging startups and problems such as autonomous vehicles, medical diagnosis that existing algorithms probably won't solve. And that would put the Chinese approach more challenged and give them more breakthrough innovation approach, more of an edge on those kinds of problems. So let me talk to that a little more. So my intuition personally is that data can take us extremely far. So you brought up autonomous vehicles and medical diagnosis. So your intuition is that huge amounts of data might not be able to completely help us solve that problem. Right, so breaking that down further in autonomous vehicle, I think huge amounts of data probably will solve trucks driving on highways, which will deliver a significant value and China will probably lead in that. And full L5 autonomous is likely to require new technologies we don't yet know. And that might require academia and great industrial research, both innovating and working together. 
And in that case, the US has an advantage. So the interesting question there is, I don't know if you're familiar with the autonomous vehicle space and the developments with Tesla and Elon Musk. I am. Where they are in fact full steam ahead into this mysterious complex world of full autonomy, L4, L5, and they're trying to solve that purely with data. So the same kind of thing that you're saying is just for highway, where a lot of people share your intuition, they're trying to solve with data. So just to linger on that moment further, do you think it's possible for them to achieve success with simply just a huge amount of this training on edge cases and difficult cases in urban environments, not just highway and so on? I think it would be very hard. One could characterize Tesla's approach as kind of a Chinese strength approach, right? Gather all the data you can and hope that will overcome the problems. But in autonomous driving, clearly a lot of the decisions aren't merely solved by aggregating data and having a feedback loop. There are things that are more akin to human thinking. And how would those be integrated and built? There has not yet been a lot of success integrating human intelligence, or call it expert systems if you will, even though that's a taboo word in machine learning. And the integration of the two types of thinking hasn't yet been demonstrated. And the question is how much can you push a purely machine learning approach? And of course, Tesla also has an additional constraint that they don't have all the sensors. I know that they think it's foolish to use LIDARs, but that's clearly one less very valuable and reliable source of input that they're foregoing, which may also have consequences. I think the advantage of course is capturing data that no one has ever seen before. And in some cases such as computer vision and speech recognition, I have seen Chinese companies accumulate data that's not seen anywhere in the Western world and they have delivered superior results. But then speech recognition and object recognition are relatively suitable problems for deep learning and don't have the potential need for the human intelligence, analytical, planning elements. And the same on the speech recognition side, your intuition is that speech recognition and the machine learning approaches to speech recognition won't take us to a conversational system that can pass the Turing test, which is sort of maybe akin to what driving is. So it needs to have something more than just simple language understanding, simple language generation. Roughly right. I would say that based on purely machine learning approaches, it's hard to imagine it could lead to a full conversational experience across arbitrary domains, which is akin to L5. I'm a little hesitant to use the word Turing test because the original definition was probably too easy. We can probably do that, yeah. The spirit of the Turing test is what I was referring to. Of course. So you've had major leadership and research positions at Apple, Microsoft, Google. So continuing on the discussion of the American, Russian, and Chinese soul and culture and so on, what is the culture of Silicon Valley in contrast to China and maybe the US broadly? And what is the unique culture of each of these three major companies in your view? I think in aggregate, Silicon Valley companies, and we could probably include Microsoft in that, even though they're not in the Valley, really dream big and have visionary goals and believe that technology will conquer all.
And also the self confidence and the self entitlement that whatever they produce, the whole world should use and must use. And those are historically important, I think. Steve Jobs famous quote that he doesn't do focus groups, he looks in the mirror and asks the person in the mirror, what do you want? And that really is an inspirational comment that says, the great company shouldn't just ask users what they want, but develop something that users will know they want when they see it, but they could never come up with themselves. I think that is probably the most exhilarating description of what the essence of Silicon Valley is, that this brilliant idea could cause you to build something that couldn't come out of the focus groups or AB tests. And iPhone would be an example of that. No one in the age of Blackberry would write down they want an iPhone or multi touch. A browser might be another example. No one would say they want that in the days of FTP, but once they see it, they want it. So I think that is what Silicon Valley is best at. But it also comes with, it came with a lot of success. These products became global platforms and there were basically no competitors anywhere. And that has also led to a belief that these are the only things that one should do, that companies should not tread on other companies territory so that a Groupon and a Yelp and then OpenTable and the Grubhub would each feel, okay, I'm not gonna do the other company's business because that would not be the pride of innovating what each of these four companies have innovated. But I think the Chinese approach is do whatever it takes to win. And it's a winner take all market. And in fact, in the internet space, the market leader will get predominantly all the value extracted out of the system. So, and the system isn't just defined as one narrow category, but gets broader and broader. So it's amazing ambition for success and domination of increasingly larger product categories leading to clear market winner status and the opportunity to extract tremendous value. And that develops a practical, result oriented, ultra ambitious winner take all gladiatorial mentality. And if what it takes is to build what the competitors built, essentially a copycat that can be done without infringing laws. If what it takes is to satisfy a foreign country's need by forking the code base and building something that looks really ugly and different, they'll do it. So it's contrasted very sharply with the Silicon Valley approach. And I think the flexibility and the speed and execution has helped the Chinese approach. And I think the Silicon Valley approach is potentially challenged if every Chinese entrepreneur is learning from the whole world, US and China, and the American entrepreneurs only look internally and write off China as a copycat. And the second part of your question about the three companies. The unique elements of the three companies perhaps. Yeah, I think Apple represents while the user please the user and the essence of design and brand and it's the one company and perhaps the only tech company that draws people with a strong, serious desire for the product and the willingness to pay a premium because of the halo effect of the brand which came from the attention to detail and great respect for user needs. 
Microsoft represents a platform approach that builds giant products that become very strong modes that others can't do because it's well architected at the bottom level and the work is efficiently delegated to individuals and then the whole product is built by adding small parts that sum together. So it's probably the most effective high tech assembly line that builds a very difficult product that and the whole process of doing that is kind of a differentiation and something competitors can't easily repeat. Are there elements of the Chinese approach in the way Microsoft went about assembling those little pieces and dominating, essentially dominating the market for a long time or do you see those as distinct? I think there are elements that are the same. I think the three American companies that had or have Chinese characteristics and obviously as well as American characteristics are Microsoft, Facebook and Amazon. Yes, that's right, Amazon. Because these are companies that will tenaciously go after adjacent markets, build up strong product offering and find ways to extract greater value from a sphere that's ever increasing and they understand the value of the platforms. So that's the similarity and then with Google, I think it's a genuinely value oriented company that does have a heart and soul and that wants to do great things for the world by connecting information and that has also very strong technology genes and wants to use technology and has found out of the box ways to use technology to deliver incredible value to the end user. If you can look at Google, for example, you mentioned heart and soul. There seems to be an element where Google is after making the world better. There's a more positive view. They used to have the slogan, don't be evil. And Facebook a little bit more has a negative tend to it. At least in the perception of privacy and so on. Do you have a sense of how these different companies can achieve, because you've talked about how much we can make the world better in all these kinds of ways with AI. What is it about a company that can make, give it a heart and soul, gain the trust of the public and just actually just not be evil and do good for the world? It's really hard and I think Google has struggled with that. First, the don't do evil mantra is very dangerous because every employee's definition of evil is different. And that has led to some difficult employee situations for them. So I don't necessarily think that's a good value statement, but just watching the kinds of things Google or its parent company Alphabet does in new areas like healthcare, like eradicating mosquitoes, things that are really not in the business of a internet tech company. I think that shows that there's a heart and soul and desire to do good and willingness to put in the resources to do something when they see it's good, they will pursue it. That doesn't necessarily mean it has all the trust of the users. I realize while most people would view Facebook as the primary target of their recent unhappiness about Silicon Valley companies, many would put Google in that category. And some have named Google's business practices as predatory also. So it's kind of difficult to have the two parts of a body. The brain wants to do what it's supposed to do for a shareholder, maximize profit. And then the heart and soul wants to do good things that may run against what the brain wants to do. 
So in this complex balancing that these companies have to do, you've mentioned that you're concerned about a future where too few companies like Google, Facebook, Amazon are controlling our data or controlling too much of our digital lives. Can you elaborate on this concern and perhaps do you have a better way forward? I think I'm hardly the most vocal complainer about this. Sure, of course. There are a lot louder complainers out there. I do observe that having a lot of data does perpetuate their strength and limits competition in many spaces. But I also believe AI is much broader than the internet space. So the entrepreneurial opportunities still exist in using AI to empower financial, retail, manufacturing, education applications. So I don't think it's quite a case of full monopolistic dominance that totally stifles innovation. But I do believe in their areas of strength it's hard to dislodge them. I don't know if I have a good solution. Probably the best solution is to let the entrepreneurial VC ecosystem work well and find all the places that can create the next Google, the next Facebook. So there will always be an increasing number of challengers. In some sense that has happened a little bit. You see Uber, Airbnb having emerged despite the strength of the big three. And I think China as an environment may be more interesting for the emergence because if you look at companies between, let's say, $50 to $300 billion, China has emerged more of such companies than the US in the last three to four years because of the larger marketplace, because of the more fearless nature of the entrepreneurs. And the Chinese giants are just as powerful as American ones. Tencent, Alibaba are very strong, but ByteDance has emerged worth $75 billion, and Ant Financial, while it's Alibaba affiliated, is nevertheless independent and worth $150 billion. And so I do think if we start to extend to traditional businesses, we will see very valuable companies. So it's probably not the case that in five or 10 years we'll still see the whole world with these five companies having such dominance. So you've mentioned a couple of times this fascinating world of entrepreneurship in China, of the fearless nature of the entrepreneur. So can you maybe talk a little bit about what it takes to be an entrepreneur in China? What are the strategies that are undertaken? What are the ways to achieve success? What is the dynamic of VC funding, of the way the government helps companies, and so on? What are the interesting aspects here that are distinct from, that are different from the Silicon Valley world of entrepreneurship? Well, many of the listeners probably still would brand Chinese entrepreneurs as copycats. And no doubt 10 years ago, that would not be an inaccurate description. Back 10 years ago, an entrepreneur probably could not get funding if he or she could not describe what product he or she is copying from the US. The first question is who has proven this business model, which is a nice way of asking who are you copying? And that reason is understandable because China had a much lower internet penetration and didn't have enough indigenous experience to build innovative products. And secondly, the internet was emerging. Lean startup was the way to do things, building a first minimally viable product and then expanding was the right way to go. And the American successes have given a shortcut that if you built your minimally viable product based on an American product, it's guaranteed to be a decent starting point. Then you tweak it afterwards.
So as long as there is no IP infringement, which as far as I know there hasn't been in the mobile and AI spaces, that's a much better shortcut. And I think Silicon Valley would view that as still not very honorable because that's not your own idea to start with, but you can't really at the same time believe every idea must be your own and believe in the lean startup methodology, because lean startup is intended to try many, many things and then converge on what works. And it's meant to be iterated and changed. So finding a decent starting point without legal violations, there should be nothing morally dishonorable about that. Yeah, so just a quick pause on that. It's fascinating. Why is that not honorable, right? It's exactly as you formulated. It seems like a perfect start for a business: to look at Amazon and say, okay, we'll do exactly what Amazon is doing, let's start there in this particular market, and then let's out innovate them from that starting point, come up with new ways. I mean, the word copycat just sounds bad, but is it wrong to be a copycat? It just seems like a smart strategy, but yes, it doesn't have a heroic nature to it, like Steve Jobs or Elon Musk coming up with something completely new. Yeah, I like the way you describe it. It's a nonheroic, acceptable way to start the company and maybe more expedient. So that's, I think, baggage for Silicon Valley that if it doesn't let go of, then it may limit the ultimate ceiling of the company. Take Snapchat as an example. I think, you know, Evan's brilliant. He built a great product, but he's very proud that he wants to build his own features, not copy others. While Facebook was more willing to copy his features, and you see what happens in the competition. So I think putting that handcuff on the company would limit its ability to reach the maximum potential. So back to the Chinese environment, copying was merely a way to learn from the American masters. Just like if we learn to play piano or to paint, you start by copying. You don't start by innovating when you don't have the basic skill sets. So very amazingly, the Chinese entrepreneurs about six years ago started to branch off with these lean startups built on American ideas to build better products than American products. But they did start from the American idea. And today WeChat is better than WhatsApp, Weibo is better than Twitter, Zhihu is better than Quora and so on. So that I think is Chinese entrepreneurs going to step two. And then step three is once these entrepreneurs have done one or two of these companies, they now look at the Chinese market and the opportunities and come up with ideas that didn't exist elsewhere. So products like Ant Financial, which includes Alipay, which is mobile payments, and also the financial products for loans built on that. And also in education, VIPKID, and in social video, social network, TikTok, and in social eCommerce, Pinduoduo, and then in bike sharing, Mobike. These are all Chinese innovated products that now are being copied elsewhere. So an additional interesting observation is some of these products are built on unique Chinese demographics, which may not work in the US, but may work very well in Southeast Asia, Africa, and other developing regions that are a few years behind China. And a few of these products maybe are universal and are getting traction even in the United States, such as TikTok.
So this whole ecosystem is supported by VCs as a virtuous cycle, because a large market with innovative entrepreneurs will draw a lot of money and then invest in these companies. As the market gets larger and larger, the China market is easily three, four times larger than the US, they will create greater value and greater returns for the VCs, thereby raising even more money. So at Sinovation Ventures, our first fund was $15 million, our last fund was $500 million. So it reflects the valuation of the companies and us going multi stage and things like that. It also has government support, but not in the way most Americans would think of it. The government actually leaves the entrepreneurial space as a private enterprise, sort of self regulating, and the government would build infrastructure around it to make it work better. For example, the Mass Entrepreneurship, Mass Innovation Plan builds 8,000 incubators, so the pipeline is very strong to the VCs. For autonomous vehicles, the Chinese government is building smart highways with sensors, smart cities that separate pedestrians from cars, that may allow an initially inferior autonomous vehicle company to launch a car with lower casualty rates because the roads or the city are smart. And the Chinese government at local levels would have these guiding funds acting as LPs, passive LPs, to funds. And when the fund makes money, part of the money made is given back to the GPs and potentially other LPs to increase everybody's return at the expense of the government's return. So that's an interesting incentive that entrusts the task of choosing entrepreneurs to VCs, who are better at it than the government, by letting some of the profits move that way. So this is really fascinating, right? So I look at the Russian government as a case study where, let me put it this way, there's no such government driven large scale support of entrepreneurship. And probably the same is true in the United States, but the entrepreneurs themselves kind of find a way. So maybe in the form of advice or explanation, how did the Chinese government arrive at being so supportive of entrepreneurship, so forward thinking in this particular way, at such a large scale? And also perhaps, how can we copy it in other countries? How can we encourage other governments, like even the United States government, to support infrastructure for autonomous vehicles in that same kind of way, perhaps? Yes, so these techniques are the result of several key things, some of which may be learnable, some of which may be very hard. One is just trial and error and watching what everyone else is doing. I think it's important to be humble and not feel like you know all the answers. The guiding funds idea came from Singapore, which came from Israel. And China made a few tweaks and turned it into a competition, because the Chinese cities and government officials kind of compete with each other because they all want to make their city more successful so they can get to the next level in their political career. And it's somewhat competitive. So the central government made it a bit of a competition. Everybody has a budget. They can put it on AI or they can put it on bio or they can put it on energy. And then whoever gets the results, the city shines, the people are better off, the mayor gets a promotion. So it's kind of almost like an entrepreneurial environment for local governments to see who can do a better job. And also many of them try different experiments.
Some have given awards to very smart researchers: just give them money and hope they'll start a company. Some have given money to academic research labs, maybe government research labs, to see if they can spin off some companies from the science lab or something like that. Some have tried to recruit overseas Chinese to come back and start companies. And they've had mixed results. The one that worked the best was the guiding funds. So it's almost like a lean startup idea where people try different things and what works sticks and everybody copies. So now every city has a guiding fund. So that's how that came about. The autonomous vehicle and the massive spending in highways and smart cities, that's a Chinese way. It's about building infrastructure to facilitate. It's a clear division of the government's responsibility from the market. The market should do everything in a private, free way, but there are things the market can't afford to do, like infrastructure. So the government always appropriates large amounts of money for infrastructure building. This happens not only with autonomous vehicles and AI, but happened with 3G and 4G. You'll find that the Chinese wireless reception is better than the US because of massive spending that tries to cover the whole country, whereas in the US it may be a little spotty. It's government driven, because I think they view the coverage of cell access and 3G, 4G access to be governmental infrastructure spending as opposed to capitalistic. Of course, the state owned enterprises are also publicly traded, but they also carry a government responsibility to deliver infrastructure to all. So it's a different way of thinking that may be very hard to inject into Western countries, to say starting tomorrow, bandwidth infrastructure and highways are gonna be governmental spending with some characteristics. What's your sense, and sorry to interrupt, but because it's such a fascinating point, do you think in the autonomous vehicle space it's possible to solve the problem of full autonomy without significant investment in infrastructure? Well, that's really hard to speculate. I think it's not a yes, no question, but a how long does it take question. 15 years, 30 years, 45 years? Clearly with infrastructure augmentation, whether it's the road, the city or whole city planning, building a new city, I'm sure that will accelerate the day of L5. I'm not knowledgeable enough, and it's hard to predict even when we're knowledgeable, because a lot of it is speculative. But in the US, I don't think people would consider building a new city the size of Chicago to make it the AI slash autonomous city. There are smaller ones being built, I'm aware of that. But is infrastructure spend really impossible for the US or Western countries? I don't think so. The US highway system was built, was that during President Eisenhower or Kennedy? Eisenhower, yeah. So maybe historians can study how President Eisenhower got the resources to build this massive infrastructure that surely gave the US a tremendous amount of prosperity over the next decade, if not century. If I may comment on that, then, it takes us to artificial intelligence a little bit, because in order to build infrastructure, it creates a lot of jobs. So I'd actually be interested in what you would say, you talk in your book about all kinds of jobs that could and could not be automated. I wonder if building infrastructure is one of the jobs that would not be easily automated.
Something you could think about because I think you've mentioned somewhere in the talk or that there might be, as jobs are being automated, a role for government to create jobs that can't be automated. Yes, I think that's a possibility. Back in the last financial crisis, China put a lot of money to basically give this economy a boost and a lot of it went into infrastructure building. And I think that's a legitimate way at the government level to deal with the employment issues as well as build out the infrastructure as long as the infrastructures are truly needed and as long as there is an employment problem, which no, we don't know. So maybe taking a little step back, if you've been a leader and a researcher in AI for several decades, at least 30 years, so how has AI changed in the West and the East as you've observed, as you've been deep in it over the past 30 years? Well, AI began as the pursuit of understanding human intelligence and the term itself represents that, but it kind of drifted into the one sub area that worked extremely well, which is machine intelligence. And that's actually more using pattern recognition techniques to basically do incredibly well on a limited domain, large amount of data, but relatively simple kinds of planning tasks and not very creative. So we didn't end up building human intelligence. We built a different machine that was a lot better than us, some problems, but nowhere close to us on other problems. So today, I think a lot of people still misunderstand when we say artificial intelligence and what various products can do, people still think it's about replicating human intelligence, but the products out there really are closer to having invented the internet or the spreadsheet or the database and getting broader adoption. And speaking further to the fears, near term fears that people have about AI, so you're commenting on the sort of general intelligence that people in the popular culture from sci fi movies have a sense about AI, but there's practical fears about AI, the narrow AI that you're talking about of automating particular kinds of jobs and you talk about them in the book. So what are the kinds of jobs in your view that you see in the next five, 10 years beginning to be automated by AI systems algorithms? Yes, this is also maybe a little bit counterintuitive because it's the routine jobs that will be displaced the soonest and they may not be displaced entirely, maybe 50%, 80% of a job, but when the workload drops by that much, employment will come down. And also another part of misunderstanding is most people think of AI replacing routine jobs than they think of the assembly line, the workers. Well, that will have some effect, but it's actually the routine white collar workers that's easiest to replace because to replace a white collar worker, you just need software. To replace a blue collar worker, you need robotics, mechanical excellence, and the ability to deal with dexterity and maybe even unknown environments, very, very difficult. So if we were to categorize the most dangerous white collar jobs, they would be things like back office, people who copy and paste and deal with simple computer programs and data and maybe paper and OCR, and they don't make strategic decisions. They basically facilitate the process. These softwares and paper systems don't work. So you have people dealing with new employee orientation, searching for past lawsuits and financial documents, and doing reference check. So basic searching and management of data. 
Those are the jobs most in danger of being lost. In addition to the white collar repetitive work, a lot of simple interaction work can also be taken care of, such as telesales, telemarketing, customer service, as well as many physical jobs that are in the same location and don't require a high degree of dexterity. So fruit picking, dishwashing, assembly line inspection are jobs in that category. So altogether, back office is a big part. And the blue collar may be smaller initially, but over time, AI will get better. And when we start to get, over the next 15, 20 years, the ability to actually have the dexterity of doing assembly line work, that's a huge chunk of jobs. And when autonomous vehicles start to work, initially starting with truck drivers, but eventually all drivers, that's another huge group of workers. So I see modest numbers in the next five years, but increasing rapidly after that. On the worry of the jobs that are in danger and the gradual loss of jobs, I'm not sure if you're familiar with Andrew Yang. Yes, I am. So there's a candidate for president of the United States, Andrew Yang, whose platform is based in part around job loss due to automation, and also, in addition, the need perhaps for universal basic income to support folks who lose their jobs due to automation and so on, and in general to support people in a complex, unstable job market. So what are your thoughts about his concerns, him as a candidate, his ideas in general? I think his thinking is generally in the right direction, but his approach as a presidential candidate may be a little bit ahead of the times. And I think the displacements will happen, but will they happen soon enough for people to agree to vote for him? The unemployment numbers are not very high yet. And I think he and I have the same challenge. If I want to theoretically convince people this is an issue, and he wants to become the president, people have to see how this can be the case when unemployment numbers are low. So that is the challenge. And I do agree with him on the displacement issue. On universal basic income, at a very vanilla level, I don't agree with it, because I think the main issue is retraining. So people need to be incented, not just by giving them a monthly $2,000 check or $1,000 check to do whatever they want, because they don't have the know how to know what to retrain in, to go into what type of a job. And guidance is needed. And retraining is needed because historically, in technology revolutions, when routine jobs were displaced, new routine jobs came up. So there was always room for that. But with AI and automation, the whole point is replacing all routine jobs eventually. So there will be fewer and fewer routine jobs. And AI will create jobs, but it won't create routine jobs, because if it creates routine jobs, why wouldn't AI just do it? So therefore the people who are losing the jobs are losing routine jobs. The jobs that are becoming available are non routine jobs. So the social stipend that needs to be put in place is for the routine workers who lost their jobs to be retrained, maybe in six months, maybe in three years, it takes a while to retrain for a non routine job, and then take on a job that will last for that person's lifetime. Now, having said that, if you look deeply into Andrew's document, he does cater for that. So I'm not disagreeing with what he's trying to do. But for simplification, sometimes he just says UBI, but simple UBI wouldn't work.
And I think you've mentioned elsewhere that the goal isn't necessarily to give people enough money to survive or live, or even to prosper. The point is to give them a job that gives them meaning. That meaning is extremely important. That our employment, at least in the United States and perhaps it carries across the world, provides something that's, forgive me for saying, greater than money. It provides meaning. So now, what kind of jobs do you think can't be automated? Can you talk a little bit about creativity and compassion in your book? What aspects do you think it's difficult to automate for an AI system? Because an AI system is currently merely optimizing. It's not able to reason, plan, or think creatively or strategically. It's not able to deal with complex problems. It can't come up with a new problem and solve it. A human needs to find the problem and pose it as an optimization problem, then have the AI work at it. So an AI would have a very hard time discovering a new drug or discovering a new style of painting or dealing with complex tasks such as managing a company that isn't just about optimizing the bottom line, but also about employee satisfaction, corporate brand, and many, many other things. So that is one category of things. And because these things are challenging, creative, complex, doing them creates a high degree of satisfaction and therefore appealing to our desire for working, which isn't just to make the money, make the ends meet, but also that we've accomplished something that others maybe can't do or can't do as well. Another type of job that is much numerous would be compassionate jobs, jobs that require compassion, empathy, human touch, human trust. AI can't do that because AI is cold, calculating, and even if it can fake that to some extent, it will make errors and that will make it look very silly. And also, I think even if AI did okay, people would want to interact with another person, whether it's for some kind of a service or a teacher or a doctor or a concierge or a masseuse or a bartender. There are so many jobs where people just don't want to interact with a cold robot or software. I've had an entrepreneur who built an elderly care robot and they found that the elderly really only use it for customer service. And not, but not to service the product, but they click on customer service and the video of a person comes up and then the person says, how come my daughter didn't call me? Let me show you a picture of her grandkids. So people yearn for that people, people interaction. So even if robots improved, people just don't want it. And those jobs are going to be increasing because AI will create a lot of value, $16 trillion to the world in the next 10 years. Next 11 years, according to PWC. And that will give people money to enjoy services, whether it's eating a gourmet meal or tourism and traveling or having concierge services, the services revolving around every dollar of that $16 trillion will be tremendous. It will create more opportunities that are to service the people who did well through AI with things. But even at the same time, the entire society is very much short in need of many service oriented, compassionate oriented jobs. The best example is probably in healthcare services. There's going to be 2 million new jobs, not counting replacement, just brand new incremental jobs in the next six years in healthcare services. That includes nurses, orderly in the hospital, elderly care and also at home care is particularly lacking. 
And those jobs are not likely to be filled. So there's likely to be a shortage. And the reason they're not filled is simply because they don't pay very well and the social status of these jobs is not very good. So they pay about half as much as a heavy equipment operator, which will be replaced a lot sooner. And they pay probably comparably to someone on the assembly line. And so if we ignore all the other issues and just think about satisfaction from one's job, someone repetitively doing the same manual action at an assembly line, that can't create a lot of job satisfaction, but someone taking care of a sick person and getting a hug and thank you from that person and the family, I think is quite satisfying. So if only we could fix the pay for service jobs, there are plenty of jobs that require some training or a lot of training for the people coming off the routine jobs to take. We can easily imagine someone who was maybe a cashier at the grocery store, as stores become automated, learns to become a nurse or an at home care worker. I also do want to point out the blue collar jobs are going to stay around a bit longer, some of them quite a bit longer. AI cannot be told to go clean an arbitrary home. That's incredibly hard. Arguably it's an L5 level of difficulty, right? And AI cannot be a good plumber, because a plumber is almost like a mini detective who has to figure out where the leak came from. Yet AI probably can do assembly line work and auto mechanics and so on. So one has to study which blue collar jobs are going away and facilitate retraining for the people to go into the ones that won't go away or maybe even will increase. I mean, it is fascinating that it's easier to build a world champion chess player than it is to build a mediocre plumber. Yes, right. Very true for AI. And that goes counter to a lot of people's intuitive understanding of what artificial intelligence is. So it sounds, I mean, you're painting a pretty optimistic picture about retraining, about the number of jobs, and actually the meaningful nature of those jobs once we automate the repetitive tasks. So overall, are you optimistic about a future where much of the repetitive tasks are automated? That there is a lot of room for humans, for the compassionate, for the creative input that only humans can provide? I am optimistic if we start to take action. If we have no action in the next five years, I think it's going to be hard to deal with the devastating losses that will emerge. So if we start thinking about retraining, maybe with the low hanging fruit, explaining to vocational schools why they should train more plumbers than auto mechanics, maybe starting with some government subsidy for corporations to have more training positions. We start to explain to people why retraining is important. We start to think about the future of education, how that needs to be tweaked for the era of AI. If we start to make incremental progress and a greater number of people understand, then there's no reason to think we can't deal with this, because this technological revolution is arguably similar to what electricity, the industrial revolution, and the internet brought about. Do you think there's a role for policy, for governments to step in, to help with policy to create a better world? Absolutely, and governments don't have to believe unemployment will go up, and they don't have to believe automation will be this fast, to do something. Revamping vocational schools would be one example.
Another is if there's a big gap in healthcare service employment, and we know that a country's population is growing older, with more longevity, because people over 80 require five times as much care as those under 80, then it is a good time to incentivize training programs for elderly care and to find ways to improve the pay. Maybe one way would be to offer, as part of Medicare or the equivalent program, for people over 80 to be entitled to a few hours of elderly care at home, and then that might be reimbursable, and that will stimulate the service industry around the policy. Do you have concerns about large entities, whether it's governments or companies, controlling the future of AI development in general? So we talked about companies. Do you have a better sense that governments can better represent the interests of the people than companies, or do you believe companies are better at representing the interests of the people? Or is there no easy answer? I don't think there's an easy answer because it's a double edged sword. The companies and governments can provide better services with more access to data and more access to AI, but that also leads to greater power, which can lead to uncontrollable problems, whether it's monopoly or corruption in the government. So I think one has to be careful to look at how much data companies and governments have, and some kind of checks and balances would be helpful. So again, I come from Russia. There's something called the Cold War. So let me ask a difficult question here looking at conflict. Steven Pinker has written a great book arguing that conflict all over the world is decreasing in general. But do you have a sense, having written the book AI Superpowers, do you see a major international conflict potentially arising between major nations, whatever they are, whether it's Russia, China, European nations, United States or others in the next 10, 20, 50 years around AI, around the digital space, cyberspace? Do you worry about that? Is that something we need to think about and try to alleviate or prevent? I believe in greater engagement. A lot of the worries about more powerful AI are based on an arms race metaphor. And when you extrapolate into military kinds of scenarios, AI can automate weapons, and autonomous weapons need to be controlled somehow, and autonomous decision making can leave not enough time to fix international crises. So I actually believe a Cold War mentality would be very dangerous because should two countries rely on AI to make certain decisions and they don't even talk to each other, they do their own scenario planning, then something could easily go wrong. I think engagement, interaction, some protocols to avoid inadvertent disasters are actually needed. So it's natural for each country to want to be the best, whether it's in nuclear technologies or AI or bio. But I think it's important to realize if each country has a black box AI and they don't talk to each other, that probably presents greater challenges to humanity than if they interacted. I think there can still be competition, but with some degree of protocol for interaction, just like when there was a nuclear competition, there were some protocols for deterrence among the US, Russia, and China. And I think that engagement is needed. So of course, we're still far from AI presenting that kind of danger. But what I worry the most about is the level of engagement seems to be coming down.
The level of distrust seems to be going up, especially from the US towards other large countries such as China and, of course, Russia, yes. Is there a way to make that better? Beautifully put, the level of engagement and even just basic trust and communication, as opposed to sort of making artificial enemies out of particular countries. Do you have a sense of how we can make it better? Actionable items that as a society we can take on? I'm not an expert at geopolitics, but I would say that we look pretty foolish as humankind when we are faced with the opportunity to create $16 trillion for humanity, and yet we're not solving fundamental problems with parts of the world still in poverty. And for the first time, we have the resources to overcome poverty and hunger. We're not using them on that, but we're fueling competition among superpowers. And that's a very unfortunate thing. If we become utopian for a moment, imagine a benevolent world government that has this $16 trillion and maybe some AI to figure out how to use it to deal with diseases and problems and hate and things like that. The world would be a lot better off. So what is wrong with the current world? I think the people with more skill than I should think about this. And then the geopolitics issue with superpower competition is one side of the issue. There's another side which I worry about maybe even more, which is as the $16 trillion all gets made by the US and China and a few of the other developed countries, the poorer countries will get nothing because they don't have the technology, and the wealth disparity and inequality will increase. So a poorer country with a large population will not only fail to benefit from the AI boom or other technology booms, but they will have workers who previously had hoped they could do the China model and do outsourced manufacturing, or the India model and do outsourced business processes or call centers. Well, all those jobs are gonna be gone in 10 or 15 years. So the individual citizen may be a net liability, I mean, financially speaking, to a poorer country, and not an asset to claw itself out of poverty. So in that kind of situation, these large countries with not much tech are going to be facing a downward spiral and it's unclear what could be done. And then when we look back and say there's $16 trillion being created and it's all being kept by the US, China and other developed countries, it just doesn't feel right. So I hope people who know about geopolitics can find solutions; that's beyond my expertise. So different countries that we've talked about have different value systems. If you look at the United States, to an almost extreme degree, there is an absolute desire for freedom of speech. If you look at the country where I was raised, that desire amongst the people is not as elevated as it is in the United States, where it's basically fundamental to the essence of what it means to be American, right? And the same is true with China, there's different value systems. There's some censorship of internet content that China and Russia and many other countries undertake. Do you see that having effects on innovation, other aspects of some of the tech stuff, the AI development we talked about, and maybe from another angle, do you see that changing in different ways over the next 10 years, 20 years, 50 years as China continues to grow as it does now in its tech innovation? There's a common belief that full freedom of speech and expression is correlated with creativity, which is correlated with entrepreneurial success.
I think empirically we have seen that is not true and China has been successful. That's not to say the fundamental values are not right or not the best, but it's just that perfect correlation isn't there. It's hard to read the tea leaves on opening up or not in any country, and I've not been very good at that in my past predictions, but I do believe every country shares a lot of fundamental values for the longterm. So China is drafting its privacy policy for individual citizens, and they don't look that different from the American or European ones. So people do want to protect their privacy and have the opportunity to express and I think the fundamental values are there. The question is in the execution and timing, how soon or when will that start to open up? So as long as each government knows ultimately people want that kind of protection, there should be a plan to move towards that as to when or how and I'm not an expert. On the point of privacy to me, it's really interesting. So AI needs data to create a personalized awesome experience, right? I'm just speaking generally in terms of products. And then we have currently, depending on the age and depending on the demographics of who we're talking about, some people are more or less concerned about the amount of data they hand over. So in your view, how do we get this balance right that we provide an amazing experience to people that use products? You look at Facebook, the more Facebook knows about you, yes, it's scary to say, the better it can probably, better experience it can probably create. So in your view, how do we get that balance right? Yes, I think a lot of people have a misunderstanding that it's okay and possible to just rip all the data out from a provider and give it back to you. So you can deny them access to further data and still enjoy the services we have. If we take back all the data, all the services will give us nonsense. We'll no longer be able to use products that function well in terms of right ranking, right products, right user experience. So yet I do understand we don't want to permit misuse of the data from legal policy standpoint. I think there can be severe punishment for those who have egregious misuse of the data. That's I think a good first step. Actually China in this side on this aspect has very strong laws about people who sell or give data to other companies. And that over the past few years, since that law came into effect, pretty much eradicated the illegal distribution, sharing of data. Additionally, I think giving, I think technology is often a very good way to solve technology misuse. So can we come up with new technologies that will let us have our cake and eat it too? People are looking into homomorphic encryption, which is letting you keep the data, have it encrypted and train on encrypted data. Of course, we haven't solved that one yet, but that kind of direction may be worth pursuing. Also federated learning, which would allow one hospital to train on its hospital's patient data fully because they have a license for that. And then hospitals would then share their models, not data, but models to create a super AI. And that also maybe has some promise. So I would want to encourage us to be open minded and think of this as not just the policy binary, yes, no, but letting the technologists try to find solutions to let us have our cake and eat it too, or have most of our cake and eat most of it too. Finally, I think giving each end user a choice is important and having transparency is important. 
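As a rough illustration of the federated learning idea described above, here is a minimal sketch, assuming each hospital fits a simple linear model on its own synthetic patient data and only the learned weights, never the raw records, are sent to a coordinator for averaging. The datasets, model, and weighting scheme are hypothetical placeholders, not a description of any real deployment.

```python
import numpy as np

# A minimal sketch of federated averaging: each "hospital" fits a model on its
# own synthetic patient data, and only the learned weights leave the site.

rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.5, 0.3])  # hypothetical underlying relationship

def local_train(n_patients):
    """Train a least-squares model on one hospital's private data."""
    X = rng.normal(size=(n_patients, 3))                  # private features, never shared
    y = X @ true_w + 0.1 * rng.normal(size=n_patients)    # private outcomes, never shared
    w, *_ = np.linalg.lstsq(X, y, rcond=None)             # local model weights
    return w, n_patients

# Each hospital trains locally; only (weights, sample count) are shared.
local_models = [local_train(n) for n in (200, 500, 120)]

# A coordinator averages the weights, weighted by each site's data size.
total = sum(n for _, n in local_models)
global_w = sum(w * (n / total) for w, n in local_models)

print("federated model weights:", np.round(global_w, 3))
```

Production systems layer secure aggregation, many communication rounds, and privacy protections on top of this, but the core idea is the same: models travel, data stays put.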
Also, I think that's universal, but the choice you give to the user should not be at a granular level that the user cannot understand. GDPR today causes all these popups of yes, no, will you give this site this right to use this part of your data? I don't think any user understands what they're saying yes or no to. And I suspect most are just saying yes because they don't understand it. So while GDPR in its current implementation has lived up to its promise of transparency and user choice, it implemented it in such a way that really didn't deliver the spirit of GDPR. It fit the letter, but not the spirit. So again, I think we need to think about is there a way to fit the spirit of GDPR by using some kind of technology? Can we have a slider that's an AI trying to figure out how much you want to slide between perfect protection security of your personal data versus a high degree of convenience with some risks of not having full privacy? Each user should have some preference and that gives you the user choice. But maybe we should turn the problem on its head and ask can there be an AI algorithm that can customize this? Because we can understand the slider, but we sure cannot understand every popup question. And I think getting that right requires getting the balance between what we talked about earlier, which is heart and soul versus profit driven decisions and strategy. I think from my perspective, the best way to make a lot of money in the long term is to keep your heart and soul intact. I think getting that slider right in the short term may feel like you'll be sacrificing profit, but in the long term, you'll be gaining user trust and providing a great experience. Do you share that kind of view in general? Yes, absolutely. I sure would hope there is a way we can do long term projects that really do the right thing. I think a lot of people who embrace GDPR, their heart's in the right place. I think they just need to figure out how to build a solution. I've heard utopians talk about solutions that get me excited, but I'm not sure how in the current funding environment they can get started. People talk about, imagine this crowdsourced data collection that we all trust. And then we have these agents that we ask the trusted agent to... That agent only, that platform, so a trusted joint platform that we all believe is trustworthy, that can give us all the closed loop personal suggestions by the new social network, new search engine, new eCommerce engine that has access to even more of our data, but not directly, but indirectly. So I think that general concept of licensing to some trusted engine and finding a way to trust that engine seems like a great idea. But if you think how long it's gonna take to implement and tweak and develop it right, as well as to collect all the trusts and the data from the people, it's beyond the current cycle of venture capital. So how do you do that is a big question. You've recently had a fight with cancer, stage four lymphoma and in a sort of deep personal level, what did it feel like in the darker moments to face your own mortality? Well, I've been the workaholic my whole life and I've basically worked nine, nine, six, nine a.m. to nine p.m. six days a week, roughly. And I didn't really pay a lot of attention to my family, friends, and people who loved me. And my life revolved around optimizing for work. While my work was not routine, my optimization really what made my life basically very mechanical process. 
But I got a lot of highs out of it because of accomplishments that I thought were really important and dear and the highest priority to me. But when I faced mortality and the possible death in matter of months, I suddenly realized that this really meant nothing to me, that I didn't feel like working for another minute, that if I had six months left in my life, I would spend it all with my loved ones and thanking them, giving them love back and apologizing to them that I lived my life the wrong way. So that moment of reckoning caused me to really rethink that why we exist in this world is something that we might be too much shaped by the society to think that success and accomplishments is why we live. But while that can get you periodic successes and satisfaction, it's really in the facing death you see what's truly important to you. So as a result of going through the challenges with cancer, I've resolved to live a more balanced lifestyle. I'm now in remission, knock on wood, and I'm spending more time with my family. My wife travels with me. When my kids need me, I spend more time with them. And before I used to prioritize everything around work. When I had a little bit of time, I would dole it out to my family. Now, when my family needs something, really needs something, I drop everything at work and go to them. And then in the time remaining, I allocate to work. But one's family is very understanding. It's not like they will take 50 hours a week from me. So I'm actually able to still work pretty hard, maybe 10 hours less per week. So I realized the most important thing in my life is really love and the people I love. And I give that the highest priority. It isn't the only thing I do, but when that is needed, I put that at the top priority and I feel much better and I feel much more balanced. And I think this also gives a hint as to a life of routine work, a life of pursuit of numbers. While my job was not routine, it was in pursuit of numbers, pursuit of can I make more money? Can I fund more great companies? Can I raise more money? Can I make sure our VC is ranked higher and higher every year? This competitive nature of driving for bigger numbers and better numbers became a endless pursuit that's mechanical. And bigger numbers really didn't make me happier. And faced with death, I realized bigger numbers really meant nothing. And what was important is that people who have given their heart and their love to me deserve for me to do the same. So there's deep, profound truth in that, that everyone should hear and internalize. I mean, that's really powerful for you to say that. I have to ask sort of a difficult question here. So I've competed in sports my whole life, looking historically, I'd like to challenge some aspect of that a little bit on the point of hard work. That it feels that there are certain aspects that is the greatest, the most beautiful aspects of human nature is the ability to become obsessed, of becoming extremely passionate to the point where yes, flaws are revealed and just giving yourself fully to a task. That is, in another sense, you mentioned love being important, but in another sense, this kind of obsession, this pure exhibition of passion and hard work is truly what it means to be human. What lessons should we take that's deeper? Because you've accomplished incredible things. You say it chasing numbers, but really there's some incredible work there. So how do you think about that when you look back in your 20s, your 30s, what would you do differently? 
Would you really take back some of the incredible hard work? I would, but it's in percentages, right? We're both computer scientists. So I think when one balances one's life, when one is younger, you might give a smaller percentage to family, but you would still give them high priority. And when you get older, you would give a larger percentage to them and still the high priority. And when you're near retirement, you give most of it to them and the highest priority. So I think the key point is not that we would work 20 hours less for the whole life and just spend it aimlessly with the family, but that's when the family has a need, when your wife is having a baby, when your daughter has a birthday or when they're depressed or when they're celebrating something or when they have a get together or when we have family time that it's important for us to put down our phone and PC and be a hundred percent with them. And that priority on the things that really matter isn't going to be so taxing that it would eliminate or even dramatically reduce our accomplishments. It might have some impact, but it might also have other impact because if you have a happier family, maybe you fight less. If you fight less, you don't spend time taking care of all the aftermath of a fight. So it's unclear that it would take more time. And if it did, I'd be willing to take that reduction. And it's not a dramatic number, but it's a number that I think would give me a greater degree of happiness and knowing that I've done the right thing and still have plenty of hours to get the success that I want to get. So given the many successful companies that you've launched and much success throughout your career, what advice would you give to young people today looking, or it doesn't have to be young, but people today looking to launch and to create the next $1 billion tech startup or even AI based startup? I would suggest that people understand technology waves move quickly. What worked two years ago may not work today. And that is very much case in point for AI. I think two years ago, or maybe three years ago, you certainly could say I have a couple of super smart PhDs and we're not sure what we're gonna do, but here's how we're gonna start and get funding for a very high valuation. Those days are over because AI is going from rocket science towards mainstream, not yet commodity, but more mainstream. So first the creation of any company to a venture capitalists has to be creation of business value and monetary value. And when you have a very scarce commodity, VCs may be willing to accept greater uncertainty. But now the number of people who have the equivalent of PhD three years ago, because that can be learned more quickly, platforms are emerging, the cost to become a AI engineer is much lower and there are many more AI engineers. So the market is different. So I would suggest someone who wants to build an AI company be thinking about the normal business questions. What customer cases are you trying to address? What kind of pain are you trying to address? How does that translate to value? How will you extract value and get paid through what channel and how much business value will get created? That today needs to be thought about much earlier upfront than it did three years ago. The scarcity question of AI talent has changed. The number of AI talent has changed. So now you need not just AI, but also understanding of business customer and the marketplace. 
So I also think you should have a more reasonable valuation expectation and growth expectation. There's gonna be more competition. But the good news though, is that AI technologies are now more available in open source. TensorFlow, PyTorch and such tools are much easier to use. So you should be able to experiment and get results iteratively faster than before. So take more of a business mindset to this, think less of this as a laboratory taken into a company, because we've gone beyond that stage. The only exception is if you truly have a breakthrough in some technology that really no one has, then the old way still works. But I think that's harder and harder now. So I know you believe as many do that we're far from creating an artificial general intelligence system. But say once we do, and you get to ask her one question, what would that question be? What is it that differentiates you and me? Beautifully put, Kaifu, thank you so much for your time today. Thank you.
Kai-Fu Lee: AI Superpowers - China and Silicon Valley | Lex Fridman Podcast #27
The following is a conversation with Chris Urmson. He was the CTO of the Google self driving car team, a key engineer and leader behind the Carnegie Mellon University autonomous vehicle entries in the DARPA Grand Challenges and the winner of the DARPA Urban Challenge. Today, he's the CEO of Aurora Innovation, an autonomous vehicle software company. He started it with Sterling Anderson, the former director of Tesla Autopilot, and Drew Bagnell, Uber's former autonomy and perception lead. Chris is one of the top roboticists and autonomous vehicle experts in the world, and a longtime voice of reason in a space that is shrouded in both mystery and hype. He both acknowledges the incredible challenges involved in solving the problem of autonomous driving and is working hard to solve it. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. And now, here's my conversation with Chris Urmson. You were part of both the DARPA Grand Challenge and the DARPA Urban Challenge teams at CMU with Red Whittaker. What technical or philosophical things have you learned from these races? I think the high order bit was that it could be done. I think that was the thing that was incredible about the first of the Grand Challenges, that I remember I was a grad student at Carnegie Mellon, and there was kind of this dichotomy of it seemed really hard, so that would be cool and interesting. But at the time, we were the only robotics institute around, and so if we went into it and fell on our faces, that would be embarrassing. So I think just having the will to go do it, to try to do this thing that at the time was marked as darn near impossible, and then after a couple of tries, be able to actually make it happen, I think that was really exciting. But at which point did you believe it was possible? Did you from the very beginning? Did you personally? Because you're one of the lead engineers. You actually had to do a lot of the work. Yeah, I was the technical director there, and did a lot of the work, along with a bunch of other really good people. Did I believe it could be done? Yeah, of course. Why would you go do something you thought was completely impossible? We thought it was going to be hard. We didn't know how we were going to be able to do it. We didn't know if we'd be able to do it the first time. Turns out we couldn't. That, yeah, I guess you have to. I think there's a certain benefit to naivete, right? That if you don't know how hard something really is, you try different things, and it gives you an opportunity that others who are wiser maybe don't have. What were the biggest pain points? Mechanical, sensors, hardware, software, algorithms for mapping, localization, just general perception, control? Like hardware, software, first of all? I think that's the joy of this field, is that it's all hard and that you have to be good at each part of it. So for the Grand Challenges, if I look back at it from today, it should be easy today, in that it was a static world. There weren't other actors moving through it, is what that means. It was out in the desert, so you get really good GPS. So that helped, and we could map it roughly. And so in retrospect now, it's within the realm of things we could do back then. Just actually getting the vehicle, there was a bunch of engineering work to get the vehicle so that we could control it and drive it.
That's still a pain today, but it was even more so back then. And then the uncertainty of exactly what they wanted us to do was part of the challenge as well. Right, you didn't actually know the track heading in here. You knew approximately, but you didn't actually know the route that was going to be taken. That's right, we didn't know the route. We didn't even really, the way the rules had been described, you had to kind of guess. So if you think back to that challenge, the idea was that the government would give us, the DARPA would give us a set of waypoints and kind of the width that you had to stay within between the line that went between each of those waypoints. And so the most devious thing they could have done is set a kilometer wide corridor across a field of scrub brush and rocks and said, go figure it out. Fortunately, it really, it turned into basically driving along a set of trails, which is much more relevant to the application they were looking for. But no, it was a hell of a thing back in the day. So the legend, Red, was kind of leading that effort in terms of just broadly speaking. So you're a leader now. What have you learned from Red about leadership? I think there's a couple things. One is go and try those really hard things. That's where there is an incredible opportunity. I think the other big one, though, is to see people for who they can be, not who they are. It's one of the things that I actually, one of the deepest lessons I learned from Red was that he would look at undergraduates or graduate students and empower them to be leaders, to have responsibility, to do great things that I think another person might look at them and think, oh, well, that's just an undergraduate student. What could they know? And so I think that kind of trust but verify, have confidence in what people can become, I think is a really powerful thing. So through that, let's just fast forward through the history. Can you maybe talk through the technical evolution of autonomous vehicle systems from the first two Grand Challenges to the Urban Challenge to today, are there major shifts in your mind or is it the same kind of technology just made more robust? I think there's been some big, big steps. So for the Grand Challenge, the real technology that unlocked that was HD mapping. Prior to that, a lot of the off road robotics work had been done without any real prior model of what the vehicle was going to encounter. And so that innovation that the fact that we could get decimeter resolution models was really a big deal. And that allowed us to kind of bound the complexity of the driving problem the vehicle had and allowed it to operate at speed because we could assume things about the environment that it was going to encounter. So that was the big step there. For the Urban Challenge, one of the big technological innovations there was the multi beam LIDAR and being able to generate high resolution, mid to long range 3D models of the world and use that for understanding the world around the vehicle. And that was really kind of a game changing technology. In parallel with that, we saw a bunch of other technologies that had been kind of converging half their day in the sun. So Bayesian estimation had been, SLAM had been a big field in robotics. You would go to a conference a couple of years before that and every paper would effectively have SLAM somewhere in it. And so seeing that the Bayesian estimation techniques play out on a very visible stage, I thought that was pretty exciting to see. 
And mostly SLAM was done based on LIDAR at that time. Yeah, and in fact, we weren't really doing SLAM per se in real time because we had a model ahead of time, we had a roadmap, but we were doing localization. And we were using the LIDAR or the cameras depending on who exactly was doing it to localize to a model of the world. And I thought that was a big step from kind of naively trusting GPS, INS before that. And again, lots of work had been going on in this field. Certainly this was not doing anything particularly innovative in SLAM or in localization, but it was seeing that technology necessary in a real application on a big stage, I thought was very cool. So for the urban challenge, those are already maps constructed offline in general. And did people do that individually, did individual teams do it individually so they had their own different approaches there or did everybody kind of share that information at least intuitively? So DARPA gave all the teams a model of the world, a map. And then one of the things that we had to figure out back then was, and it's still one of these things that trips people up today is actually the coordinate system. So you get a latitude longitude and to so many decimal places, you don't really care about kind of the ellipsoid of the earth that's being used. But when you want to get to 10 centimeter or centimeter resolution, you care whether the coordinate system is NADS 83 or WGS 84 or these are different ways to describe both the kind of non sphericalness of the earth, but also kind of the, I think, I can't remember which one, the tectonic shifts that are happening and how to transform the global datum as a function of that. So getting a map and then actually matching it to reality to centimeter resolution, that was kind of interesting and fun back then. So how much work was the perception doing there? So how much were you relying on localization based on maps without using perception to register to the maps? And I guess the question is how advanced was perception at that point? It's certainly behind where we are today, right? We're more than a decade since the urban challenge. But the core of it was there. That we were tracking vehicles. We had to do that at 100 plus meter range because we had to merge with other traffic. We were using, again, Bayesian estimates for state of these vehicles. We had to deal with a bunch of the problems that you think of today, of predicting where that vehicle's going to be a few seconds into the future. We had to deal with the fact that there were multiple hypotheses for that because a vehicle at an intersection might be going right or it might be going straight or it might be making a left turn. And we had to deal with the challenge of the fact that our behavior was going to impact the behavior of that other operator. And we did a lot of that in relatively naive ways, but it kind of worked. Still had to have some kind of solution. And so where does that, 10 years later, where does that take us today from that artificial city construction to real cities to the urban environment? Yeah, I think the biggest thing is that the actors are truly unpredictable. That most of the time, the drivers on the road, the other road users are out there behaving well, but every once in a while they're not. The variety of other vehicles is, you have all of them. In terms of behavior, in terms of perception, or both? Both. 
Back then we didn't have to deal with cyclists, we didn't have to deal with pedestrians, didn't have to deal with traffic lights. The scale over which that you have to operate is now is much larger than the air base that we were thinking about back then. So what, easy question, what do you think is the hardest part about driving? Easy question. Yeah, no, I'm joking. I'm sure nothing really jumps out at you as one thing, but in the jump from the urban challenge to the real world, is there something that's a particular, you foresee as very serious, difficult challenge? I think the most fundamental difference is that we're doing it for real. That in that environment, it was both a limited complexity environment because certain actors weren't there, because the roads were maintained, there were barriers keeping people separate from robots at the time, and it only had to work for 60 miles. Which, looking at it from 2006, it had to work for 60 miles, right? Looking at it from now, we want things that will go and drive for half a million miles, and it's just a different game. So how important, you said LiDAR came into the game early on, and it's really the primary driver of autonomous vehicles today as a sensor. So how important is the role of LiDAR in the sensor suite in the near term? So I think it's essential. I believe, but I also believe that cameras are essential, and I believe the radar is essential. I think that you really need to use the composition of data from these different sensors if you want the thing to really be robust. The question I wanna ask, let's see if we can untangle it, is what are your thoughts on the Elon Musk provocative statement that LiDAR is a crutch, that it's a kind of, I guess, growing pains, and that much of the perception task can be done with cameras? So I think it is undeniable that people walk around without lasers in their foreheads, and they can get into vehicles and drive them, and so there's an existence proof that you can drive using passive vision. No doubt, can't argue with that. In terms of sensors, yeah, so there's proof. Yeah, in terms of sensors, right? So there's an example that we all go do it, many of us every day. In terms of LiDAR being a crutch, sure. But in the same way that the combustion engine was a crutch on the path to an electric vehicle, in the same way that any technology ultimately gets replaced by some superior technology in the future, and really the way that I look at this is that the way we get around on the ground, the way that we use transportation is broken, and that we have this, I think the number I saw this morning, 37,000 Americans killed last year on our roads, and that's just not acceptable. And so any technology that we can bring to bear that accelerates this self driving technology coming to market and saving lives is technology we should be using. And it feels just arbitrary to say, well, I'm not okay with using lasers because that's whatever, but I am okay with using an eight megapixel camera or a 16 megapixel camera. These are just bits of technology, and we should be taking the best technology from the tool bin that allows us to go and solve a problem. The question I often talk to, well, obviously you do as well, to sort of automotive companies, and if there's one word that comes up more often than anything, it's cost, and trying to drive costs down. 
So while it's true that it's a tragic number, the 37,000, the question is, and I'm not the one asking this question because I hate this question, but we want to find the cheapest sensor suite that creates a safe vehicle. So in that uncomfortable trade off, do you foresee LiDAR coming down in cost in the future, or do you see a day where level four autonomy is possible without LiDAR? I see both of those, but it's really a matter of time. And I think really, maybe I would speak to the question you asked about the cheapest sensor. I don't think that's actually what you want. What you want is a sensor suite that is economically viable. And then after that, everything is about margin and driving costs out of the system. What you also want is a sensor suite that works. And so it's great to tell a story about how it would be better to have a self driving system with a $50 sensor instead of a $500 sensor. But if the $500 sensor makes it work and the $50 sensor doesn't work, who cares? So long as you can actually have an economic opportunity, there's an economic opportunity there. And the economic opportunity is important because that's how you actually have a sustainable business and that's how you can actually see this come to scale and be out in the world. And so when I look at LiDAR, I see a technology that has no fundamental underlying expense to it. It's going to be more expensive than an imager because CMOS processes or fab processes are dramatically more scalable than mechanical processes. But we still should be able to drive costs down substantially on that side. And then I also do think that with the right business model you can absorb more, certainly more cost on the bill of materials. Yeah, if the sensor suite works and extra value is provided, then you don't need to drive costs down to zero. It's the basic economics. You've talked about your intuition that level two autonomy is problematic because of the human factors of vigilance decrement, complacency, overtrust and so on, just us being human. We overtrust the system, we start partaking even more in secondary activities like smartphones and so on. Have your views evolved on this point in either direction? Can you speak to it? So, and I want to be really careful because sometimes this gets twisted in a way that I certainly didn't intend. So active safety systems are a really important technology that we should be pursuing and integrating into vehicles. And there's an opportunity in the near term to reduce accidents, reduce fatalities, and we should be pushing on that. Level two systems are systems where the vehicle is controlling two axes. So braking and throttle slash steering. And I think there are variants of level two systems that are supporting the driver. That absolutely we should encourage to be out there. Where I think there's a real challenge is in the human factors part around this and the misconception from the public around the capability set that that enables and the trust that they should have in it. And that is where I kind of, I'm actually incrementally more concerned around level three systems and how exactly a level two system is marketed and delivered and how much effort people have put into those human factors. So I still believe several things around this. One is people will overtrust the technology. We've seen over the last few weeks a spate of people sleeping in their Tesla.
I watched an episode last night of Trevor Noah talking about this, and him, a smart guy who has a lot of resources at his disposal, describing a Tesla as a self driving car, and asking why shouldn't people be sleeping in their Tesla? And it's like, well, because it's not a self driving car and it is not intended to be and these people will almost certainly die at some point or hurt other people. And so we need to really be thoughtful about how that technology is described and brought to market. I also think that because of the economic challenges we were just talking about, that these level two driver assistance systems, that technology path will diverge from the technology path that we need to be on to actually deliver truly self driving vehicles, ones where you can get in it and it drives you. You can get in it and sleep and have the equivalent or better safety than a human driver behind the wheel. Because again, the economics are very different in those two worlds and so that leads to divergent technology. So you just don't see the economics of gradually increasing from level two and doing so quickly enough to where it doesn't cause critical safety concerns. You believe that it needs to diverge at this point into basically different routes. And really that comes back to what are those L2 and L1 systems doing? And they are driver assistance functions where the people that are marketing that responsibly are being very clear and putting human factors in place such that the driver is actually responsible for the vehicle and that the technology is there to support the driver. And the safety cases that are built around those are dependent on that driver attention and attentiveness. And at that point, you can kind of give up to some degree for economic reasons, you can give up on say false negatives. And the way to think about this is for a forward collision mitigation braking system, if, half the times the driver missed a vehicle in front of it, it hit the brakes and brought the vehicle to a stop, that would be an incredible, incredible advance in safety on our roads, right? That would be equivalent to seat belts. But it would mean that if that vehicle wasn't being monitored, it would hit one out of two cars. And so economically, that's a perfectly good solution for a driver assistance system. What you should do at that point, if you can get it to work 50% of the time, is drive the cost out of that so you can get it on as many vehicles as possible. But driving the cost out of it doesn't drive up performance on the false negative case. And so you'll continue to not have a technology that could really be available for a self driving vehicle. So clearly the communication, and this probably applies to level four vehicles as well, the marketing and communication of what the technology is actually capable of, how hard it is, how easy it is, all that kind of stuff is highly problematic. So say everybody in the world was perfectly communicated to and made completely aware of every single technology out there and what it's able to do. What's your intuition? And now we're maybe getting into philosophical ground. Is it possible to have a level two vehicle where we don't overtrust it? I don't think so. If people truly understood the risks and internalized it, then sure, you could do that safely. But that's a world that doesn't exist. The people are going to, if the facts are put in front of them, they're gonna then combine that with their experience.
And let's say they're using an L2 system and they go up and down the 101 every day and they do that for a month. And it just worked every day for a month. Like that's pretty compelling at that point, just even if you know the statistics, you're like, well, I don't know, maybe there's something funny about those. Maybe they're driving in difficult places. Like I've seen it with my own eyes, it works. And the problem is that that sample size that they have, so it's 30 miles up and down, so 60 miles times 30 days, so 60, 180, 1,800 miles. Like that's a drop in the bucket compared to the, what, 85 million miles between fatalities. And so they don't really have a true estimate based on their personal experience of the real risks, but they're gonna trust it anyway, because it's hard not to. It worked for a month, what's gonna change? So even if you start a perfect understanding of the system, your own experience will make it drift. I mean, that's a big concern. Over a year, over two years even, it doesn't have to be months. And I think that as this technology moves from what I would say is kind of the more technology savvy ownership group to the mass market, you may be able to have some of those folks who are really familiar with technology, they may be able to internalize it better. And your kind of immunization against this kind of false risk assessment might last longer, but as folks who aren't as savvy about that read the material and they compare that to their personal experience, I think there it's going to move more quickly. So your work, the program that you've created at Google and now at Aurora is focused more on the second path of creating full autonomy. So it's such a fascinating, I think it's one of the most interesting AI problems of the century, right? It's, I just talked to a lot of people, just regular people, I don't know, my mom, about autonomous vehicles, and you begin to grapple with ideas of giving your life control over to a machine. It's philosophically interesting, it's practically interesting. So let's talk about safety. How do you think we demonstrate, you've spoken about metrics in the past, how do you think we demonstrate to the world that an autonomous vehicle, an Aurora system is safe? This is one where it's difficult because there isn't a soundbite answer. That we have to show a combination of work that was done diligently and thoughtfully, and this is where something like a functional safety process is part of that. It's like here's the way we did the work, that means that we were very thorough. So if you believe that what we said about this is the way we did it, then you can have some confidence that we were thorough in the engineering work we put into the system. And then on top of that, to kind of demonstrate that we weren't just thorough, we were actually good at what we did, there'll be a kind of a collection of evidence in terms of demonstrating that the capabilities worked the way we thought they did, statistically and to whatever degree we can demonstrate that, both in some combination of simulations, some combination of unit testing and decomposition testing, and then some part of it will be on road data. 
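To put rough numbers on the sample size point raised a little earlier in this exchange, here is a back-of-the-envelope sketch, assuming the figures quoted in the conversation (about 1,800 miles of personal commuting versus roughly 85 million miles per fatality) and treating fatal events as a simple Poisson process; the numbers are illustrative only.

```python
import math

miles_observed = 60 * 30            # 60 miles a day for a month, as quoted above
miles_per_fatality = 85_000_000     # rough figure quoted in the conversation

# Under a simple Poisson model, the chance of seeing zero fatal events in
# 1,800 miles is essentially 1, regardless of how good or bad the system is.
expected_events = miles_observed / miles_per_fatality
p_no_event = math.exp(-expected_events)

print(f"expected fatal events in a month of commuting: {expected_events:.2e}")
print(f"probability of observing none anyway: {p_no_event:.6f}")
# A month of flawless personal experience says almost nothing about whether
# the system is several times better or worse than a human driver.
```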
And I think the way we'll ultimately convey this to the public is there'll be clearly some conversation with the public about it, but we'll kind of invoke the kind of the trusted nodes and that we'll spend more time being able to go into more depth with folks like NHTSA and other federal and state regulatory bodies and kind of given that they are operating in the public interest and they're trusted, that if we can show enough work to them that they're convinced, then I think we're in a pretty good place. That means you work with people that are essentially experts at safety to try to discuss and show. Do you think, the answer's probably no, but just in case, do you think there exists a metric? So currently people have been using number of disengagements. And it quickly turns into a marketing scheme to sort of you alter the experiments you run to adjust. I think you've spoken that you don't like. Don't love it. No, in fact, I was on the record telling DMV that I thought this was not a great metric. Do you think it's possible to create a metric, a number that could demonstrate safety outside of fatalities? So I do. And I think that it won't be just one number. So as we are internally grappling with this, and at some point we'll be able to talk more publicly about it, is how do we think about human performance in different tasks, say detecting traffic lights or safely making a left turn across traffic? And what do we think the failure rates are for those different capabilities for people? And then demonstrating to ourselves and then ultimately folks in the regulatory role and then ultimately the public that we have confidence that our system will work better than that. And so these individual metrics will kind of tell a compelling story ultimately. I do think at the end of the day what we care about in terms of safety is life saved and injuries reduced. And then ultimately kind of casualty dollars that people aren't having to pay to get their car fixed. And I do think that in aviation they look at a kind of an event pyramid where a crash is at the top of that and that's the worst event obviously and then there's injuries and near miss events and whatnot and violation of operating procedures and you kind of build a statistical model of the relevance of the low severity things or the high severity things. And I think that's something where we'll be able to look at as well because an event per 85 million miles is statistically a difficult thing even at the scale of the U.S. to kind of compare directly. And that event fatality that's connected to an autonomous vehicle is significantly at least currently magnified in the amount of attention it gets. So that speaks to public perception. I think the most popular topic about autonomous vehicles in the public is the trolley problem formulation, right? Which has, let's not get into that too much but is misguided in many ways. But it speaks to the fact that people are grappling with this idea of giving control over to a machine. So how do you win the hearts and minds of the people that autonomy is something that could be a part of their lives? I think you let them experience it, right? I think it's right. I think people should be skeptical. I think people should ask questions. I think they should doubt because this is something new and different. They haven't touched it yet. And I think that's perfectly reasonable. And, but at the same time, it's clear there's an opportunity to make the road safer. It's clear that we can improve access to mobility. 
It's clear that we can reduce the cost of mobility. And that once people try that and understand that it's safe and are able to use in their daily lives, I think it's one of these things that will just be obvious. And I've seen this practically in demonstrations that I've given where I've had people come in and they're very skeptical. Again, in a vehicle, my favorite one is taking somebody out on the freeway and we're on the 101 driving at 65 miles an hour. And after 10 minutes, they kind of turn and ask, is that all it does? And you're like, it's a self driving car. I'm not sure exactly what you thought it would do, right? But it becomes mundane, which is exactly what you want a technology like this to be, right? We don't really, when I turn the light switch on in here, I don't think about the complexity of those electrons being pushed down a wire from wherever it was and being generated. It's like, I just get annoyed if it doesn't work, right? And what I value is the fact that I can do other things in this space. I can see my colleagues. I can read stuff on a paper. I can not be afraid of the dark. And I think that's what we want this technology to be like is it's in the background and people get to have those life experiences and do so safely. So putting this technology in the hands of people speaks to scale of deployment, right? So what do you think the dreaded question about the future because nobody can predict the future, but just maybe speak poetically about when do you think we'll see a large scale deployment of autonomous vehicles, 10,000, those kinds of numbers? We'll see that within 10 years. I'm pretty confident. What's an impressive scale? What moment, so you've done the DARPA challenge where there's one vehicle. At which moment does it become, wow, this is serious scale? So I think the moment it gets serious is when we really do have a driverless vehicle operating on public roads and that we can do that kind of continuously. Without a safety driver. Without a safety driver in the vehicle. I think at that moment, we've kind of crossed the zero to one threshold. And then it is about how do we continue to scale that? How do we build the right business models? How do we build the right customer experience around it so that it is actually a useful product out in the world? And I think that is really, at that point it moves from what is this kind of mixed science engineering project into engineering and commercialization and really starting to deliver on the value that we all see here and actually making that real in the world. What do you think that deployment looks like? Where do we first see the inkling of no safety driver, one or two cars here and there? Is it on the highway? Is it in specific routes in the urban environment? I think it's gonna be urban, suburban type environments. Yeah, with Aurora, when we thought about how to tackle this, it was kind of in vogue to think about trucking as opposed to urban driving. And again, the human intuition around this is that freeways are easier to drive on because everybody's kind of going in the same direction and lanes are a little wider, et cetera. And I think that that intuition is pretty good, except we don't really care about most of the time. We care about all of the time. And when you're driving on a freeway with a truck, say 70 miles an hour, and you've got 70,000 pound load with you, that's just an incredible amount of kinetic energy. And so when that goes wrong, it goes really wrong. 
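A quick order-of-magnitude check on the kinetic energy point above, treating the quoted 70,000 pounds as the total mass in motion (which if anything understates the truck side) and assuming a hypothetical 1,500 kilogram car at the 25 miles per hour that comes up just below as the typical urban collision speed.

```python
# Rough kinetic energy comparison: loaded truck at freeway speed vs. a low-speed urban crash.
LB_TO_KG = 0.4536
MPH_TO_MS = 0.44704

def kinetic_energy_joules(mass_kg, speed_mph):
    v = speed_mph * MPH_TO_MS
    return 0.5 * mass_kg * v ** 2

truck_ke = kinetic_energy_joules(70_000 * LB_TO_KG, 70)  # 70,000 lb load at 70 mph, as quoted
car_ke = kinetic_energy_joules(1_500, 25)                # hypothetical 1,500 kg car at 25 mph

print(f"truck: {truck_ke / 1e6:.1f} MJ")
print(f"car:   {car_ke / 1e3:.1f} kJ")
print(f"ratio: roughly {truck_ke / car_ke:.0f}x more kinetic energy")
```

Roughly two orders of magnitude separate the two collisions, which is the sense in which freeway trucking errors go really wrong.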
And those challenges that you see occur more rarely, so you don't get to learn as quickly. And they're incrementally more difficult than urban driving, but they're not easier than urban driving. And so I think this happens in moderate speed urban environments because if two vehicles crash at 25 miles per hour, it's not good, but probably everybody walks away. And those events where there's the possibility for that occurring happen frequently. So we get to learn more rapidly. We get to do that with lower risk for everyone. And then we can deliver value to people that need to get from one place to another. And once we've got that solved, then the freeway driving part of this just falls out. But we're able to learn more safely, more quickly in the urban environment. So 10 years and then scale 20, 30 year, who knows if a sufficiently compelling experience is created, it could be faster and slower. Do you think there could be breakthroughs and what kind of breakthroughs might there be that completely change that timeline? Again, not only am I asking you to predict the future, I'm asking you to predict breakthroughs that haven't happened yet. So what's the, I think another way to ask that would be if I could wave a magic wand, what part of the system would I make work today to accelerate it as quickly as possible? Don't say infrastructure, please don't say infrastructure. No, it's definitely not infrastructure. It's really that perception forecasting capability. So if tomorrow you could give me a perfect model of what's happened, what is happening and what will happen for the next five seconds around a vehicle on the roadway, that would accelerate things pretty dramatically. Are you, in terms of staying up at night, are you mostly bothered by cars, pedestrians or cyclists? So I worry most about the vulnerable road users about the combination of cyclists and cars, right? Or cyclists and pedestrians because they're not in armor. The cars, they're bigger, they've got protection for the people and so the ultimate risk is lower there. Whereas a pedestrian or a cyclist, they're out on the road and they don't have any protection and so we need to pay extra attention to that. Do you think about a very difficult technical challenge of the fact that pedestrians, if you try to protect pedestrians by being careful and slow, they'll take advantage of that. So the game theoretic dance, does that worry you of how, from a technical perspective, how we solve that? Because as humans, the way we solve that is kind of nudge our way through the pedestrians which doesn't feel, from a technical perspective, as a appropriate algorithm. But do you think about how we solve that problem? Yeah, I think there's two different concepts there. So one is, am I worried that because these vehicles are self driving, people will kind of step in the road and take advantage of them? And I've heard this and I don't really believe it because if I'm driving down the road and somebody steps in front of me, I'm going to stop. Even if I'm annoyed, I'm not gonna just drive through a person stood in the road. And so I think today people can take advantage of this and you do see some people do it. I guess there's an incremental risk because maybe they have lower confidence that I'm gonna see them than they might have for an automated vehicle and so maybe that shifts it a little bit. But I think people don't wanna get hit by cars. 
And so I think that I'm not that worried about people walking out of the 101 and creating chaos more than they would today. Regarding kind of the nudging through a big stream of pedestrians leaving a concert or something, I think that is further down the technology pipeline. I think that you're right, that's tricky. I don't think it's necessarily, I think the algorithm people use for this is pretty simple. It's kind of just move forward slowly and if somebody's really close then stop. And I think that that probably can be replicated pretty easily and particularly given that you don't do this at 30 miles an hour, you do it at one, that even in those situations the risk is relatively minimal. But it's not something we're thinking about in any serious way. And probably that's less an algorithm problem and more creating a human experience. So the HCI people that create a visual display that you're pleasantly as a pedestrian nudged out of the way, that's an experience problem, not an algorithm problem. Who's the main competitor to Aurora today? And how do you outcompete them in the long run? So we really focus a lot on what we're doing here. I think that, I've said this a few times, that this is a huge difficult problem and it's great that a bunch of companies are tackling it because I think it's so important for society that somebody gets there. So we don't spend a whole lot of time thinking tactically about who's out there and how do we beat that person individually. What are we trying to do to go faster ultimately? Well part of it is the leadership team we have has got pretty tremendous experience. And so we kind of understand the landscape and understand where the cul de sacs are to some degree and we try and avoid those. I think there's a part of it, just this great team we've built. People, this is a technology and a company that people believe in the mission of and so it allows us to attract just awesome people to go work. We've got a culture I think that people appreciate that allows them to focus, allows them to really spend time solving problems. And I think that keeps them energized. And then we've invested hard, invested heavily in the infrastructure and architectures that we think will ultimately accelerate us. So because of the folks we're able to bring in early on, because of the great investors we have, we don't spend all of our time doing demos and kind of leaping from one demo to the next. We've been given the freedom to invest in infrastructure to do machine learning, infrastructure to pull data from our on road testing, infrastructure to use that to accelerate engineering. And I think that early investment and continuing investment in those kind of tools will ultimately allow us to accelerate and do something pretty incredible. Chris, beautifully put. It's a good place to end. Thank you so much for talking today. Thank you very much. Really enjoyed it.
Chris Urmson: Self-Driving Cars at Aurora, Google, CMU, and DARPA | Lex Fridman Podcast #28
The following is a conversation with Gustav Söderström. He's the chief research and development officer at Spotify, leading their product, design, data, technology and engineering teams. As I've said before, in my research and in life in general, I love music, listening to it and creating it. And using technology, especially personalization through machine learning, to enrich the music discovery and listening experience. That is what Spotify has been doing for years, continually innovating, defining how we experience music as a society in the digital age. That's what Gustav and I talk about, among many other topics, including our shared appreciation of the movie True Romance, in my view, one of the great movies of all time. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. And now, here's my conversation with Gustav Söderström. Spotify has over 50 million songs in its catalog. So let me ask the all important question. I feel like you're the right person to ask. What is the definitive greatest song of all time? It varies for me, personally. So you can't speak definitively for everyone? I wouldn't believe very much in machine learning if I did, right? Because then everyone would have the same taste. So for you, what is... you have to pick. What is the song? All right, so it's pretty easy for me. There's this song called You're So Cool by Hans Zimmer, from the soundtrack to True Romance. It was a movie that made a big impression on me. And it's kind of been following me through my life. I actually had it played at my wedding. I sat with the organist and helped him play it on an organ, which was a pretty interesting experience. That is probably my, I would say, top three movie of all time. Yeah, this is an incredible movie. Yeah, and it came out during my formative years. And as I've discovered in music, you shape your music taste during those years. So it definitely affected me quite a bit. Did it affect you in any other kind of way? Well, the movie itself affected me back then. It was a big part of culture. I didn't really adopt any characters from the movie, but it was a great story of love, fantastic actors. And really, I didn't even know who Hans Zimmer was at the time, but fantastic music. And so that song has followed me. And the movie actually has followed me throughout my life. That was Quentin Tarantino, actually, I think, director or producer. So it's not Stairway to Heaven or Bohemian Rhapsody. Those are great. They're not my personal favorites, but I've realized that people have different tastes. And that's a big part of what we do. Well, for me, I would have to stick with Stairway to Heaven. So 35,000 years ago, I looked this up on Wikipedia, flute like instruments started being used in caves as part of hunting rituals and primitive cultural gatherings, things like that. This is the birth of music. Since then, we had a few folks, Beethoven, Elvis, Beatles, Justin Bieber, of course, Drake. So in your view, let's start high level, philosophical. What is the purpose of music on this planet of ours? I think music has many different purposes. I think there's certainly a big purpose, which is the same as much of entertainment, which is escapism and to be able to live in some sort of other mental state for a while. But I also think you have the opposite of escaping, which is to help you focus on something you are actually doing.
Because I think people use music as a tool to tune the brain to the activities that they are actually doing. And it's kind of like, in one sense, maybe it's the rawest signal. If you think about the brain as neural networks, it's maybe the most efficient hack we can do to actually actively tune it into some state that you want to be. You can do it in other ways. You can tell stories to put people in a certain mood. But music is probably very effective to get you to a certain mood very fast, I think. You know, there's a social component historically to music, where people listen to music together. I was just thinking about this, that to me, and you mentioned machine learning, but to me personally, music is a really private thing. I'm speaking for myself, I listen to music, like almost nobody knows the kind of things I have in my library, except people who are really close to me and they really only know a certain percentage. There's like some weird stuff that I'm almost probably embarrassed by, right? It's called the guilty pleasures, right? Everyone has the guilty pleasures, yeah. Hopefully they're not too bad, but for me, it's personal. Do you think of music as something that's social or as something that's personal? Or does it vary? So I think it's the same answer that you use it for both. We've thought a lot about this during these 10 years at Spotify, obviously. In one sense, as you said, music is incredibly social, you go to concerts and so forth. On the other hand, it is your escape and everyone has these things that are very personal to them. So what we've found is that when it comes to, most people claim that they have a friend or two that they are heavily inspired by and that they listen to. So I actually think music is very social, but in a smaller group setting, it's an intimate form of, it's an intimate relationship. It's not something that you necessarily share broadly. Now, at concerts, you can argue you do, but then you've gathered a lot of people that you have something in common with. I think this broadcast sharing of music is something we tried on social networks and so forth. But it turns out that people aren't super interested in sharing their music. They aren't super interested in what their friends listen to. They're interested in understanding if they have something in common perhaps with a friend, but not just as information. Right, that's really interesting. I was just thinking of it this morning, listening to Spotify. I really have a pretty intimate relationship with Spotify, with my playlists, right? I've had them for many years now and they've grown with me together. There's an intimate relationship you have with a library of music that you've developed. And we'll talk about different ways we can play with that. Can you do the impossible task and try to give a history of music listening from your perspective from before the internet and after the internet and just kind of everything leading up to streaming with Spotify and so on? I'll try. It could be a 100 year podcast. I'll try to do a brief version. There are some things that I think are very interesting during the history of music, which is that before recorded music, to be able to enjoy music, you actually had to be where the music was produced because you couldn't record it and time shift it, right? Creation and consumption had to happen at the same time, basically concerts. And so you either had to get to the nearest village to listen to music. 
And while that was cumbersome and it severely limited the distribution of music, it also had some different qualities, which was that the creator could always interact with the audience. It was always live. And also there was no time cap on the music. So I think it's not a coincidence that these early classical works, they're much longer than the three minutes. The three minutes came in as a restriction of the first wax disc that could only contain a three minute song on one side, right? So actually the recorded music severely limited or put constraints. I won't say limited. I mean, constraints are often good, but it put very hard constraints on the music format. So you kind of said, instead of doing this opus over many tens of minutes or something, now you get three and a half minutes because then you're out of wax on this disc. But in return, you get amazing distribution. Your reach will widen, right? Just on that point real quick. Without the mass scale distribution, there's a scarcity component where you kind of look forward to it. We had that, it's like the Netflix versus HBO Game of Thrones. You wait for the event because you can't really listen to it. So you look forward to it and then, you derive perhaps more pleasure because it's more rare for you to listen to a particular piece. You think there's value to that scarcity? Yeah, I think that that is definitely a thing. And there's always this component of if you have something in infinite amounts, will you value it as much? Probably not. Humanity is always seeking something, it's relative. So you're always seeking something you didn't have. And when you have it, you don't appreciate it as much. So I think that's probably true. But I think that's why concerts exist. So you can actually have both. But I think net, if you couldn't listen to music in your car driving, that'd be worse. That cost would be bigger than the benefit of the anticipation, I think, that you would have. So, yeah, it started with live concerts. Then it's being able to, you know, the phonograph was invented, right? So you start to be able to record music. Exactly. So then you got this massive distribution that made it possible to create two things. I think, first of all, cultural phenomenons, they probably need distribution to be able to happen. But it also opened up access, you know, for a new kind of artist. So you started to have these phenomenons like Beatles and Elvis and so forth. That was really a function of distribution, I think, obviously of talent and innovation, but there was also a technical component. And of course, the next big innovation to come along was radio. Broadcast radio. And I think radio is interesting because it started not as a music medium. It started as an information medium for news. And then radio needed to find something to fill the time with so that they could honestly play more ads and make more money. And music was free. So then you had this massive distribution where you could program to people. I think those things, that ecosystem, is what created the ability for hits. But it was also a very broadcast medium. So you would tend to get these massive, massive hits, but maybe not such a long tail. In terms of choice, everybody listens to the same stuff. Yeah. And as you said, I think there are some social benefits to that.
I think, for example, there's a high statistical chance that if I talk about the latest episode of Game of Thrones, we have something to talk about, just statistically. In the age of individual choice, maybe some of that goes away. So I do see the value of shared cultural components, but I also obviously love personalization. And so let's catch this up to the internet. So maybe Napster, well, first of all, there's MP3s, tapes, CDs. There was a digitization of music with the CD, really. It was physical distribution, but the music became digital. And so they were files, but basically boxed software, to use a software analogy. And then you could start downloading these files. And I think there are two interesting things that happened. Back to how music used to be longer before it was constrained by the distribution medium, I don't think that was a coincidence. And then really the only music genre to have developed mostly after music became a file again on the internet is EDM. And EDM is often much longer than traditional music. I think it's interesting to think about the fact that music is no longer constrained in minutes per song or something. It's a legacy of an old distribution technology. And you see some of this new music that breaks the format. Not so much as I would have expected actually by now, but it still happens. So first of all, I don't really know what EDM is. Electronic dance music. Yeah. You could say Avicii. Avicii was one of the biggest in this genre. So the main constraint was time, something like a three, four, five minute song. So you could have songs that were eight minutes, 10 minutes and so forth. Because it started as a digital product that you downloaded. So you didn't have this constraint anymore. So I think it's something really interesting that I don't think has fully happened yet. We're kind of jumping ahead a little bit to where we are, but I think there's tons of format innovation in music that should happen now, that couldn't happen when you needed to really adhere to the distribution constraints. If you didn't adhere to that, you would get no distribution. So Björk, for example, the Icelandic artist, she made a full iPad app as an album. That was very expensive. Even though the App Store has great distribution, she gets nowhere near the distribution versus staying within the three minute format. So I think now that music is fully digital inside these streaming services, there is the opportunity to change the format again and allow creators to be much more creative without limiting their distribution ability. That's interesting. You're right, it's surprising that we don't see that taken advantage of more often. It's almost like the constraints of the distribution from the 50s and 60s have molded the culture to where we want the three to five minute song, not just as consumers but as artists, because I write a lot of music and I never even thought about writing something longer than 10 minutes. It's really interesting, those constraints. Because all your training data has been three and a half minute songs, right? That's right. Okay, so yes, digitization of data then led to MP3s. Yeah, so I think you had this file then that was distributed physically, but then you had the components of digital distribution and then the internet happened and there was this vacuum where you had a format that could be digitally shipped, but there was no business model.
And then all these pirate networks happened, Napster and, in Sweden, Pirate Bay, which was one of the biggest. And I think from a consumer point of view, which kind of leads up to the inception of Spotify, from a consumer point of view, consumers for the first time had this access model to music where they could, without kind of any marginal cost, try different tracks. You could use music in new ways. There was no marginal cost. And that was a fantastic consumer experience, to have access to all the music ever made, I think was fantastic. But it was also horrible for artists because there was no business model around it. So they didn't make any money. So the user need almost drove the user interface before there was a business model. And then there were these download stores that allowed you to download files, which was a solution, but it didn't solve the access problem. There was still a marginal cost of 99 cents to try one more track. And I think that that heavily limits how you listen to music. The example I always give is, you know, on Spotify, a huge amount of people listen to music while they go to sleep and while they sleep. If that cost you 99 cents per three minutes, you probably wouldn't do that. And you would be much less adventurous if there was a real dollar cost to exploring music. So the access model is interesting in that it changes your music behavior. You can take much more risk because there's no marginal cost to it. Maybe let me linger on piracy for a second, because I find, especially coming from Russia, piracy is something that's very interesting to me. Not me, of course, ever, but I have friends who have partaken in piracy of music, software, TV shows, sporting events. And usually to me, what that shows is not that they can't pay; they can actually pay the money and they're not trying to save money. They're choosing the best experience. So what piracy shows, to me, is a business opportunity in all these domains. And that's where, I think you're right, Spotify stepped in. Basically piracy was an experience. You can explore and find music you like, but actually the interface of piracy is horrible because, I mean, it's bad metadata, long download times, all kinds of stuff. And what Spotify does is basically first rewards artists and second makes the experience of exploring music much better. I mean, the same is true, I think, for movies and so on, that piracy reveals an opportunity. In the software space, for example, I'm a huge user and fan of Adobe products and there was much more incentive to pirate Adobe products before they went to a monthly subscription plan. And now all of the said friends that used to pirate Adobe products that I know now actually pay gladly for the monthly subscription. Yeah, I think you're right. I think it's a sign of an opportunity for product development. And that sometimes there's a product market fit before there's a business model fit in product development. I think that's a sign of it. In Sweden, I think it was a bit of both. There was a culture where we even had a political party called the Pirate Party. And this was during the time when people said that information should be free. It was somehow wrong to charge for ones and zeros. So I think people felt that artists should probably make some money somehow else, in concerts or something. So at least in Sweden, there really was social acceptance, even at the political level.
But that also forced Spotify to compete with free, which I don't think actually could have happened anywhere else in the world. The music industry needed to be doing badly enough to take that risk. And Sweden was like the perfect testing ground. It had government funded high bandwidth, low latency broadband, which meant that the product would work. And there was also no music revenue anyway. So they were kind of like, I don't think this is going to work, but why not? So this product is one that I don't think could have happened in America, the world's largest music market, for example. So how do you compete with free? Because that's an interesting world of the internet where most people don't like to pay for things. So Spotify steps in and tries to, yes, compete with free. How do you do it? So I think two things. One is people are starting to pay for things on the internet. I think one way to think about it was that advertising was the first business model because no one would put a credit card on the internet. Transactional with Amazon was the second. And maybe subscription is the third. And if you look offline, subscription is the biggest of those. So that may still happen. I think people are starting to pay for things. But definitely back then, we needed to compete with free. And the first thing you need to do is obviously to lower the price to free and then you need to be better somehow. And the way that Spotify was better was on the user experience, on the actual performance, the latency of, you know, even if you had high bandwidth broadband, it would still take you 30 seconds to a minute to download one of these tracks. So the Spotify experience of starting within the perceptual limit of immediacy, about 250 milliseconds, meant that the whole trick was it felt as if you had downloaded all of Pirate Bay. It was on your hard drive. It was that fast, even though it wasn't. And it was still free. But somehow you were actually still being a legal citizen. And that was the trick that Spotify managed to pull off. So I've actually heard you say this or write this. And I was surprised that I wasn't aware of it because I just took it for granted. You know, whenever an awesome thing comes along, you're just like, of course, it has to be this way. That's exactly right. That it felt like the entire world's library was at my fingertips because of that latency being reduced. What was the technical challenge in reducing the latency? So there was a group of really, really talented engineers, one of them called Ludvig Strigeus. He, actually from Gothenburg, wrote the initial uTorrent client, which is kind of an interesting backstory to Spotify, that we have one of the top developers from the uTorrent client as well. So he wrote uTorrent, the world's smallest torrent client. And then he was acquired very early by Daniel and Martin, who founded Spotify, and they actually sold the uTorrent client to BitTorrent, but kept Ludvig. So Spotify had a lot of experience within peer to peer networking. So the original innovation was a distribution innovation, where Spotify built an end to end media distribution system; up until only a few years ago, we actually hosted all the music ourselves. So we had both the server side and the client, and that meant that we could do things such as having a peer to peer solution to use local caching on the client side, because back then the world was mostly desktop.
But we could also do things like hack the TCP protocols, things like Nagle's algorithm, for kind of exponential back off, or ramp up and just go full throttle and optimize for latency at the cost of bandwidth. And all of this end to end control meant that we could do an experience that felt like a step change. These days, we actually are on GCP, we don't host our own stuff, and everyone is really fast these days. So that was the initial competitive advantage. But then obviously, you have to move on over time. And that was over 10 years ago, right? That was in 2008. The product was launched in Sweden. It was in a beta, I think, in 2007. And it was on the desktop, right? It was desktop only. There's no phone. There was no phone. The iPhone came out in 2008. But the App Store came out one year later, I think. So the writing was on the wall, but there was no phone yet. You've mentioned that people would use Spotify to discover the songs they like, and then they would torrent those songs so they could copy them to their phone. Just hilarious. Exactly. Not torrent, pirate. Seriously, piracy does seem to be like a good guide for business models. Video content. As far as I know, Spotify doesn't have video content. Well, we do have music videos, and we do have videos on the service. But the way we think about ourselves is that we're an audio service, and we think that if you look at the amount of time that people spend on audio, it's actually very similar to the amount of time that people spend on video. So the opportunity should be equally big. But today, it's not at all valued the same. Video is valued much higher. So we think it's basically completely undervalued. So we think of ourselves as an audio service. But within that audio service, I think video can make a lot of sense. I think when you're discovering an artist, you probably do want to see them and understand who they are, to understand their identity. You won't see that video every time; 90% of the time, the phone is going to be in your pocket. For podcasters, if you use video, I think that can make a ton of sense. So we do have video, but we're an audio service where, think of it as we call it internally, backgroundable video. Video that is helpful, but isn't the driver of the narrative. I think also, if we look at YouTube, there's quite a few folks who listen to music on YouTube. So in some sense, YouTube is a bit of a competitor to Spotify, which is very strange to me, that people use YouTube to listen to music. They play essentially the music videos, right? But they don't watch the videos; they put the phone in their pocket. Well, strangely, maybe it's similar to what we were for the piracy networks, where YouTube, for historical reasons, has a lot of music videos. So people use YouTube for a lot of the discovery part of the process, I think. But then it's not a really good sort of, quote unquote, MP3 player, because it doesn't even play in the background. You have to keep the app in the foreground. So it's not a good consumption tool, but it's decently good for discovery. I mean, I think YouTube is a fantastic product. And I use it for all kinds of purposes. That's true. If I were to admit something, I do use YouTube a little bit to assist in the discovery process of songs. And then if I like it, I'll add it to Spotify. But that's OK. That's OK with us. OK, so sorry, we're jumping around a little bit. So it's kind of incredible.
You look at Napster, you look at the early days of Spotify. One fascinating point is how do you grow a user base? So you're there in Sweden. You have an idea. I saw the initial sketches that looked terrible. How do you grow a user base from a few folks to millions? I think there are a bunch of tactical answers. So first of all, I think you need a great product. I don't think you take a bad product and market it to be successful. So you need a great product. But sorry to interrupt, but it's a totally new way to listen to music, too. So it's not just, did people realize immediately that Spotify is a great product? No, I think they did. So back to the point of piracy, it was a totally new way to listen to music legally. But people had been used to the access model in Sweden and the rest of the world for a long time through piracy. So one way to think about Spotify, it was just legal and fast piracy. And so people had been using it for a long time. So it wasn't alien to them. They didn't really understand how it could be legal because it seemed too fast and too good to be true, which I think is a great product proposition if you can be too good to be true. But what I saw again and again was people showing each other, clicking the song, showing how fast it started and saying, can you believe this? So I really think it was about speed. Then we also had an invite program that was really meant for scaling because we hosted our own service. We needed to control scaling. But that built a lot of expectation. And I don't want to say hype because hype implies that it wasn't true. Excitement around the product. And we've replicated that when we launched in the US. We also built up an invite only program first. There are lots of tactics, but I think you need a great product to solve some problem. And basically the key innovation, there was technology, but on a meta level, the innovation was really the access model versus the ownership model. And that was tricky. A lot of people said that they wanted to be able to own it. I mean, they wanted to own their music. They would never kind of rent it or borrow it. But I think the fact that we had a free tier, which meant that you get to keep this music for life as well, helped quite a lot. So this is an interesting psychological point that maybe you can speak to. It was a big shift for me. It's almost like I had to go to therapy for this. I think I would describe my early listening experience, and I think a lot of my friends do, as basically hoarding music. As you're like slowly, one song by one song, or maybe albums, gathering a collection of music that you love. And you own it. It's like often, especially with CDs or tape, you physically had it. And what Spotify, what I had to come to grips with, and it was kind of liberating actually, is to throw away all the music. I've had this therapy session with lots of people. And I think the mental trick is, so actually we've seen the user data. When Spotify started, a lot of people did the exact same thing. They started hoarding as if the music would disappear. Almost the equivalent of downloading. And so we had these playlists that had limits of like a few hundred thousand tracks. We figured no one will ever hit that. Well, they do. Hundreds and hundreds of thousands of tracks. And to this day, some people want to actually save, quote unquote, and then play the entire catalog.
But I think the therapy session goes something like: instead of throwing away your music, if you took your files and you stored them in a locker at Google, it'd be a streaming service. It's just that in that locker, you have all the world's music now for free. So instead of throwing away your music, you got all the music. It's yours. You could think of it as having a copy of the world's catalog there forever. So you actually got more music instead of less. It's just that you just took that hard disk and you sent it to someone who stored it for you. And once you go through that mental journey, I'm like, it's still my files. They're just over there. And I just have 40 million or 50 million or something now. Then people are like, OK, that's good. The problem, I think, because you paid us a subscription, would have been if we hadn't had the free tier, where you would feel like, even if I don't want to pay anymore, I still get to keep them. You keep your playlists forever. They don't disappear even though you stop paying. I think that was really important. If we would have started as, you know, you can put in all this time, but if you stop paying, you lose all your work, I think that would have been a big challenge, and that was the big challenge for a lot of our competitors. That's another reason why I think the free tier is really important. That people need to feel the security that the work they put in will never disappear, even if they decide not to pay. I like how you put it, the work you put in. I actually stopped even thinking of it that way. Actually, Spotify taught me to just enjoy music, as opposed to what I was doing before, which is, in an unhealthy way, hoarding music. And I found that because I was doing that, I was listening to a small selection of songs way too much, to where I was getting sick of them. Whereas with Spotify, the more liberating kind of approach, I was just enjoying. Of course, I listened to Stairway to Heaven over and over, but because of the extra variety, I don't get as sick of them. There's an interesting statistic I saw. So Spotify has, maybe you can correct me, but over 50 million songs, tracks, and over 3 billion playlists. So 50 million songs and 3 billion playlists. 60 times more playlists than songs. What do you make of that? Yeah. So the way I think about it is that from a statistician or machine learning point of view, you have all these, if you want to think about reinforcement learning, you have this state space of all the tracks. You can take different journeys through this world. I think of these as people helping themselves and each other, creating interesting vectors through this space of tracks. And then it's not so surprising that across many tens of millions of atomic units, there will be billions of paths that make sense. And we're probably quite far away from having found all of them. So kind of our job now is helping users do that. When Spotify started, it was really a search box that was, for the time, pretty powerful. And then there was what I'd like to refer to as this programming language called playlisting, where if you were pretty good at music, as you probably are, you knew your new releases, you knew your back catalog, you knew your Stairway to Heaven, you could create a soundtrack for yourself using this playlisting tool, this like meta programming language for music to soundtrack your life. And people who were good at music, it's back to how do you scale the product. For people who were good at music, that actually was enough.
If you had the catalog and a good search tool, and you could create your own sessions, you could create a really good soundtrack for your entire life. Probably perfectly personalized, because you did it yourself. But the problem was most people, many people aren't that good at music. They just can't spend the time. Even if you're very good at music, it's going to be hard to keep up. So what we did to try to scale this was to essentially try to build, you can think of them as agents, this friend that some people had that helped them navigate this music catalog. That's what we're trying to do for you. But also there is something like 200 million monthly active users on Spotify. So okay, from the machine learning perspective, you have these 200 million people, plus what they're creating. It's really interesting to think of a playlist as, I mean, I don't know if you meant it that way, but it's almost like a programming language, or at least a trace of exploration by those individual agents, the listeners. And you have all these new tracks coming in. So it's a fascinating space that is ripe for machine learning. So how can playlists be used as data in terms of machine learning, to help Spotify organize the music? So we found in our data, not surprisingly, that people who playlisted a lot retained much better. They had a great experience. And so our first attempt was to playlist for users. And so we acquired this company called Tunigo, with editors and professional playlisters, and kind of leveraged the maximum of human intelligence to help build kind of these vectors through the track space for people. And that broadened the product. And the editors used statistical means: they could see, when they created a playlist, how that playlist performed. They could see skips of the songs, they could see how the songs performed, and they manually iterated the playlist to maximize performance for a large group of people. But there were never enough editors to playlist for you personally. So the obvious next step, the promise of machine learning, was to go from kind of group personalization using editors and tools and statistics to individualization. And then, what's so interesting about the 3 billion playlists we have is, the truth is we lucked out. This was not an a priori strategy, as is often the case. It looks really smart in hindsight, but it was dumb luck. We looked at these playlists and we had some people in the company, a person named Erik Bernhardsson, who was really good at machine learning already back then, in like 2007, 2008. Back then it was mostly collaborative filtering and so forth. But we realized that what this is, is people grouping tracks for themselves that have some semantic meaning to them. And then they actually label it with a playlist name as well. So in a sense, people were grouping tracks along semantic dimensions and labeling them. And so could you use that information to find that latent embedding? And so we started playing around with collaborative filtering and we saw tremendous success with it. Basically trying to extract some of these dimensions. And if you think about it, it's not surprising at all. It'd be quite surprising if playlists were actually random, if they had no semantic meaning. For most people, they group these tracks for some reason. So we just happened across this incredible data set, where people are taking these tens of millions of tracks and grouping them along different semantic vectors.
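To make the collaborative filtering idea above a bit more concrete, here is a minimal sketch of the kind of thing being described: treat each playlist as a group of tracks, factorize the track-by-playlist matrix, and read off latent track embeddings in which co-playlisted tracks land near each other. The playlists, track names, and tiny dimensionality below are invented for illustration; this is not Spotify's actual pipeline, which would use implicit matrix factorization or neural models at far larger scale.

```python
import numpy as np

# Toy playlist data: each playlist is a group of track IDs (all made up).
playlists = [
    ["stairway_to_heaven", "kashmir", "hotel_california"],
    ["kashmir", "hotel_california", "bohemian_rhapsody"],
    ["one_more_time", "levels", "strobe"],
    ["levels", "strobe", "wake_me_up"],
]

tracks = sorted({t for p in playlists for t in p})
index = {t: i for i, t in enumerate(tracks)}

# Binary track-by-playlist matrix: M[i, j] = 1 if track i appears in playlist j.
M = np.zeros((len(tracks), len(playlists)))
for j, playlist in enumerate(playlists):
    for t in playlist:
        M[index[t], j] = 1.0

# Truncated SVD gives low-dimensional latent track embeddings:
# tracks that co-occur in the same playlists end up close together.
U, S, _ = np.linalg.svd(M, full_matrices=False)
k = 2  # embedding dimension (tiny here; hundreds in practice)
embeddings = U[:, :k] * S[:k]

def most_similar(track, topn=3):
    """Rank other tracks by cosine similarity in the latent space."""
    v = embeddings[index[track]]
    sims = embeddings @ v / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(v) + 1e-9
    )
    order = np.argsort(-sims)
    return [(tracks[i], round(float(sims[i]), 3)) for i in order if tracks[i] != track][:topn]

print(most_similar("kashmir"))  # the rock tracks cluster together
print(most_similar("levels"))   # the EDM tracks cluster together
```

The same intuition is why the playlist title matters as a label: it names the semantic dimension that the grouped tracks share.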
And the semantics being outside the individual users. So it's some kind of universal, there's a universal embedding that holds across people on this earth. Yes, I do think that the embeddings you find are going to be reflective of the people who playlisted. So if you have a lot of indie lovers who playlist, your embedding is going to perform better there. But what we found was that yes, there were these latent similarities. They were very powerful. And it was interesting because I think that the people who playlisted the most initially were the so called music aficionados who were really into music. And they often had a certain... Their taste was often geared towards a certain type of music. And so what surprised us, if you look at the problem from the outside, you might expect that the algorithms would start performing best with mainstreamers first. Because it somehow feels like an easier problem to solve mainstream taste than really particular taste. It was the complete opposite for us. The recommendations performed fantastically for people who saw themselves as having very unique taste. That's probably because all of them playlisted. And they didn't perform so well for mainstreamers. They actually thought they were a bit too particular and unorthodox. So we had the complete opposite of what we expected. Success with the hardest problem first, and then we had to try to scale to more mainstream recommendations. So you've also acquired Echo Nest, which analyzes song data. So in your view, maybe you can talk about, what kind of data is there from a machine learning perspective? From a machine learning perspective, there's a huge amount. We're talking about playlisting and just user data of what people are listening to, the playlists they're constructing, and so on. And then there's the actual data within a song. What makes a song, I don't know, the actual waveforms. How do you mix the two? How much value is there in each? To me, it seems like a romantic notion that the song itself would contain useful information. But if I were to guess, user data would be much more powerful, like playlists would be much more powerful. Yeah, so we use both. Our biggest success initially was with playlist data, without understanding anything about the structure of the song. But when we acquired Echo Nest, they had the inverse problem. They actually didn't have any play data. They were a provider of recommendations, but they didn't actually have any play data. So they looked at the structure of songs, sonically, and they looked at Wikipedia for cultural references and so forth, right? And did a lot of NLU and so forth. So we got that skill into the company and combined kind of our user data with their content based approach. So you can think of it as we were user based and they were content based in their recommendations. And we combined those two. And for some cases, where you have a new song that has no play data, obviously you have to try to go by either who the artist is or the sonic information in the song or what it's similar to. So there's definitely a value in both and we do a lot in both, but I would say, yes, the user data captures things that have to do with culture in the greater society that you would never see in the content itself.
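The cold start situation mentioned just above, a brand-new song with no play data, is the usual motivation for blending a collaborative (user-data) score with a content-based (audio and cultural) score, and falling back to content alone when no plays exist yet. The sketch below only illustrates that idea; the vectors, field names, and the 0.8 weight are hypothetical, not Spotify's or Echo Nest's actual method.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def recommend_score(user, track, alpha=0.8):
    """Blend a collaborative score (from play data) with a content score
    (from audio/cultural features). If the track has no collaborative
    embedding yet, e.g. a brand-new release, fall back to content only."""
    content = cosine(user["content_profile"], track["audio_features"])
    if track.get("collab_embedding") is None:      # cold start: no plays yet
        return content
    collab = cosine(user["collab_profile"], track["collab_embedding"])
    return alpha * collab + (1 - alpha) * content  # weighted hybrid

# Hypothetical user profile and tracks (all numbers invented).
user = {
    "collab_profile": np.array([0.9, 0.1, 0.3]),
    "content_profile": np.array([0.7, 0.2, 0.5, 0.1]),
}
known_track = {
    "collab_embedding": np.array([0.8, 0.2, 0.4]),
    "audio_features": np.array([0.6, 0.1, 0.6, 0.2]),
}
new_release = {
    "collab_embedding": None,                      # nobody has played it yet
    "audio_features": np.array([0.7, 0.3, 0.4, 0.1]),
}

print(recommend_score(user, known_track))  # hybrid score
print(recommend_score(user, new_release))  # content-only fallback
```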
But that said, we have a research lab in Paris, and we can talk more about that, on machine learning on the creator side, what it can do for creators, not just for the consumers, where we looked at how the structure of a song actually affects the listening behavior. And it turns out that there is a lot there; we can predict things like skips based on the song itself. We could say that maybe you should move that chorus a bit, because your skip rate is going to go up here. There is a lot of latent structure in the music, which is not surprising, because it is some sort of mind hack. So there should be structure. That's probably what we respond to. You just blew my mind actually from the creator perspective. So that's a really interesting topic that probably most creators aren't taking advantage of, right? So I've recently gotten to interact with a few folks, YouTubers, who are obsessed with this idea of what do I do to make sure people keep watching the video? And they look at the analytics of at which point do people turn it off and so on. First of all, I don't think that's healthy, because you can do it a little too much. But it is a really powerful tool for helping the creative process. You just made me realize you could do the same thing for creation of music. And so is that something you've looked into? And can you speak to how much opportunity there is for that kind of thing? Yeah, so I listened to the podcast with Siraj and I thought it was fantastic, and I reacted to the same thing where he said he posted something in the morning, immediately watched the feedback, where the drop off was, and then responded to that in the afternoon, which is quite different from how people make podcasts, for example. Yes, exactly. I mean, the feedback loop is almost non existent. So if we back out one level, I think actually both for music and podcasts, which we also do at Spotify, I think there's a tremendous opportunity just for the creation workflow. And I think it's really interesting speaking to you, because you're a musician, a developer, and a podcaster. If you think about those three different roles, if you make the leap as a musician, if you think about it as a software tool chain, really, your DAW with the stems, that's the IDE, right? That's where you work in source code format with what you're creating. Then you sit around and you play with that. And when you're happy, you compile that thing into some sort of AAC or MP3 or something. You do that because you get distribution. There are so many runtimes for that MP3 across the world, in car stereos and stuff. So you kind of compile this executable, you ship it out, in kind of an old fashioned boxed software analogy. And then you hope for the best, right? But as a software developer, you would never do that. First, you go on GitHub and you collaborate with other creators. And then you think it'd be crazy to just ship one version of your software without doing an A B test, without any feedback loop. Issue tracking. Exactly. And then you would look at the feedback loop and try to optimize that thing, right? So I think if you think of it as a very specific software tool chain, it looks quite arcane, the tools that a music creator has versus what a software developer has. So that's kind of how we think about it. Why wouldn't a music creator have something like GitHub where you could collaborate much more easily?
So we bought this company called Soundtrap, which has a kind of Google Docs for music approach, where you can collaborate with other people on the kind of source code format with stems. And I think introducing things like AI tools there to help you as you're creating music, both in helping you put accompaniment to your music, like drums or something, help you master and mix automatically, help you understand how this track will perform. Exactly what you would expect as a software developer. I think it makes a lot of sense. And I think the same goes for a podcaster. I think podcasters will expect to have the same kind of feedback loop that Siraj has, like, why wouldn't you? Maybe it's not healthy, but... Sorry, I wanted to criticize the fact that you can overdo it, and we're in a new era of that. So you can become addicted to it and therefore, as people say, you become a slave to the YouTube algorithm. It's always a danger of a new technology, as opposed to, say, if you're creating a song, becoming too obsessed about the intro riff to the song that keeps people listening versus actually the entirety of the creation process. It's a balance. But the fact that there's zero, I mean, you're blowing my mind right now, because you're completely right that there is no signal whatsoever. There's no feedback whatsoever on the creation process in music or podcasting, almost at all. And are you saying that Spotify is hoping to help create tools to, not tools, but... No, tools actually. Actually, tools. Tools for creators. Absolutely. So we've made some acquisitions the last few years around music creation, this company called Soundtrap, which is a digital audio workstation, but that is browser based. And their focus was really the Google Docs approach. You can collaborate with people much more easily than you could in previous tools. So we have some of these tools that we're working with that we want to make accessible, and then we can connect it with our consumption data. We can create this feedback loop where we could help you create and help you understand how you will perform. We also acquired this other company within podcasting called Anchor, which is one of the biggest podcasting tools, mobile focused. So really focused on simple creation, or easy access to creation. But that also gives us this feedback loop. And even before that, we invested in something called Spotify for Artists and Spotify for Podcasters, which is an app that you can download, and you can verify that you are that creator. And then you get things that software developers have had for years. If you look at your podcast, for example, on Spotify, or a song that you released, you can see how it's performing, which cities it's performing in, who's listening to it, what the demographic breakdown is. So similar in the sense that you can understand how you're actually doing on the platform. So we definitely want to build tools. I think you also interviewed the head of research for Adobe. And, back to Photoshop that you like, I think that's an interesting analogy as well. Photoshop, I think, has been very innovative in helping photographers and artists. And I think there should be the same kind of tools for music creators, where you could get AI assistance, for example, as you're creating music, as you can do with Adobe, where you can say, I want a sky over here, and you can get help creating that sky.
The really fascinating thing is, what Adobe doesn't have is distribution for the content you create. So you don't have the data: whatever creation I make in Photoshop or Premiere, I can't get immediate feedback, like I can on YouTube, for example, about the way people are responding. And if Spotify is creating those tools, that's actually a really exciting world. But let's talk a little about podcasts. So I have trouble talking to one person. So it's a bit terrifying and kind of hard to fathom, but on average, 60,000 to 100,000 people will listen to this episode. Okay, so it's intimidating. Yeah, it's intimidating. So I host it on Blueberry. I don't know if I'm pronouncing that correctly, actually. It looks like most people listen to it on Apple Podcasts, Cast Box and Pocket Casts, and only about a thousand listen on Spotify. It's just my podcast, right? So do you see a time when Spotify will dominate this? Spotify is relatively new into this podcasting side. Yeah, in podcasting. What's the deal with podcasting and Spotify? How serious is Spotify about podcasting? Do you see a time where everybody, you know, probably a huge amount of people, a majority perhaps, listen to music on Spotify? Do you see a time when the same is true for podcasting? Well, I certainly hope so. That is our mission. Our mission as a company is actually to enable a million creators to live off of their art, and a billion people to be inspired by it. And what I think is interesting about that mission is it actually puts the creators first, even though it started as a consumer focused company, and it's to be able to live off of their art, not just make some money off of their art. So it's quite an ambitious project. So we think about creators of all kinds, and we kind of expanded our mission from being music to being audio a while back. And that's not so much because we think we made that decision. We think that decision was made for us. We think the world made that decision. Whether we like it or not, when you put in your headphones, you're going to make a choice between music and a new episode of your podcast or something else. We're in that world whether we like it or not. And that's how radio works. So we decided that we think it's about audio. You can see the rise of audiobooks and so forth. We think audio is a great opportunity. So we decided to enter it. And obviously, Apple and Apple Podcasts is absolutely dominating in podcasting, and we didn't have a single podcast until only like two years ago. What we did though was we looked at this and said, can we bring something to this? We want to do this, but back to the original Spotify, we have to do something that consumers actually value to be able to do this. And the reason we've gone from not existing at all to being, by quite a wide margin, the second largest in podcast consumption, still a wide gap to iTunes, but we're growing quite fast, I think is because when we looked at the consumer problem, people said surprisingly that they wanted their podcasts and music in the same application. So what we did was we took a little bit of a different approach where we said, instead of building a separate podcast app, we thought, is there a consumer problem to solve here? Because the others are very successful already. And we thought there was, in making a more seamless experience where you can have your podcasts and your music in the same application, because we think it's audio to you. And that has been successful.
And that meant that we actually had 200 million people to offer this to instead of starting from zero. So I think we have a good chance, because we're taking a different approach than the competition. And back to the other thing I mentioned about creators, because we're looking at the end to end flow, I think there's a tremendous amount of innovation to do around podcasts as a format. When we have creation tools and consumption, I think we could start improving what podcasting is. I mean, a podcast is this opaque, big, like one, two hour file that you're streaming, which really doesn't make that much sense in 2019, that it's not interactive. There are no feedback loops, nothing like that. So I think if we're going to win, it's going to have to be because we build a better product for creators and for consumers. So we'll see, but it's certainly our goal. We have a long way to go. Well, the creators part is really exciting. You already got me hooked there. Because the only stats I have, Blueberry just recently added the stats of whether it's listened to the end or not. And that's like a huge improvement, but that's still nowhere near where you could possibly go in terms of statistics. You just download the Spotify for Podcasters app and verify. And then you'll know where people dropped out in this episode. Oh, wow. Okay. The moment I started talking. Okay. I might be depressed by this, but okay. So one other question, about the original Spotify for music, and I have a question about podcasting along this line, is the idea of albums. I have music aficionado friends who are really big fans of music and often really enjoy albums, listening to entire albums of an artist. Correct me if I'm wrong, but I feel like Spotify has helped replace the idea of an album with playlists. So you create your own albums. It's kind of the way, at least, I've experienced music, and I've really enjoyed it that way. One of the things that was missing in podcasting for me, I don't know if it's missing, I don't know, it's an open question for me, but the way I listen to podcasts is the way I would listen to albums. So I take the Joe Rogan Experience and that's an album. And, you know, I put that on and I listen one episode after the next, there's a sequence and so on. Is there room for doing what Spotify did for music, but creating playlists, sort of this kind of playlisting idea of breaking apart from individual podcasts and creating kind of this interplay? Or have you thought about that space? It's a great question. So I think in music, you're right. Basically you bought an album, so it was like you bought a small catalog of like 10 tracks, right? Again, a lot of consumption, you think it's about what you like, but it's based on the business model. So you paid for this 10 track service and then you listened to that for a while. And then when everything was flat priced, you tended to listen differently. Now, I think the album is still tremendously important. That's why we have it, and you can save albums and so forth. And you have a huge amount of people who really listen according to albums. And I like that because it is a creator format, you can tell a longer story over several tracks. And so some people listen to just one track.
Some people actually want to hear that whole story. Now in podcasts, I think it's different. You can argue that podcasts might be more like shows on Netflix. You have like a full season of Narcos, and you're probably not going to do like one episode of Narcos and then one of House of Cards. You know, there's a narrative there, and you love the cast and you love these characters. So I think people love shows. And I think they will listen to those shows. I do think you follow a bunch of shows at the same time. So there's certainly an opportunity to bring you the latest episode of, you know, whatever the five, six, 10 things that you're into. But I think people are going to listen to specific hosts and love those hosts for a long time. Because I think there's something different with podcasts, where in this format, the experience of the audience is actually sitting here right between us. Whereas if you look at something on TV, you would sit over there and the audio would come to you from both of us, as if you were watching, not as if you were part of the conversation. So my experience, having listened to podcasts like yours and Joe Rogan, is I feel like I know all of these people. They have a lot of experience. They have no idea who I am, but I feel like I've listened to so many hours of them. It's very different from me watching like a TV show or an interview. So I think you kind of fall in love with people and experience them in a different way. So I think shows and hosts are going to be very important. I don't think that's going to go away into some sort of thing where you don't even know who you're listening to. I don't think that's going to happen. What I do think is, I think there's a tremendous discovery opportunity in podcasts, because the catalog is growing quite quickly. And I think podcasting is only like five or six hundred thousand shows right now. If you look back to YouTube as another analogy of creators, no one really knows, if you would lift the lid on YouTube, but it's probably billions of episodes. And so I think the podcast catalog will probably grow tremendously, because the creation tools are getting easier. And then you're going to have this discovery opportunity that I think is really big. So a lot of people tell me that they love their shows, but discovering podcasts kind of sucks. It's really hard to get into a new show. They're usually quite long. It's a big time investment. So I think there's plenty of opportunity in the discovery part. Yeah, for sure. A hundred percent. Even the dumbest things, there's so much low hanging fruit too. For example, just knowing what episode to listen to first to try out a podcast. Exactly. Because most podcasts don't have an order to them. They can be listened to out of order, and sorry to say, some episodes are better than others. So some episodes of Joe Rogan are better than others. And it's nice to know which you should listen to, to try it out. And there's, as far as I know, almost no information in terms of like upvotes on how good an episode is. Exactly. So I think part of the problem is, it's kind of like music. There isn't one answer. People use music for different things and there's actually many different types of music.
There's workout music and there's classical piano music and focus music and so forth. I think the same with podcasts. Some podcasts are sequential. They're supposed to be listened to in order; it's actually telling a narrative. Some podcasts are one topic, kind of like yours, but different guests, so you could jump in anywhere. Some podcasts actually have completely different topics. And for those podcasts, it might be that we should recommend one episode, because it's about AI from someone, but then they talk about something that you're not interested in in the rest of the episodes. So what we're spending a lot of time on now is just first understanding the domain and creating kind of the knowledge graph of how these objects relate and how people consume. And I think we'll find that it's going to be different. I'm excited because Spotify is the first people I'm aware of that are trying to do this for podcasting. Podcasting has been like a wild west up until now. We want to be very careful though, because it's been a very good wild west. I think it's this fragile ecosystem. And we want to make sure that you don't barge in and say like, oh, we're going to internetize this thing. You have to think about the creators. You have to understand how they get distribution today, who listens, how they make money today, and try to, you know, make sure that their business model works, that they understand it. I think it's back to doing something to improve their product, like feedback loops and distribution. So jumping back into this fascinating world of recommender systems and listening to music and using machine learning to analyze things, do you think it's better to, well, correct me if I'm wrong, but currently Spotify lets people pick what they listen to for the most part. There's a discovery process, but you kind of organize playlists. Is it better to let people pick what they listen to or recommend what they should listen to, something like Stations by Spotify that I saw that you're playing around with? Maybe you can tell me what's the status of that. This is a Pandora style app that, as opposed to you selecting the music you listen to, kind of feeds you the music you listen to. What's the status of Stations by Spotify? What's its future? The story of Spotify, as we have grown, has been that we made it more accessible to different audiences, and Stations is another one of those, where the question is, some people want to be very specific. They actually want to hear Stairway to Heaven right now; that needs to be very easy to do. And some people, or even the same person, at some point might say, I want to feel upbeat or I want to feel happy or I want songs to sing in the car. So they put in the information at a very different level and then we need to translate that into what that means musically. So Stations is a test to create like a consumption input vector that is much simpler, where you can just tune it a little bit and see if that increases the overall reach. But we're trying to kind of serve the entire gamut, from super advanced so called music aficionados all the way to people who love listening to music but it's not their number one priority in life. They're not going to sit and follow every new release from every new artist. They need to be able to influence music at a different level.
So you can think of it as different products. And I think one of the interesting things, to answer your question on whether it's better to let the user choose or to just play: the challenge, when machine learning came along, was that there was a lot of thinking about what product development means in a machine learning context. People like Andrew Ng, for example, when he went to Baidu, he started doing a lot of practical machine learning, coming from academia, and he thought a lot about this. He had this notion that a product manager, designer and engineer used to work around this wireframe to describe what the product should look like. It was something to talk about. But when you're doing a chatbot or a playlist, what are you going to say? It should be good? That's not a good product description. So how do you do that? And he came up with this notion that the test set is the new wireframe. The job of the product manager is to source a good test set that is representative of what you want, like if you say, I want to play songs to sing in the car. The job of the product manager is to go and source a good test set of what that means, so then you can work with engineering to have algorithms try to produce that. So we try to think a lot about how to structure product development for a machine learning age. And what we discovered was that a lot of it is actually in the expectation, and you can go two ways. Let's say that you set the expectation with the user that this is a discovery product, like Discover Weekly. You're actually setting the expectation that most of what we show you will not be relevant. When you're in the discovery process, you're going to accept that. Actually, if you find one gem every Monday that you totally love, you're probably going to be happy, even though in the statistical sense, one out of 10 or one out of 20 is terrible. From a user point of view, because the setting was discovery, it's fine. Sorry to interrupt real quick. I just actually learned about Discover Weekly, which is a feature of Spotify that shows you cool songs to listen to. Maybe I can do issue tracking: I couldn't find it on my Spotify app. It's in your library. It's in the library? It's in the list in the library. Because I was like, whoa, this is cool, I didn't know this existed, and I tried to find it. But okay. I will show it to you and feed that back to our product team. There you go. But yeah, sorry. Just to mention, the expectation there is basically that you're going to discover new songs. Yeah. So then you can be quite adventurous in the recommendations you do. But we have another product called Daily Mix, which kind of implies that these are only going to be your favorites. So if you have one out of 10 that is good and nine out of 10 that don't work for you, you're going to think it's a horrible product. So actually a lot of the product development we learned over the years is about setting the right expectations. For Daily Mix, algorithmically, we would pick among things that feel very safe in your taste space, whereas for Discover Weekly we go kind of wild, because the expectation is that most of this is not going to be relevant. So to answer your question there, should you let the user pick or not? It depends. We have some products where the whole point is that the user can click play, put the phone in the pocket, and it should be really good music for like an hour.
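A quick aside for readers: the "test set is the new wireframe" idea above lends itself to a small illustration. The Python sketch below is not Spotify's actual tooling; it just shows how a product manager's hand-curated set of "songs to sing in the car" could be used to score competing ranking algorithms with precision-at-k. All track IDs and rankings are invented.

# Minimal sketch of "the test set is the new wireframe": a product manager
# curates examples of the target concept, and competing algorithms are
# scored against that curated set. All data here is invented.

def precision_at_k(recommended, curated, k=10):
    """Fraction of the top-k recommendations that appear in the curated set."""
    top_k = recommended[:k]
    return sum(1 for track in top_k if track in curated) / k

# The product manager's curated test set: tracks judged great to sing in the car.
curated_test_set = {"t12", "t47", "t88", "t91", "t103", "t150", "t201", "t305"}

# Two hypothetical algorithms' ranked outputs for the same playlist concept.
algo_a = ["t12", "t99", "t47", "t88", "t500", "t91", "t7", "t103", "t2", "t150"]
algo_b = ["t99", "t500", "t7", "t2", "t12", "t13", "t14", "t15", "t16", "t17"]

for name, ranking in [("algo_a", algo_a), ("algo_b", algo_b)]:
    print(name, "precision@10 =", precision_at_k(ranking, curated_test_set))

Engineering can then iterate on the algorithm while the curated set, like a wireframe, stays the shared definition of what "good" means for that product.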
We have other products where you probably need to say, like, no, no, save, no, no, and it's very interactive. I see. That makes sense. And then the radio product, the Stations product, is one of these click play, put it in your pocket for hours. That's really interesting. So you're thinking of different test sets for different users and trying to create products that optimize for those test sets, which represent a specific set of users. Yes. One thing that I think is interesting is that we invested quite heavily in editorial, in people creating playlists using statistical data, and that was successful for us. And then we also invested in machine learning. And for the longest time, within Spotify and within the rest of the industry, there was always this narrative of humans versus the machine, algo versus editorial. And editors would say, well, if I had that data, if I could see your playlisting history and I made a choice for you, I would have made a better choice. And they would have, because they're much smarter than these algorithms. The human is incredibly smart compared to our algorithms. They can take culture into account and so forth. The problem is that they can't make 200 million decisions per hour for every user that logs in. So the algo may not be as sophisticated, but it's much more efficient. So there was this contradiction. But then a few years ago, we started focusing on this kind of human in the loop thinking around machine learning. And we actually coined an internal term for it called algotorial, a combination of algorithms and editors. If we take a concrete example, you think of the editor, this paid expert that we have who's really good at something like soul, hip hop, EDM, something, right? They're a true expert, second to none in the industry, so they have all the cultural knowledge. You think of them as the product manager. And let's say that you think there's a product need in the world for something like songs to sing in the car, or songs to sing in the shower. I'm taking that example because it exists. People love to scream songs in the car when they drive, right? So you want to create that product, and you have this product manager who's a musical expert. They come up with a concept, like, I think this is a missing thing in humanity, a playlist called songs to sing in the car. They create the framing, the image, the title, and they create a test set: a group of songs, like a few thousand songs out of the catalog, that they manually curate as known songs that are great to sing in the car. And they can take things like true romance into account. They understand things that our algorithms do not at all. So they have this huge set of tracks. Then, when we deliver that to you, we look at your taste vectors, and you get the 20 tracks that are songs to sing in the car in your taste. So you have personalization and editorial input in the same process, if that makes sense. Yeah, it makes total sense, and I have several questions around that. This is fascinating. Okay. So first, it is a little bit surprising to me that the world expert humans are outperforming machines at specifying songs to sing in the car. So maybe you could talk to that a little bit. I don't know if you can put it into words, but what is it? How difficult is this problem?
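As a rough sketch of the algotorial flow just described (purely illustrative, with invented vectors and track names; the real pipeline is obviously far more involved): an editor hand-picks a large pool of tracks for a concept, and each listener gets the slice of that pool closest to their taste vector.

import numpy as np

# Hypothetical editor-curated pool: track id -> embedding in a shared taste space.
rng = np.random.default_rng(0)
editorial_pool = {f"track_{i}": rng.normal(size=8) for i in range(2000)}

# Hypothetical user taste vector, e.g. an average of the user's saved tracks' embeddings.
user_taste = rng.normal(size=8)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank the editor's pool by closeness to this user's taste and keep the top 20.
ranked = sorted(editorial_pool.items(),
                key=lambda kv: cosine(user_taste, kv[1]),
                reverse=True)
personalized_playlist = [track_id for track_id, _ in ranked[:20]]
print(personalized_playlist)

The editor supplies the cultural judgment, deciding which few thousand tracks belong at all; the algorithm only chooses which 20 of those a given listener sees.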
I guess what I'm trying to ask is, how difficult is it to encode the cultural references, the context of the song, the artists, all those things together? Can machine learning really not do that? I mean, I think machine learning is great at replicating patterns if you have the patterns. But if you try to write with me a spec of what the definition of a great song to sing in the car is (is it loud? does it have many choruses? should it have been in movies?), it quickly gets incredibly complicated, right? Yeah. And a lot of it may not be in the structure of the song or the title. It could be cultural references, because, you know, it has a history. So the definition problems quickly get complicated, and I think that was the insight of Andrew Ng when he said the job of the product manager is to understand these things that algorithms don't, and then define what that looks like. Then you have something to train towards, right? Then you have kind of the test set. And so today the editors create this pool of tracks and then we personalize. You could easily imagine that once you have this set, you could have some automatic exploration on the rest of the catalog, because then you understand what it is. And then the other side of it, where machine learning does help, is this taste vector. How hard is it to construct a vector that represents the things an individual human likes, this human preference? Music isn't like Amazon, like things you usually buy. Music seems more amorphous. It's this thing that's hard to specify. If you look at my playlist, what is the music that I love? It seems to be much more difficult to specify concretely. So how hard is it to build a taste vector? It is very hard in the sense that you need a lot of data. And what we found was that it's not a stationary problem; it changes over time. And so we've gone through a journey. You've done a lot of computer vision; obviously I've done a bunch of computer vision in my past. We started with handcrafted heuristics: this is kind of indie music, this is this, and if you consume this, you'd probably like this. So we started there, and we have some of that still. Then what was interesting about the playlist data was that you could find these latent things that wouldn't necessarily even make sense to you, that could even capture maybe cultural references, because they co-occurred, things that wouldn't have appeared mechanistically in the content or so forth. So I think the core assumption is that there are patterns in almost everything, and if there are patterns, these embedding techniques are getting better and better now. Now, like everyone else, we're also using kind of deep embeddings where you can encode binary values and so forth. And what I think is interesting is this process of trying to find things that you wouldn't actually have guessed. So it is very hard in an engineering sense to find the right dimensions. It's an incredible scalability problem to do for hundreds of millions of users and to update it every day. But in theory, embeddings aren't that complicated: you try to find some principal components or something like that, dimensionality reduction and so forth. So the theory, I guess, is easy. The practice is very, very hard. And it's a huge engineering challenge.
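To make the point about latent dimensions falling out of playlist co-occurrence concrete, here is a toy sketch (the playlists are invented, and real systems use much richer embedding methods): playlists are treated a bit like sentences, and tracks that co-occur end up with similar vectors after factorizing the co-occurrence matrix.

import numpy as np
from itertools import combinations

# Toy playlist data: tracks that people tend to put together.
playlists = [
    ["led_zeppelin", "deep_purple", "black_sabbath"],
    ["led_zeppelin", "black_sabbath", "rainbow"],
    ["daft_punk", "justice", "deadmau5"],
    ["daft_punk", "deadmau5", "avicii"],
    ["deep_purple", "rainbow"],
]

tracks = sorted({t for p in playlists for t in p})
index = {t: i for i, t in enumerate(tracks)}

# Build a symmetric co-occurrence matrix from playlist membership.
cooc = np.zeros((len(tracks), len(tracks)))
for playlist in playlists:
    for a, b in combinations(playlist, 2):
        cooc[index[a], index[b]] += 1
        cooc[index[b], index[a]] += 1

# Factorize it to get low-dimensional "taste" vectors per track.
u, s, _ = np.linalg.svd(cooc)
embeddings = u[:, :2] * s[:2]   # keep two latent dimensions

def similarity(a, b):
    va, vb = embeddings[index[a]], embeddings[index[b]]
    return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

print(similarity("led_zeppelin", "rainbow"))    # tracks that co-occur score high
print(similarity("led_zeppelin", "daft_punk"))  # tracks that never co-occur score near zero

The scalability and non-stationarity problems Gustav mentions are exactly what this toy version hides: doing it over tens of millions of tracks and hundreds of millions of users, refreshed every day, is the hard part.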
But fortunately, we have some amazing research and engineering teams in this space. Yeah, I guess it's similar to what I deal with in the autonomous vehicle space. The question is, how hard is driving? And here, basically, the question is one of edge cases. So embedding probably works, not probably, but I would imagine works well in a lot of cases. So there's a bunch of questions that arise then. Do song preferences, does your taste vector, depend on context, like mood, right? There are different moods, so how does that come into it? Is it possible to take that into consideration? Or do you just leave that as an interface problem that allows the user to control it? So when I'm looking for workout music, I kind of specify it by choosing certain playlists, doing certain searches. Yeah, so that's a great point. Back to the product development. You could try to spend a few years trying to predict which mood you're in automatically when you open Spotify, or you create a tab which is happy and a tab which is sad, right? And you're going to be right 100% of the time with one click. Now, it's probably much better to let the user tell you if they're happy or sad, or if they want to work out. On the other hand, if your user interface becomes 2,000 tabs, you're introducing so much friction that no one will use the product. So then you have to get better. It's this thing where, I don't remember who coined it, but it's called fault tolerant UIs, right? You build a UI that is tolerant of being wrong, and then you can be much less right in your algorithms. So we've had to learn a lot of that: building the right UI that fits where the machine learning is. And a great discovery there, which was made by the teams during one of our hack days, was this thing of taking discovery, packaging it into a playlist, and saying that these are new tracks that we think you might like based on this. And setting the right expectation made it a great product. So I think we have this benefit that, for example, Tesla doesn't have: that we can change the expectation. We can build a fault tolerant setting. It's very hard to be fault tolerant when you're driving at 100 miles per hour or something. And we have the luxury of being wrong if we have the right UI, which gives us different abilities to take more risk. So I actually think the self driving problem is much harder. Oh, yeah, for sure. It's much less fun because people die. Exactly. And at Spotify, it's such a more fun problem because failure is beautiful in a way. It leads to exploration. So it's a really fun reinforcement learning problem. The worst case scenario is you get these WTF tweets like, how did I get this? This song, yeah. Which is a lot better than in self driving. Exactly. So what's the feedback that a user, what's the signal that a user provides into the system? You mentioned skipping. What is the strongest signal? You didn't mention clicking like. So we have a few signals that are important. Obviously playing, playing through. One of the benefits of music, actually, even compared to podcasts or movies, is that the object itself is really only about three minutes. So you get a lot of chances to recommend, and the feedback loop is every three minutes instead of every two hours or something. So you actually get kind of noisy, but quite fast feedback.
And so you can see if people play through, which is the inverse of skip, really. That's an important signal. On the other hand, much of the consumption happens when your phone is in your pocket. Maybe you're running or driving, or you're playing on a speaker. And so you not skipping doesn't mean that you love that song. It may be that it wasn't bad enough that you would walk up and skip. So it's a noisy signal. Then we have the equivalent of the like, which is that you saved it to your library. That's a pretty strong signal of affection. And then we have the more explicit signal of playlisting. You took the time to create a playlist, you put it in there. There's very little chance that, if you took all that trouble, this is not a really important track to you. And then we also understand what tracks it relates to. So we have the playlisting, we have the like, and then we have the listening or skip. And you have to have very different approaches to all of them because of the different levels of noise. One is very voluminous but noisy, and another is rare, but you can probably trust it. Yeah, it's interesting, because I think between them those signals capture all the information you'd want to capture. I mean, there's a feeling, a shallow feeling for me, that sometimes I'll hear a song and it's like, yes, this was the right song for the moment. But there's really no way to express that fact except by listening through it all the way and maybe playing it again at that time or something. There's no button that says this was the best song I could have heard at this moment. Well, we're playing around with that, with kind of the thumbs up concept, saying, I really like this. Just kind of talking to the algorithm. It's unclear if that's the best way for humans to interact. Maybe it is. Maybe they should think of Spotify as a person, an agent sitting there trying to serve you, and you can say, bad Spotify, good Spotify. Right now, the analogy we've had is more, you shouldn't think of us. We should be invisible. And the feedback is, if you save it, you kind of work for yourself. You do a playlist because you think it's great, and we can learn from that. It's kind of back to Tesla, how they have this shadow mode. They sit in what you drive. We kind of took the same analogy. We sit in what you playlist, and then maybe we can offer you an autopilot where we can take over for a while or something like that, and then back off if you say, that's not good enough. But I think it's interesting to figure out what your mental model is: if Spotify is an AI that you talk to, which I think might be a bit too abstract for many consumers, or if you still think of it as, it's my music app, but it's just more helpful. And it depends on the device it's running on, which brings us to smart speakers. A lot of the Spotify listening I do is on devices I can talk to, whether it's from Amazon, Google or Apple. What's the role of Spotify on those devices? How do you think of it differently than on the phone or on the desktop? There are a few things to say about that. First of all, it's incredibly exciting. They're growing like crazy, especially here in the US. And it's solving a consumer need that, I think, you can think of as just remote interactivity. You can control this thing from across the room.
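The different noise levels of the signals Gustav lists in this exchange (play-through, skip, save, playlist) are often handled by confidence weighting in implicit-feedback recommenders. A minimal sketch with invented weights, just to show the shape of it:

# Sketch of confidence-weighting implicit feedback signals of different quality.
# The weights are invented; the point is that rare, deliberate actions
# (playlisting, saving) count far more per event than a single play-through.
SIGNAL_WEIGHTS = {
    "play_through": 1.0,   # voluminous but noisy (the phone may just be in a pocket)
    "save": 10.0,          # explicit affection: saved to library
    "playlist_add": 25.0,  # strongest: the user took the trouble to playlist it
    "skip": -2.0,          # mild negative evidence
}

def preference_scores(events):
    """events: list of (track_id, signal_name); returns track_id -> score."""
    scores = {}
    for track_id, signal in events:
        scores[track_id] = scores.get(track_id, 0.0) + SIGNAL_WEIGHTS[signal]
    return scores

events = [
    ("song_a", "play_through"), ("song_a", "play_through"), ("song_a", "save"),
    ("song_b", "play_through"), ("song_b", "skip"),
    ("song_c", "playlist_add"),
]
print(preference_scores(events))   # {'song_a': 12.0, 'song_b': -1.0, 'song_c': 25.0}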
And it may feel like a small thing, but it turns out that friction matters to consumers. Being able to say play, pause and so forth from across the room is very powerful. So basically, you made the living room interactive now. And what we see in our data is that the number one use case for these speakers is music, music and podcasts. So fortunately for us, it's been important to these companies to have those use cases covered, so they want Spotify on them. We have very good relationships with them, and we're seeing tremendous success with them. What I think is interesting about them is that it's already working. We kind of had this epiphany many years ago, back when we started using Sonos. If you went through all the trouble of setting up your Sonos system, you had this magical experience where you had all the music ever made in your living room. And we made this assumption about the home: everyone used to have a CD player at home, but they never managed to get their files working in the home. Having network attached storage was too cumbersome for most consumers. So we made the assumption that the home would skip from the CD all the way to streaming boxes, where you would buy the stereo and it would have all the music built in. That took longer than we thought. But with the voice speakers, that was the unlocking that made the connected speaker happen in the home. So it really exploded, and we saw this engagement that we predicted would happen. What I think is interesting, though, is where it's going from now. Right now, you think of them as voice speakers. But if you look at Google I.O., for example, they just added a camera, where when the alarm goes off, instead of saying, hey, Google, stop, you can just wave your hand. So I think they're going to think of it more as an agent or as an assistant, truly an assistant. And an assistant that can see you is going to be much more effective than a blind assistant. So I think these things will morph, and we won't necessarily think of them as, quote unquote, voice speakers anymore, just as interactive access to the Internet in the home. But I still think that the biggest use case for those will be audio. So for that reason, we're investing heavily in it. And we built our own NLU stack to be able to innovate there. The challenge here is, how do you innovate in that world? It lowers friction for consumers, but it's also much more constrained. You have no pixels to play with in an audio only world. It's really the vocabulary that is the interface. So we started investing and playing around quite a lot with that, trying to understand what the future will be of you speaking and gesturing and waving at your music. And you're actually nudging closer to the autonomous vehicle space, because from everything I've seen, the level of frustration people experience upon failure of natural language understanding is much higher than failure in other contexts. People get frustrated really fast. So if you screw that experience up even just a little bit, they give up really quickly. Yeah. And I think you see that in the data. While it's tremendously successful, the most common interactions are play, pause and next, the things where, if you compare it to taking out your phone, unlocking it, bringing up the app and clicking skip, it's much lower friction. But then for longer, more complicated things, like, can you find me that song about..., people still bring up the phone and search and then play it on their speaker?
So we tried again to build a fault tolerant UI, where for the more complicated things you can still pick up your phone and have powerful full keyboard search, and then we try to optimize for where there is actually lower friction. It's kind of like the Tesla autopilot thing. You have to be at the level where you're helpful. If you're too smart and just in the way, people are going to get frustrated. And first of all, I'm not obsessed with Stairway to Heaven. It's just a good song. But let me mention it as a use case because it's an interesting one. I've literally told one of, I don't want to say the name of the speaker, because when people are listening to this it'll make their speaker go off. But I talk to the speaker and I say, play Stairway to Heaven. And every time, well, not every time, but a large percentage of the time, it plays the wrong Stairway to Heaven. It plays some cover of it. And that part of the experience, I actually wonder, from a business perspective, does Spotify control that entire experience or not? It seems like the NLU, the natural language stuff, is controlled by the speaker, and then Spotify stays at a layer below that. It's a good and complicated question, some of which is dependent on the partners, so it's hard to comment on the specifics. But the question is the right one. The challenge is if you can't use any of the personalization. I mean, we know which Stairway to Heaven. And the truth is, maybe for one person it is exactly the cover that they want, and they would be very frustrated if we played something else. I think we default to the right version, but you actually want to be able to play the cover for the person that just played the cover 50 times, or Spotify is just going to seem stupid. So you want to be able to leverage the personalization. But you have this stack where you have the ASR and this thing called the n-best list of the best guesses, and then the personalization comes in at the end. You actually want the personalization to be in there when you're guessing what they actually meant. So we're working with these partners, and it's a complicated thing. First of all, you want to be very careful with your users' data. You don't want to share your users' data without their permission. But you want to share some data so that their experience gets better, so that these partners can understand enough, but not too much, and so forth. So the trick is that it's a business driven relationship where you're doing product development across companies together, which is really complicated. But this is exactly why we built our own NLU, so that we actually can make personalized guesses, because this is the biggest frustration from a user point of view. They don't understand about ASR and n-best lists and business deals. They're like, how hard can it be? I've told this thing 50 times which version, and still it plays the wrong thing. It can't be hard. So we try to take the user approach. The user is not going to understand the complications of the business; we have to solve it. So let's talk about a complicated subject that I myself am quite torn about: the idea of paying artists. Right. I saw that as of August 31st, 2018, over 11 billion dollars had been paid to rights holders and further distributed to artists from Spotify. So a lot of money is being paid to artists.
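The n-best list problem Gustav describes above (the wrong Stairway to Heaven) is essentially a reranking problem: blend the speech recognizer's confidence with a personalization prior before committing to a track. A hypothetical sketch; none of this reflects any partner's actual API, and the candidates, confidences and play counts are invented.

# Hypothetical reranking of an ASR n-best list with a personalization prior.

def rerank(n_best, user_play_counts, asr_weight=0.6, personal_weight=0.4):
    """Blend ASR confidence with how often this user played each candidate."""
    total_plays = sum(user_play_counts.get(c["track_id"], 0) for c in n_best) or 1
    scored = []
    for cand in n_best:
        personal = user_play_counts.get(cand["track_id"], 0) / total_plays
        score = asr_weight * cand["asr_confidence"] + personal_weight * personal
        scored.append((score, cand["track_id"]))
    return max(scored)[1]

n_best = [
    {"track_id": "stairway_original", "asr_confidence": 0.55},
    {"track_id": "stairway_cover_x",  "asr_confidence": 0.60},
]

# A user who has played the original 50 times gets the original, even though
# the cover edged it out on raw ASR confidence.
print(rerank(n_best, {"stairway_original": 50, "stairway_cover_x": 0}))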
First of all, the whole time as a consumer, when I look at Spotify, I'm not sure I'm remembering correctly, but I think you said exactly how I feel, which is, this is too good to be true. When I started using Spotify, I assumed you guys would go bankrupt in like a month. It's like, this is too good. A lot of people did. I was like, this is amazing. So one question I have is the bigger question: how do you make money in this complicated world? How do you deal with the relationship with record labels, which is complicated? You essentially have the task of herding cats, but rich and powerful cats, and you also have the task of paying artists enough, and paying those labels enough, and still making money in the Internet space where people are not willing to pay hundreds of dollars a month. So how do you navigate that space? That's a beautiful description, herding rich cats. I haven't heard that before. It is very complicated, and I think certainly, actually, betting against Spotify has been statistically a very smart thing to do, just looking at the line of roadkill in music streaming services. I think if I had understood the complexity when I joined Spotify... Fortunately, or unfortunately, I didn't know enough about the music industry to understand the complexities, because then I would have made a more rational guess that it wouldn't work. So, you know, ignorance is bliss. But I think there have been a few distinct challenges. As I said, one of the things that made it work at all was that Sweden and the Nordics was a lost market, so there was no risk for labels to try this. I don't think it would have worked if the market was healthy. So that was the initial condition. Then we had this tremendous challenge with the model itself. At that point most people were pirating. But for the people who bought a download or a CD, the artist would get all the revenue for all the future plays then, right? So you got it all up front, whereas the streaming model was almost nothing day one, almost nothing day two, and then at some point this curve of incremental revenue would intersect with your day one payment. And that took a long time to play out before the music labels understood it. But on the artist side, it took a lot of time to understand that actually, if I have a big hit that is going to be played for many years, this is a much better model, because I get paid based on how much people use the product, not how much they thought they would use it on day one or so forth. So it was a complicated model to get across, but time helped with that. And now the revenues to the music industry actually are bigger again; it's gone through this incredible dip and now they're back up. And so we're very proud of having been a part of that. So there have been distinct problems. I think when it comes to the labels, we have taken the painful approach. Some of our competition at the time kind of looked at other companies and said, if we just ignore the rights, we get really big, really fast. We're going to be too big for the labels to kind of, too big to fail. They're not going to kill us. We didn't take that approach. We went legal from day one and we negotiated and negotiated and negotiated. It was very slow. It was very frustrating. We were angry at seeing other companies taking shortcuts and seeming to get away with it. But it was this game theory thing where, over many rounds of playing the game, this would be the right strategy.
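The crossover Gustav describes above, between a one-time download payment and a per-stream trickle, is easy to make concrete with made-up numbers (real per-stream rates vary widely by deal and market):

# Made-up numbers to illustrate the download-vs-streaming crossover point.
upfront_payment = 0.70       # hypothetical rights-holder cut of a $0.99 download
per_stream_payout = 0.004    # hypothetical blended per-stream rate
streams_per_month = 20       # hypothetical plays per month by one keen listener

cumulative, month = 0.0, 0
while cumulative < upfront_payment:
    month += 1
    cumulative += per_stream_payout * streams_per_month

print(f"Streaming overtakes the one-time payment after about {month} months "
      f"(${cumulative:.2f} vs ${upfront_payment:.2f}), and keeps paying after that.")

With these invented numbers the crossover lands around month nine; a track that keeps getting played for years then out-earns the one-time sale, which is the argument that took time to land with artists and labels.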
And even though clearly there are a lot of frustrations at times during renegotiations, there is this weird trust where we have been honest and fair. We've never screwed them. They've never screwed us. It's been 10 years, but there's this trust, and they know that if music doesn't get really big, if lots of people do not want to listen to music and pay for it, Spotify has no business model. So we actually are incredibly aligned. Other companies, not to be tense, but other companies have other business models where even if they made no money from music, they'd still be profitable companies. But Spotify won't be. So I think the industry sees that we are actually aligned business wise. So there is this trust that allows us to do product development, even if it's scary, taking risks. The free model itself was an incredible risk for the music industry to take, and they should get credit for it. Now, some of it was that they had nothing to lose in the game. Some of it was that they had nothing to lose in Sweden. But frankly, a lot of the labels also took risk. And so I think we built up that trust. I think herding of cats sounds a bit, what's the word, it sounds dismissive of the cats. Dismissive. No, every cat matters. They're all beautiful and very important. Exactly. They've taken a lot of risks, and certainly it's been frustrating. So it's really like playing, it's game theory. If you play the game many times, then you can have the statistical outcome that you bet on. And it feels very painful when you're in the middle of that thing. I mean, there's risk, there's trust, there's relationships. Just from having read the biography of Steve Jobs, similar kinds of relationships were discussed around iTunes. The idea of selling a song for a dollar was very uncomfortable for labels. Exactly. And it was the same kind of thing. It was trust, it was game theory, and a lot of relationships that had to be built. And it's really a terrifyingly difficult process that Apple could go through a little bit more easily, because they could afford for that process to fail. For Spotify, it seems terrifying because you can't. Initially, I think a lot of it comes down honestly to Daniel and his tenacity in negotiating, which seems like an impossible task because he was completely unknown and so forth. But maybe that was also the reason that it worked. But I think game theory is probably the best way to think about it. You could go straight for this Nash equilibrium where someone is going to defect, or you play it many times and you try to actually go for the top left, the cooperation cell. Is there any magical reason why Spotify seems to have won this? A lot of people have tried to do what Spotify tried to do, and Spotify has come out well. So the answer is that there's no magical reason, because I don't believe in magic. But I think there are reasons. And I think some of them are that people have misunderstood a lot of what we actually do. The actual Spotify model is very complicated. They've looked at the premium model and said, it seems like you can charge $9.99 for music and people are going to pay, but that's not what happened. Actually, when we launched the original mobile product, everyone said they would never pay. What happened was they started on the free product, and then their engagement grew so much that eventually they said, maybe it is worth $9.99, right? Your propensity to pay grows with your engagement. So we have this super complicated business model.
We operate two different business models, advertising and premium, at the same time. And I think that is hard to replicate. I struggle to think of other companies that run large scale advertising and subscription products at the same time. So I think the business model is actually much more complicated than people think it is. And so some people went after just the premium part without the free part and ran into a wall where no one wanted to pay. Some people went after just, music should be free, just ads, which doesn't give you enough revenue and doesn't work for the music industry. So I think that combination is kind of opaque from the outside. So maybe I shouldn't say it here and reveal the secret, but that turns out to be harder to replicate than you would think. So there's a lot of brilliant business strategies out there. Brilliant business strategy here. Brilliance or luck? Probably more luck, but it doesn't really matter. It looks brilliant in retrospect. Let's call it brilliant. Yeah, when the books are written, they'll be brilliant. You've mentioned that your philosophy is to embrace change. So how will the music streaming and music listening world change over the next 10 years, 20 years? You look out into the far future. What do you think? I think that music, and for that matter audio, podcasts, audiobooks, is one of the few core human needs. There is no good reason to me why it shouldn't be at the scale of something like messaging or social networking. I don't think it's a niche thing to listen to music or news or something. So scale is obviously one of the things that I really hope for. I hope that it's going to be billions of users. I hope eventually everyone in the world gets access to all the world's music ever made. So obviously, I think it's going to be a much bigger business. Otherwise, we wouldn't be betting this big. Now, if you look more at how it is consumed, what I'm hoping is back to this analogy of the software tool chain. I sometimes internally make this analogy to text messaging. Text messaging was also based on standards, in the era of mobile carriers. You had SMS, the 160 character SMS. And it was great because everyone agreed on the standards. So as a consumer, you got a lot of distribution and interoperability, but it was a very constrained format. And when the industry wanted to add pictures to that format, to do MMS, I looked it up and I think it took from the late 80s to the early 2000s. That's like a 15, 20 year product cycle to bring pictures into that. Now, once that entire value chain of creation and consumption got wrapped in one software stack, within something like Snapchat or WhatsApp, the first week they added disappearing messages. Then two weeks later, they added stories. The pace of innovation when you're on one software stack and you can affect both creation and consumption, I think, is going to be rapid. So with these streaming services, we now, for the first time in history, have enough people, I hope, on one of these services, whether it's Spotify or Amazon or Apple or YouTube, and hopefully enough creators, that you can actually start working with the format again. And that excites me. I think being able to change these constraints from the last 100 years, that could really do something interesting. I really hope it's not just going to be iteration on the same thing for the next 10 to 20 years as well.
Yeah, changing the creation of music, the creation of audio, the creation of podcasts is a really fascinating possibility. I myself don't understand what it is about podcasts that's so intimate. It just is. I listen to a lot of podcasts. I think it touches on a deep human need for connection that people do feel when they listen. I don't understand what the psychology of that is, but in this world that's becoming more and more disconnected, it feels like this is fulfilling a certain kind of need. And empowering the creator, as opposed to just the listener, is really interesting. I'm really excited that you're working on this. Yeah, I think one of the things that is inspiring for our teams working on podcasts is exactly that. Whether you think, like I probably do, that it's something biological about perceiving yourself to be in the middle of the conversation that makes you listen in a different way, it doesn't really matter. People seem to perceive it differently. And there was this narrative for a long time that if you look at video, everything in the format got shorter and shorter and shorter because of financial pressures and monetization and so forth. And eventually, at the end, there's almost like a 20 second clip of people just screaming something. And I feel really good about the fact that you could have interpreted that as people having no attention span anymore: they don't want to listen to things, they're not interested in deeper stories, people are getting dumber. But then podcasts came along, and it's almost like, no, no, the need still existed. Maybe it was just the fact that you're not prepared to look at your phone like this for two hours, but if you can drive at the same time, it seems like people really want to dig deeper and they want to hear the more complicated version. So to me it is very inspiring that podcasts are actually long form. It gives me a lot of hope for humanity that people seem really interested in hearing deeper, more complicated conversations. This is, I don't understand it. It's fascinating. The majority of people, for this podcast, listen to the whole thing. In this whole conversation we've been talking for an hour and 45 minutes, and somebody will, I mean, most people will be listening to these words I'm speaking right now. It's crazy. You wouldn't have thought that 10 years ago, with where the world seemed to be going. That's very positive, I think. That's really exciting. And empowering the creator there is really exciting. Last question. You also have a passion for just mobile in general. How do you see the smartphone world, the digital space of smartphones and just everything that's on the move, whether it's Internet of Things and so on, changing over the next 10 years and beyond? I think that one way to think about it is that computing might be moving out of these multipurpose devices, the computer we had and the phone, into specific purpose devices. And it will be ambient: at least in my home, you just shout something and there's always one of these speakers close enough. And so you start behaving differently. It's as if you have the Internet ambiently around you and you can ask it things. So I think computing will get more integrated, and we won't necessarily think of it as connected to a device in the same way that we do today. I don't know the path to that. We used to have these desktop computers, and then we partially replaced them with laptops and left the desktop at home when we went to work.
And then we got these phones and we started leaving the laptop at home for a while. And maybe for stretches of time you're going to start using the watch, and you can leave your phone at home for a run or something. And we're on this progressive path where I think what is happening with voice is that you have an interaction paradigm that doesn't require as large physical devices. So I definitely think there's a future where you can have your AirPods and your watch and you can do a lot of computing. And I don't think it's going to be this binary thing. I think it's going to be like many of us still have a laptop, we just use it less, and so you shift your consumption over. And I don't know about AR glasses and so forth. I'm excited about it. I spent a lot of time in that area, but I still think it's quite far away. AR, VR, all of that. Yeah, VR is happening and working. I think the recent Oculus Quest is quite impressive. I think AR is further away. At least that type of AR. But I do think your phone or watch or glasses understanding where you are, and maybe what you're looking at, and being able to give you audio cues about that, or you can say, what is this, and it tells you what it is, that I think might happen. You use your watch or your glasses as a mouse pointer on reality. I think it might be a while before... I might be wrong. I hope I'm wrong. I think it might be a while before we walk around with these big lab glasses that project things. I agree with you. It's actually really difficult when you have to understand the physical world enough to project onto it. I lied about the last question. Go ahead. Because I just thought of audio and my favorite topic, which is the movie Her. Do you think, whether it's part of Spotify or not, we'll have... I don't know if you've seen the movie Her. Absolutely. And there, audio is the primary form of interaction and of connection with another entity that you can actually have a relationship with, that you fall in love with, based on voice alone, audio alone. Do you think that's possible, first of all, based on audio alone, to fall in love with somebody? Somebody or... Well, yeah, let's go with somebody. Just have a relationship based on audio alone. And second question to that: can we create an artificial intelligence system that allows one to fall in love with it, and her, or him, with you? So this is my personal answer, speaking for me as a person: the answer is quite unequivocally yes on both. I think what we just said about podcasts and the feeling of being in the middle of a conversation, we just said that feels like a very personal setting. So if you walk around with these headphones and you're speaking with this thing all of the time, and it feels like it's in your brain, I think it's going to be much easier to fall in love with than something that would be on your screen. I think that's entirely possible. And then, you can probably answer this better than me, but on the question of whether it's going to be possible to build a machine that can achieve that, whether you think of it as something that can fake it, the philosophical zombie that simulates it well enough, or something that somehow actually is, I think it's only a question of time. If you ask me about time, I'd have a different answer. But if you say I'm given some half-infinite time, absolutely.
I think it's just atoms and arrangement of information. Well, I personally think that love is a lot simpler than people think. So we started with true romance and ended in love. I don't see a better place to end. Beautiful. Gustav, thanks so much for talking today. Thank you so much. It was a lot of fun. It was fun.
Gustav Soderstrom: Spotify | Lex Fridman Podcast #29
The following is a conversation with Kevin Scott, the CTO of Microsoft. Before that, he was the senior vice president of engineering and operations at LinkedIn. And before that, he oversaw mobile ads engineering at Google. He also has a podcast called Behind the Tech with Kevin Scott, which I'm a fan of. This was a fun and wide ranging conversation that covered many aspects of computing. It happened over a month ago, before the announcement of Microsoft's investment in OpenAI that a few people have asked me about. I'm sure there'll be one or two people in the future that'll talk with me about the impact of that investment. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. And I'd like to give a special thank you to Tom and Nelante Bighousen for their support of the podcast on Patreon. Thanks Tom and Nelante. Hope I didn't mess up your last name too bad. Your support means a lot and inspires me to keep this series going. And now, here's my conversation with Kevin Scott. You've described yourself as a kid in a candy store at Microsoft because of all the interesting projects that are going on. Can you try to do the impossible task and give a brief whirlwind view of all the spaces that Microsoft is working in? Both research and product? If you include research, it becomes even more difficult. I think broadly speaking, Microsoft's product portfolio includes everything from big cloud business, like a big set of SaaS services. We have sort of the original, or like some of what are among the original productivity software products that everybody uses. We have an operating system business. We have a hardware business where we make everything from computer mice and headphones to high end personal computers and laptops. We have a fairly broad ranging research group where we have people doing everything from economics research. So there's this really, really smart young economist, Glenn Weil, who my group works with a lot, who's doing this research on these things called radical markets. He's written an entire technical book about this whole notion of radical markets. So like the research group sort of spans from that to human computer interaction to artificial intelligence. And we have GitHub, we have LinkedIn, we have a search advertising and news business and like probably a bunch of stuff that I'm embarrassingly not recounting in this list. Gaming to Xbox and so on, right? Yeah, gaming for sure. Like I was having a super fun conversation this morning with Phil Spencer. So when I was in college, there was this game that LucasArts made called Day of the Tentacle that my friends and I played forever. And like we're doing some interesting collaboration now with the folks who made Day of the Tentacle. And I was like completely nerding out with Tim Schafer, like the guy who wrote a Day of the Tentacle this morning, just a complete fan boy, which sort of it like happens a lot. Like Microsoft has been doing so much stuff at such breadth for such a long period of time that like being CTO like most of the time, my job is very, very serious. And sometimes like I get caught up in like how amazing it is to be able to have the conversations that I have with the people I get to have them with. Yeah, to reach back into the sentimental. And what's the radical markets and the economics? 
So the idea with radical markets is like, can you come up with new market based mechanisms to, you know, I think we have this, we're having this debate right now, like does capitalism work like free markets work? Can the incentive structures that are built into these systems produce outcomes that are creating sort of equitably distributed benefits for every member of society? You know, and I think it's a reasonable, reasonable set of questions to be asking. And so what Glenn, and so like, you know, one mode of thought there, like if you have doubts that the markets are actually working, you can sort of like tip towards like, okay, let's become more socialist and, you know, like have central planning and, you know, governments or some other central organization is like making a bunch of decisions about how, you know, sort of work gets done and, you know, like where the, you know, where the investments and where the outputs of those investments get distributed. Glenn's notion is like, lean more into like the market based mechanism. So like, for instance, you know, this is one of the more radical ideas, like suppose that you had a radical pricing mechanism for assets like real estate where you were, you could be bid out of your position in your home, you know, for instance. So like if somebody came along and said, you know, like I can find higher economic utility for this piece of real estate that you're running your business in, like then like you either have to, you know, sort of bid to sort of stay or like the thing that's got the higher economic utility, you know, sort of takes over the asset which would make it very difficult to have the same sort of rent seeking behaviors that you've got right now because like if you did speculative bidding, like you would very quickly like lose a whole lot of money. And so like the prices of the assets would be sort of like very closely indexed to like the value that they could produce. And like, because like you'd have this sort of real time mechanism that would force you to sort of mark the value of the asset to the market, then it could be taxed appropriately. Like you couldn't sort of sit on this thing and say, oh, like this house is only worth 10,000 bucks when like everything around it is worth 10 million. That's really, so it's an incentive structure that where the prices match the value much better. Yeah, and Glenn does a much better job than I do at selling and I probably picked the world's worst example, you know, and it's intentionally provocative, so like this whole notion, like I'm not sure whether I like this notion that like we can have a set of market mechanisms where I could get bid out of my property, you know, but you know, like if you're thinking about something like Elizabeth Warren's wealth tax, for instance, like you would have, I mean, it'd be really interesting in like how you would actually set the price on the assets and like you might have to have a mechanism like that if you put a tax like that in place. It's really interesting that that kind of research, at least tangentially is touching Microsoft research. That you're really thinking broadly. Maybe you can speak to, this connects to AI, so we have a candidate, Andrew Yang, who kind of talks about artificial intelligence and the concern that people have about, you know, automation's impact on society and arguably, Microsoft is at the cutting edge of innovation in all these kinds of ways and so it's pushing AI forward. 
How do you think about combining all our conversations together here, with radical markets and socialism and innovation in AI that Microsoft is doing, and then Andrew Yang's worry that that will result in job loss for the lower end and so on? How do you think about that? I think it's sort of one of the most important questions in technology, like maybe even in society right now: how is AI going to develop over the course of the next several decades, and what's it going to be used for, and what benefits will it produce, and what negative impacts will it produce, and who gets to steer this whole thing? I'll say at the highest level, one of the real joys of getting to do what I do at Microsoft is that Microsoft has this heritage as a platform company. And so Bill has this thing that he said a bunch of years ago, where the measure of a successful platform is that it produces far more economic value for the people who build on top of the platform than is created for the platform owner or builder. And I think we have to think about AI that way. As a platform. Yeah, it has to be a platform that other people can use to build businesses, to fulfill their creative objectives, to be entrepreneurs, to solve problems that they have in their work and in their lives. It can't be a thing where there are a handful of companies, sitting in a very small handful of cities geographically, who are making all the decisions about what goes into the AI, and then on top of all this infrastructure build all of the commercially valuable uses for it. So I think that's bad from an economics and equitable distribution of value perspective, sort of back to this whole notion of, do the markets work? But I think it's also bad from an innovation perspective, because I have infinite amounts of faith in human beings that if you give folks powerful tools, they will go do interesting things. And it's more than just a few tens of thousands of people with the interesting tools; it should be millions of people with the tools. So it's sort of like, you think about the steam engine in the late 18th century: it was maybe the first large scale substitute for human labor that we've built, like a machine. And in the beginning, when these things were getting deployed, the folks who got most of the value from the steam engines were the folks who had capital, so they could afford to build them, and they built factories around them and businesses, and the experts who knew how to build and maintain them. But access to that technology democratized over time. Like now, an engine, it's not like a differentiated thing. There isn't one engine company that builds all the engines, and all of the things that use engines are made by this company, and they get all the economics from all of that. It's fully democratized. Like we're sitting here in this room, and even though there are probably things like the MEMS gyroscopes that are in both of our phones, there are little engines sort of everywhere. They're just a component in how we build the modern world. Like AI needs to get there. Yeah, so that's a really powerful way to think. If we think of AI as a platform versus a tool that Microsoft owns, as a platform that enables creation on top of it, that's the way to democratize it. That's really interesting actually. And Microsoft throughout its history has been positioned well to do that.
And the tie back to this radical markets thing, like so my team has been working with Glenn on this, and Jaren Lanier actually. So Jaren is the sort of father of virtual reality. Like he's one of the most interesting human beings on the planet, like a sweet, sweet guy. And so Jaren and Glenn and folks in my team have been working on this notion of data as labor or like they call it data dignity as well. And so the idea is that if you, again going back to this sort of industrial analogy, if you think about data as the raw material that is consumed by the machine of AI in order to do useful things, then like we're not doing a really great job right now in having transparent marketplaces for valuing those data contributions. So and we all make them explicitly like you go to LinkedIn, you sort of set up your profile on LinkedIn, like that's an explicit contribution. Like you know exactly the information that you're putting into the system. And like you put it there because you have some nominal notion of what value you're going to get in return. But it's like only nominal, like you don't know exactly what value you're getting in return. Like service is free, like it's low amount of perceived debt. And then you've got all this indirect contribution that you're making just by virtue of interacting with all of the technology that's in your daily life. And so like what Glenn and Jaren and this data dignity team are trying to do is like, can we figure out a set of mechanisms that let us value those data contributions so that you could create an economy and like a set of controls and incentives that would allow people to like maybe even in the limit, like earn part of their living through the data that they're creating. And like you can sort of see it in explicit ways. There are these companies like Scale AI, and like there are a whole bunch of them in China right now that are basically data labeling companies. So like you're doing supervised machine learning, you need lots and lots of label training data. And like those people who work for those companies are getting compensated for their data contributions into the system. And so. That's easier to put a number on their contribution because they're explicitly labeling data. Correct. But you're saying that we're all contributing data in different kinds of ways. And it's fascinating to start to explicitly try to put a number on it. Do you think that's possible? I don't know. It's hard. It really is. Because we don't have as much transparency as I think we need in like how the data is getting used. And it's super complicated. Like we, I think as technologists sort of appreciate like some of the subtlety there. It's like the data gets created and then it gets, it's not valuable. Like the data exhaust that you give off, or the explicit data that I am putting into the system isn't super valuable atomically. Like it's only valuable when you sort of aggregate it together into sort of large numbers. This is true even for these like folks who are getting compensated for like labeling things. Like for supervised machine learning now, like you need lots of labels to train a model that performs well. And so I think that's one of the challenges. It's like how do you sort of figure out like because this data is getting combined in so many ways like through these combinations like how the value is flowing. Yeah, that's fascinating. Yeah. And it's fascinating that you're thinking about this. 
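A toy way to see what "valuing a data contribution" could mean in practice: measure how much a model loses when one contributor's labeled examples are left out. This is a crude leave-one-out proxy (research proposals in this space tend to use things like Shapley values), and all of the data below is synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def true_labels(x):
    return (x[:, 0] + x[:, 1] > 0).astype(int)

# Three hypothetical contributors label different amounts of data;
# one of them labels noisily (half of their labels are flipped).
contributors = {}
for name, n, noise in [("alice", 80, 0.0), ("bob", 80, 0.5), ("carol", 20, 0.0)]:
    x = rng.normal(size=(n, 2))
    y = true_labels(x)
    flips = rng.random(n) < noise
    contributors[name] = (x, np.where(flips, 1 - y, y))

x_test = rng.normal(size=(500, 2))
y_test = true_labels(x_test)

def accuracy(names):
    x = np.vstack([contributors[n][0] for n in names])
    y = np.concatenate([contributors[n][1] for n in names])
    return LogisticRegression().fit(x, y).score(x_test, y_test)

full = accuracy(list(contributors))
for name in contributors:
    rest = [n for n in contributors if n != name]
    print(f"{name}: leave-one-out value = {full - accuracy(rest):+.3f}")

The noisy contributor typically shows up with near-zero or negative value, which is the kind of signal a transparent data marketplace would need. The combinatorial nature of the problem, where a contribution's value depends on everything else in the pool, is exactly the hard part Kevin points to.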
And I wasn't even going into this conversation expecting the breadth of research really that Microsoft broadly is thinking about, you're thinking about at Microsoft. So if we go back to 89 when Microsoft released Office, or 1990 when they released Windows 3.0. In your view, I know you weren't there through its history, but how has the company changed in the 30 years since as you look at it now? The good thing is it's started off as a platform company. Like it's still a platform company, like the parts of the business that are thriving and most successful are those that are building platforms. Like the mission of the company now is, the mission's changed. It's like changed in a very interesting way. So back in 89, 90 like they were still on the original mission, which was like put a PC on every desk and in every home. And it was basically about democratizing access to this new personal computing technology, which when Bill started the company, integrated circuit microprocessors were a brand new thing. And people were building homebrew computers from kits, like the way people build ham radios right now. I think this is the interesting thing for folks who build platforms in general. Bill saw the opportunity there and what personal computers could do. And it was like, it was sort of a reach. Like you just sort of imagine like where things were when they started the company versus where things are now. Like in success, when you've democratized a platform, it just sort of vanishes into the platform. You don't pay attention to it anymore. Like operating systems aren't a thing anymore. Like they're super important, like completely critical. And like when you see one fail, like you just sort of understand. But like it's not a thing where you're not like waiting for the next operating system thing in the same way that you were in 1995, right? Like in 1995, like we had Rolling Stones on the stage with the Windows 95 rollout. Like it was like the biggest thing in the world. Everybody lined up for it the way that people used to line up for iPhone. But like, you know, eventually, and like this isn't necessarily a bad thing. Like it just sort of, you know, the success is that it's sort of, it becomes ubiquitous. It's like everywhere, like human beings, when their technology becomes ubiquitous, they just sort of start taking it for granted. So the mission now that Satya rearticulated five plus years ago now, when he took over as CEO of the company. Our mission is to empower every individual and every organization in the world to be more successful. And so, you know, again, like that's a platform mission. And like the way that we do it now is, is different. It's like we have a hyperscale cloud that people are building their applications on top of. Like we have a bunch of AI infrastructure that people are building their AI applications on top of. We have, you know, we have a productivity suite of software, like Microsoft Dynamics, which, you know, some people might not think is the sexiest thing in the world, but it's like helping people figure out how to automate all of their business processes and workflows and to help those businesses using it to grow and be more. So it's a much broader vision in a way now than it was back then. Like it was sort of very particular thing. And like now, like we live in this world where technology is so powerful and it's like such a basic fact of life that it both exists and is going to get better and better over time or at least more and more powerful over time. 
So like, you know, what you have to do as a platform player is just much bigger. Right. There's so many directions in which you can transform. You didn't mention mixed reality, too. You know, that's probably early days or it depends how you think of it. But if we think on a scale of centuries, it's the early days of mixed reality. Oh, for sure. And so with HoloLens, Microsoft is doing some really interesting work there. Do you touch that part of the effort? What's the thinking? Do you think of mixed reality as a platform, too? Oh, sure. When we look at what the platforms of the future could be, it's like fairly obvious that like AI is one. Like you don't have to, I mean, like that's, you know, you sort of say it to like someone and you know, like they get it. But like we also think of the like mixed reality and quantum as like these two interesting, you know, potentially. Quantum computing? Yeah. Okay. So let's get crazy then. So you're talking about some futuristic things here. Well, the mixed reality, Microsoft is really, it's not even futuristic, it's here. It is. It's incredible stuff. And look, and it's having an impact right now. Like one of the more interesting things that's happened with mixed reality over the past couple of years that I didn't clearly see is that it's become the computing device for folks who, for doing their work, who haven't used any computing device at all to do their work before. So technicians and service folks and people who are doing like machine maintenance on factory floors. So like they, you know, because they're mobile and like they're out in the world and they're working with their hands and, you know, sort of servicing these like very complicated things, they're, they don't use their mobile phone and like they don't carry a laptop with them and, you know, they're not tethered to a desk. And so mixed reality, like where it's getting traction right now, where HoloLens is selling a lot of units is for these sorts of applications for these workers. And it's become like, I mean, like the people love it. They're like, oh my God, like this is like for them, like the same sort of productivity boosts that, you know, like an office worker had when they got their first personal computer. Yeah, but you did mention it's certainly obvious AI as a platform, but can we dig into it a little bit? How does AI begin to infuse some of the products in Microsoft? So currently providing training of, for example, neural networks in the cloud or providing pre trained models or just even providing computing resources and whatever different inference that you wanna do using neural networks. How do you think of AI infusing as a platform that Microsoft can provide? Yeah, I mean, I think it's super interesting. It's like everywhere. And like we run these review meetings now where it's me and Satya and like members of Satya's leadership team and like a cross functional group of folks across the entire company who are working on like either AI infrastructure or like have some substantial part of their product work using AI in some significant way. Now, the important thing to understand is like when you think about like how the AI is gonna manifest in like an experience for something that's gonna make it better, like I think you don't want the AIness to be the first order thing. It's like whatever the product is and like the thing that is trying to help you do, like the AI just sort of makes it better. 
And this is a gross exaggeration, but like people get super excited about like where the AI is showing up in products and I'm like, do you get that excited about like where you're using a hash table like in your code? Like it's just another. It's just a tool. It's a very interesting programming tool, but it's sort of like it's an engineering tool. And so like it shows up everywhere. So like we've got dozens and dozens of features now in Office that are powered by like fairly sophisticated machine learning, our search engine wouldn't work at all if you took the machine learning out of it. And, like, increasingly, things like content moderation on our Xbox and xCloud platform. When you say moderation, do you mean like the recommender, like showing what you wanna look at next? No, no, no, it's like anti bullying stuff. So the usual social network stuff that you have to deal with. Yeah, correct. But it's like really it's targeted, it's targeted towards a gaming audience. So it's like a very particular type of thing where the line between playful banter and like legitimate bullying is like a subtle one. And like you have to like, it's sort of tough. Like I have. I'd love to if we could dig into it because you're also, you led the engineering efforts of LinkedIn. And if we look at LinkedIn as a social network, and if we look at the Xbox gaming as the social components, there are very different kinds of, I imagine, communication going on on the two platforms, right? And the line in terms of bullying and so on is different on the platforms. So how do you, I mean, it's such a fascinating philosophical discussion of where that line is. I don't think anyone knows the right answer. Twitter folks are under fire now, Jack at Twitter, for trying to find that line. Nobody knows what that line is. But how do you try to find the line for trying to prevent abusive behavior and at the same time, let people be playful and joke around and that kind of thing? I think in a certain way, like if you have what I would call vertical social networks, it gets to be a little bit easier. So like if you have a clear notion of like what your social network should be used for, or like what you are designing a community around, then you don't have as many dimensions to your sort of content safety problem as you do in a general purpose platform. I mean, so like on LinkedIn, like the whole social network is about connecting people with opportunity, whether it's helping them find a job or to sort of find mentors or to sort of help them like find their next sales lead or to just sort of allow them to broadcast their sort of professional identity to their network of peers and collaborators and sort of professional community. Like that is, I mean, like in some ways, like that's very, very broad, but in other ways it's sort of, it's narrow. And so like you can build AIs, like machine learning systems, that are capable within those boundaries of making better automated decisions about like what is sort of an inappropriate and offensive comment, or dangerous comment, or illegal content, when you have some constraints. You know, same thing with like the gaming social network. So for instance, like it's about playing games and having fun. And like the thing that you don't want to have happen on the platform is why bullying is such an important thing. Like bullying is not fun. So you want to do everything in your power to encourage that not to happen.
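As an aside, the simplest version of the kind of classifier being described here is easy to sketch. What follows is a deliberately tiny, hypothetical example in Python, not how the Xbox or LinkedIn moderation systems actually work; the handful of made-up comments stands in for the large, carefully labeled datasets a real system needs. Its main value is showing why the problem is hard: a bag-of-words model has no way to see the context that separates trash talk between friends from real bullying.

# Toy "is this comment abusive?" classifier. Purely illustrative: the examples,
# labels, and model choice are all made up for this sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "gg wp, nice clutch",                                     # friendly
    "lol you got wrecked, run it back",                       # banter
    "you are worthless, quit the game and never come back",   # abusive
    "nobody wants you here, loser",                           # abusive
    "good game everyone",                                     # friendly
    "uninstall, you ruin every match",                        # abusive
]
labels = [0, 0, 1, 1, 0, 1]  # 1 = abusive

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
features = vectorizer.fit_transform(texts)

clf = LogisticRegression()
clf.fit(features, labels)

new_comments = ["gg, you got wrecked", "quit the game, nobody wants you"]
probs = clf.predict_proba(vectorizer.transform(new_comments))[:, 1]
for comment, p in zip(new_comments, probs):
    print(f"{p:.2f} abusive  <- {comment!r}")

In practice the interesting work is everything this sketch leaves out: context about who is talking to whom, how they usually talk, appeals and human review, and the policy lines themselves.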
And yeah, but I think it's sort of a tough problem in general and it's one where I think, you know, eventually we're going to have to have some sort of clarification from our policymakers about what it is that we should be doing, like where the lines are, because it's tough. Like you don't, like in democracy, right? Like you don't want, you want some sort of democratic involvement. Like people should have a say in like where the lines are drawn. Like you don't want a bunch of people making like unilateral decisions. And like we are in a state right now for some of these platforms where you actually do have to make unilateral decisions where the policymaking isn't going to happen fast enough in order to like prevent very bad things from happening. But like we need the policymaking side of that to catch up, I think, as quickly as possible because you want that whole process to be a democratic thing, not a, you know, not some sort of weird thing where you've got a non representative group of people making decisions that have, you know, like national and global impact. And it's fascinating because the digital space is different than the physical space in which nations and governments were established. And so what policy looks like globally, what bullying looks like globally, what's healthy communication looks like globally is an open question and we're all figuring it out together, which is fascinating. Yeah, I mean with, you know, sort of fake news, for instance. And... Deep fakes and fake news generated by humans? Yeah, so we can talk about deep fakes, like I think that is another like, you know, sort of very interesting level of complexity. But like if you think about just the written word, right? Like we have, you know, we invented papyrus, what, 3,000 years ago where we, you know, you could sort of put word on paper. And then 500 years ago, like we get the printing press, like where the word gets a little bit more ubiquitous. And then like you really, really didn't get ubiquitous printed word until the end of the 19th century when the offset press was invented. And then, you know, just sort of explodes and like, you know, the cross product of that and the Industrial Revolution's need for educated citizens resulted in like this rapid expansion of literacy and the rapid expansion of the word. But like we had 3,000 years up to that point to figure out like how to, you know, like what's journalism, what's editorial integrity, like what's, you know, what's scientific peer review. And so like you built all of this mechanism to like try to filter through all of the noise that the technology made possible to like, you know, sort of getting to something that society could cope with. And like, if you think about just the piece, the PC didn't exist 50 years ago. And so in like this span of, you know, like half a century, like we've gone from no digital, you know, no ubiquitous digital technology to like having a device that sits in your pocket where you can sort of say whatever is on your mind to like what did Mary have in her, Mary Meeker just released her new like slide deck last week. You know, we've got 50% penetration of the internet to the global population. Like there are like three and a half billion people who are connected now. So it's like, it's crazy, crazy, like inconceivable, like how fast all of this happened. 
So, you know, it's not surprising that we haven't figured out what to do yet, but like we gotta really like lean into this set of problems because like we basically have three millennia worth of work to do about how to deal with all of this and like probably what, you know, amounts to the next decade worth of time. So since we're on the topic of tough, you know, tough challenging problems, let's look at more on the tooling side in AI that Microsoft is looking at is face recognition software. So there's a lot of powerful positive use cases for face recognition, but there's some negative ones and we're seeing those in different governments in the world. So how do you, how does Microsoft think about the use of face recognition software as a platform in governments and companies? How do we strike an ethical balance here? Yeah, I think we've articulated a clear point of view. So Brad Smith wrote a blog post last fall, I believe that sort of like outlined like very specifically what, you know, what our point of view is there. And, you know, I think we believe that there are certain uses to which face recognition should not be put. And we believe again, that there's a need for regulation there. Like the government should like really come in and say that, you know, this is where the lines are. And like, we very much wanted to like figuring out where the lines are, should be a democratic process. But in the short term, like we've drawn some lines where, you know, we push back against uses of face recognition technology, you know, like the city of San Francisco, for instance, I think has completely outlawed any government agency from using face recognition tech. And like that may prove to be a little bit overly broad. But for like certain law enforcement things, like you really, I would personally rather be overly sort of cautious in terms of restricting use of it until like we have, you know, sort of defined a reasonable, you know, democratically determined regulatory framework for like where we could and should use it. And, you know, the other thing there is like, we've got a bunch of research that we're doing and a bunch of progress that we've made on bias there. And like, there are all sorts of like weird biases that these models can have, like all the way from like the most noteworthy one where, you know, you may have underrepresented minorities who are like underrepresented in the training data and then you start learning like strange things. But like there are even, you know, other weird things. Like we've, I think we've seen in the public research, like models can learn strange things, like all doctors are men, for instance, just, yeah. I mean, and so like, it really is a thing where it's very important for everybody who is working on these things before they push publish, they launch the experiment, they, you know, push the code to, you know, online, or they even publish the paper that they are at least starting to think about what some of the potential negative consequences are, some of this stuff. I mean, this is where, you know, like the deep fake stuff I find very worrisome just because there are going to be some very good beneficial uses of like GAN generated imagery. And funny enough, like one of the places where it's actually useful is we're using the technology right now to generate synthetic visual data for training some of the face recognition models to get rid of the bias. 
So like, that's one like super good use of the tech, but like, you know, it's getting good enough now where, you know, it's going to sort of challenge a normal human being's ability to, like now you're just sort of say, like it's very expensive for someone to fabricate a photorealistic fake video. And like GANs are going to make it fantastically cheap to fabricate a photorealistic fake video. And so like what you assume you can sort of trust is true versus like be skeptical about is about to change. And like, we're not ready for it, I don't think. The nature of truth, right. That's, it's also exciting because I think both you and I probably would agree that the way to solve, to take on that challenge is with technology, right? There's probably going to be ideas of ways to verify which kind of video is legitimate, which kind is not. So to me, that's an exciting possibility, most likely for just the comedic genius that the internet usually creates with these kinds of videos and hopefully will not result in any serious harm. Yeah, and it could be, you know, like I think we will have technology to, that may be able to detect whether or not something's fake or real. Although the fakes are pretty convincing, even like when you subject them to machine scrutiny. But, you know, we also have these increasingly interesting social networks, you know, that are under fire right now for some of the bad things that they do. Like one of the things you could choose to do with a social network is like you could, you could use crypto and the networks to like have content signed where you could have a like full chain of custody that accompanied every piece of content. So like when you're viewing something and like you want to ask yourself, like how much can I trust this? Like you can click something and like have a verified chain of custody that shows like, oh, this is coming from this source. And it's like signed by like someone whose identity I trust. Yeah, I think having that, you know, having that chain of custody, like being able to like say, oh, here's this video. Like it may or may not have been produced using some of this deepfake technology, but if you've got a verified chain of custody where you can sort of trace it all the way back to an identity and you can decide whether or not like I trust this identity. Like, oh no, this is really from the White House or like this is really from the, you know, the office of this particular presidential candidate or it's really from, you know, Jeff Wiener, CEO of LinkedIn or Satya Nadella, CEO of Microsoft. Like that might be like one way that you can solve some of the problems. So like that's not the super high tech. Like we've had all of this technology forever. And, but I think you're right. Like it has to be some sort of technological thing because the underlying tech that is used to create this is not going to do anything but get better over time and the genie is sort of out of the bottle. There's no stuffing it back in. And there's a social component, which I think is really healthy for a democracy where people will be skeptical about the thing they watch in general. So, you know, which is good. Skepticism in general is good for content. So deepfakes in that sense are creating a global skepticism about can they trust what they read. It encourages further research. I come from the Soviet Union where basically nobody trusted the media because you knew it was propaganda. 
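The signed chain of custody idea raised above is straightforward to sketch at the cryptographic layer. Here is a minimal, hypothetical version in Python using the pyca/cryptography package's Ed25519 primitives; all of the hard parts the conversation alludes to, distributing keys, binding them to real identities, and handling edits or re-encodes of the content, are simply assumed away.

# Minimal sketch of "signed content": a publisher signs a content hash and a
# viewer verifies it against the publisher's public key. Identity and key
# distribution (the hard part) are assumed to be handled elsewhere.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a keypair (in reality this identity would be
# verified and distributed out of band, e.g. by the platform).
publisher_key = Ed25519PrivateKey.generate()
publisher_pub = publisher_key.public_key()

video_bytes = b"...raw bytes of the video file..."
digest = hashlib.sha256(video_bytes).digest()
signature = publisher_key.sign(digest)  # shipped alongside the content

# Viewer side: recompute the hash and verify the signature.
def verify(content: bytes, sig: bytes, pub) -> bool:
    try:
        pub.verify(sig, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

print(verify(video_bytes, signature, publisher_pub))                 # True
print(verify(video_bytes + b"tampered", signature, publisher_pub))   # False

A real deployment would chain signatures across each step of production and distribution rather than rely on a single publisher signature, which is where the "chain" in chain of custody comes from.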
And that kind of skepticism encouraged further research about ideas as opposed to just trusting any one source. Well, look, I think it's one of the reasons why the scientific method and our apparatus of modern science is so good. Like, because you don't have to trust anything. Like, the whole notion of modern science beyond the fact that this is a hypothesis and this is an experiment to test the hypothesis and this is a peer review process for scrutinizing published results. But stuff's also supposed to be reproducible. So you know it's been vetted by this process, but you also are expected to publish enough detail where if you are sufficiently skeptical of the thing, you can go try to reproduce it yourself. And like, I don't know what it is. Like, I think a lot of engineers are like this where like, you know, sort of this, like your brain is sort of wired for skepticism. Like, you don't just first order trust everything that you see and encounter. And like, you're sort of curious to understand, you know, the next thing. But like, I think it's an entirely healthy thing. And like, we need a little bit more of that right now. So I'm not a large business owner. So I'm just a huge fan of many of Microsoft products. I mean, I still, actually in terms of, I generate a lot of graphics and images and I still use PowerPoint to do that. It beats Illustrator for me. Even professional sort of, it's fascinating. So I wonder, what is the future of, let's say Windows and Office look like? Is, do you see it? I mean, I remember looking forward to XP. Was it exciting when XP was released? Just like you said, I don't remember when 95 was released. But XP for me was a big celebration. And when 10 came out, I was like, oh, okay. Well, it's nice. It's a nice improvement. So what do you see the future of these products? I think there's a bunch of excite. I mean, on the Office front, there's gonna be this like increasing productivity wins that are coming out of some of these AI powered features that are coming. Like the products will sort of get smarter and smarter in like a very subtle way. Like there's not gonna be this big bang moment where like Clippy is gonna reemerge and it's gonna be. Wait a minute. Okay, we'll have to wait, wait, wait. Is Clippy coming back? But quite seriously, so injection of AI. There's not much, or at least I'm not familiar, sort of assistive type of stuff going on inside the Office products. Like a Clippy style assistant, personal assistant. Do you think that there's a possibility of that in the future? So I think there are a bunch of like very small ways in which like machine learning powered assistive things are in the product right now. So there are a bunch of interesting things. Like the auto response stuff's getting better and better. And it's like getting to the point where it can auto respond with like, okay, this person's clearly trying to schedule a meeting. So it looks at your calendar and it automatically like tries to find like a time and a space that's mutually interesting. Like we have this notion of Microsoft search at a Microsoft search where it's like not just web search, but it's like search across like all of your information that's sitting inside of like your Office 365 tenant and like potentially in other products. And like we have this thing called the Microsoft Graph that is basically an API federator that sort of like gets you hooked up across the entire breadth of like all of the, like what were information silos before they got woven together with the graph. 
Like that is like getting increasing, with increasing effectiveness, sort of plumbed into some of these auto response things where you're gonna be able to see the system like automatically retrieve information for you. Like if, you know, like I frequently send out, you know, emails to folks where like I can't find a paper or a document or whatnot. There's no reason why the system won't be able to do that for you. And like, I think the, it's building towards like having things that look more like, like a fully integrated, you know, assistant, but like you'll have a bunch of steps that you will see before you, like it will not be this like big bang thing where like Clippy comes back and you've got this like, you know, manifestation of, you know, like a fully, fully powered assistant. So I think that's, that's definitely coming in, like all of the, you know, collaboration, coauthoring stuff's getting better. You know, it's like really interesting. Like if you look at how we use the Office product portfolio at Microsoft, like more and more of it is happening inside of like Teams as a canvas. And like, it's this thing where, you know, you've got collaboration is like at the center of the product and like we built some like really cool stuff that's some of, which is about to be open source that are sort of framework level things for doing, for doing coauthoring. That's awesome. So in, is there a cloud component to that? So on the web, or is it, and forgive me if I don't already know this, but with Office 365, we still, the collaboration we do if we're doing Word, we still send the file around. No, no. So this is. We're already a little bit better than that. A little bit better than that and like, you know, so like the fact that you're unaware of it means we've got a better job to do, like helping you discover, discover this stuff. But yeah, I mean, it's already like got a huge, huge cloud component. And like part of, you know, part of this framework stuff, I think we're calling it, like I, like we've been working on it for a couple of years. So like, I know the internal code name for it, but I think when we launched it to build, it's called the Fluid Framework. And, but like what Fluid lets you do is like, you can go into a conversation that you're having in Teams and like reference like part of a spreadsheet that you're working on where somebody's like sitting in the Excel canvas, like working on the spreadsheet with a, you know, chart or whatnot, and like you can sort of embed like part of the spreadsheet in the Teams conversation where like you can dynamically update it and like all of the changes that you're making to the, to this object are like, you know, coordinate and everything is sort of updating in real time. So like you can be in whatever canvas is most convenient for you to get your work done. So I, out of my own sort of curiosity as an engineer, I know what it's like to sort of lead a team of 10, 15 engineers. Microsoft has, I don't know what the numbers are, maybe 50, maybe 60,000 engineers, maybe 40. I don't know exactly what the number is, it's a lot. It's tens of thousands. Right, so it's more than 10 or 15. What, I mean, you've led different sizes, mostly large size of engineers. What does it take to lead such a large group into a continue innovation, continue being highly productive and yet develop all kinds of new ideas and yet maintain, like what does it take to lead such a large group of brilliant people? 
I think the thing that you learn as you manage larger and larger scale is that there are three things that are like very, very important for big engineering teams. Like one is like having some sort of forethought about what it is that you're gonna be building over large periods of time. Like not exactly, like you don't need to know that like, you know, I'm putting all my chips on this one product and like this is gonna be the thing, but like it's useful to know like what sort of capabilities you think you're going to need to have to build the products of the future. And then like invest in that infrastructure, like whether, and like I'm not just talking about storage systems or cloud APIs, it's also like what does your development process look like? What tools do you want? Like what culture do you want to build around? Like how you're, you know, sort of collaborating together to like make complicated technical things. And so like having an opinion and investing in that is like, it just gets more and more important. And like the sooner you can get a concrete set of opinions, like the better you're going to be. Like you can wing it for a while at small scales, like, you know, when you start a company, like you don't have to be like super specific about it, but like the biggest miseries that I've ever seen as an engineering leader are in places where you didn't have a clear enough opinion about those things soon enough. And then you just sort of go create a bunch of technical debt and like culture debt that is excruciatingly painful to clean up. So like, that's one bundle of things. Like the other, you know, another bundle of things is like, it's just really, really important to like have a clear mission that's not just some cute crap you say because like you think you should have a mission, but like something that clarifies for people like where it is that you're headed together. Like, I know it's like probably like a little bit too popular right now, but Yuval Harari's book, Sapiens, one of the central ideas in his book is that like storytelling is like the quintessential thing for coordinating the activities of large groups of people. Like once you get past Dunbar's number, and like I've really, really seen that just managing engineering teams. Like you can just brute force things when you're less than 120, 150 folks where you can sort of know and trust and understand what the dynamics are between all the people, but like past that, like things just sort of start to catastrophically fail if you don't have some sort of set of shared goals that you're marching towards. And so like, even though it sounds touchy feely and you know, like a bunch of technical people will sort of balk at the idea that like, you need to like have a clear, like the missions, like very, very, very important. You're always right, right? Stories, that's how our society, that's the fabric that connects us, all of us is these powerful stories. And that works for companies too, right? It works for everything. Like, I mean, even down to like, you know, you sort of really think about it, like our currency, for instance, is a story. Our constitution is a story. Our laws are stories. I mean, like we believe very, very, very strongly in them. And thank God we do. But like they are, they're just abstract things. Like they're just words. Like if we don't believe in them, they're nothing. And in some sense, those stories are platforms and the kinds, some of which Microsoft is creating, right? 
They have platforms on which we define the future. So last question, what do you, let's get philosophical maybe, bigger than even Microsoft, what do you think the next 20, 30 plus years looks like for computing, for technology, for devices? Do you have crazy ideas about the future of the world? Yeah, look, I think we, you know, we're entering this time where we've got, we have technology that is progressing at the fastest rate that it ever has. And you've got, you've got some really big social problems, like society scale problems that we have to tackle. And so, you know, I think we're going to rise to the challenge and like figure out how to intersect like all of the power of this technology with all of the big challenges that are facing us, whether it's, you know, global warming, whether it's like the biggest remainder of the population boom is in Africa for the next 50 years or so. And like global warming is going to make it increasingly difficult to feed the global population in particular, like in this place where you're going to have like the biggest population boom. I think we, you know, like AI is going to, like if we push it in the right direction, like it can do like incredible things to empower all of us to achieve our full potential and to, you know, like live better lives. But like that also means focus on like some super important things. Like how can you apply it to healthcare to make sure that, you know, like our quality and cost of and sort of ubiquity of health coverage is better and better over time. Like that's more and more important every day is like in the United States and like the rest of the industrialized world, so Western Europe, China, Japan, Korea, like you've got this population bubble of like aging, working, you know, working age folks who are, you know, at some point over the next 20, 30 years, they're going to be largely retired. And like you're going to have more retired people than working age people. And then like you've got, you know, sort of natural questions about who's going to take care of all the old folks and who's going to do all the work. And the answers to like all of these sorts of questions, like where you're sort of running into, you know, like constraints of the, you know, the world and of society has always been like what tech is going to like help us get around this? Like when I was a kid in the 70s and 80s, like we talked all the time about like population boom, population boom, like we're going to, like we're not going to be able to like feed the planet. And like we were like right in the middle of the Green Revolution where like this massive technology driven increase in crop productivity like worldwide. And like some of that was like taking some of the things that we knew in the West and like getting them distributed to the, you know, to the developing world. And like part of it were things like, you know, just smarter biology like helping us increase. And like we don't talk about like overpopulation anymore because like we can more or less, we sort of figured out how to feed the world. Like that's a technology story. And so like I'm super, super hopeful about the future and in the ways where we will be able to apply technology to solve some of these super challenging problems. Like I've, like one of the things that I'm trying to spend my time doing right now is trying to get everybody else to be hopeful as well because, you know, back to Harare, like we are the stories that we tell. 
Like if we, you know, if we get overly pessimistic right now about like the potential future of technology, like we, you know, like we may fail to get all of the things in place that we need to like have our best possible future. And that kind of hopeful optimism, I'm glad that you have it because you're leading large groups of engineers that are actually defining, that are writing that story, that are helping build that future, which is super exciting. And I agree with everything you said except I do hope Clippy comes back. We miss him. I speak for the people. So, Kevin, thank you so much for talking to me. Thank you so much for having me. It was a pleasure.
Kevin Scott: Microsoft CTO | Lex Fridman Podcast #30
The following is a conversation with George Hotz. He's the founder of Comma AI, a machine learning based vehicle automation company. He is most certainly an outspoken personality in the field of AI and technology in general. He first gained recognition for being the first person to carrier unlock an iPhone. And since then, he's done quite a few interesting things at the intersection of hardware and software. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. And I'd like to give a special thank you to Jennifer from Canada for her support of the podcast on Patreon. Merci beaucoup, Jennifer. She's been a friend and an engineering colleague for many years since I was in grad school. Your support means a lot and inspires me to keep this series going. And now, here's my conversation with George Hotz. Do you think we're living in a simulation? Yes, but it may be unfalsifiable. What do you mean by unfalsifiable? So if the simulation is designed in such a way that they did like a formal proof to show that no information can get in and out, and if their hardware is designed so that anything in the simulation always keeps the hardware in spec, it may be impossible to prove whether we're in a simulation or not. So they've designed it such that it's a closed system, you can't get outside the system. Well, maybe it's one of three worlds. We're either in a simulation which can be exploited, we're in a simulation which not only can't be exploited, but like the same thing's true about VMs. A really well designed VM, you can't even detect if you're in a VM or not. That's brilliant. So the simulation is running on a virtual machine. But now in reality, all VMs have ways to detect. That's the point. I mean, you've done quite a bit of hacking yourself. So you should know that really any complicated system will have ways in and out. So this isn't necessarily true going forward. I spent my time away from Comma, I learned Coq, it's a dependently typed, it's a language for writing math proofs in. And if you write code that compiles in a language like that, it is correct by definition. The types check its correctness. So it's possible that the simulation is written in a language like this, in which case, yeah. Yeah, but can that be sufficiently expressive, a language like that? Oh, it can. It can be? Oh, yeah. Okay, well, so all right, so. The simulation doesn't have to be Turing complete if it has a scheduled end date. Looks like it does actually, with entropy. I mean, I don't think that a simulation that results in something as complicated as the universe would have a formal proof of correctness, right? It's possible, of course. We have no idea how good their tooling is. And we have no idea how complicated the universe computer really is. It may be quite simple. It's just very large, right? It's very, it's definitely very large. But the fundamental rules might be super simple. Yeah, Conway's Game of Life kind of stuff. Right. So if you could hack, so imagine a simulation that is hackable, if you could hack it, what would you change about it, like how would you approach hacking a simulation? The reason I gave that talk. By the way, I'm not familiar with the talk you gave. I just read that you talked about escaping the simulation or something like that. So maybe you can tell me a little bit about the theme and the message there too.
It wasn't a very practical talk about how to actually escape a simulation. It was more about a way of restructuring an us versus them narrative. If we continue on the path we're going with technology, I think we're in big trouble, like as a species and not just as a species, but even as me as an individual member of the species. So if we could change rhetoric to be more like to think upwards, like to think about that we're in a simulation and how we could get out, already we'd be on the right path. What you actually do once you do that, well, I assume I would have acquired way more intelligence in the process of doing that. So I'll just ask that. So the thinking upwards, what kind of ideas, what kind of breakthrough ideas do you think thinking in that way could inspire? And why did you say upwards? Upwards. Into space? Are you thinking sort of exploration in all forms? The space narrative that held for the modernist generation doesn't hold as well for the postmodern generation. What's the space narrative? Are we talking about the same space, the three dimensional space? No, no, no, space, like going to space, like building like Elon Musk, like we're going to build rockets, we're going to go to Mars, we're going to colonize the universe. And the narrative you're referring to, I was born in the Soviet Union, you're referring to the race to space. The race to space, yeah. Explore, okay. That was a great modernist narrative. Yeah. It doesn't seem to hold the same weight in today's culture. I'm hoping for good postmodern narratives that replace it. So let's think, so you work a lot with AI. So AI is one formulation of that narrative. There could be also, I don't know how much you do in VR and AR. Yeah. That's another, I know less about it, but every time I play with it in our research, it's fascinating, that virtual world. Are you interested in the virtual world? I would like to move to virtual reality. In terms of your work? No, I would like to physically move there. The apartment I can rent in the cloud is way better than the apartment I can rent in the real world. Well, it's all relative, isn't it? Because others will have very nice apartments too, so you'll be inferior in the virtual world as well. No, but that's not how I view the world, right? I don't view the world, I mean, it's a very almost zero sum ish way to view the world. Say like, my great apartment isn't great because my neighbor has one too. No, my great apartment is great because look at this dishwasher, man. You just touch the dish and it's washed, right? And that is great in and of itself if I have the only apartment or if everybody had the apartment. I don't care. So you have fundamental gratitude. The world first learned of George Hotz in August 2007, maybe before then, but certainly in August 2007 when you were the first person to unlock, carrier unlock an iPhone. How did you get into hacking? What was the first system you discovered vulnerabilities for and broke into? So that was really kind of the first thing. I had a book in 2006 called Gray Hat Hacking. And I guess I realized that if you acquired these sort of powers, you could control the world. But I didn't really know that much about computers back then. I started with electronics. The first iPhone hack was physical. Hardware. You had to open it up and pull an address line high. And it was because I didn't really know about software exploitation. I learned that all in the next few years and I got very good at it.
But back then I knew about like how memory chips are connected to processors and stuff. You knew about software and programming. You just didn't know. Oh really? So your view of the world and computers was physical, was hardware. Actually, if you read the code that I released with that in August 2007, it's atrocious. What language was it? C. C, nice. And in a broken sort of state-machine-esque C. I didn't know how to program. Yeah. So how did you learn to program? What was your journey? Cause I mean, we'll talk about it. You've live streamed some of your programming. This chaotic, beautiful mess. How did you arrive at that? Years and years of practice. I interned at Google the summer after the iPhone unlock. And I did a contract for them where I built hardware for Street View and I wrote a software library to interact with it. And it was terrible code. And for the first time I got feedback from people who I respected saying, no, like don't write code like this. Now, of course, just getting that feedback is not enough. The way that I really got good was I wanted to write this thing that could emulate and then visualize like ARM binaries. Cause I wanted to hack the iPhone better. And I didn't like that I couldn't like see what the, I couldn't single step through the processor because I had no debugger on there, especially for the low level things like the boot ROM and the bootloader. So I tried to build this tool to do it. And I built the tool once and it was terrible. I built the tool a second time, it was terrible. I built the tool a third time. This was by the time I was at Facebook, it was kind of okay. And then I built the tool a fourth time when I was a Google intern again in 2014. And that was the first time I was like, this is finally usable. How do you pronounce this, Kira? Kira, yeah. So it's essentially the most efficient way to visualize the change of state of the computer as the program is running. That's what you mean by debugger. Yeah, it's a timeless debugger. So you can rewind just as easily as going forward. Think about if you're using GDB, you have to put a watch on a variable if you wanna see if that variable changes. In Kira, you can just click on that variable and then it shows every single time when that variable was changed or accessed. Think about it like Git for your computer's run log. So there's like a deep log of the state of the computer as the program runs and you can rewind. Why isn't that, maybe it is, maybe you can educate me. Why isn't that kind of debugging used more often? Cause the tooling's bad. Well, two things. One, if you're trying to debug Chrome, Chrome is a 200 megabyte binary that runs slowly on desktops. So that's gonna be really hard to use for that. But it's really good to use for like CTFs and for boot ROMs and for small parts of code. So it's hard if you're trying to debug like massive systems. What's a CTF and what's a boot ROM? A boot ROM is the first code that executes the minute you give power to your iPhone. Okay. And CTFs were these competitions that I played, capture the flag. Capture the flag, I was gonna ask you about that. What are those? Look, I watched a couple of videos on YouTube, those look fascinating. What have you learned about maybe at the high level of vulnerability of systems from these competitions? I feel like in the heyday of CTFs, you had all of the best security people in the world challenging each other and coming up with new toy exploitable things over here.
And then everybody, okay, who can break it? And when you break it, you get like, there's like a file on the server called flag. And then there's a program running, listening on a socket that's vulnerable. So you write an exploit, you get a shell, and then you cat flag, and then you type the flag into like a web based scoreboard and you get points. So the goal is essentially, to find an exploit in the system that allows you to run shell, to run arbitrary code on that system. That's one of the categories. That's like the pwnable category. Pwnable? Yeah, pwnable. It's like, you know, you pwn the program. It's a program that's, yeah. Yeah, you know, first of all, I apologize. I'm gonna say it's because I'm Russian, but maybe you can help educate me. Some video game like misspelled own way back in the day. Yeah, and it's just, I wonder if there's a definition. I'll have to go to Urban Dictionary for it. It'll be interesting to see what it says. Okay, so what was the heyday of CTF, by the way? But was it, what decade are we talking about? I think like, I mean, maybe unbiased because it's the era that I played, but like 2011 to 2015, because the modern CTF scene is similar to the modern competitive programming scene. You have people who like do drills. You have people who practice. And then once you've done that, you've turned it less into a game of generic computer skill and more into a game of, okay, you drill on these five categories. And then before that, it wasn't, it didn't have like as much attention as it had. I don't know, they were like, I won $30,000 once in Korea for one of these competitions. Holy crap. Yeah, they were, they were, that was. So that means, I mean, money is money, but that means there was probably good people there. Exactly, yeah. Are the challenges human constructed or are they grounded in some real flaws and real systems? Usually they're human constructed, but they're usually inspired by real flaws. What kind of systems are imagined is really focused on mobile. Like what has vulnerabilities these days? Is it primarily mobile systems like Android? Oh, everything does. Still. Yeah, of course. The price has kind of gone up because less and less people can find them. And what's happened in security is now if you want to like jailbreak an iPhone, you don't need one exploit anymore, you need nine. Nine chained together, what would it mean? Yeah, wow. Okay, so it's really, what's the benefit speaking higher level philosophically about hacking? I mean, it sounds from everything I've seen about you, you just love the challenge and you don't want to do anything. You don't want to bring that exploit out into the world and do any actual, let it run wild. You just want to solve it and then you go on to the next thing. Oh yeah, I mean, doing criminal stuff's not really worth it. And I'll actually use the same argument for why I don't do defense for why I don't do crime. If you want to defend a system, say the system has 10 holes, right? If you find nine of those holes as a defender, you still lose because the attacker gets in through the last one. If you're an attacker, you only have to find one out of the 10. But if you're a criminal, if you log on with a VPN nine out of the 10 times, but one time you forget, you're done. Because you're caught, okay. Because you only have to mess up once to be caught as a criminal. That's why I'm not a criminal. 
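For readers who have never seen one, the shape of a "pwnable" solve is roughly what the description above suggests: trigger the bug, get a shell, cat the flag. Here is a minimal, purely hypothetical skeleton in the style of the pwntools Python library; the host, port, buffer offset, and win-function address are all invented placeholders, since a real challenge requires reverse engineering the binary to find those values first.

# Skeleton of a typical CTF "pwnable" solve using pwntools. Every concrete
# value below (host, port, OFFSET, WIN_ADDR) is a made-up placeholder.
from pwn import remote, p64, context

context.arch = "amd64"

HOST, PORT = "challenge.example.org", 31337   # hypothetical target service
WIN_ADDR = 0x401337                           # hypothetical address of a win()/shell function
OFFSET = 72                                   # hypothetical distance to the saved return address

# Classic stack smash: pad up to the return address, then overwrite it.
payload = b"A" * OFFSET + p64(WIN_ADDR)

io = remote(HOST, PORT)
io.recvuntil(b"> ")        # wait for the vulnerable prompt
io.sendline(payload)       # trigger the overflow
io.sendline(b"cat flag")   # if we got a shell, read the flag file
print(io.recvline())
io.interactive()

The flag string that comes back is what gets typed into the scoreboard for points, exactly as described in the conversation.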
But okay, let me, because I was having a discussion with somebody just at a high level about nuclear weapons actually, why we haven't blown ourselves up yet. And my feeling is all the smart people in the world, if you look at the distribution of smart people, smart people are generally good. And then this other person I was talking to, Sean Carroll, the physicist, and he was saying, no, good and bad people are evenly distributed amongst everybody. My sense was good hackers are in general good people and they don't want to mess with the world. What's your sense? I'm not even sure about that. Like, I have a nice life. Crime wouldn't get me anything. But if you're good and you have these skills, you probably have a nice life too, right? Right, you can use it for other things. But is there an ethical, is there a little voice in your head that says, well, yeah, if you could hack something to where you could hurt people and you could earn a lot of money doing it though, not hurt physically perhaps, but disrupt their life in some kind of way, isn't there a little voice that says? Well, two things. One, I don't really care about money. So like the money wouldn't be an incentive. The thrill might be an incentive. But when I was 19, I read Crime and Punishment. And that was another great one that talked me out of ever really doing crime. Cause it's like, that's gonna be me. I'd get away with it, but it would just run through my head. Even if I got away with it, you know? And then you do crime for long enough, you'll never get away with it. That's right. In the end, that's a good reason to be good. I wouldn't say I'm good. I would just say I'm not bad. You're a talented programmer and a hacker in a good positive sense of the word. You've played around, found vulnerabilities in various systems. What have you learned broadly about the design of systems and so on from that whole process? You learn to not take things for what people say they are, but you look at things for what they actually are. Yeah. I understand that's what you tell me it is, but what does it do? Right. And you have nice visualization tools to really know what it's really doing. Oh, I wish. I'm a better programmer now than I was in 2014. I said Kira was the first tool that I wrote that was usable. I wouldn't say the code was great. I still wouldn't say my code is great. So how was your evolution as a programmer, except practice? So you started with C. At which point did you pick up Python? Because you're pretty big in Python now. Now, yeah, in college. I went to Carnegie Mellon when I was 22. I went back. I'm like, all right, I'm gonna take all your hardest CS courses. We'll see how I do, right? Like, did I miss anything by not having a real undergraduate education? Took operating systems, compilers, AI, and like a freshman weed-out math course. And... Operating systems, some of those classes you mentioned are pretty tough, actually. They're great. At least the 2012, circa 2012, operating systems and compilers were two of the, they were the best classes I've ever taken in my life. Because you write an operating system and you write a compiler. I wrote my operating system in C and I wrote my compiler in Haskell, but somehow I picked up Python that semester as well. I started using it for the CTFs, actually. That's when I really started to get into CTFs, and CTFs, it's a race against the clock. So I can't write things in C. Oh, there's a clock component.
So you really want to use the programming languages so you can be fastest. 48 hours, pwn as many of these challenges as you can. Pwn. Yeah, you got like a hundred points of challenge. Whatever team gets the most. You were both at Facebook and Google for a brief stint. Yeah. With Project Zero actually at Google for five months where you developed Kira. What was Project Zero about in general? What, I'm just curious about the security efforts in these companies. Well, Project Zero started the same time I went there. What years are there? 2015. 2015. So that was right at the beginning of Project Zero. It's small. It's Google's offensive security team. I'll try to give the best public facing explanation that I can. So the idea is basically these vulnerabilities exist in the world. Nation states have them. Some high powered bad actors have them. Sometime people will find these vulnerabilities and submit them in bug bounties to the companies. But a lot of the companies don't really care. They don't even fix the bug. It doesn't hurt for there to be a vulnerability. So Project Zero is like, we're going to do it different. We're going to announce a vulnerability and we're going to give them 90 days to fix it. And then whether they fix it or not, we're going to drop the zero day. Oh, wow. We're going to drop the weapon. That's so cool. That is so cool. I love the deadlines. Oh, that's so cool. Give them real deadlines. Yeah. And I think it's done a lot for moving the industry forward. I watched your coding sessions on the streamed online. You code things up, the basic projects, usually from scratch. I would say sort of as a programmer myself, just watching you that you type really fast and your brain works in both brilliant and chaotic ways. I don't know if that's always true, but certainly for the live streams. So it's interesting to me because I'm more, I'm much slower and systematic and careful. And you just move, I mean, probably in order of magnitude faster. So I'm curious, is there a method to your madness? Is it just who you are? There's pros and cons. There's pros and cons to my programming style. And I'm aware of them. Like if you ask me to like get something up and working quickly with like an API that's kind of undocumented, I will do this super fast because I will throw things at it until it works. If you ask me to take a vector and rotate it 90 degrees and then flip it over the XY plane, I'll spam program for two hours and won't get it. Oh, because it's something that you could do with a sheet of paper, think through design, and then just, do you really just throw stuff at the wall and you get so good at it that it usually works? I should become better at the other kind as well. Sometimes I'll do things methodically. It's nowhere near as entertaining on the Twitch streams. I do exaggerate it a bit on the Twitch streams as well. The Twitch streams, I mean, what do you want to see a game or you want to see actions per minute, right? I'll show you APM for programming too. Yeah, I recommend people go to it. I think I watched, I watched probably several hours of you, like I've actually left you programming in the background while I was programming because you made me, it was like watching a really good gamer. It's like energizes you because you're like moving so fast. It's so, it's awesome. It's inspiring and it made me jealous that like, because my own programming is inadequate in terms of speed. Oh, I was like. So I'm twice as frantic on the live streams as I am when I code without them. 
It's super entertaining. So I wasn't even paying attention to what you were coding, which is great. It's just watching you switch windows and Vim, I guess, is the most. Yeah, there's Vim on screen. I developed the workflow at Facebook and stuck with it. How do you learn new programming tools, ideas, techniques these days? What's your, like, methodology for learning new things? So I wrote, for Comma, the distributed file systems out in the world are extremely complex. Like if you want to install something like Ceph, Ceph is I think the like open infrastructure distributed file system, or there's like newer ones like SeaweedFS, but these are all like 10,000 plus line projects. I think some of them are even a hundred thousand lines, and just configuring them is a nightmare. So I wrote one, it's 200 lines and it uses like NGINX and volume servers and has this little master server that I wrote in Go. And the way I wrote this, like, if I would say that I'm proud per line of any code I wrote, maybe there's some exploits that I think are beautiful, and then this. This is 200 lines. And just the way that I thought about it, I think was very good. And the reason it's very good is because that was the fourth version of it that I wrote. And I had three versions that I threw away. You mentioned, did you say Go? I wrote it in Go, yeah. In Go. Is that a functional language? I forget what Go is. Go is Google's language. Right. It's not functional. It's, it's like in a way it's C++, but easier. It's strongly typed. It has a nice ecosystem around it. When I first looked at it, I was like, this is like Python, but it takes twice as long to do anything. Yeah. Now that I've, OpenPilot is migrating to C, but it still has large Python components. I now understand why Python doesn't work for large code bases and why you want something like Go. Interesting. So why doesn't Python work for, so even most, speaking for myself at least, like we do a lot of stuff, basically demo level work, with autonomous vehicles and most of the work is Python. Yeah. Why doesn't Python work for large code bases? Because, well, lack of type checking is a big part. So errors creep in. Yeah. And like, you don't know, the compiler can tell you like nothing, right? So everything is either, you know, like, like syntax errors, fine. But if you misspell a variable in Python, the compiler won't catch that. There's like linters that can catch it some of the time. There's no types. This is really the biggest downside. And then, well, Python's slow, but that's not related to it. Well, maybe it's kind of related to it, so it's lack of. So what's, what's in your toolbox these days? Is it Python? What else? I need to move to something else. My adventure into dependently typed languages, I love these languages. They just have like syntax from the 80s. What do you think about JavaScript? ES6, like the modern, or TypeScript? JavaScript is, the whole ecosystem is unbelievably confusing. Right. NPM updates a package from 0.2.2 to 0.2.5, and that breaks your Babel linter, which translates your ES6 into ES5, which doesn't run on, so. Why do I have to compile my JavaScript again, huh? It may be the future, though. You think about, I mean, I've embraced JavaScript recently, just because, just like I've continually embraced PHP, it seems that these worst possible languages live on for the longest, like cockroaches never die. Yeah. Well, it's in the browser, and it's fast. It's fast. Yeah.
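Going back to the type-checking point above, here is a small sketch in Python that makes it concrete. The function and the typo in it are invented for illustration; the point is that plain CPython happily imports and runs this file, and the mistake only surfaces as a runtime NameError on the code path that hits it, whereas Go's compiler, or a static checker like mypy or pyright run over annotated Python, reports it before anything runs.

# Illustration of "misspell a variable and the compiler won't catch it".
# The typo ('dts' instead of 'dt_s') is deliberate.
def average_speed(distances_m: list[float], dt_s: float) -> float:
    total_m = 0.0
    for d in distances_m:
        total_m += d
    # CPython compiles this function without complaint; the undefined name
    # only blows up at runtime, when this line actually executes.
    return total_m / dts  # mypy/pyright: Name "dts" is not defined


if __name__ == "__main__":
    # Raises NameError at runtime, which is exactly the complaint above.
    print(average_speed([1.0, 2.0, 3.0], dt_s=0.1))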
It's in the browser, and compute might stay, become, you know, the browser. It's unclear what the role of the browser is in terms of distributed computation in the future, so. JavaScript is definitely here to stay. Yeah. It's interesting if autonomous vehicles will run on JavaScript one day. I mean, you have to consider these possibilities. Well, all our debug tools are JavaScript. We actually just open sourced them. We have a tool, Explorer, which you can annotate your disengagements, and we have a tool, Cabana, which lets you analyze the can traffic from the car. So basically, anytime you're visualizing something about the log, you're using JavaScript. Well, the web is the best UI toolkit by far, so. And then, you know what? You're coding in JavaScript. We have a React guy. He's good. React, nice. Let's get into it. So let's talk autonomous vehicles. Yeah. You founded Comma AI. Let's, at a high level, how did you get into the world of vehicle automation? Can you also just, for people who don't know, tell the story of Comma AI? Sure. So I was working at this AI startup, and a friend approached me, and he's like, dude, I don't know where this is going, but the coolest applied AI problem today is self driving cars. I'm like, well, absolutely. You want to meet with Elon Musk, and he's looking for somebody to build a vision system for autopilot. This is when they were still on AP1. They were still using Mobileye. Elon, back then, was looking for a replacement, and he brought me in, and we talked about a contract where I would deliver something that meets Mobileye level performance. I would get paid $12 million if I could deliver it tomorrow, and I would lose $1 million for every month I didn't deliver. Yeah. So I was like, okay, this is a great deal. This is a super exciting challenge. You know what? Even if it takes me 10 months, I get $2 million. It's good. Maybe I can finish up in five. Maybe I don't finish it at all, and I get paid nothing, and I can still work for 12 months for free. So maybe just take a pause on that. I'm also curious about this because I've been working in robotics for a long time, and I'm curious to see a person like you just step in and sort of somewhat naive, but brilliant, right? So that's the best place to be because you basically full steam take on a problem. How confident, how, from that time, because you know a lot more now, at that time, how hard do you think it is to solve all of autonomous driving? I remember I suggested to Elon in the meeting putting a GPU behind each camera to keep the compute local. This is an incredibly stupid idea. I leave the meeting 10 minutes later, and I'm like, I could have spent a little bit of time thinking about this problem before I went in. Why is it a stupid idea? Oh, just send all your cameras to one big GPU. You're much better off doing that. Oh, sorry. You said behind every camera have a GPU. Every camera have a small GPU. I was like, oh, I'll put the first few layers of my comms there. Ugh, why'd I say that? That's possible. It's possible, but it's a bad idea. It's not obviously a bad idea. Pretty obviously bad, but whether it's actually a bad idea or not, I left that meeting with Elon, beating myself up. I'm like, why'd I say something stupid? Yeah, you haven't at least thought through every aspect of it, yeah. He's very sharp too. Usually in life, I get away with saying stupid things and then kind of course, oh, right away he called me out about it. 
And usually in life, I get away with saying stupid things, and then a lot of times people don't even notice and I'll correct it and bring the conversation back. But with Elon, it was like, nope, okay, well. That's not at all why the contract fell through. I was much more prepared the second time I met him. Yeah, but in general, how hard did you think it is? Like 12 months is a tough timeline. Oh, I just thought I'd clone the Mobileye EyeQ3. I didn't think I'd solve level five self driving or anything. So the goal there was to do lane keeping, good lane keeping. My friend showed me the outputs from a Mobileye, and the outputs from a Mobileye were just basically two lanes and the position of a lead car. I'm like, I can gather a data set and train this net in weeks, and I did. Well, the first time I tried the implementation of Mobileye in a Tesla, I was really surprised how good it is. It works incredibly well. Because I've done a lot of computer vision, I thought it'd be a lot harder to create a system that's stable. So I was personally surprised, I have to admit it, because I was kind of skeptical before trying it. I thought it would go in and out a lot more, it would get disengaged a lot more, and it's pretty robust. So how hard is the problem, when you tackled it? So I think AP1 was great. Like Elon talked about disengagements on the 405 down in LA, where the lane marks are kind of faded and the Mobileye system would drop out. I had something up and working that I would say was the same quality in three months. Same quality, but how do you know? You say stuff like that confidently, but you can't, and I love it, but the question is, you're kind of going by feel, because you test it out. Absolutely, absolutely. Like I borrowed my friend's Tesla. I would take AP1 out for a drive and then I would take my system out for a drive, and it seemed reasonably like the same. So the 405, how hard is it to create something that could actually be a product that's deployed? I mean, I've read an article where Elon responded to something about you, saying that to build Autopilot is more complicated than a single George Hotz level job. How hard is that job, to create something that would work globally? Well, I don't think globally is the challenge. But Elon followed that up by saying it's gonna take two years and a company of 10 people. And here I am four years later with a company of 12 people. And I think we still have another two to go. Two years, so yeah. So what do you think about how Tesla is progressing with Autopilot, V2, V3? I think we've kept pace with them pretty well. I think Navigate on Autopilot is terrible. We had some demo features internally of the same stuff, and we would test it. And I'm like, I'm not shipping this even as open source software to people. Why do you think it's terrible? Consumer Reports does a great job of describing it. Like when it makes a lane change, it does it worse than a human. You shouldn't ship things like that. Autopilot, OpenPilot, they lane keep better than a human. If you turn it on for a stretch of highway, like an hour long, it's never gonna touch a lane line. A human will probably touch a lane line twice. You just inspired me. I don't know if you're grounded in data on that. I read your paper. Okay, but that's interesting. I wonder actually how often we touch lane lines in general, like a little bit, because it is.
I could answer that question pretty easily with the comma data set. Yeah, I'm curious. I've never answered it. I don't know. Two is just my personal guess. It feels right. That's interesting. Because every time you touch a lane, that's a source of a little bit of stress, and lane keeping is removing that stress. That's ultimately the biggest value add, honestly, is just removing the stress of having to stay in lane. And I think, honestly, I don't think people fully realize, first of all, that that's a big value add, but also that that's all it is. And not only that, I find it a huge value add. When we moved to San Diego, I drove down in an Enterprise rental car and I missed it. I missed having the system so much. It's so much more tiring to drive without it. It is that lane centering. That's the key feature. Yeah. And in a way, it's the only feature that actually adds value to people's lives in autonomous vehicles today. Waymo does not add value to people's lives. It's a more expensive, slower Uber. Maybe someday it'll be this big cliff where it adds value, but I don't really believe it. It is fascinating. This is good, because I've felt this intuitively, but I think we're making it explicit now. I actually believe that really good lane keeping is a reason to buy a car, will be a reason to buy a car, and it's a huge value add. Until we just started talking about it, I hadn't really quite realized it, that I've felt Elon's chase of level four is not the correct chase, because from a Tesla perspective, you should just say, Tesla has the best lane keeping. Comma AI should say, Comma AI has the best lane keeping. And that is it. Yeah. Yeah. So do you think? You have to do the longitudinal as well. You can't just lane keep. You have to do ACC, but ACC is much more forgiving than lane keeping, especially on the highway. By the way, Comma AI's system is camera only, correct? No, we use the radar. From the car, you're able to get the, okay. Hmm? We could do camera only now. It's gotten to the point, but we leave the radar there; it's fusion now. Okay, so let's maybe talk through some of the system specs on the hardware. What's the hardware side of what you're providing? What are the capabilities on the software side with OpenPilot and so on? So OpenPilot, as the box that we sell that it runs on, it's a phone in a plastic case. It's nothing special. We sell it without the software. So you buy the phone; it'll be an easy setup, but it's sold with no software. OpenPilot right now is about to be 0.6. When it gets to 1.0, I think we'll be ready for a consumer product. We're not gonna add any new features. We're just gonna make the lane keeping really, really good. Okay, I got it. So what do we have right now? It's a Snapdragon 820. It's a Sony IMX298 forward facing camera. The driver monitoring camera is just a selfie camera on the phone. And a CAN transceiver, a little thing called a panda. It talks over USB to the phone, and it has three CAN buses that it talks to the car on. One of those CAN buses is the radar CAN bus. One of them is the main car CAN bus, and the other one is the proxy camera CAN bus. We leave the existing camera in place so we don't turn AEB off. Right now, we still turn AEB off if you're using our longitudinal, but we're gonna fix that before 1.0. Got it. Wow, that's cool. And it's CAN both ways. So how are you able to control vehicles?
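To make the CAN bus setup above a bit more tangible, here is a minimal sketch of listening on a CAN bus using the generic python-can library. This is purely illustrative and is not comma's code or the panda's API; the channel name and the arbitration ID are made-up placeholders, since real buses and message IDs are vehicle specific.

```python
# A minimal sketch (not comma's code) of reading frames off a CAN bus with
# the generic python-can library. Channel name and message ID are hypothetical.
import can

# e.g. a SocketCAN interface on Linux; openpilot itself talks to the car
# through the panda over USB instead.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

try:
    while True:
        msg = bus.recv(timeout=1.0)        # block up to 1 s waiting for a frame
        if msg is None:
            continue
        # 0x25 is a made-up arbitration ID standing in for "steering angle".
        if msg.arbitration_id == 0x25:
            raw = int.from_bytes(msg.data[:2], "big", signed=True)
            print(f"steering angle (raw counts): {raw}")
finally:
    bus.shutdown()
```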
So we proxy. The vehicles that we work with already have a lane keeping assist system. So lane keeping assist can mean a huge variety of things. It can mean it will apply a small torque to the wheel after you've already crossed a lane line by a foot, which is the system in the older Toyotas, versus, I think Tesla still calls it lane keeping assist, where it'll keep you perfectly in the center of the lane on the highway. You can control the car, like with a joystick. So these cars already have the capability of drive by wire. So is it trivial to convert a car that it operates with, so that OpenPilot is able to control the steering? Oh, a new car or a car that we, so we have support now for 45 different cars. Which cars, in general? Mostly Hondas and Toyotas. We support almost every Honda and Toyota made this year. And then a bunch of GMs, a bunch of Subarus, a bunch of Chevys. It doesn't have to be like a Prius, it could be a Corolla as well. Oh, the 2020 Corolla is the best car with OpenPilot. It just came out. The actuator has less lag than the older Corolla. I think I started watching a video with your, I mean, the way you make videos is awesome. You're just literally at the dealership streaming. Yeah, I had my friend on the phone, I'm like, bro, you wanna stream for an hour? Yeah, and basically, if stuff goes a little wrong, you just go with it. Yeah, I love it. Well, it's real. Yeah, it's real. That's so beautiful, and it's so in contrast to the way other companies would put together a video like that. Kind of why I like to do it like that. Good. I mean, if you become super rich one day and successful, I hope you keep it that way, because I think that's actually what people love, that kind of genuine. Oh, that's all that has value to me. Money has no, if I sell out to make money, I sold out, it doesn't matter. What do I get? A yacht? I don't want a yacht. And I think Tesla actually has a small inkling of that as well, with Autonomy Day. They did reveal more than, I mean, of course there's marketing communications, you could tell, but it's more than most companies would reveal, which is, I hope they go towards that direction more, other companies, GM, Ford. Oh, Tesla's gonna win level five. They really are. So let's talk about it. You think, you're focused on level two currently? Currently. We're gonna be one to two years behind Tesla getting to level five. Okay. We're Android, right? We're Android. You're Android. I'm just saying, once Tesla gets it, we're one to two years behind. I'm not making any timeline on when Tesla's gonna get it. That's right. You did, that was brilliant. I'm sorry, Tesla investors, if you think you're gonna have an autonomous robotaxi fleet by the end of the year. Yeah, so that's. I'll bet against that. So what do you think about this? Most level four companies are kind of just doing the usual safety driver, full autonomy kind of testing. And then Tesla is basically trying to go from lane keeping to full autonomy. What do you think about that approach? How successful would it be? It's a ton better approach. Because Tesla is gathering data on a scale that none of them are. They're putting real users behind the wheel of the cars. It's, I think, the only strategy that works. The incremental. Well, so there's a few components to the Tesla approach that's more than just the incrementalism. The one you spoke of is the software, the over the air software updates. Necessity. I mean, Waymo and Cruise have those too.
Those aren't. But those differentiate them from the automakers. Right, no cars with lane keeping systems have that except Tesla. Yeah. And the other one is the data, the other direction, which is the ability to query the data. I don't think they're actually collecting as much data as people think, but the ability to turn on collection and turn it off. So I'm both in the robotics world and the psychology, human factors world. Many people believe that level two autonomy is problematic because of the human factor. The more the task is automated, the more there's a vigilance decrement. You start to fall asleep. You start to become complacent, start texting more and so on. Do you worry about that? Because if we're talking about a transition from lane keeping to full autonomy, if you're spending 80% of the time not supervising the machine, do you worry about what that means for the safety of the drivers? One, we don't consider OpenPilot to be 1.0 until we have 100% driver monitoring. You can cheat our driver monitoring system right now. There's a few ways to cheat it. They're pretty obvious. We're working on making that better. Before we ship a consumer product that can drive cars, I want to make sure that I have driver monitoring that you can't cheat. What does a successful driver monitoring system look like? Is it all about just keeping your eyes on the road? Well, a few things. So that's what we went with at first for driver monitoring. I'm actually looking at where your head is looking. The camera's not that high resolution. Eyes are a little bit hard to get. Well, the head is this big. I mean, that's. The head is good. And actually a lot of it, just psychology wise, having that monitor constantly there reminds you that you have to be paying attention. But we want to go further. We just hired someone full time to come on to do the driver monitoring. I want to detect a phone in frame and I want to make sure you're not sleeping. How much does the camera see of the body? This one, not enough. Not enough. The next one, everything. Well, it's interesting, fisheye, because we're doing just data collection, not real time. But fisheye is beautiful for being able to capture the body. And the smartphone is really the biggest problem. I'll show you. I can show you one of the pictures from our new system. Awesome, so you're basically saying the driver monitoring will be the answer to that. I think the other point that you raised in your paper is good as well. You're not asking a human to supervise a machine without giving them the ability to take over at any time. Right. Our safety model, you can take over. We disengage on both the gas or the brake. We don't disengage on steering. I don't feel you have to. But we disengage on gas or brake. So it's very easy for you to take over and it's very easy for you to reengage. That switching should be super cheap. The cars that require, even Autopilot requires a double press to engage. I don't like that. And then to cancel in Autopilot, you either have to press cancel, which no one knows what that is, so they press the brake. But a lot of times you don't actually want to press the brake. You want to press the gas. So you should cancel on gas. Or wiggle the steering wheel, which is bad as well. Wow, that's brilliant. I haven't heard anyone articulate that point. Oh, this is all I think about.
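The engage/disengage rules just described (gas or brake hands control back to the driver, steering input does not, and re-engagement is one cheap action) can be written down as a tiny state machine. The sketch below is only an illustration of that policy under those stated assumptions, not comma's actual control code, and all field names are made up.

```python
# Illustrative sketch of the disengagement policy discussed above:
# gas or brake returns full manual control, steering input does not.
# Field names are hypothetical, not openpilot's real message schema.
from dataclasses import dataclass

@dataclass
class DriverInput:
    gas_pressed: bool
    brake_pressed: bool
    steering_override: bool   # driver torque on the wheel
    engage_button: bool

class EngagementStateMachine:
    def __init__(self) -> None:
        self.engaged = False

    def step(self, inp: DriverInput) -> bool:
        if self.engaged:
            # Gas or brake immediately hands control back to the driver.
            if inp.gas_pressed or inp.brake_pressed:
                self.engaged = False
            # Steering input does NOT disengage; the driver can nudge
            # the wheel while the system keeps assisting.
        else:
            # Re-engagement should be a single cheap action.
            if inp.engage_button:
                self.engaged = True
        return self.engaged
```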
Because I think actually Tesla has done a better job than most automakers at making that frictionless. But you just described how it could be even better. I love Super Cruise as an experience once it's engaged. I don't know if you've used it, but getting the thing to try to engage. Yeah, I've driven Super Cruise a lot. So what are your thoughts on the Super Cruise system? You disengage Super Cruise and it falls back to ACC, so my car's still accelerating. It feels weird. Otherwise, when you actually have Super Cruise engaged on the highway, it is phenomenal. We bought that Cadillac. We just sold it. But we bought it just to experience this, and I wanted everyone in the office to be like, this is what we're striving to build. GM is pioneering with the driver monitoring. You like their driver monitoring system? It has some bugs. If the sun is shining back here, it'll be blind to you. Right. But overall, mostly, yeah. That's so cool that you know all this stuff. I don't often talk to people who do, because it's such a rare car, unfortunately, currently. We bought one explicitly for this. We lost like 25K in the depreciation, but I feel it's worth it. I was very pleasantly surprised that the GM system was so innovative, and it really wasn't advertised much, wasn't talked about much. Yeah. And I was nervous that it would die, that it would disappear. Well, they put it on the wrong car. They should have put it on the Bolt and not some weird Cadillac that nobody bought. I think they're saying at least it's gonna be in their entire fleet. So what do you think about, as long as we're on the driver monitoring, what do you think about Elon Musk's claim that driver monitoring is not needed? Normally, I love his claims. That one is stupid. That one is stupid. And, you know, he's not gonna have his level five fleet by the end of the year. Hopefully he's like, okay, I was wrong. I'm gonna add driver monitoring. Because when these systems get to the point that they're only messing up once every thousand miles, you absolutely need driver monitoring. So let me play, cause I agree with you, but let me play devil's advocate. One possibility is that without driver monitoring, people are able to self regulate, monitor themselves. You know, so your idea is. You've seen all the people sleeping in Teslas? Yeah, well, I'm a little skeptical of all the people sleeping in Teslas, because I've stopped paying attention to that kind of stuff, because I want to see real data. It's too glorified. It doesn't feel scientific to me. So I want to know how many people are really sleeping in Teslas versus, I was driving here sleep deprived in a car with no automation, and I was falling asleep. I agree that it's hypey. It's just like, you know what? If you want to put driver monitoring... My last Autopilot experience was, I rented a Model 3 in March and drove it around. The wheel thing is annoying. And the reason the wheel thing is annoying, we use the wheel thing as well, but we don't disengage on wheel. For Tesla, you have to touch the wheel just enough to trigger the torque sensor, to tell it that you're there, but not enough to disengage it. Don't use one sensor for two things. Don't disengage on wheel. You don't have to. That whole experience, wow, beautifully put. All of those elements, even if you don't have driver monitoring, that whole experience needs to be better.
Driver monitoring, I think, would make... I mean, I think Super Cruise is a better experience than Autopilot once it's engaged, but I think Super Cruise's transitions to engagement and disengagement are significantly worse. Yeah. Well, there's a tricky thing, because if I were to criticize Super Cruise, it's a little too crude. I think it's like six seconds or something of looking off the road before it starts warning you. It's some ridiculously long period of time. And I think it's basically binary. It should be adaptive. Yeah, it needs to learn more about you. It needs to communicate what it sees about you more. Tesla shows what it sees about the external world. It would be nice if Super Cruise would tell us what it sees about the internal world. It's even worse than that. You press the button to engage and it just says Super Cruise unavailable. Yeah. Why? Why? Yeah, that transparency is good. We've renamed the driver monitoring packet to driver state. Driver state. We have the car state packet, which has the state of the car, and the driver state packet, which has the state of the driver. So what is the... Estimate their BAC. What's BAC? Blood alcohol content. You think that's possible with computer vision? Absolutely. To me, it's an open question. I haven't looked into it too much. Actually, I quite seriously looked at the literature. It's not obvious to me that from the eyes and so on you can tell. You might need stuff from the car as well. Yeah. You might need how they're controlling the car, right? And that's fundamentally, at the end of the day, what you care about. But I think, especially when people are really drunk, they're not controlling the car nearly as smoothly as they otherwise would. Look at them walking, right? The car is like an extension of the body. So I think you could totally detect it. And if you could fix people who are drunk, distracted, asleep, if you fix those three. Yeah, that's huge. So what are the current limitations of OpenPilot? What are the main problems that still need to be solved? We're hopefully fixing a few of them in 0.6. We're not as good as Autopilot at stopped cars. So if you're coming up to a red light at 55, so it's the radar stopped car problem, which is responsible for two Autopilot accidents, it's hard to differentiate a stopped car from a signpost. Yeah, a static object. So you have to fuse. You have to do this visually. There's no way from the radar data to tell the difference. Maybe you can make a map, but I don't really believe in mapping at all anymore. Wait, wait, wait, what, you don't believe in mapping? No. So basically, the OpenPilot solution is saying, react to the environment as you see it, just like human beings do. And then eventually, when you want to do Navigate on OpenPilot, I'll train the net to look at Waze. I'll run Waze in the background and train on it. Are you using GPS at all? We use it to ground truth. We use it to very carefully ground truth the paths. We have a stack which can recover relative position to 10 centimeters over one minute. And then we use that to ground truth exactly where the car went in that local part of the environment, but it's all local. How are you testing in general, just for yourself, like experiments and stuff? Where are you located? San Diego. San Diego. Yeah. OK. So you basically drive around there, collect some data, and watch the performance? We have a simulator now. And our simulator is really cool. Our simulator is not, it's not like a Unity based simulator.
Our simulator lets us load in real state. What do you mean? We can load in a drive and simulate what the system would have done on the historical data. Ooh, nice. Interesting. So what, yeah. Right now we're only using it for testing, but as soon as we start using it for training, that's it. That's all that matters. What's your feeling about the real world versus simulation? Do you like simulation for training, if this moves to training? So we have to distinguish two types of simulators, right? There's a simulator that is completely fake. I could get my car to drive around in GTA. I feel that this kind of simulator is useless. My analogy here is, OK, fine, you're not solving the computer vision problem, but you're solving the computer graphics problem. Right. And you don't think you can get very far by creating ultra realistic graphics? No, because you can create ultra realistic graphics of the road; now create ultra realistic behavioral models of the other cars. Oh, well, I'll just use my own driving. No, you won't. You need actual human behavior, because that's what you're trying to learn. Driving does not have a spec. The definition of driving is what humans do when they drive. Whatever Waymo does, I don't think it's driving. Right. Well, I think actually Waymo and others, if there's any use for reinforcement learning, I've seen it used quite well. I study pedestrians a lot too, and try to train models from real data of how pedestrians move, and try to use reinforcement learning models to make pedestrians move in human like ways. By that point, you've already gone so many layers. You detected a pedestrian? Did you hand code the feature vector of their state? Did you guys learn anything from computer vision before deep learning? Well, OK, I feel like this is. So perception to you is the sticking point. I mean, what's the hardest part of the stack here? There is no human understandable feature vector separating perception and planning. That's the best way I can put that. So it's all together, and it's a joint problem. So you can take localization. Between localization and planning, there is a human understandable feature vector between these two things. I mean, OK, so I have three degrees of position, three degrees of orientation, and their derivatives, maybe the second derivatives. That's human understandable. That's physical. Between perception and planning, so like Waymo has a perception stack and then a planner. And one of the things Waymo does right is they have a simulator that can separate those two. They can replay their perception data and test their system, which is what I'm talking about with the two different kinds of simulators. There's the kind that can work on real data, and there's the kind that can't work on real data. Now, the problem is that I don't think you can hand code a feature vector, right? Like you have some list of, oh, here's my list of cars in the scene, here's my list of pedestrians in the scene. This isn't what humans are doing. What are humans doing? Global. And you're saying that's too difficult to hand engineer. I'm saying that there is no state vector. Say I gave you the best team of engineers in the world to build a perception system and the best team to build a planner. All you have to do is define the state vector that separates those two. I'm missing the state vector that separates those two. What do you mean? So what is the output of your perception system?
The output of the perception system, it's, OK, well, there's several ways to do it. One is the SLAM component, localization. The other is drivable area, drivable space. Drivable space, yeah. And then there's the different objects in the scene. And the different objects in the scene over time, maybe, to give you input to then try to start modeling the trajectories of those objects. Sure. That's it. I can give you a concrete example of something you missed. What's that? So say there's a bush in the scene. Humans understand that when they see this bush, there may or may not be a car behind that bush. Drivable area and a list of objects does not include that. Humans are doing this constantly at the simplest intersections. So now you have to talk about occluded area. But even that, what do you mean by occluded? OK, so I can't see it. Well, if it's on the other side of a house, I don't care. What's the likelihood that there's a car in that occluded area? And if you say, OK, we'll add that, I can come up with 10 more examples that you can't add. Certainly, occluded area would be something that a simulator would have, because it's simulating the entire, occlusion is part of it. Occlusion is part of a vision stack. But what I'm saying is, if you have a hand engineered, if your perception system's output can be written in a spec document, it is incomplete. Yeah, I mean, certainly it's hard to argue with that, because in the end, that's going to be true. Yeah, and I'll tell you what the output of our perception system is. What's that? It's a 1,024 dimensional vector, trained by a neural net. Oh, you know that. No, it's 1,024 dimensions of who knows what. Because it's operating on real data. Yeah. And that's the perception. That's the perception state. Think about an autoencoder for faces. If you have an autoencoder for faces, and you say it has 256 dimensions in the middle, and I'm taking a face over here and projecting it to a face over here, can you hand label all 256 of those dimensions? Well, no, but those are generated automatically. But even if you tried to do it by hand, could you come up with a spec between your encoder and your decoder? No, because it wasn't designed, but. No, no, no, but if you could design it. If you could design a face reconstruction system, could you come up with a spec? No, but I think we're missing here a little bit. I think you're just being very poetic about expressing a fundamental problem of simulators, that they're going to be missing so much that the feature vector will just look fundamentally different in the simulated world than the real world. I'm not making a claim about simulators. I'm making a claim about the spec division between perception and planning, even in your system. Just in general. Just in general. If you're trying to build a car that drives, if you're trying to hand code the output of your perception system, like saying, here's a list of all the cars in the scene, here's a list of all the people, here's a list of the occluded areas, here's a vector of drivable areas, it's insufficient. And if you start to believe that, you realize that what Waymo and Cruise are doing is impossible. Currently, what we're doing is, the perception problem is converting the scene into a chessboard, and then you do some basic reasoning around that chessboard. And you're saying that really, there's a lot missing there. First of all, why are we talking about this? Because, isn't this full autonomy? Is this something you think about? Oh, I want to win self driving cars.
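To make the autoencoder analogy above concrete, here is a minimal PyTorch style sketch. It is purely illustrative, not comma's model; the sizes simply echo the 256 dimensional face example. The point it shows is that the bottleneck vector is learned, so there is no human readable spec for what each of its dimensions means.

```python
# Minimal autoencoder sketch illustrating the "unlabelable bottleneck" point.
# Purely illustrative; not comma's perception model.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self, image_dim: int = 64 * 64, latent_dim: int = 256):
        super().__init__()
        # Encoder squeezes the image into a 256-dim latent vector.
        self.encoder = nn.Sequential(
            nn.Linear(image_dim, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )
        # Decoder reconstructs the image from that latent vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, image_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)      # the 256 numbers nobody can hand-label
        return self.decoder(z)

model = FaceAutoencoder()
batch = torch.rand(8, 64 * 64)                        # stand-in for flattened faces
loss = nn.functional.mse_loss(model(batch), batch)    # reconstruction objective
```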
So your definition of win includes? Level four or five. Level five. I don't think level four is a real thing. I want to build the AlphaGo of driving. So AlphaGo is really end to end. Yeah. Yeah, it's end to end. And do you think this whole problem, is that also kind of what you're getting at with the perception and the planning, that for this whole problem, the right way to do it is really to learn the entire thing? I'll argue that not only is it the right way, it's the only way that's going to exceed human performance. Well. It's certainly true for Go. Everyone who tried to hand code Go things built human inferior things. And then someone came along and wrote some 10,000 line thing that doesn't know anything about Go that beat everybody. It's 10,000 lines. True. In that sense, the open question then, that maybe I can ask you, is, driving is much harder than Go. The open question is how much harder. Because I think the Elon Musk approach here, with planning and perception, is similar to what you're describing, which is really not turning it into some kind of modular thing, but formulating it as a learning problem and solving the learning problem with scale. So, one, how many years would it take to solve this problem, or just, how hard is this freaking problem? Well, the cool thing is, I think there's a lot of value that we can deliver along the way. I think that you can build lane keeping assist, actually, plus adaptive cruise control, plus, okay, looking at Waze, and it extends to like all of driving. Yeah, most of driving, right? Oh, your adaptive cruise control treats red lights like cars, okay. So let's jump around. You mentioned that you didn't like Navigate on Autopilot. What advice, how would you make it better? Do you think, as a feature, if it's done really well, it's a good feature? I think that it's too reliant on hand coded hacks for, like, how does Navigate on Autopilot do a lane change? It actually does the same lane change every time, and it feels mechanical. Humans do different lane changes. Humans sometimes will do a slow one, sometimes a fast one. Navigate on Autopilot, at least every time I used it, did the identical lane change. How do you learn? I mean, this is a fundamental thing, actually: the braking and then accelerating is something that, Tesla probably does it better than most cars, but it still doesn't do a great job of creating a comfortable, natural experience. And Navigate on Autopilot is just lane changes, an extension of that. So how do you learn to do a natural lane change? So we have it, and I can talk about how it works. So I feel that we have the solution for lateral. We don't yet have the solution for longitudinal. There's a few reasons longitudinal is harder than lateral. The lane change component, the way that we train on it, very simply, is our model has an input for whether it's doing a lane change or not. And then when we train the end to end model, we hand label all the lane changes, because you have to. I've struggled a long time about not wanting to do that, but I think you have to. For the training data. For the training data, right? Oh, we actually have an automatic ground truther which automatically labels all the lane changes. Is that possible? To automatically label the lane changes? Yeah. Yeah, detect the lane lines, see when it crosses them, right? And I don't have to get that high a percent accuracy, but it's like 95%, good enough. Now I set the bit when it's doing the lane change in the end to end learning.
And then I set it to zero when it's not doing a lane change. So now, if I wanted to do a lane change at test time, I just set the bit to one and it'll do a lane change. Yeah, but if you look at the space of lane changes, some percentage, not a hundred percent, of the ones we make as humans are not a pleasant experience, because we mess some part of it up. It's nerve wracking to change lanes: you have to look, you have to see, you have to accelerate. How do we label the ones that are natural and feel good? You know, because that's your ultimate criticism: the current Navigate on Autopilot just doesn't feel good. Well, the current Navigate on Autopilot is a hand coded policy written by an engineer in a room who probably went out and tested it a few times on the 280. Probably a better version of that, but yes. That's how we would have written it at Comma AI. Yeah, yeah, yeah. Maybe Tesla did more; they tested it in the end. It might've been two engineers. Two engineers, yeah. No, but if you learn the lane change, if you learn how to do a lane change from data, just like you have a label that says lane change, and then you put it in when you want it to do the lane change, it'll automatically do the lane change that's appropriate for the situation. Now, to get at the problem of some humans doing bad lane changes, we haven't worked too much on this problem yet. It's not that much of a problem in practice. My theory is that all good drivers are good in the same way and all bad drivers are bad in different ways. And we've seen some data to back this up. Well, beautifully put. So basically, if that hypothesis is true, your task is to discover the good drivers. The good drivers stand out because they're in one cluster, and the bad drivers are scattered all over the place, and your net learns the cluster. Yeah, so you just learn from the good drivers, and they're easy to cluster. In fact, we learn from all of them, and the net automatically learns the policy that's like the majority, but we'll eventually probably have to filter them out. If that theory is true, I hope it's true, because the counter theory is that there are many clusters, maybe arbitrarily many clusters, of good drivers. Because if there's one cluster of good drivers, you can at least discover a set of policies, you can learn a set of policies, which would be good universally. Yeah. That would be nice if it's true. And you're saying that there is some evidence that. Let's say lane changes can be clustered into four clusters. Right. Right. There's this finite set of them. I would argue that all four of those are good clusters. All the things that are random are noise and probably bad. And which one of the four you pick, or maybe it's 10 or maybe it's 20, you can learn that. It's context dependent. It depends on the scene. And the hope is it's not too dependent on the driver. Yeah. The hope is that it all washes out. The hope is that the distribution's not bimodal. The hope is that it's a nice Gaussian. So what advice would you give to Tesla on how to fix, how to improve, Navigate on Autopilot? What are the lessons that you've learned from Comma AI? The only real advice I would give to Tesla is, please put driver monitoring in your cars. With respect to improving it, you can't do that anymore? Sorry to interrupt, but there's a practical nature of many hundreds of thousands of cars being produced that don't have a good driver facing camera. The Model 3 has a selfie cam.
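A tiny sketch of the lane change conditioning scheme described a few exchanges above: the model takes a "doing a lane change" flag as an extra input, the flag is filled in by an automatic ground truther that watches lane line crossings at training time, and at test time the flag is simply set to one to request a lane change. This is an illustration under those stated assumptions, not comma's actual training code; all names, shapes, and thresholds are made up.

```python
# Illustrative sketch of conditioning a driving policy on a lane-change flag.
# Not comma's code; tensor shapes, names, and thresholds are hypothetical.
import torch
import torch.nn as nn

class ConditionedPolicy(nn.Module):
    def __init__(self, feat_dim: int = 1024):
        super().__init__()
        # +1 input for the lane-change bit appended to the perception features.
        self.head = nn.Sequential(
            nn.Linear(feat_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, 1),           # e.g. a steering/path output
        )

    def forward(self, features: torch.Tensor, lane_change: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([features, lane_change], dim=-1))

def auto_label_lane_change(dist_to_lane_line: torch.Tensor) -> torch.Tensor:
    # Automatic ground-truther stand-in: flag frames where the car is
    # crossing a lane line (0.1 m threshold is made up).
    return (dist_to_lane_line.abs() < 0.1).float().unsqueeze(-1)

policy = ConditionedPolicy()
feats = torch.randn(32, 1024)                       # perception features
labels = auto_label_lane_change(torch.randn(32))    # training-time bit from labels
out_train = policy(feats, labels)                   # bit set by the ground truther
out_test = policy(feats, torch.ones(32, 1))         # at test time: request a lane change
```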
Is it not good enough? Did they not put IR LEDs for night? That's a good question. But I do know that it's fisheye and it's relatively low resolution. So it's really not designed, it wasn't designed, for driver monitoring. You can hope that you can kind of scrape up and have something from it. Yeah. But why didn't they put it in today? Put it in today. Put it in today. Every time I've heard Karpathy talk about the problem, talking about software 2.0 and how machine learning is gobbling up everything, I think this is absolutely the right strategy. I think that he didn't write Navigate on Autopilot. I think somebody else did and kind of hacked it on top of that stuff. I think when Karpathy says, wait a second, why did we hand code this lane change policy with all these magic numbers? We're gonna learn it from data. They'll fix it. They already know what to do there. Well, that's Andrej's job, to turn everything into a learning problem and collect a huge amount of data. The reality is, though, not every problem can be turned into a learning problem in the short term. In the end, everything will be a learning problem. The reality is, if you wanna build L5 vehicles today, it will likely involve no learning. And that's the reality. So at which point does learning start? It's like the crutch statement, that LiDAR is a crutch. At which point will learning get up to par with human performance? It's over human performance on ImageNet classification; on driving, it's still a question. It is a question. I'll say this, I'm here to play for 10 years, and make money along the way. I'm not here to try to promise people that I'm gonna have my L5 taxi network up and working in two years. Do you think that was a mistake? Yes. What do you think was the motivation behind saying that? Other companies are also promising L5 vehicles with very different approaches in 2020, 2021, 2022. If anybody would like to bet me that those things do not pan out, I will bet you. Even money, even money, I'll bet you as much as you want. Yeah. So are you worried about what's going to happen? Cause you're not in full agreement on that. What's going to happen when 2022, 2021 come around and nobody has fleets of autonomous vehicles? Well, you can look at the history. If you go back five years ago, they were all promised by 2018 and 2017. But those weren't that strong of promises. I mean, Ford really declared pretty definitively, I think; not many have declared as definitively as they have now, these dates. Well, okay, so let's separate L4 and L5. Do I think that it's possible for Waymo to continue to kind of hack on their system until it gets to level four in Chandler, Arizona? Yes. When there's no safety driver? Chandler, Arizona? Yeah. By, sorry, which year are we talking about? Oh, I even think that's possible by like 2020, 2021. But level four, Chandler, Arizona, not level five, New York City. Level four, meaning some very defined streets it works out really well on. Very defined streets. And then practically, these streets are pretty empty. If most of the streets are covered in Waymos, Waymo can kind of change the definition of what driving is. Right? If your self driving network is the majority of cars in an area, they only need to be safe with respect to each other, and all the humans will just need to learn to adapt to them. Now go drive in downtown New York. Well, yeah, that's.
I mean, already you can talk about autonomy on farms; it already works great, because you can really just follow the GPS line. So what does success look like for Comma AI? What are the milestones where you can sit back with some champagne and say, we did it, boys and girls? Well, it's never over. Yeah, but. You must drink champagne and celebrate. So what is a good, what are some wins? A big milestone that we're hoping for by mid next year is profitability of the company. And we're gonna have to revisit the idea of selling a consumer product, but it's not gonna be like the Comma One. When we do it, it's gonna be perfect. OpenPilot has gotten so much better in the last two years. We're gonna have a few features. We're gonna have a hundred percent driver monitoring. We're gonna disable no safety features in the car. Actually, I think it'd be really cool, what we're doing right now. Our project this week is, we're analyzing the data set and looking for all the AEB triggers from the manufacturer systems. We have a better data set on that than the manufacturers. Does Toyota have 10 million miles of real world driving to know how many times their AEB triggered? So let me give you, cause you asked, right, financial advice. Yeah. Cause I work with a lot of automakers, and one possible source of money for you, which I'd be excited to see you take on, is basically selling the data. And not selling in a way where, here you go, Automaker, but creating, we've done this actually at MIT, not for money purposes, but you could do it for significant money purposes and make the world a better place, by creating a consortium where automakers would pay in, and then they get to have free access to the data. And I think a lot of people are really hungry for that and would pay a significant amount of money for it. Here's the problem with that. I like this idea all in theory. It'd be very easy for me to give them access to my servers, and we already have all open source tools to access this data. It's in a great format. We have a great pipeline. But they're gonna put me in a room with some business development guy, and I'm gonna have to talk to this guy, and he's not gonna know most of the words I'm saying. I'm not willing to tolerate that. Okay, Mick Jagger. No, no, no, no, no. I think I agree with you. I'm the same way, but you just tell them the terms and there's no discussion needed. If I could just tell them the terms. Yeah. Like, all right, who wants access to my data? You want a subscription? I'll sell it to you for, let's say, 100K a month. Anyone. 100K a month. 100K a month. I'll give you access to this data subscription. Yeah. Yeah, I think that's kind of fair. Came up with that number off the top of my head. If somebody sends me a three line email where it's like, we would like to pay 100K a month to get access to your data, we would agree to reasonable privacy terms for the people who are in the data set, I would be happy to do it. But that's not going to be the email. The email is going to be, hey, do you have some time in the next month where we can sit down and we can... I don't have time for that. We're moving too fast. Yeah. You could politely respond to that email, though, not saying, I don't have any time for your bullshit. You say, oh, well, unfortunately, these are the terms. We've brought the cost down for you in order to minimize the friction of communication.
Absolutely. Here's the, whatever it is, one, two million dollars a year, and you have access. And it's not like I get that email. But okay, am I going to reach out? Am I going to hire a business development person who's going to reach out to the automakers? No way. Yeah. Okay. I got you. If they reach out to me, I'm not going to ignore the email. I'll come back with something like, yeah, if you're willing to pay 100K a month for access to the data, I'm happy to set that up. That's worth my engineering time. That's actually quite insightful of you. You're right. Probably because many of the automakers are quite a bit old school, there will be a need to reach out, and they want it, but there'll need to be some communication. You're right. Mobileye circa 2015 had the lowest R&D spend of any chip maker, and you look at all the people who work for them, and it's all business development people, because the car companies are impossible to work with. Yeah. So you have no patience for that, and you're legit Android, huh? I have something to do, right? I don't mean to be a dick and say I don't have patience for that, but that stuff doesn't help us with our goal of winning self driving cars. If I want money in the short term, if I showed off the actual learning tech that we have, it's somewhat sad. It's years and years ahead of everybody else's. Maybe not Tesla's. I think Tesla has similar stuff to us, actually. Yeah, I think Tesla has similar stuff, but when you compare it to what the Toyota Research Institute has, it's not even close to what we have. No comment. But I also have to take your comments... I intuitively believe you, but I have to take it with a grain of salt, because, I mean, you are an inspiration because you basically don't care about a lot of things that other companies care about. You don't try to bullshit, in a sense, like make up stuff to drive up valuation. You're really very real and you're trying to solve the problem, and I admire that a lot. What I can't necessarily fully trust you on, with all due respect, is how good it is, right? I can only... but I also know how bad others are. And so. Trust but verify, right? I'll say two things about that. One is, get in a 2020 Corolla and try OpenPilot 0.6 when it comes out next month. I think already you'll look at this and you'll be like, this is already really good. And then, I could be doing that all with hand labelers and all with the same approach that Mobileye uses. When we release a model that no longer has the lanes in it, that only outputs a path, then think about how we did that machine learning, because right away when you see it, and that's gonna be in OpenPilot before 1.0, when you see that model, you'll know that everything I'm saying is true, because how else did I get that model? Good. You'll know what I'm saying is true about the simulator. Yeah, yeah, this is super exciting, that's super exciting. But, you know, I listened to your talk with Kyle, and Kyle was originally building the aftermarket system, and he gave up on it because of technical challenges, because of the fact that he's gonna have to support 20 to 50 cars. We support 45. Because, what is he gonna do when the manufacturer ABS system triggers? We have alerts and warnings to deal with all of that in all the cars.
And how is he going to formally verify it? Well, I've got 10 million miles of data. It's probably better verified than the spec. Yeah, I'm glad you're here talking to me. This is, I'll remember this day, because it's interesting. If you look at Kyle's company, Cruise, I'm sure they have a large number of business development folks. He's working with GM; you could work with Argo AI, working with Ford. It's interesting, because the chances that you fail, business wise, like bankruptcy, are pretty high. Yeah. And yet, it's the Android model: you're actually taking on the problem. So that's really inspiring, I mean. Well, I have a long term way for Comma to make money too. And one of the nice things when you really take on the problem, which is my hope for Autopilot, for example, is things you don't expect, ways to make money or create value that you don't expect, will pop up. Oh, I've known how to do it since kind of, 2017 is the first time I said it. Which part, to know how to do which part? Our long term plan is to be a car insurance company. Insurance, yeah, I love it, yep, yep. I make driving twice as safe. Not only that, I have the best data set to know who statistically is the safest driver. And, oh, we see you, we see you driving unsafely, we're not gonna insure you. And that causes a bifurcation in the market, because the only people who can't get Comma insurance are the bad drivers. Geico can insure them, their premiums are crazy high, our premiums are crazy low. We'll win car insurance, take over that whole market. Okay, so. If we win, if we win. But that's what I'm saying, how do you turn Comma into a $10 billion company? It's that. That's right. So you, Elon Musk, who else? Who else is thinking like this and working like this, in your view? Who are the competitors? I don't think anyone that I'm aware of is seriously taking on lane keeping, like where it's a huge business that turns eventually into full autonomy, that then creates other businesses on top of it and so on, thinks insurance, thinks all kinds of ideas like that. Do you know anyone else thinking like this? Not really. That's interesting. I mean, my sense is everybody turns to that in like four or five years. Like Ford, once the autonomy bet falls through. Yeah. But at this time. Elon's the iOS. By the way, he paved the way for all of us. It's the iOS, true. I would not be doing Comma AI today if it were not for those conversations with Elon, and if it were not for him saying, I think he said, well, obviously we're not gonna use LiDAR, we use cameras, humans use cameras. So what do you think about that? How important is LiDAR? Everybody else working on L5 is using LiDAR. What are your thoughts on his provocative statement that LiDAR is a crutch? See, sometimes he'll say dumb things, like the driver monitoring thing, but sometimes he'll say absolutely, completely, 100% obviously true things. Of course LiDAR is a crutch. It's not even a good crutch. They're not even using it. Oh, they're using it for localization. Yeah. Which isn't good in the first place. If you have to localize your car to centimeters in order to drive, that's not driving. They're currently not doing much machine learning on top of the LiDAR data, I thought, meaning to help you in the general task of perception. The main goal of those LiDARs on those cars, I think, is actually localization more than perception. Or at least that's what they use them for. Yeah, that's true.
If you want to localize to centimeters, you can't use GPS. The fanciest GPS in the world can't do it, especially if you're under tree cover and stuff. With LiDAR you can do this pretty easily. So they're not really taking it on, I mean, in some research they're using it for perception, but they're certainly not, which is sad, fusing it well with vision. They do use it for perception. I'm not saying they don't use it for perception, but they have vision based and radar based perception systems as well. You could remove the LiDAR and keep around a lot of the dynamic object perception. You want to get centimeter accurate localization? Good luck doing that with anything else. So what should Cruise and Waymo do? What would be your advice to them now? I mean, Waymo is actually serious. Waymo, out of all of them, is quite serious about the long game. If L5 requires 50 years, I think Waymo will be the only one left standing at the end, given the financial backing that they have. Beaucoup Google bucks. I'll say nice things about both Waymo and Cruise. Let's do it. Nice is good. Waymo is by far the furthest along with technology. Waymo has a three to five year lead on all the competitors. If the Waymo looking stack works, they have maybe a three year lead. Now, I'd argue that Waymo has spent too much money to recapitalize, to gain back their losses, in those three years. Also, self driving cars have no network effect like that. Uber has a network effect. You have a market, you have drivers and you have riders. Self driving cars, you have capital and you have riders. There's no network effect. If I want to blanket a new city in self driving cars, I buy the off the shelf Chinese knockoff self driving cars and I buy enough of them for the city. I can't do that with drivers. And that's why Uber has a first mover advantage that no self driving car company will. Can you disentangle that a little bit? You're not talking about Uber the autonomous vehicle Uber, you're talking about the Uber cars with human drivers. Yeah. Say I'm Uber. I open for business in Austin, Texas, let's say. I need to attract both sides of the market. I need to both get drivers on my platform and riders on my platform, and I need to keep them both sufficiently happy, right? Riders aren't gonna use it if it takes more than five minutes for an Uber to show up. Drivers aren't gonna use it if they have to sit around all day and there's no riders. So you have to carefully balance a market, and whenever you have to carefully balance a market, there's a great first mover advantage, because there's a switching cost for everybody, right? The drivers and the riders would have to switch at the same time. Let's even say a Luber shows up, and Luber somehow does things more efficiently, right? Luber only takes a 5% cut instead of the 10% that Uber takes. No one is gonna switch, because the switching cost is higher than that 5%. So in markets like that, you have a first mover advantage. Yeah. Autonomous vehicles of the level five variety have no first mover advantage. If the technology becomes commoditized, say I wanna go to a new city, look at the scooters. It's gonna look a lot more like scooters. Every person with a checkbook can blanket a city in scooters.
And that's why you have 10 different scooter companies. Which one's gonna win? It's a race to the bottom. It's a terrible market to be in, because there's no market for scooters, and the scooters don't get a say in whether they wanna be bought and deployed to a city or not. Right. So, yeah, we're gonna entice the scooters with subsidies and deals. So whenever you have to invest that capital, it doesn't come back. Yeah. That can't be your main criticism of the Waymo approach. Oh, I'm saying even if it does technically work. Even if it does technically work, that's a problem. Yeah. I don't know. If I were to say... I haven't even thought about that, but I would say the bigger challenge is the technical approach, Waymo's, Cruise's, and not just the technical approach, but the creating of value. I still don't understand how you beat Uber, the human driven cars. Financially, it doesn't make sense to me that people wanna get in an autonomous vehicle. I don't understand how you make money. In the long term, yes, like real long term, but it just feels like there's too much capital investment needed. Oh, and they're gonna be worse than Ubers, because they're gonna stop for every little thing, everywhere. I'll say a nice thing about Cruise. That was my nice thing about Waymo, they're three years ahead. Wait, what was the nice thing? Oh, that they're three years technically ahead of everybody. Their tech stack is great. My nice thing about Cruise is, GM buying them was a great move for GM. For $1 billion, GM bought an insurance policy against Waymo. Cruise is three years behind Waymo. That means Google will get a monopoly on the technology for at most three years. And if the technology works, you might not even be right about the three years, it might be less. Might be less. Cruise actually might not be that far behind. I don't know how much Waymo has waffled around, or how much of it actually is just that long tail. Yeah, okay. If that's the best you could say in terms of nice things, that's more of a nice thing for GM, that it's a smart insurance policy. It's a smart insurance policy. I mean, I can't see Cruise working out any other way. For Cruise to leapfrog Waymo would really surprise me. Yeah, so let's talk about, the underlying assumption of everything here is... We're not gonna leapfrog Tesla. Tesla would have to seriously mess up for us to leapfrog them. Okay, so the way you leapfrog, right, is you come up with an idea, or you take a direction, perhaps secretly, that the other people aren't taking. And so Cruise, Waymo, even Aurora. I don't know about Aurora; Zoox is the same stack as well. They're all the same code base even, and they're all the same DARPA Urban Challenge code base. So the question is, do you think there's room for brilliance and innovation that will change everything? Like say, okay, so I'll give you examples. It could be a revolution in mapping, for example, that allows you to map things, do HD maps of the whole world, all weather conditions, somehow really well, or a revolution in simulation, to where all that you said before becomes incorrect. That kind of thing. Any room for breakthrough innovation? What I said before about, oh, they actually get the whole thing. Well, I'll say this about it: we divide driving into three problems, and I actually haven't solved the third yet, but I have an idea how to do it. So there's the static.
The static driving problem is assuming you are the only car on the road, right? And this problem can be solved 100% with mapping and localization. This is why farms work the way they do. If all you have to deal with is the static problem, you can statically schedule your machines, right? It's the same as statically scheduling processes. You can statically schedule your tractors to never hit each other on their paths, right? Cause they know the speed they go at. So that's the static driving problem. Maps only help you with the static driving problem. Yeah, the question about static driving, you've just made it sound like it's really easy. Static driving is really easy. How easy? Well, because the whole drifting out of lane thing, when Tesla drifts out of lane, it's failing on the fundamental static driving problem. Tesla is drifting out of lane? The static driving problem is not easy for the world. The static driving problem is easy for one route, one route in one weather condition, with one state of lane markings and no deterioration, no cracks in the road. No, I'm assuming you have a perfect localizer, so that solves the weather condition and the lane marking condition. But that's the problem: how do you have a perfect localizer? Perfect localizers are not that hard to build. Okay, come on now. With LiDAR? With LiDAR, yeah. Oh, with LiDAR, okay. With LiDAR, yeah, but use LiDAR, right? Like, use LiDAR, build a perfect localizer. Building a perfect localizer without LiDAR is gonna be hard. You can get 10 centimeters without LiDAR; you can get one centimeter with LiDAR. I'm not even concerned about the one or 10 centimeters. I'm concerned if every once in a while you're just way off. Yeah, so this is why you have to carefully make sure you're always tracking your position. You wanna use LiDAR camera fusion, but you can get the reliability of that system up to 100,000 miles, and then you write some fallback condition where it's not that bad if you're way off, right? I think that you can get it to the point, it's like ASIL D, that you're never in a case where you're way off and you don't know it. Yeah, okay, so this is brilliant. So that's the static. Static. Especially with LiDAR and good HD maps, you can solve that problem. Easy. No, I just disagree with your word easy. The static problem's so easy. It's very typical for you to say something is easy. I got it. No, it's not as challenging as the other ones, okay. Well, okay, maybe it's obvious how to solve it. The third one's the hardest. And a lot of people don't even think about the third one, and even see it as different from the second one. So the second one is dynamic. The second one is, say, an obvious example is a car stopped at a red light, right? You can't have that car in your map, because you don't know whether that car is gonna be there or not. So you have to detect that car in real time, and then you have to do the appropriate action, right? Also, that car is not a fixed object. That car may move, and you have to predict what that car will do, right? So this is the dynamic problem. Yeah. So you have to deal with this. This involves, again, like you're gonna need models of other people's behavior. Are you including in that, I don't wanna step on the third one. Oh. But are you including in that your influence on people? Ah, that's the third one. Okay. That's the third one. We call it the counterfactual. Yeah, brilliant. And that.
I just talked to Judea Pearl who's obsessed with counterfactuals. And the counterfactual. Oh yeah, yeah, I read his books. So the static and the dynamic Yeah. Our approach right now for lateral will scale completely to the static and dynamic. The counterfactual, the only way I have to do it yet, the thing that I wanna do once we have all of these cars is I wanna do reinforcement learning on the world. I'm always gonna turn the exploiter up to max. I'm not gonna have them explore. But the only real way to get at the counterfactual is to do reinforcement learning because the other agents are humans. So that's fascinating that you break it down like that. I agree completely. I've spent my life thinking about this problem. It's beautiful. And part of it, because you're slightly insane, it's good. Because. Not my life. Just the last four years. No, no. You have some nonzero percent of your brain has a madman in it, which is good. That's a really good feature. But there's a safety component to it that I think sort of with counterfactuals and so on that would just freak people out. How do you even start to think about just in general? I mean, you've had some friction with NHTSA and so on. I am frankly exhausted by safety engineers. The prioritization on safety over innovation to a degree where it kills, in my view, kills safety in the long term. So the counterfactual thing, they just actually exploring this world of how do you interact with dynamic objects and so on. How do you think about safety? You can do reinforcement learning without ever exploring. And I said that, so you can think about your, in reinforcement learning, it's usually called a temperature parameter. And your temperature parameter is how often you deviate from the argmax. I could always set that to zero and still learn. And I feel that you'd always want that set to zero on your actual system. Gotcha. But the problem is you first don't know very much. And so you're going to make mistakes. So the learning, the exploration happens through mistakes. Yeah, but okay. So the consequences of a mistake. Open pilot and autopilot are making mistakes left and right. We have 700 daily active users, a thousand weekly active users. Open pilot makes tens of thousands of mistakes a week. These mistakes have zero consequences. These mistakes are, oh, I wanted to take this exit and it went straight. So I'm just going to carefully touch the wheel. The humans catch them. The humans catch them. And the human disengagement is labeling that reinforcement learning in a completely consequence free way. So driver monitoring is the way you ensure they keep. Yes. They keep paying attention. How is your messaging? Say I gave you a billion dollars, you would be scaling it now. Oh, I couldn't scale it with any amount of money. I'd raise money if I could, if I had a way to scale it. Yeah, you're now not focused on scale. I don't know how to do, oh, like I guess I could sell it to more people, but I want to make the system better. Better, better. And I don't know how to, I mean. But what's the messaging here? I got a chance to talk to Elon and he basically said that the human factor doesn't matter. You know, the human doesn't matter because the system will perform, there'll be sort of a, sorry to use the term, but like a singular, like a point where it gets just much better. And so the human, it won't really matter. 
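A minimal sketch of the temperature-zero point above. With a softmax policy, the temperature controls how often the action deviates from the argmax; setting it to zero means the deployed system never explores, and the learning signal instead comes from logged human disengagements replayed offline. The action space, reward value, and update rule here are hypothetical, not comma.ai's actual training code.

```python
import numpy as np

def select_action(q_values: np.ndarray, temperature: float) -> int:
    """Softmax policy; temperature -> 0 collapses to pure argmax (no exploration)."""
    if temperature <= 0.0:
        return int(np.argmax(q_values))
    z = q_values / temperature
    z = z - z.max()                      # numerical stability
    probs = np.exp(z)
    probs /= probs.sum()
    return int(np.random.choice(len(q_values), p=probs))

def q_update(q_values: np.ndarray, action: int, reward: float, alpha: float = 0.1) -> np.ndarray:
    """Offline update from logged data; a human disengagement is just a low-reward label."""
    q_values[action] += alpha * (reward - q_values[action])
    return q_values

# Replaying one logged timestep where the driver grabbed the wheel:
q = np.zeros(3)                          # three hypothetical discrete lateral actions
a = select_action(q, temperature=0.0)    # deployed policy: always exploit
q = q_update(q, a, reward=-1.0)          # the disengagement labels the action as bad, no on-road exploration needed
```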
But it seems like that human catching the system when it gets into trouble is like the thing which will make something like reinforcement learning work. So how do you think messaging for Tesla, for you should change, for the industry in general should change? I think our messaging is pretty clear. At least like our messaging wasn't that clear in the beginning and I do kind of fault myself for that. We are proud right now to be a level two system. We are proud to be level two. If we talk about level four, it's not with the current hardware. It's not gonna be just a magical OTA upgrade. It's gonna be new hardware. It's gonna be very carefully thought out. Right now, we are proud to be level two and we have a rigorous safety model. I mean, not like, okay, rigorous, who knows what that means, but we at least have a safety model and we make it explicit as in safety.md in OpenPilot. And it says, seriously though, safety.md. This is brilliant, this is so Android. Well, this is the safety model and I like to have conversations like, sometimes people will come to you and they're like, your system's not safe. Okay, have you read my safety docs? Would you like to have an intelligent conversation about this? And the answer is always no. They just like scream about, it runs Python. Okay, what? So you're saying that because Python's not real time, Python not being real time never causes disengagements. Disengagements are caused by, the model is QM. But safety.md says the following, first and foremost, the driver must be paying attention at all times. I still consider the software to be alpha software until we can actually enforce that statement, but I feel it's very well communicated to our users. Two more things. One is the user must be able to easily take control of the vehicle at all times. So if you step on the gas or brake with OpenPilot, it gives full manual control back to the user or press the cancel button. Step two, the car will never react so quickly, we define so quickly to be about one second, that you can't react in time. And we do this by enforcing torque limits, braking limits and acceleration limits. So we have like our torque limits way lower than Tesla's. This is another potential. If I could tweak Autopilot, I would lower their torque limit and I would add driver monitoring. Because Autopilot can jerk the wheel hard. OpenPilot can't. We limit, and all this code is open source, readable. And I believe now it's all MISRA C compliant. What's that mean? MISRA is like the automotive coding standard. At first, I've come to... I've been reading like the standards lately and I've come to respect them. They're actually written by very smart people. Yeah, they're brilliant people actually. They have a lot of experience. They're sometimes a little too cautious, but in this case, it pays off. MISRA is written by like computer scientists. And you can tell by the language they use. You can tell by the language they use, they talk about like whether certain conditions in MISRA are decidable or undecidable. And you mean like the halting problem? And yes, all right, you've earned my respect. I will read carefully what you have to say and we wanna make our code compliant with that. All right, so you're proud level two, beautiful. So you were the founder and I think CEO of Comma AI, then you were the head of research. What the heck are you now? What's your connection to Comma AI?
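A sketch of the actuator-limit idea in the safety model just described. The numbers are made up for illustration; the real limits live in the open-source OpenPilot code. The point is only the shape of the check: driver input always wins, and every command is saturated so the car cannot do anything within roughly a second that the driver can't catch.

```python
# Hypothetical limits, for illustration only; not the values in safety.md or OpenPilot.
MAX_STEER_TORQUE = 150        # arbitrary actuator units, deliberately low
MAX_ACCEL = 1.5               # m/s^2
MAX_DECEL = -3.0              # m/s^2

def clamp(value: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, value))

def enforce_limits(steer_cmd: float, accel_cmd: float, driver_override: bool):
    """Driver input always wins; otherwise commands are saturated to the safety envelope."""
    if driver_override:       # gas, brake, or cancel pressed
        return 0.0, 0.0       # hand full manual control back to the driver
    steer = clamp(steer_cmd, -MAX_STEER_TORQUE, MAX_STEER_TORQUE)
    accel = clamp(accel_cmd, MAX_DECEL, MAX_ACCEL)
    return steer, accel
```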
I'm the president, but I'm one of those like unelected presidents of like a small dictatorship country, not one of those like elected presidents. Oh, so you're like Putin when he was like the, yeah, I got you. So there's a, what's the governance structure? What's the future of Comma AI? I mean, yeah, it's a business. Do you want, are you just focused on getting things right now, making some small amount of money in the meantime and then when it works, it works and you scale. Our burn rate is about 200K a month and our revenue is about 100K a month. So we need to 4x our revenue, but we haven't like tried very hard at that yet. And the revenue is basically selling stuff online. Yeah, we sell stuff at shop.comma.ai. Is there other, well, okay, so you'll have to figure out the revenue. That's our only, see, but to me, that's like respectable revenues. We make it by selling products to consumers, and we're honest and transparent about what they are. Unlike most, actually, level four companies, right? Cause you could easily start blowing smoke, like overselling the hype and feeding into getting some fundraisers. Oh, you're the guy, you're a genius because you hacked the iPhone. Oh, I hate that, I hate that. Yeah, well, I can trade my social capital for more money. I did it once, I almost regret doing it the first time. Well, on a small tangent, what's your, you seem to not like fame and yet you're also drawn to fame. Where are you on that currently? Have you had some introspection, some soul searching? Yeah, I actually, I've come to a pretty stable position on that. Like after the first time, I realized that I don't want attention from the masses. I want attention from people who I respect. Who do you respect? I can give a list of people. So are these like Elon Musk type characters? Yeah, well, actually, you know what? I'll make it more broad than that. I won't make it about a person, I respect skill. I respect people who have skills, right? And I would like to like be, I'm not gonna say famous, but be like known among more people who have like real skills. Who in cars do you think has skill, not who do you respect? Oh, Kyle Vogt has skill. A lot of people at Waymo have skill and I respect them. I respect them as engineers. Like I can think, I mean, I think about all the times in my life where I've been like dead set on approaches and they turn out to be wrong. So, I mean, this might, I might be wrong. I accept that. I accept that there's a decent chance that I'm wrong. And actually, I mean, having talked to Chris Urmson, Sterling Anderson, those guys, I mean, I deeply respect Chris. I just admire the guy. He's legit. When you drive a car through the desert when everybody thinks it's impossible, that's legit. And then I also really respect the people who are like writing the infrastructure of the world, like the Linus Torvalds and the Chris Lattners. They were doing the real work. I know, they're doing the real work. This, having talked to Chris, like Chris Lattner, you realize, especially when they're humble, it's like you realize, oh, you guys, we're just using your, Oh yeah. All the hard work that you did. Yeah, that's incredible. What do you think, Mr. Anthony Levandowski, what do you, he's another mad genius. Sharp guy, oh yeah. What, do you think he might long term become a competitor? Oh, to comma? Well, so I think that he has the other right approach. I think that right now there's two right approaches. One is what we're doing, and one is what he's doing.
Can you describe, I think it's called Pronto AI. He started a new thing. Do you know what the approach is? I actually don't know. Embark is also doing the same sort of thing. The idea is almost that you want to, so if you're, I can't partner with Honda and Toyota. Honda and Toyota are like 400,000 person companies. It's not even a company at that point. I don't think of it like, I don't personify it. I think of it like an object, but a trucker drives for a fleet, maybe that has like, some truckers are independent. Some truckers drive for fleets with a hundred trucks. There are tons of independent trucking companies out there. Start a trucking company and drive your costs down or figure out how to drive down the cost of trucking. Another company that I really respect is Nauto. Actually, I respect their business model. Nauto sells a driver monitoring camera and they sell it to fleet owners. If I owned a fleet of cars and I could pay 40 bucks a month to monitor my employees, this is gonna, it like reduces accidents 18%. It's so like that, in the space, that is like the business model that I like most respect. Cause they're creating value today. Yeah, which is a, that's a huge one. How do we create value today with some of this? And the lane keeping thing is huge. And it sounds like you're creeping in or full steam ahead on the driver monitoring too, which I think actually where the short term value, if you can get it right. I still, I'm not a huge fan of the statement that everything has to have driver monitoring. I agree with that completely, but that statement usually misses the point that to get the experience of it right is not trivial. Oh no, not at all. In fact, like, so right now we have, I think the timeout depends on speed of the car, but we want to depend on like the scene state. If you're on like an empty highway, it's very different if you don't pay attention than if like you're like coming up to a traffic light. And longterm, it should probably learn from the driver because that's to do, I watched a lot of video. We've built a smartphone detector just to analyze how people are using smartphones and people are using it very differently. There are different texting styles. We haven't watched nearly enough of the videos. We haven't, I got millions of miles of people driving cars. At this moment, I spend a large fraction of my time just watching videos because it never fails. Like I've never failed from a video watching session to learn something I didn't know before. In fact, I usually like when I eat lunch, I'll sit, especially when the weather is good and just watch pedestrians with an eye to understand like from a computer vision eye, just to see can this model, can you predict, what are the decisions made? And there's so many things that we don't understand. This is what I mean about the state vector. Yeah, it's, I'm trying to always think like, cause I'm understanding in my human brain, how do we convert that into, how hard is the learning problem here? I guess is the fundamental question. So something that's from a hacking perspective, this always comes up, especially with folks. Well, first the most popular question is the trolley problem, right? So that's not a sort of a serious problem. There are some ethical questions I think that arise. Maybe you wanna, do you think there's any ethical, serious ethical questions? We have a solution to the trolley problem at comma.ai. Well, so there is actually an alert in our code, ethical dilemma detected.
It's not triggered yet. We don't know how yet to detect the ethical dilemmas, but we're a level two system. So we're going to disengage and leave that decision to the human. You're such a troll. No, but the trolley problem deserves to be trolled. Yeah, that's a beautiful answer actually. I know, I gave it to someone who was like, sometimes people will ask, like you asked about the trolley problem, like you can have a kind of discussion about it. Like you get someone who's like really like earnest about it because it's the kind of thing where, if you ask a bunch of people in an office whether we should use a SQL stack or a NoSQL stack, if they're not that technical, they have no opinion. But if you ask them what color they want to paint the office, everyone has an opinion on that. And that's why the trolley problem is... I mean, that's a beautiful answer. Yeah, we're able to detect the problem and we're able to pass it on to the human. Wow, I've never heard anyone say it. This is your nice escape route. Okay, but... Proud level two. I'm proud level two. I love it. So the other thing that people have some concern about with AI in general is hacking. So how hard is it, do you think, to hack an autonomous vehicle, either through physical access or through the more sort of popular now, these adversarial examples on the sensors? Okay, the adversarial examples one. You want to see some adversarial examples that affect humans, right? Oh, well, there used to be a stop sign here, but I put a black bag over the stop sign and then people ran it, adversarial, right? Like there's tons of human adversarial examples too. The question in general about like security, if you saw something just came out today and like there are always such hypey headlines about like how Navigate on Autopilot was fooled by a GPS spoof to take an exit. Right. At least that's all they could do was take an exit. If your car is relying on GPS in order to have a safe driving policy, you're doing something wrong. If you're relying, and this is why V2V is such a terrible idea. V2V now relies on both parties getting communication right. This is not even, so I think of safety, security is like a special case of safety, right? Safety is like we put a little, you know, piece of caution tape around the hole so that people won't walk into it by accident. Security is like put a 10 foot fence around the hole so you actually physically cannot climb into it with barbed wire on the top and stuff, right? So like if you're designing systems that are like unreliable, they're definitely not secure. Your car should always do something safe using its local sensors. And then the local sensor should be hardwired. And then could somebody hack into your CAN bus and turn your steering wheel or hit your brakes? Yes, but they could do it before comma.ai too, so. Let's think out of the box on some things. So do you think teleoperation has a role in any of this? So remotely stepping in and controlling the cars? No, I think that if the safety operation by design requires a constant link to the cars, I think it doesn't work. So that's the same argument you're using for V2I, V2V? Well, there's a lot of non safety critical stuff you can do with V2I. I like V2I, I like V2I way more than V2V. Because V2I is already like, I already have internet in the car, right? There's a lot of great stuff you can do with V2I. Like for example, you can, well, I already have V2I, Waze is V2I, right? Waze can route me around traffic jams. That's a great example of V2I.
And then, okay, the car automatically talks to that same service, like it works. So it's improving the experience, but it's not a fundamental fallback for safety. No, if any of your things that require wireless communication are more than QM, like have an ASIL rating, it shouldn't be. You previously said that life is work and that you don't do anything to relax. So how do you think about hard work? What do you think it takes to accomplish great things? And there's a lot of people saying that there needs to be some balance. You need to, in order to accomplish great things, you need to take some time off, you need to reflect and so on. Now, and then some people are just insanely working, burning the candle on both ends. How do you think about that? I think I was trolling in the Siraj interview when I said that. Off camera, right before I smoked a little bit of weed, like, you know, come on, this is a joke, right? Like I do nothing to relax. Look where I am, I'm at a party, right? Yeah, yeah, yeah, that's true. So no, no, of course I don't. When I say that life is work though, I mean that like, I think that what gives my life meaning is work. I don't mean that every minute of the day you should be working. I actually think this is not the best way to maximize results. I think that if you're working 12 hours a day, you should be working smarter and not harder. Well, so work gives you meaning. For some people, other sorts of meaning is personal relationships, like family and so on. You've also, in that interview with Siraj, or the trolling, mentioned that one of the things you look forward to in the future is AI girlfriends. So that's a topic that I'm very much fascinated by, not necessarily girlfriends, but just forming a deep connection with AI. What kind of system do you imagine when you say AI girlfriend, whether you were trolling or not? No, that one I'm very serious about. And I'm serious about that on both a shallow level and a deep level. I think that VR brothels are coming soon and are going to be really cool. It's not cheating if it's a robot. I see the slogan already. But there's, I don't know if you've watched, or just watched the Black Mirror episode. I watched the latest one, yeah. Yeah, yeah. Oh, the Ashley Too one? No, where there's two friends who are having sex with each other in... Oh, in the VR game. In the VR game. It's just two guys, but one of them was a female, yeah. Which is another mind blowing concept. That in VR, you don't have to be the form. You can be two animals having sex. It's weird. I mean, I'll see how nice that the software maps the nerve endings, right? Yeah, it's huge. I mean, yeah, they sweep a lot of the fascinating, really difficult technical challenges under the rug, like assuming it's possible to do the mapping of the nerve endings, then... I wish, yeah, I saw that, the way they did it with the little like stim unit on the head, that'd be amazing. So, well, no, no, on a shallow level, like you could set up like almost a brothel with like real dolls and Oculus Quests, write some good software. I think it'd be a cool novelty experience. But no, on a deeper, like emotional level, I mean, yeah, I would really like to fall in love with a machine. Do you see yourself having a long term relationship of the kind of monogamous relationship that we have now with a robot, with an AI system even, not even just a robot? So I think about maybe my ideal future.
When I was 15, I read Eliezer Yudkowsky's early writings on the singularity and like that AI is going to surpass human intelligence massively. He made some Moore's law based predictions that I mostly agree with. And then I really struggled for the next couple of years of my life. Like, why should I even bother to learn anything? It's all gonna be meaningless when the machines show up. Right. Maybe when I was that young, I was still a little bit more pure and really like clung to that. And then I'm like, well, the machines ain't here yet, you know, and I seem to be pretty good at this stuff. Let's try my best, you know, like what's the worst that happens. But the best possible future I see is me sort of merging with the machine. And the way that I personify this is in a long term monogamous relationship with a machine. Oh, you don't think there's room for another human in your life, if you really truly merge with another machine? I mean, I see merging. I see like the best interface to my brain is like the same relationship interface to merge with an AI, right? What does that merging feel like? I've seen couples who've been together for a long time. And like, I almost think of them as one person, like couples who spend all their time together and... That's fascinating. You're actually putting, what does that merging actually looks like? It's not just a nice channel. Like a lot of people imagine it's just an efficient link, search link to Wikipedia or something. I don't believe in that. But it's more, you're saying that there's the same kind of relationship you have with another human, that's a deep relationship. That's what merging looks like. That's pretty... I don't believe that link is possible. I think that that link, so you're like, oh, I'm gonna download Wikipedia right to my brain. My reading speed is not limited by my eyes. My reading speed is limited by my inner processing loop. And to like bootstrap that sounds kind of unclear how to do it and horrifying. But if I am with somebody, and I'll use the word somebody, who is making a super sophisticated model of me and then running simulations on that model, I'm not gonna get into the question whether the simulations are conscious or not. I don't really wanna know what it's doing. But using those simulations to play out hypothetical futures for me, deciding what things to say to me, to guide me along a path. And that's how I envision it. So on that path to AI of superhuman level intelligence, you've mentioned that you believe in the singularity, that singularity is coming. Again, could be trolling, could be not, could be part, all trolling has truth in it. I don't know what that means anymore. What is the singularity? Yeah, so that's really the question. How many years do you think before the singularity, what form do you think it will take? Does that mean fundamental shifts in capabilities of AI? Or does it mean some other kind of ideas? Maybe that's just my roots, but. So I can buy a human being's worth of compute for like a million bucks today. It's about one TPU pod V3. I want like, I think they claim a hundred petaflops. That's being generous. I think humans are actually more like 20. So that's like five humans. That's pretty good. Google needs to sell their TPUs. But I could buy, I could buy, I could buy GPUs. I could buy a stack of like, I'd buy 1080 TIs, build a data center full of them. And for a million bucks, I can get a human's worth of compute.
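A quick back-of-the-envelope check of the million-dollar figure, using rough assumed numbers: GPU prices and FLOPS vary a lot, and the ~20 petaflop figure for a human brain is the estimate used in the conversation, which is itself highly uncertain.

```python
# All constants below are rough assumptions for illustration only.
GPU_TFLOPS = 11.0            # roughly 1080 Ti class, FP32
GPU_PRICE_USD = 700.0
BUDGET_USD = 1_000_000.0
HUMAN_PFLOPS = 20.0          # the human-brain estimate quoted above

gpus = BUDGET_USD / GPU_PRICE_USD
total_pflops = gpus * GPU_TFLOPS / 1000.0      # teraflops to petaflops
print(f"{gpus:.0f} GPUs ~= {total_pflops:.0f} PFLOPS ~= {total_pflops / HUMAN_PFLOPS:.1f} humans")
# About 1400 GPUs, roughly 16 PFLOPS, on the order of one human's worth of compute,
# before counting power, cooling, and interconnect.
```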
But when you look at the total number of flops in the world, when you look at human flops, which goes up very, very slowly with the population and machine flops, which goes up exponentially, but it's still nowhere near. I think that's the key thing to talk about when the singularity happened. When most flops in the world are silicon and not biological, that's kind of the crossing point. Like they're now the dominant species on the planet. And just looking at how technology is progressing, when do you think that could possibly happen? You think it would happen in your lifetime? Oh yeah, definitely in my lifetime. I've done the math. I like 2038 because it's the Unix timestamp rollover. Yeah, beautifully put. So you've said that the meaning of life is to win. If you look five years into the future, what does winning look like? So, there's a lot of, I can go into like technical depth to what I mean by that, to win. It may not mean, I was criticized for that in the comments. Like, doesn't this guy wanna like save the penguins in Antarctica or like, oh man, listen to what I'm saying. I'm not talking about like I have a yacht or something. But I am an agent. I am put into this world. And I don't really know what my purpose is. But if you're an intelligent agent and you're put into a world, what is the ideal thing to do? Well, the ideal thing mathematically, you can go back to like Schmidhuber's theories about this, is to build a compressive model of the world. To build a maximally compressive, to explore the world such that your exploration function maximizes the derivative of compression of the past. Schmidhuber has a paper about this. And like, I took that kind of as like a personal goal function. So what I mean to win, I mean like, maybe this is religious, but like I think that in the future, I might be given a real purpose or I may decide this purpose myself. And then at that point, now I know what the game is and I know how to win. I think right now, I'm still just trying to figure out what the game is. But once I know, so you have imperfect information, you have a lot of uncertainty about the reward function and you're discovering it. Exactly. But the purpose is... That's a better way to put it. The purpose is to maximize it while you have a lot of uncertainty around it. And you're both reducing the uncertainty and maximizing at the same time. Yeah. And so that's at the technical level. What is the, if you believe in the universal prior, what is the universal reward function? That's the better way to put it. So that win is interesting. I think I speak for everyone in saying that I wonder what that reward function is for you. And I look forward to seeing that in five years, in 10 years. I think a lot of people, including myself, are cheering you on, man. So I'm happy you exist and I wish you the best of luck. Thanks for talking to me, man. Thank you. Have a good one.
George Hotz: Comma.ai, OpenPilot, and Autonomous Vehicles | Lex Fridman Podcast #31
The following is a conversation with Paola Arlotta. She's a professor of stem cell and regenerative biology at Harvard University and is interested in understanding the molecular laws that govern the birth, differentiation, and assembly of the human brain's cerebral cortex. She explores the complexity of the brain by studying and engineering elements of how the brain develops. This was a fascinating conversation to me. It's part of the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. And I'd like to give a special thank you to Amy Jeffress for her support of the podcast on Patreon. She's an artist and you should definitely check out her Instagram at lovetruthgood, three beautiful words. Your support means a lot and inspires me to keep the series going. And now here's my conversation with Paola Arlotta. You studied the development of the human brain for many years. So let me ask you an out of the box question first. How likely is it that there's intelligent life out there in the universe outside of earth with something like the human brain? So I can put it another way. How unlikely is the human brain? How difficult is it to build a thing through the evolutionary process? Well, it has happened here, right? On this planet. Once, yes. Once. So that simply tells you that it could, of course, happen again in other places. It's only a matter of probability. What's the probability that you would get a brain like the ones that we have, like the human brain? So how difficult is it to make the human brain? It's pretty difficult, but most importantly, I guess we know very little about how this process really happens. And there is a reason for that, actually multiple reasons for that. Most of what we know about how the mammalian brain, so the brain of mammals, develops comes from studying in labs other brains, not our own brain, the brain of mice, for example. But if I showed you a picture of a mouse brain, and then you put it next to a picture of a human brain, they don't look at all like each other. So they're very different. And therefore there is a limit to what you can learn about how the human brain is made by studying the mouse brain. There is a huge value in studying the mouse brain. There are many things that we have learned, but it's not the same thing. So in having studied the human brain, or through the mouse and through other methodologies that we'll talk about, do you have a sense? I mean, you're one of the experts in the world. How much do you feel you know about the brain and how often do you find yourself in awe of this mysterious thing? Yeah, you pretty much find yourself in awe all the time. It's an amazing process. It's a process by which, by means that we don't fully understand, at the very beginning of embryogenesis, the structure called the neural tube, literally self assembles. And it happens in an embryo and it can happen also from stem cells in a dish. Okay. And then from there, these stem cells that are present within the neural tube give rise to all of the thousands and thousands of different cell types that are present in the brain through time, right? With the interesting, very intriguing, interesting observation is that the time that it takes for the human brain to be made, it's human time.
Meaning that for me and you, it took almost nine months of gestation to build the brain and then another 20 years of learning postnatally to get the brain that we have today that allows us to this conversation. A mouse takes 20 days or so for an embryo to be born. And so the brain is built in a much shorter period of time. And the beauty of it is that if you take mouse stem cells and you put them in a culture dish, the brain organoid that you get from a mouse is formed faster than if you took human stem cells and put them in the dish and let them make a human brain organoid. So the very developmental process is... Controlled by the speed of the species. Which means it's on purpose, it's not accidental or there is something in that temporal... It's very, exactly, that is very important for us to get the brain we have. And we can speculate for why that is. You know, it takes us a long time as human beings after we're born to learn all the things that we have to learn to have the adult brain. It's actually 20 years, think about it. From when a baby is born to when a teenager goes through puberty to adults, it's a long time. Do you think you can maybe talk through the first few months and then on to the first 20 years and then for the rest of their lives? What is the development of the human brain look like? What are the different stages? Yeah, at the beginning, you have to build a brain, right? And the brain is made of cells. What's the very beginning? Which beginning are we talking about? In the embryo, as the embryo is developing in the womb, in addition to making all of the other tissues of the embryo, the muscle, the heart, the blood, the embryo is also building the brain. And it builds from a very simple structure called the neural tube, which is basically nothing but a tube of cells that spans sort of the length of the embryo from the head all the way to the tail, let's say, of the embryo. And then over in human beings, over many months of gestation from that neural tube, which contains stem cell like cells of the brain, you will make many, many other building blocks of the brain. So all of the other cell types, because there are many, many different types of cells in the brain that will form specific structures of the brain. So you can think about embryonic development of the brain as just the time in which you are making the building blocks, the cells. Are the stem cells relatively homogeneous, like uniform, or are they all different types? It's a very good question. It's exactly how it works. You start with a more homogeneous, perhaps more multipotent type of stem cell. With multipotent. With multipotent it means that it has the potential to make many, many different types of other cells. And then with time, these progenitors become more heterogeneous, which means more diverse. There are gonna be many different types of the stem cells. And also they will give rise to progeny to other cells that are not stem cells, that are specific cells of the brain that are very different from the mother stem cell. And now you think about this process of making cells from the stem cells over many, many months of development for humans. And what you're doing, you're building the cells that physically make the brain, and then you arrange them in specific structures that are present in the final brain. So you can think about the embryonic development of the brain as the time where you're building the bricks, you're putting the bricks together to form buildings, structures, regions of the brain. 
And where you make the connections between these many different type of cells, especially nerve cells, neurons, right? That transmit action potentials and electricity. I've heard you also say somewhere, I think, correct me if I'm wrong, that the order of the way this builds matters. Oh yes. If you are an engineer and you think about development, you can think of it as, well, I could also take all the cells and bring them all together into a brain in the end. But development is much more than that. So the cells are made in a very specific order that subserve the final product that you need to get. And so, for example, all of the nerve cells, the neurons are made first, and all of the supportive cells of the neurons, like the glia, is made later. And there is a reason for that because they have to assemble together in specific ways. But you also may say, well, why don't we just put them all together in the end? It's because as they develop next to each other, they influence their own development. So it's a different thing for a glia to be made alone in a dish, than a glia cell be made in a developing embryo with all these other cells around it that produce all these other signals. First of all, that's mind blowing, this development process. From my perspective in artificial intelligence, you often think of how incredible the final product is, the final product, the brain. But you're making me realize that the final product is just, the beautiful thing is the actual development process. Do we know the code that drives that development? Yeah. Do we have any sense? First of all, thank you for saying that it's really the formation of the brain. It's really its development. It is this incredibly choreographed dance that happens the same way every time each one of us builds the brain, right? And that builds an organ that allows us to do what we're doing today, right? That is mind blowing. And this is why developmental neurobiologists never get tired of studying that. Now you're asking about the code. What drives this? How is this done? Well, it's millions of years of evolution of really fine tuning gene expression programs that allow certain cells to be made at a certain time and to become a certain cell type, but also mechanical forces of pressure bending. This embryo is not just, it will not stay a tube, this brain for very long. At some point, this tube in the front of the embryo will expand to make the primordium of the brain, right? Now the forces that control that the cells feel, and this is another beautiful thing, the very force that they feel, which is different from a week before, a week ago, will tell the cell, oh, you're being squished in a certain way, begin to produce these new genes because now you are at the corner or you are in a stretch of cells or whatever it is, and that, so that mechanical physical force shapes the fate of the cell as well. So it's not only chemical, it's also mechanical. So from my perspective, biology is this incredibly complex mess, gooey mess. So you're saying mechanical forces. How different is like a computer or any kind of mechanical machine that we humans build and the biological systems? Have you been, because you've worked a lot with biological systems. Are they as much of a mess as it seems from a perspective of an engineer, a mechanical engineer? Yeah, they are much more prone to taking alternative routes, right? 
So if you, we go back to printing a brain versus developing a brain, of course, if you print a brain, given that you start with the same building blocks, the same cells, you could potentially print it the same way every time, but that final brain may not work the same way as a brain built during development does because the very same building blocks that you're using developed in a completely different environment, right? It was not the environment of the brain. Therefore, they're gonna be different just by definition. So if you instead use development to build, let's say a brain organoid, which maybe we will be talking about in a few minutes. Those things are fascinating. Yes, so if you use processes of development, then when you watch it, you can see that sometimes things can go wrong in some organoids and by wrong, I mean different from one organoid to the next. While if you think about that embryo, it always goes right. So this development, for as complex as it is, goes right every time: a baby is born, with very few exceptions, with a brain that is like the next baby's, but it's not the same if you develop it in a dish. And first of all, we don't even develop a brain, you develop something much simpler in the dish, but there are more options for building things differently, which really tells you that evolution has played a really tight game here for how in the end the brain is built in vivo. So just a quick, maybe dumb question, but it seems like this is not, the building process is not a dictatorship. It seems like there's not a centralized, like high level mechanism that says, okay, this cell built itself the wrong way, I'm gonna kill it. It seems like there's a really strong distributed mechanism. Is that, in your sense, what's happening? There are a lot of possibilities, right? And if you think about, for example, different species building their brain, each brain is a little bit different. So the brain of a lizard is very different from that of a chicken, from that of one of us and so on and so forth and still is a brain, but it was built differently starting from stem cells that pretty much had the same potential, but in the end, evolution builds different brains in different species because that serves in a way the purpose of that species and the wellbeing of that organism. And so there are many possibilities, but then there is a way and you were talking about a code. Nobody knows what the entire code of development is. Of course we don't. We know bits and pieces of very specific aspects of development of the brain, what genes are involved to make certain cell types, how those two cells interact to make the next level structure that we might know, but the entirety of it, how it's so well controlled, it's really mind blowing. So in the first two months in the embryo or whatever, the first few weeks, months, so yeah, the building blocks are constructed. The actual, the different regions of the brain, I guess in the nervous system. Well, this continues way longer than just the first few months. So over the very first few months, you build a lot of the cells, but then there is continuous building of new cell types all the way through birth. And then even postnatally, I don't know if you've ever heard of myelin. Myelin is this sort of insulation that is built around the cables of the neurons so that the electricity can go really fast from. The axons, I guess they're called. The axons, they're called axons, exactly. And so as human beings, we myelinate our cells postnatally.
A kid, a six year old kid has barely started the process of making the mature oligodendrocytes, which are the cells that then eventually will wrap the axons into myelin. And this will continue, believe it or not, until we are about 25, 30 years old. So there is a continuous process of maturation and tweaking and additions, and also in response to what we do. I remember taking AP Biology in high school, and in the textbook, it said that, I'm going by memory here, that scientists disagree on the purpose of myelin in the brain. Is that totally wrong? So like, I guess it speeds up the, okay, I might be wrong here, but I guess it speeds up the electricity traveling down the axon or something. Yeah, so that's the most sort of canonical, and definitely that's the case. So you have to imagine an axon, and you can think about it as a cable of some type with electricity going through. And what myelin does, by insulating the outside, I should say there are tracts of myelin and pieces of axons that are naked without myelin. And so by having the insulation, the electricity, instead of going straight through the cable, it will jump over a piece of myelin, right, to the next naked little piece and jump again. And therefore, that's the idea that you go faster. And it was always thought that in order to build a big brain, a big nervous system, in order to have a nervous system that can do very complex type of things, then you need a lot of myelin because you wanna go fast with this information from point A to point B. Well, a few years ago, maybe five years ago or so, we discovered that some of the most evolved, which means the newest type of neurons that we have as nonhuman primates, as human beings in the top of our cerebral cortex, which should be the neurons that do some of the most complex things that we do, well, those have axons that have very little myelin. Wow. And they have very interesting ways in which they put this myelin on their axons. You know, a little piece here, then a long track with no myelin, another chunk there. And some don't have myelin at all. So now, you have to explain where we're going with evolution. And if you think about it, perhaps as an electrical engineer, when I looked at it, I initially thought, and I'm a developmental neurobiologist, I thought maybe this is what we see now, but if we give evolution another few million years, we'll see a lot of myelin on these neurons too. But I actually think now that that's instead the future of the brain. Less myelin. Less myelin might allow for more flexibility on what you do with your axons, and therefore more complicated and unpredictable type of functions, which is also a bit mind blowing. So it seems like it's controlling the timing of the signal. So they're in the timing, you can encode a lot of information. Yeah. And so the brain. The timing, the chemistry of that little piece of axon, perhaps it's a dynamic process where the myelin can move. Now you see how many layers of variability you can add, and that's actually really good if you're trying to come up with a new function or a new capability or something unpredictable in a way. So we're gonna jump around a little bit, but the old question of how much is nature and how much is nurture? In terms of this incredible thing after the development is over, we seem to be kind of somewhat smart, intelligent, cognition, consciousness, all of these things are just incredible, ability to reason and so on emerge. 
In your sense, how much is in the hardware, in the nature and how much is in the nurture is learned through with our parents through interacting with the environment and so on. It's really both, right? If you think about it. So we are born with a brain as babies that has most of its cells and most of its structures. And that will take a few years to grow, to add more, to be better. But really then we have this 20 years of interacting with the environment around us. And so what that brain that was so perfectly built or imperfectly built due to our genetic cues will then be used to incorporate the environment in its further maturation and development. And so your experiences do shape your brain. I mean, we know that like if you and I may have had a different childhood or a different, we have been going to different schools, we have been learning different things and our brain is a little bit different because of that. We behave differently because of that. And so especially postnatally experience is extremely important. We are born with a plastic brain. What that means is a brain that is able to change in response to stimuli that can be sensory. So perhaps some of the most illuminating studies that were done were studies in which the sensory organs were not working, right? Like if you are born with eyes that don't work, then your very brain, that piece of the brain that normally would process vision, the visual cortex, develops postnatally differently and it might be used to do something different, right? So that's the most extreme. The plasticity of the brain, I guess, is the magic hardware that it, and then it's flexibility in all forms is what enables the learning postnatally. Can you talk about organoids? What are they? And how can you use them to help us understand the brain and the development of the brain? This is very, very important. So the first thing I'd like to say, please skip this in the video. The first thing I'd like to say is that an organoid, a brain organoid is not the same as a brain. Okay? It's a fundamental distinction. It's a system, a cellular system that one can develop in the culture dish, starting from stem cells that will mimic some aspects of the development of the brain, but not all of it. They are very small, maximum, they become about four to five millimeters in diameters. They are much simpler than our brain, of course, but yet they are the only system where we can literally watch a process of human brain development unfold. And by watch, I mean, study it. Remember when I told you that we can't understand everything about development in our own brain by studying a mouse? Well, we can't study the actual process of development of the human brain because it all happens in utero. So we will never have access to that process ever. And therefore, this is our next best thing. Like a bunch of stem cells that can be coaxed into starting a process of neural tube formation. Remember that tube that is made by the embryo early on. And from there, a lot of the cell types that are present within the brain, and you can simply watch it and study, but you can also think about diseases where development of the brain does not proceed normally, right, properly. Think about neurodevelopmental diseases. There are many, many different types. Think about autism spectrum disorders. There are also many different types of autism. 
So there you could take a stem cell, which really means either a sample of blood or a sample of skin from the patient, make a stem cell, and then with that stem cell, watch a process of formation of a brain organ or a brain organoid of that person with that genetics, with that genetic code in it. And you can ask, what is this genetic code doing to some aspects of development of the brain? And for the first time, you may come to solutions like what cells are involved in autism, right? So many questions around this. So if you take this human stem cell for that particular person with that genetic code, how, and you try to build an organoid, how often will it look similar? What's the, yeah, so. The reproducibility? Yes, or how much variability is the flip side of that? Yeah, so there is much more variability in building organoids than there is in building brain. It's really true that the majority of us, when we are born as babies, our brains look a lot like each other. This is the magic that the embryo does, where it builds a brain in the context of a body and there is very little variability there. There is disease, of course, but in general, a little variability. When you build an organoid, we don't have the full code for how this is done. And so in part, the organoid somewhat builds itself because there are some structures of the brain that the cells know how to make. And another part comes from the investigator, the scientist adding to the media factors that we know in the mouse, for example, would foster a certain step of development, but it's very limited. And so as a result, the kind of product you get in the end is much more reductionist, is much more simple than what you get in vivo. It mimics early events of development as of today, and it doesn't build very complex type of anatomy and structure does not as of today, which happens instead in vivo. And also the variability that you see, one organ to the next tends to be higher than when you compare an embryo to the next. So, okay, then the next question is, how hard and maybe another flip side of that expensive is it to go from one stem cell to an organoid? How many can you build in like, because it sounds very complicated. It's work definitely, and it's money definitely, but you can really grow a very high number of these organoids, can go perhaps, I told you the maximum, they become about five millimeters in diameter. So this is about the size of a tiny, tiny raisin, or perhaps the seed of an apple. And so you can grow 50 to 100 of those inside one big bioreactors, which are these flasks where the media provides nutrients for the organoids. So the problem is not to grow more or less of them. It's really to figure out how to grow them in a way that they are more and more reproducible, for example, organoid to organoid, so they can be used to study a biological process. Because if you have too much variability, then you never know if what you see is just an exception or really the rule. So what does an organoid look like? Are there different neurons already emerging? Is there, well, first, can you tell me what kind of neurons are there? Yes. Are they sort of all the same? Are they not all the same? How much do we understand? And how much of that variance, if any, can exist in organoids? Yes. So you could grow, I told you that the brain has different parts. So the cerebral cortex is on the top part of the brain, but there is another region called the striatum that is below the cortex and so on and so forth. 
All of these regions have different types of cells in the actual brain, okay? And so scientists have been able to grow organoids that may mimic some aspects of development of these different regions of the brain. And so we are very interested in the cerebral cortex. That's the coolest part, right? Very cool. I agree with you. We wouldn't be here talking if we didn't have a cerebral cortex. It's also, I like to think, the part of the brain that really truly makes us human, the most evolved in recent evolution. And so in the attempt to make the cerebral cortex and by figuring out a way to have these organoids continue to grow and develop for extended periods of times, much like it happens in the real embryo, months and months in culture, then you can see that many different types of neurons of the cortex appear. And at some point, also the astrocytes, so the glia cells of the cerebral cortex also appear. What are these astrocytes? The astrocytes are not neurons, so they're not nerve cells, but they play very important roles. One important role is to support the neuron. But of course, they have much more active type of roles. They're very important, for example, to make the synapses, which are the point of contact and communication between two neurons. So all that chemistry fun happens in the synapses, happens because of these cells? Are they the medium in which? It happens because of the interactions, happens because you are making the cells and they have certain properties, including the ability to make neurotransmitters, which are the chemicals that are secreted to the synapses, including the ability of making these axons grow with their growth cones and so on and so forth. And then you have other cells around it that release chemicals or touch the neurons or interact with them in different ways to really foster this perfect process, in this case of synaptogenesis. And this does happen within organoids. So the mechanical and the chemical stuff happens. The connectivity between neurons, this in a way is not surprising because scientists have been culturing neurons forever. And when you take a neuron, even a very young one, and you culture it, eventually finds another cell or another neuron to talk to, it will form a synapse. Are we talking about mice neurons? Are we talking about human neurons? It doesn't matter, both. So you can culture a neuron, like a single neuron and give it a little friend and it starts interacting? Yes, so neurons are able to, it sounds, it's more simple than what it may sound to you. Neurons have molecular properties and structural properties that allow them to really communicate with other cells. And so if you put not one neuron, but if you put several neurons together, chances are that they will form synapses with each other. Okay, great. So an organoid is not a brain. No. But there's some, it's able to, especially what you're talking about, mimics some properties of the cerebral cortex, for example. So what can you understand about the brain by studying an organoid of a cerebral cortex? I can literally study all this incredible diversity of cell type, all these many, many different classes of cells, how are they made? How do they look like? What do they need to be made properly? And what goes wrong if now the genetics of that stem cell that I used to make the organoid came from a patient with a neurodevelopmental disease? Can I actually watch for the very first time what may have gone wrong years before in this kid when its own brain was being made? 
Think about that loop. In a way, it's a little tiny rudimentary window into the past, into the time when that brain in a kid that had this neurodevelopmental disease was being made. And I think that's unbelievably powerful because today we have no idea of what cell types, we barely know what brain regions are affected in these diseases. Now we have an experimental system that we can study in the lab. And we can ask, what are the cells affected? When during development things went wrong? What are the molecules among the many, many different molecules that control brain development? Which ones are the ones that really messed up here and we want perhaps to fix? And what is really the final product? Is it a less strong kind of circuit and brain? Is it a brain that lacks a cell type? What is it? Because then we can think about treatment and care for these patients that is informed rather than just based on current diagnostics. So how hard is it to detect through the developmental process? It's a super exciting tool to see how different conditions develop. How hard is it to detect that, wait a minute, this is abnormal development. Yeah. How much signal is there? How much of it is it a mess? Because things can go wrong at multiple levels, right? You could have a cell that is born and built but then doesn't work properly or a cell that is not even born or a cell that doesn't interact with other cells differently and so on and so forth. So today we have technology that we did not have even five years ago that allows us to look for example at the molecular picture of a cell, of a single cell in a sea of cells with high precision. And so that molecular information where you compare many, many single cells for the genes that they produce between a control individual and an individual with a neurodevelopmental disease, that may tell you what is different molecularly. Or you could see that some cells are not even made, for example, or that the process of maturation of the cells may be wrong. There are many different levels here and we can study the cells at the molecular level but also we can use the organoids to ask questions about the properties of the neurons, the functional properties, how they communicate with each other, how they respond to a stimulus and so on and so forth. And we may get an abnormalities there, right? Detect those. So how early is this work in the, maybe in the history of science? So, I mean like, so if you were to, if you and I time travel a thousand years into the future, organoids seem to be, maybe I'm romanticizing the notion but you're building not a brain but something that has properties of a brain. So it feels like you might be getting close to, in the building process, to build this to understand. So how far are we in this understanding process of development? A thousand years from now, it's a long time from now. So if this planet is still gonna be here a thousand years from now. So, I mean, if, you know, like they write a book, obviously there'll be a chapter about you. That's right, that science fiction book, today. Yeah, today, about, I mean, I guess where we really understood very little about the brain a century ago, I was a big fan in high school of reading Freud and so on, still am of psychiatry. I would say we still understand very little about the functional aspect of just, but how in the history of understanding the biology of the brain, the development, how far are we along? It's a very good question. And so this is just, of course, my opinion. 
I think that we did not have technology even 10 years ago or certainly not 20 years ago to even think about experimentally investigating the development of the human brain. So we've done a lot of work in science to study the brain or many other organisms. Now we have some technologies which I'll spell out that allow us to actually look at the real thing and look at the brain, at the human brain. So what are these technologies? There has been huge progress in stem cell biology. The moment someone figured out how to turn a skin cell into an embryonic stem cell, basically, and that how that embryonic stem cell could begin a process of development again to, for example, make a brain, there was a huge advance, and in fact, there was a Nobel Prize for that. That started the field, really, of using stem cells to build organs. Now we can build on all the knowledge of development that we build over the many, many, many years to say, how do we make the stem cells now make more and more complex aspects of development of the human brain? So this field is young, the field of brain organoids, but it's moving faster. And it's moving fast in a very serious way that is rooted in labs with the right ethical framework and really building on solid science for what reality is and what is not. But it will go faster and it will be more and more powerful. We also have technology that allows us to basically study the properties of single cells across many, many millions of single cells, which we didn't have perhaps five years ago. So now with that, even an organoid that has millions of cells can be profiled in a way, looked at with very, very high resolution, the single cell level to really understand what is going on. And you could do it in multiple stages of development and you can build your hypothesis and so on and so forth. So it's not gonna be a thousand years. It's gonna be a shorter amount of time. And I see this as sort of an exponential growth of this field enabled by these technologies that we didn't have before. And so we're gonna see something transformative that we didn't see at all in the prior thousand years. So I apologize for the crazy sci fi questions, but the developmental process is fascinating to watch and study, but how far are we away from and maybe how difficult is it to build not just an organoid, but a human brain from a stem cell? Yeah, first of all, that's not the goal for the majority of the serious scientists that work on this because you don't have to build the whole human brain to make this model useful for understanding how the brain develops or understanding disease. You don't have to build the whole thing. So let me just comment on this, fascinating. It shows to me the difference between you and I as you're actually trying to understand the beauty of the human brain and to use it to really help thousands or millions of people with disease and so on, right? From an artificial intelligence perspective, we're trying to build systems that we can put in robots and try to create systems that have echoes of the intelligence about reasoning about the world, navigating the world. It's different objectives, I think. Yeah, that's very much science fiction. Science fiction, but we operate in science fiction a little bit. So on that point of building a brain, even though that is not the focus or interest, perhaps, of the community, how difficult is it? Is it truly science fiction at this point? 
I think the field will progress, like I said, and that the system will be more and more complex in a way, right? But there are properties that emerge from the human brain that have to do with the mind, that may have to do with consciousness, that may have to do with intelligence or whatever, that we really don't understand even how they can emerge from an actual, real brain, let alone measure or study in an organoid. So I think that this field, many, many years from now, may lead to the building of better neural circuits that really are built out of understanding of how this process really works. And it's hard to predict how complex this really will be. I really don't think, and it makes me laugh, really, that we're that far from building the human brain. But you're gonna be building something that is always a bad version of it, but that may have really powerful properties and might be able to respond to stimuli or be used in certain contexts. And this is why I really think that there is no other way to do this science, but within the right ethical framework, because where you're going with this is also, we can talk about science fiction and write that book, and we could today, but this work happens in a specific ethical framework that we don't decide just as scientists, but also as a society. So the ethical framework here is a fascinating one, is a complicated one. Yes. Do you have a sense, a grasp, of how we think ethically about building organoids from human stem cells to understand the brain? It seems like a tool for helping potentially millions of people cure diseases or at least start the cure by understanding it. But is there more, are there gray areas that we have to think about ethically? Absolutely. We must think about that. Every discussion about the ethics of this needs to be based on actual data from the models that we have today and from the ones that we will have tomorrow. So it's a continuous conversation. It's not something that you decide now. Today, there is no issue really. Very simple models that clearly can help you in many ways without much to think about, but tomorrow we need to have another conversation and so on and so forth. And so the way we do this is to actually really bring together constantly a group of people that are not only scientists, but also bioethicists, lawyers, philosophers, psychiatrists and so on, psychologists and so on and so forth to decide as a society really what we should and what we should not do. So that's the way to think about the ethics. Now, I also think, though, that as a scientist, I have a moral responsibility. So if you think about how transformative it could be for understanding and curing a neuropsychiatric disease, to be able to actually watch and study and treat with drugs the very brain of the patient that you are trying to study. How transformative at this moment in time this could be. We couldn't do it five years ago, we could do it now, right? If we didn't do it. Taking a stem cell of a particular patient. Patient, and making an organoid, which, as simple and as different from the human brain as it is, is still his or her process of brain development, with his or her genetics. And we could understand perhaps what is going wrong. Perhaps we could use it as a platform, as a cellular platform, to screen for drugs, to fix a process and so on and so forth, right? So we could do it now, we couldn't do it five years ago. Should we not do it? What is the downside of doing it? I don't see a downside at this very moment.
If we invited a lot of people, I'm sure there would be somebody who would argue against it. What would be the devil's advocate argument? Yeah, yeah. So it's exactly perhaps what you alluded at with your question, that you are enabling some process of formation of the brain that could be misused at some point, or that could be showing properties that ethically we don't wanna see in a tissue. So today, I repeat, today, this is not an issue. And so you just gain dramatically from the science without, because the system is so simple and so different in a way from the actual brain. But because it is the brain, we have an obligation to really consider all of this, right? And again, it's a balanced conversation where we should put disease and betterment of humanity also on that plate. What do you think, at least historically, there was some politicization, politicization of embryonic stem cells, a stem cell research. Do you still see that out there? Is that still a force that we have to think about, especially in this larger discourse that we're having about the role of science in at least American society? Yeah, this is a very good question. It's very, very important. I see a very central role for scientists to inform decisions about what we should or should not do in society. And this is because the scientists have the firsthand look and understanding of really the work that they are doing. And again, this varies depending on what we're talking about here. So now we're talking about brain organoids. I think that the scientists need to be part of that conversation about what is, will be allowed in the future or not allowed in the future to do with the system. And I think that is very, very important because they bring the reality of data to the conversation. And so they should have a voice. So data should have a voice. Data needs to have a voice. Because in not only data, we should also be good at communicating with non scientists, the data. So there has been often time, there is a lot of discussion and, you know, excitement and fights about certain topics just because of the way they are described. I'll give you an example. If I called the same cellular system we just talked about a brain organoid, or if I called it a human mini brain, your reaction is gonna be very different to this. And so the way the systems are described, I mean, we and journalists alike need to be a bit careful that this debate is a real debate and informed by real data. That's all I'm asking. And yeah, the language matters here. So I work on autonomous vehicles and there the use of language could drastically change the interpretation and the way people feel about what is the right way to proceed forward. You are, as I've seen from a presentation, you're a parent. I saw you show a couple of pictures of your son. Is it just the one? Two. Two. Son and a daughter. Son and a daughter. So what have you learned from the human brain by raising two of them? More than I could ever learn in the lab. What have I learned? I've learned that children really have these amazing plastic minds, right? That we have a responsibility to, you know, foster their growth in good, healthy ways. That keep them curious, that keeps them adventurous, that doesn't raise them in fear of things. But also respecting who they are, which is in part, you know, coming from the genetics we talked about. My children are very different from each other despite the fact that they're the product of the same two parents. 
I also learned that what you do for them comes back to you. Like, you know, if you're a good parent, you're gonna, most of the time, have, you know, perhaps decent kids at the end. So what do you think, just a quick comment, what do you think is the source of that difference? That's often the surprising thing for parents, that they can't believe it: our kids, oh, they're so different, yet they came from the same parents. Well, they are genetically different, even though they came from the same two parents, because the mixing of gametes, you know, we know this from genetics, creates every time a genetically different individual, which will have a specific mix of genes that is a different mix every time from the two parents. And so they're not twins. They are genetically different. Even just that little bit of variation, because you said really from a biological perspective, the brains look pretty similar. Well, so let me clarify that. So the genetics you have, the genes that you have, that play that beautiful orchestrated symphony of development, different genes will play it slightly differently. It's like playing the same piece of music, but with a different orchestra and a different director. The music will not come out the same. It will still be a piece by the same author, but it will come out differently if it's played by the high school orchestra instead of La Scala in Milan. And so you are born superficially with the same brain. It has the same cell types, similar patterns of connectivity, but the properties of the cells and how the cells will then react to the environment as you experience your world will also be shaped by who genetically you are. Speaking just as a parent, this is not something that comes from my work. I think you can tell at birth that these kids are different, that they have a different personality in a way, right? So both are needed, the genetics, as well as the nurturing afterwards. So you are one human with a brain, sort of living through the whole mess of it, the human condition, full of love, maybe fear, ultimately mortal. How has studying the brain changed the way you see yourself? When you look in the mirror, when you think about your life, the fears, the love, when you see your own life, your own mortality. Yeah, that's a very good question. It's almost impossible for me sometimes to dissociate some of the things we do, or some of the things that other people do, from, oh, that's because that part of the brain is working in a certain way. Or thinking about a teenager, going through teenage years and being at times funny in the way they think. And it's impossible for me not to think it's because they're going through this period of time called critical periods of plasticity, where their synapses are being eliminated here and there, and they're just confused. And so from that comes perhaps a different take on that behavior, or maybe I can justify it scientifically in some sort of way. I also look at humanity in general, and I am amazed by what we can do and the kind of ideas that we can come up with. And I cannot stop thinking about how the brain is continuing to evolve. I don't know if you do this, but I think about the next brain sometimes. Where are we going with this? Like, what are the features of this brain that evolution is really playing with to get us, in the future, the new brain? It's not over, right? It's a work in progress. So let me just make a quick comment on that.
Do you think there's a lot of fascination and hope for artificial intelligence, of creating artificial brains? You said the next brain. When you imagine, over a period of a thousand years, the evolution of the human brain, do you sometimes, envisioning that future, see an artificial one, artificial intelligence, as it is hoped by many, not hoped, thought by many people, would be actually the next evolutionary step in the development of humans? Yeah, I think in a way that will happen, right? It's almost like a part of the way we evolve. We evolve in the world that we created, that we interact with, that shapes us as we grow up and so on and so forth. Sometimes I think about something that may sound silly, but think about the use of cell phones. Part of me thinks that somehow in the brain, there will be a region of the cortex that is attuned to that tool. And this comes from a lot of studies in model organisms where really the cortex especially adapts to the kind of things you have to do. So if we need to move our fingers in a very specific way, we have a part of our cortex that allows us to do this kind of very precise movement. An owl that has to see very, very far away with big eyes, the visual cortex, very big. The brain attunes to your environment. So the brain will attune to the technologies that we will have and will be shaped by it. So the cortex very well may be. Will be shaped by it. And artificial intelligence, it may merge with it, it may envelop it and adjust. Even if it's not a merge of the kind of, oh, let's have a synthetic element together with a biological one. The very space around us, the fact, for example, think about when we put on some goggles of virtual reality and we physically are surfing the ocean, right? Like I've done it. And you have all these emotions that come to you. Your brain placed you in that reality. And it was able to do it like that just by putting the goggles on. It didn't take thousands of years of adapting to this. The brain is plastic. So it adapts to new technology. So you could do it from the outside by simply hijacking some sensory capacities that we have. So clearly over recent evolution, the cerebral cortex has been the part of the brain that has seen the most evolution. So we have put a lot of chips on evolving this specific part of the brain. And the evolution of the cortex is plasticity. It's this ability to change in response to things. So yes, they will integrate, whether we want it or not. Well, there's no better way to end it, Paola. Thank you so much for talking today. You're very welcome. This is very exciting.
Paola Arlotta: Brain Development from Stem Cell to Organoid | Lex Fridman Podcast #32
The following is a conversation with Keoki Jackson. He's the CTO of Lockheed Martin, a company that through its long history has created some of the most incredible engineering marvels human beings have ever built, including planes that fly fast and undetected, defense systems that intercept nuclear threats that can take the lives of millions, and systems that venture out into space, the moon, Mars, and beyond. And these days, more and more, artificial intelligence has an assistive role to play in these systems. I've read several books in preparation for this conversation. It is a difficult one, because in part Lockheed Martin builds military systems that operate in a complicated world that often does not have easy solutions in the gray area between good and evil. I hope one day this world will rid itself of war in all its forms. But the path to achieving that in a world that does have evil is not obvious. What is obvious is that good engineering and artificial intelligence research has a role to play on the side of good. Lockheed Martin and the rest of our community are hard at work at exactly this task. We talk about these and other important topics in this conversation. Also, most certainly, both Keoki and I have a passion for space, us humans venturing out toward the stars. We talk about this exciting future as well. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. And now, here's my conversation with Keoki Jackson. I read several books on Lockheed Martin recently. My favorite in particular is by Ben Rich, called Skunk Works, a personal memoir. It gets a little edgy at times. But from that, I was reminded that the engineers at Lockheed Martin have created some of the most incredible engineering marvels human beings have ever built throughout the 20th century and the 21st. Do you remember a particular project or system at Lockheed, or before that at the Space Shuttle Columbia, that you were just in awe at the fact that us humans could create something like this? You know, that's a great question. There's a lot of things that I could draw on there. When you look at the Skunk Works and Ben Rich's book in particular, of course, it starts off with basically the start of the jet age and the P 80. And I had the opportunity to sit next to one of the Apollo astronauts, Charlie Duke, recently at dinner. And I said, hey, what's your favorite aircraft? And he said, well, it was by far the F 104 Starfighter, which was another aircraft that came out of Lockheed there. It was the first Mach 2 jet fighter aircraft. They called it the missile with a man in it. And so those are the kinds of things I grew up hearing stories about. You know, of course, the SR 71 is incomparable as kind of the epitome of speed, altitude, and just the coolest looking aircraft ever. So there's a reconnaissance, that's a plane. That's a, yeah, intelligence, surveillance and reconnaissance aircraft that was designed to be able to outrun, basically go faster than any air defense system. But, you know, I'll tell you, I'm a space junkie. That's why I came to MIT. That's really what took me ultimately to Lockheed Martin. And I grew up, and so Lockheed Martin, for example, has been essentially at the heart of every planetary mission, like all the Mars missions we've had a part in. And we've talked a lot about the 50th anniversary of Apollo here in the last couple of weeks, right?
But remember, 1976, July 20th, again, National Space Days, the landing of the Viking lander on the surface of Mars, just a huge accomplishment. And when I was a young engineer at Lockheed Martin, I got to meet engineers who had designed, you know, various pieces of that mission as well. So that's what I grew up on is these planetary missions, the start of the space shuttle era, and ultimately had the opportunity to see Lockheed Martin's part. Lockheed Martin's part, and we can maybe talk about some of these here, but Lockheed Martin's part in all of these space journeys over the years. Do you dream, and I apologize for getting philosophical at times, or sentimental. I do romanticize the notion of space exploration. So do you dream of the day when us humans colonize another planet like Mars, or a man, a woman, a human being steps on Mars? Absolutely, and that's a personal dream of mine. I haven't given up yet on my own opportunity to fly into space, but as, you know, from the Lockheed Martin perspective, this is something that we're working towards every day. And of course, you know, we're building the Orion spacecraft, which is the most sophisticated human rated spacecraft ever built. And it's really designed for these deep space journeys, you know, starting with the moon, but ultimately going to Mars and being the platform, you know, from a design perspective, we call the Mars base camp to be able to take humans to the surface, and then after a mission of a couple of weeks, bring them back up safely. And so that is something I want to see happen during my time at Lockheed Martin. So I'm pretty excited about that. And I think, you know, once we prove that's possible, you know, colonization might be a little bit further out, but it's something that I'd hope to see. So maybe you can give a little bit of an overview of, so Lockheed Martin has partnered with a few years ago with Boeing to work with the DOD and NASA to build launch systems and rockets with the ULA. What's beyond that? What's Lockheed's mission timeline, long term dream in terms of space? You mentioned the moon, I've heard you talk about asteroids. As Mars, what's the timeline? What's the engineering challenges and what's the dream long term? Yeah, I think the dream long term is to have a permanent presence in space beyond low earth orbit, ultimately with a long term presence on the moon and then to the planets, to Mars. And... Sorry to interrupt on that. So long term presence means... Sustained and sustainable presence in an economy, a space economy that really goes alongside that. With human beings and being able to launch perhaps from those, so like hop? You know, there's a lot of energy that goes in those hops, right? So I think the first step is being able to get there and to be able to establish sustained bases, right? And build from there. And a lot of that means getting, as you know, things like the cost of launch down and you mentioned United Launch Alliance. And so I don't wanna speak for ULA, but obviously they're working really hard to on their next generation of launch vehicles to maintain that incredible mission success record that ULA has, but ultimately continue to drive down the cost and make the flexibility, the speed and the access ever greater. So what's the missions that are in the horizon that you could talk to? Is there a hope to get to the moon? Absolutely, absolutely. I mean, I think you know this, or you may know this, there's a lot of ways to accomplish some of these goals. 
And so that's a lot of what's in discussion today. But ultimately the goal is to be able to establish a base essentially in cislunar space that would allow for ready transfer from orbit to the lunar surface and back again. And so that's sort of that near term, I say near term in the next decade or so vision, starting off with a stated objective by this administration to get back to the moon in the 2024, 2025 timeframe, which is right around the corner here. How big of an engineering challenge is that? I think the big challenge is not so much to go, but to stay, right? And so we demonstrated in the 60s that you could send somebody up, do a couple of days of mission and bring them home again successfully. Now we're talking about doing that, I'd say more to, I don't wanna say an industrial scale, but a sustained scale, right? So permanent habitation, regular reuse of vehicles, the infrastructure to get things like fuel, air, consumables, replacement parts, all the things that you need to sustain that kind of infrastructure. So those are certainly engineering challenges, there are budgetary challenges, and those are all things that we're gonna have to work through. The other thing, and I shouldn't, I don't wanna minimize this, I mean, I'm excited about human exploration, but the reality is our technology and where we've come over the last 40 years essentially has changed what we can do with robotic exploration as well. And to me, it's incredibly thrilling, and this seems like old news now, but the fact that we have rovers driving around the surface of Mars and sending back data is just incredible. The fact that we have satellites in orbit around Mars that are collecting weather, they're looking at the terrain, they're mapping, all of these kinds of things on a continuous basis, that's incredible. And the fact that you got the time lag, of course, going to the planets, but you can effectively have virtual human presence there in a way that we have never been able to do before. And now with the advent of even greater processing power, better AI systems, better cognitive systems and decision systems, you put that together with the human piece and we've really opened up the solar system in a whole different way. And I'll give you an example, we've got OSIRIS REx, which is a mission to the asteroid Bennu. So the spacecraft is out there right now on basically a year mapping activity to map the entire surface of that asteroid in great detail. You know, all autonomously piloted, right? But the idea then that, and this is not too far away, it's gonna go in, it's got a sort of fancy vacuum cleaner with a bucket, it's gonna collect the sample off the asteroid and then send it back here to Earth. And so, you know, we have gone from sort of those tentative steps in the 70s, you know, early landings, video of the solar system to now we've sent spacecraft to Pluto, we have gone to comets and brought and intercepted comets, we've brought stardust, you know, material back. So that's, we've gone far and there's incredible opportunity to go even farther. So it seems quite crazy that this is even possible, that can you talk a little bit about what it means to orbit an asteroid and with a bucket to try to pick up some soil samples? Yeah, so part of it is just kind of the, you know, these are the same kinds of techniques we use here on Earth for high speed, high accuracy imagery, stitching these scenes together and creating essentially high accuracy world maps, right? 
And so that's what we're doing, obviously, on a much smaller scale with an asteroid. But the other thing that's really interesting, you put together sort of that neat control and, you know, data and imagery problem. But the stories around how we designed the collection, I mean, as essentially, you know, this is the sort of the human ingenuity element, right? That, you know, essentially had an engineer who had a, one day he's like, oh, starts messing around with parts, vacuum cleaner, bucket, you know, maybe we could do something like this. And that was what led to what we call the pogo stick collection, right? Where basically a thing comes down, it's only there for seconds, does that collection, grabs the, essentially blows the regolith material into the collection hopper and off it goes. It doesn't really land almost. It's a very short landing. Wow, that's incredible. So what is, in those, we talked a little bit more about space, what's the role of the human in all of this? What are the challenges? What are the opportunities for humans as they pilot these vehicles in space? And for humans that may step foot on either the moon or Mars? Yeah, it's a great question because, you know, I just have been extolling the virtues of robotic and, you know, rovers, autonomous systems, and those absolutely have a role. I think the thing that we don't know how to replace today is the ability to adapt on the fly to new information. And I believe that will come, but we're not there yet. There's a ways to go. And so, you know, you think back to Apollo 13 and the ingenuity of the folks on the ground and on the spacecraft essentially cobbled together a way to get the carbon dioxide scrubbers to work. Those are the kinds of things that ultimately, you know, and I'd say not just from dealing with anomalies, but, you know, dealing with new information. You see something and rather than waiting 20 minutes or half an hour, an hour to try to get information back and forth, but be able to essentially revector on the fly, collect, you know, different samples, take a different approach, choose different areas to explore. Those are the kinds of things that human presence enables that is still a ways ahead of us on the AI side. Yeah, there's some interesting stuff we'll talk about on the teaming side here on Earth. That's pretty cool to explore. And in space, let's not leave the space piece out. So what does teaming, what does AI and humans working together in space look like? Yeah, one of the things we're working on is a system called Maya, which is, you think of it, so it's an AI assistant. In space. In space, exactly. And you think of it as the Alexa in space, right? But this goes hand in hand with a lot of other developments. And so today's world, everything is essentially model based, model based systems engineering to the actual digital tapestry that goes through the design, the build, the manufacture, the testing, and ultimately the sustainment of these system. And so our vision is really that, you know, when our astronauts are there around Mars, you're gonna have that entire digital library of the spacecraft, of its operations, all the test data, all the test data and flight data from previous missions to be able to look and see if there are anomalous conditions and tell the humans and potentially deal with that before it becomes a bad situation and help the astronauts work through those kinds of things. 
And it's not just, you know, dealing with problems as they come up, but also offering up opportunities for additional exploration capability, for example. So that's the vision is that, you know, these are gonna take the best of the human to respond to changing circumstances and rely on the best of AI capabilities to monitor these, you know, this almost infinite number of data points and correlations of data points that humans frankly aren't that good at. So how do you develop systems in space like this, whether it's Alexa in space or in general, any kind of control systems, any kind of intelligent systems when you can't really test stuff too much out in space? It's very expensive to test stuff. So how do you develop such systems? Yeah, that's the beauty of this digital twin, if you will. And of course, with Lockheed Martin, we've over the past, you know, five plus decades been refining our knowledge of the space environment, of how materials behave, dynamics, the controls, the radiation environments, all of these kinds of things. So we're able to create very sophisticated models. They're not perfect, but they're very good. And so you can actually do a lot. I spent part of my career, you know, simulating communication spacecraft, you know, missile warning spacecraft, GPS spacecraft in all kinds of scenarios and all kinds of environments. So this is really just taking that to the next level. The interesting thing is that now you're bringing into that loop a system depending on how it's developed that may be non deterministic, it may be learning as it goes. And in fact, we anticipate that it will be learning as it goes. And so that brings a whole new level of interest, I guess, into how do you do verification and validation of these non deterministic learning systems in scenarios that may go out of the bounds or the envelope that you have initially designed them to. So had this system and its intelligence has the same complexity, some of the same complexity human does and learns over time, it's unpredictable in certain kinds of ways in the, so you still, you also have to model that when you're thinking about it. So in your thoughts, it's possible to model the majority of situations, the important aspects of situations here on earth and in space enough to test stuff? Yeah, this is really an active area of research and we're actually funding university research in a variety of places, including MIT. This is in the realm of trust and verification and validation of I'd say autonomous systems in general and then as a subset of that autonomous systems that incorporate artificial intelligence capabilities. And this is not an easy problem. We're working with startup companies, we've got internal R&D, but our conviction is that autonomy and more and more AI enabled autonomy is gonna be in everything that Lockheed Martin develops and fields and it's gonna be retrofitting it. Autonomy and AI are gonna be retrofit into existing systems, they're gonna be part of the design for all of our future systems. And so maybe I should take a step back and say the way we define autonomy. So we talk about autonomy essentially a system that composes, selects and then executes decisions with varying levels of human intervention. And so you could think of no autonomy. So this is essentially the human doing the task. You can think of effectively partial autonomy where the human is in the loop. So making decisions in every case about what the autonomous system can do. Either in the cockpit or remotely. 
Or remotely, exactly, but still in that control loop. And then there's what you'd call supervisory autonomy. So the autonomous system is doing most of the work, the human can intervene to stop it or to change the direction. And then ultimately full autonomy where the human is off the loop altogether. And for different types of missions wanna have different levels of autonomy. So now take that spectrum and this conviction that autonomy and more and more AI are in everything that we develop. The kinds of things that Lockheed Martin does, a lot of times are safety of life critical kinds of missions. You think about aircraft, for example. And so we require and our customers require an extremely high level of confidence. One, that we're gonna protect life. Two, that these systems will behave in ways that their operators can understand. And so this gets into that whole field. Again, being able to verify and validate that the systems have been and that they will operate the way they're designed and the way they're expected. And furthermore, that they will do that in ways that can be explained and understood. And that is an extremely difficult challenge. Yeah, so here's a difficult question. I don't mean to bring this up, but I think it's a good case study that people are familiar with the Boeing 737 Max commercial airplane has had two recent crashes where their flight control software system failed and it's software. So I don't mean to speak about Boeing, but broadly speaking, we have this in the autonomous vehicle space too, semi autonomous. We have millions of lines of code software making decisions. There is a little bit of a clash of cultures because software engineers don't have the same culture of safety often that people who build systems like at Lockheed Martin do where it has to be exceptionally safe, you have to test this on. So how do we get this right when software is making so many decisions? Yeah, and there's a lot of things that have to happen. And by and large, I think it starts with the culture, which is not necessarily something that A, is taught in school or B is something that would come, depending on what kind of software you're developing, it may not be relevant, right? If you're targeting ads or something like that. So, and by and large, I'd say not just Lockheed Martin, but certainly the aerospace industry as a whole has developed a culture that does focus on safety, safety of life, operational safety, mission success. But as you note, these systems have gotten incredibly complex. And so they're to the point where it's almost impossible, you know, state spaces become so huge that it's impossible to, or very difficult to do a systematic verification across the entire set of potential ways that an aircraft could be flown, all the conditions that could happen, all the potential failure scenarios. Now, maybe that's soluble one day, maybe when we have our quantum computers at our fingertips, we'll be able to actually simulate across an entire, you know, almost infinite state space. But today, you know, there's a lot of work to really try to bound the system, to make sure that it behaves in predictable ways, and then have this culture of continuous inquiry and skepticism and questioning to say, did we really consider the right realm of possibilities? Have we done the right range of testing? Do we really understand, you know, in this case, you know, human and machine interactions, the human decision process alongside the machine processes? 
And so that's that culture, we call it the culture of mission success at Lockheed Martin that really needs to be established. And it's not something, you know, it's something that people learn by living in it. And it's something that has to be promulgated, you know, and it's done, you know, from the highest levels at a company of Lockheed Martin, like Lockheed Martin. Yeah, and the same is being faced at certain autonomous vehicle companies where that culture is not there because it started mostly by software engineers. So that's what they're struggling with. Is there lessons that you think we should learn as an industry and a society from the Boeing 737 MAX crashes? These crashes obviously are tremendous tragedies. They're tragedies for all of the people, the crew, the families, the passengers, the people on the ground involved. And, you know, it's also a huge business and economic setback as well. I mean, you know, we've seen that it's impacting essentially the trade balance of the US. So these are important questions. And these are the kinds that, you know, we've seen similar kinds of questioning at times. You know, you go back to the Challenger accident. And it is, I think, always important to remind ourselves that humans are fallible, that the systems we create, as perfect as we strive to make them, we can always make them better. And so another element of that culture of mission success is really that commitment to continuous improvement. If there's something that goes wrong, a real commitment to root cause and true root cause understanding, to taking the corrective actions and to making the future systems better. And certainly we strive for, you know, no accidents. And if you look at the record of the commercial airline industry as a whole and the commercial aircraft industry as a whole, you know, there's a very nice decaying exponential to years now where we have no commercial aircraft accidents at all, right? Fatal accidents at all. So that didn't happen by accident. It was through the regulatory agencies, FAA, the airframe manufacturers really working on a system to identify root causes and drive them out. So maybe we can take a step back and many people are familiar, but Lockheed Martin broadly, what kind of categories of systems are you involved in building? You know, Lockheed Martin, we think of ourselves as a company that solves hard mission problems. And the output of that might be an airplane or a spacecraft or a helicopter or a radar or something like that. But ultimately we're driven by these, you know, what is our customer? What is that mission that they need to achieve? And so that's what drove the SR71, right? How do you get pictures of a place where you've got sophisticated air defense systems that are capable of handling any aircraft that was out there at the time, right? So that, you know, that's what yielded an SR71. Let's build a nice flying camera. Exactly. And make sure it gets out and it gets back, right? And that led ultimately to really the start of the space program in the US as well. So now take a step back to Lockheed Martin of today. And we are, you know, on the order of 105 years old now between Lockheed and Martin, the two big heritage companies. Of course, we're made up of a whole bunch of other companies that came in as well. General Dynamics, you know, kind of go down the list. Today, you can think of us in this space of solving mission problems. 
So obviously on the aircraft side, tactical aircraft, building the most advanced fighter aircraft that the world has ever seen. We're up to now several hundred of those delivered, building almost a hundred a year. And of course, working on the things that come after that. On the space side, we are engaged in pretty much every venue of space utilization and exploration you can imagine. So I mentioned things like navigation and timing GPS, communication satellites, missile warning satellites. We've built commercial surveillance satellites. We've built commercial communication satellites. We do civil space. So everything from human exploration to the robotic exploration of the outer planets. And keep going on the space front. But a couple of other areas that I'd like to put out, we're heavily engaged in building critical defensive systems. And so a couple that I'll mention, the Aegis Combat System. This is basically the integrated air and missile defense system for the US and allied fleets. And so protects carrier strike groups, for example, from incoming ballistic missile threats, aircraft threats, cruise missile threats, and kind of go down the list. So the carriers, the fleet itself is the thing that is being protected. The carriers aren't serving as a protection for something else. Well, that's a little bit of a different application. We've actually built the version called Aegis Ashore, which is now deployed in a couple of places around the world. So that same technology, I mean, basically can be used to protect either an ocean going fleet or a land based activity. Another one, the THAAD program. So THAAD, this is the Theater High Altitude Area Defense. This is to protect relatively broad areas against sophisticated ballistic missile threats. And so now it's deployed with a lot of US capabilities. And now we have international customers that are looking to buy that capability as well. And so these are systems that defend, not just defend militaries and military capabilities, but defend population areas. We saw maybe the first public use of these back in the first Gulf War with the Patriot Systems. And these are the kinds of things that Lockheed Martin delivers. And there's a lot of stuff that goes into it. A lot of stuff that goes with it. So think about the radar systems and the sensing systems that cue these, the command and control systems that decide how you pair a weapon against an incoming threat. And then all the human and machine interfaces to make sure that they can be operated successfully in very strenuous environments. Yeah, there's some incredible engineering that at every front, like you said. So maybe if we just take a look at Lockheed history broadly, maybe even looking at Skunk Works. What are the biggest, most impressive milestones of innovation? So if you look at stealth, I would have called you crazy if you said that's possible at the time. And supersonic and hypersonic. So traveling at, first of all, traveling at the speed of sound is pretty damn fast. And supersonic and hypersonic, three, four, five times the speed of sound. That seems, I would also call you crazy if you say you can do that. So can you tell me how it's possible to do these kinds of things? And is there other milestones and innovation that's going on that you can talk about? Yeah. Well, let me start on the Skunk Works saga. And you kind of alluded to it in the beginning. Skunk Works is as much an idea as a place. And so it's driven really by Kelly Johnson's 14 principles. 
And I'm not gonna list all 14 of them off, but the idea, and this I'm sure will resonate with any engineer who's worked on a highly motivated small team before, the idea that if you can essentially have a small team of very capable people who wanna work on really hard problems, you can do almost anything. Especially if you kind of shield them from bureaucratic influences, if you create very tight relationships with your customers so that you have that team and shared vision with the customer. Those are the kinds of things that enable the Skunk Works to do these incredible things. And we listed off a number; you brought up stealth. And I wish I could have seen Ben Rich with a ball bearing, rolling it across the desk to a general officer and saying, would you like to have an aircraft that has the radar cross section of this ball bearing? Probably one of the least expensive and most effective marketing campaigns in the history of the industry. So just for people that are not familiar, the way you detect aircraft, I'm sure there's a lot of ways, but radar, for the longest time, there's a big blob that appears on the radar. How do you make a plane disappear so it looks as big as a ball bearing? What's involved technology wise there? What's, broadly, sort of the stuff you can speak about? I'll stick to what's in Ben Rich's book. But obviously the geometry of how radar gets reflected and the kinds of materials that either reflect or absorb are kind of a couple of the critical elements there. And it's a cat and mouse game, right? I mean, you know, radars get better, stealth capabilities get better. And so it's really a game of continuous improvement and innovation there. I'll leave it at that. Yeah, so the idea that something is essentially invisible is quite fascinating. But the other one is flying fast. So the speed of sound is 750, 760 miles an hour. So supersonic is, you know, Mach three, something like that. Yeah, we talk about the supersonic obviously, and we kind of talk about that as that realm from Mach one up through about Mach five, and then hypersonic. So, you know, hypersonic speeds would be past Mach five. And you got to remember, Lockheed Martin and actually other companies have been involved in hypersonic development since the late 60s. You know, you think of everything from the X 15 to the space shuttle as examples of that. I think the difference now is if you look around the world, particularly the threat environment that we're in today, you're starting to see, you know, publicly, folks like the Russians and the Chinese saying they have hypersonic weapons capability that could threaten US and allied capabilities. And also basically, you know, the claims are these could get around defensive systems that are out there today. And so there's a real sense of urgency. You hear it from folks like the undersecretary of defense for research and engineering, Dr. Mike Griffin, and others in the Department of Defense, that hypersonics is something that's really important to the nation in terms of both parity, but also defensive capabilities. And so that's something that, you know, we're pleased, it's something that Lockheed Martin's, you know, had a heritage in, we've invested R&D dollars on our side for many years. And we have a number of things going on with various US government customers in that field today that we're very excited about. So I would anticipate we'll be hearing more about that in the future from our customers.
And I actually haven't read much about this. Probably you can't talk about much of it at all, but on the defensive side, it's a fascinating problem of perception, of trying to detect things that are really hard to see. Can you comment on how hard that problem is and how hard is it to stay ahead, even if we go back a few decades, stay ahead of the competition? Well, maybe I'd, again, you gotta think of these as ongoing capability development. And so think back to the early days of missile defense. So this would be in the 80s, the SDI program. And in that timeframe, we proved, and Lockheed Martin proved, that you could hit a bullet with a bullet, essentially, which is something that had never been done before, to take out an incoming ballistic missile. And so that's led to these incredible hit to kill kinds of capabilities, PAC 3. That's the Patriot Advanced Capability Model 3 that Lockheed Martin builds, the THAAD system that I talked about. So now hypersonics, they're different from ballistic systems. And so we gotta take the next step in defensive capability. I can, I'll leave that there, but I can only imagine. Now, let me just comment sort of as an engineer, it's sad to know that so much of what Lockheed has done in the past, and does today, is classified, shrouded in secrecy. It has to be by the nature of the application. So like what I do, what we do here at MIT, we would like to inspire young engineers, young scientists, and yet in the Lockheed case, some of that engineering has to stay quiet. How do you think about that? How does that make you feel? Is there a future where more can be shown, or is it just the nature of this world that it has to remain secret? It's a good question. I think the public can see enough, and that includes students who may be in grade school, high school, college today, to understand the kinds of really hard problems that we work on. And I mean, look at the F35, right? And obviously a lot of the detailed performance levels are sensitive and controlled. But we can talk about what an incredible aircraft this is, supersonic, super cruise, kind of a fighter, stealth capabilities. It's a flying information system in the sky with data fusion, sensor fusion capabilities that have never been seen before. So these are the kinds of things that I believe, these are the kinds of things that got me excited when I was a student. I think these still inspire students today. And the other thing I'd say, I mean, people are inspired by space. People are inspired by aircraft. Our employees are also inspired by that sense of mission. And I'll just give you an example. I had the privilege to work on and lead our GPS programs for some time. And that was a case where I actually worked on a program that touches billions of people every day. And so when I said I worked on GPS, everybody knew what I was talking about, even though they didn't maybe appreciate the technical challenges that went into that. But I'll tell you, I got a briefing one time from a major in the Air Force. And he said, I go by callsign GIMP, GPS is my passion. I love GPS. And he was involved in the operational test of the system. And he said, I was out in Iraq, and I was on a helicopter, a Black Hawk helicopter, and I was bringing back a sergeant and a handful of troops from a deployed location. And he said, my job is GPS. So I asked that sergeant, and he's beaten down and kind of half asleep. And I said, what do you think about GPS?
And he brightened up, his eyes lit up, and he said, well, GPS, that brings me and my troops home every day. I love GPS. And that's the kind of story where it's like, okay, I'm really making a difference here in the kind of work. So that mission piece is really important. The last thing I'll say is, and this gets to some of these questions around advanced technologies. It's not, they're not just airplanes and spacecraft anymore. For people who are excited about advanced software capabilities, about AI, about bringing machine learning, these are the things that we're doing to exponentially increase the mission capabilities that go on those platforms. And those are the kinds of things that I think are more and more visible to the public. Yeah, I think autonomy, especially in flight, is super exciting. Do you see a day, here we go, back into philosophy, future when most fighter jets will be highly autonomous to a degree where a human doesn't need to be in the cockpit in almost all cases? Well, I mean, that's a world that to a certain extent we're in today. Now these are remotely piloted aircraft, to be sure. But we have hundreds of thousands of flight hours a year now in remotely piloted aircraft. And then if you take the F35, there are huge layers, I guess, in levels of autonomy built into that aircraft so that the pilot is essentially more of a mission manager rather than doing the data, the second to second elements of flying the aircraft. So in some ways it's the easiest aircraft in the world to fly. And kind of a funny story on that. So I don't know if you know how aircraft carrier landings work, but basically there's what's called a tail hook and it catches wires on the deck of the carrier. And that's what brings the aircraft to a screeching halt, right? And there's typically three of these wires. So if you miss the first, the second one, you catch the next one, right? And we got a little criticism. I don't know how true this story is, but we got a little criticism. The F35 is so perfect, it always gets the second wires. We're wearing out the wire because it always hits that one. But that's the kind of autonomy that just makes these, essentially up levels what the human is doing to more of that mission manager. So much of that landing by the F35 is autonomous. Well, it's just, the control systems are such that you really have dialed out the variability that comes with all the environmental conditions. You're wearing it out. So my point is to a certain extent, that world is here today. Do I think that we're gonna see a day anytime soon when there are no humans in the cockpit? I don't believe that. But I do think we're gonna see much more human machine teaming, and we're gonna see that much more at the tactical edge. And we did a demo, and you asked about what the Skunk Works is doing these days. And so this is something I can talk about, but we did a demo with the Air Force Research Laboratory. We called it Have Raider. And so using an F16 as an autonomous wingman, and we demonstrated all kinds of maneuvers and various mission scenarios with the autonomous F16 being that so called loyal or trusted wingman. And so those are the kinds of things that, we've shown what is possible now. Given that you've up leveled that pilot to be a mission manager, now they can control multiple other aircraft. Think of them almost as extensions of your own aircraft flying alongside with you. So that's another example of how this is really coming to fruition. 
And then I mentioned the landings, but think about just the implications for humans and flight safety, and this goes a little bit back to the discussion we were having about how do you continuously improve the level of safety through automation while working through the complexities that automation introduces. So one of the challenges that you have in high performance fighter aircraft is what's called G-LOC. So this is G induced loss of consciousness. So you pull nine Gs, you're wearing a pressure suit, that's not enough to keep the blood going to your brain, you black out. And of course that's bad if you happen to be flying low, near the deck and in an obstacle or terrain environment. And so we developed a system in our aeronautics division called Auto GCAS, so, automatic ground collision avoidance system. And we built that into the F16. It's actually saved seven aircraft, eight pilots already in the relatively short time it's been deployed. It was so successful that the Air Force said, hey, we need to have this in the F35 right away. So we've actually done testing of that now on the F35. And we've also integrated an autonomous air collision avoidance system. So think the air to air problem. So now it's the integrated collision avoidance system. But these are the kinds of capabilities, I wouldn't call them AI. I mean, they're very sophisticated models of the aircraft dynamics coupled with the terrain models to be able to predict when essentially the pilot is doing something that is gonna take the aircraft into the terrain, or the pilot's not doing something, in this case. But it just gives you an example of how autonomy can be really a lifesaver in today's world. It's like autonomous, automated emergency braking in cars. But is there any exploration of perception of, for example, detecting a G-LOC, that the pilot is out? So as opposed to perceiving the external environment to infer that the pilot is out, but actually perceiving the pilot directly. Yeah, this is one of those cases where you'd like to not take action if you think the pilot's there. And it's almost like systems that try to detect if a driver's falling asleep on the road, right? With limited success. So, I mean, this is what I call the system of last resort, right? Where if the aircraft has determined that it's going into the terrain, get it out of there. And this is not something that we're just doing in the aircraft world. And I wanted to highlight, we have a technology we call Matrix, but this is developed at Sikorsky Innovations. The whole idea there is what we call optimal piloting. So not optional piloting or unpiloted, but optimal piloting. So an FAA certified system. So you have a high degree of confidence. It's generally pretty deterministic. So we know what it'll do in different situations, but effectively be able to fly a mission with two pilots, one pilot, no pilots. And you can think of it almost as like a dial of the level of autonomy that you want, but able, so it's running in the background at all times and able to pick up tasks, whether it's sort of autopilot kinds of tasks or more sophisticated path planning kinds of activities, to be able to do things like, for example, land on an oil rig in the North Sea in bad weather, zero zero conditions. And you can imagine, of course, there's a lot of military utility to a capability like that. You could have an aircraft that you want to send out for a crewed mission, but then at night, if you want to use it to deliver supplies in an unmanned mode, that could be done as well.
And so there's clear advantages there. But think about on the commercial side, if you're an aircraft taken, you're gonna fly out to this oil rig. If you get out there and you can't land, then you gotta bring all those people back, reschedule another flight, pay the overtime for the crew that you just brought back because they didn't get where they were going, pay for the overtime for the folks that are out there in the oil rig. This is real economic, these are dollars and cents kinds of advantages we're bringing in the commercial world as well. So here's a difficult question from the AI space that I would love it if you're able to comment. So a lot of this autonomy in AI you've mentioned just now has this empowering effect. One is the last resort, it keeps you safe. The other is there's a, with the teaming and in general, assistive AI. And I think there's always a race. So the world is full of, the world is complex. It's full of bad actors. So there's often a race to make sure that we keep this country safe, right? But with AI, there is a concern that it's a slightly different race. Though there's a lot of people in the AI space that are concerned about the AI arms race. That as opposed to the United States becoming, having the best technology and therefore keeping us safe, even we lose ability to keep control of it. So this, the AI arms race getting away from all of us humans. So do you share this worry? Do you share this concern when we're talking about military applications that too much control and decision making capabilities giving to software or AI? Well, I don't see it happening today. And in fact, this is something from a policy perspective, it's obviously a very dynamic space, but the Department of Defense has put quite a bit of thought into that. And maybe before talking about the policy, I'll just talk about some of the why. And you alluded to it being a sort of a complicated and a little bit scary world out there, but there's some big things happening today. You hear a lot of talk now about a return to great powers competition, particularly around China and Russia with the US, but there are some other big players out there as well. And what we've seen is the deployment of some very, I'd say concerning new weapon systems, particularly with Russia and breaching some of the IRBM, Intermediate Range Ballistic Missile Treaties, that's been in the news a lot. The building of islands, artificial islands in the South China Sea by the Chinese and then arming those islands. The annexation of Crimea by Russia, the invasion of Ukraine. So there's some pretty scary things. And then you add on top of that, the North Korean threat has certainly not gone away. There's a lot going on in the Middle East with Iran in particular. And we see this global terrorism threat has not abated. So there are a lot of reasons to look for technology to assist with those problems, whether it's AI or other technologies like hypersonics, which we discussed. So now let me give just a couple of hypotheticals. So people react sort of in the second timeframe, right? Photon hitting your eye to movement is on the order of a few tenths of a second kinds of processing time. Roughly speaking, computers are operating in the nanosecond timescale, right? So just to bring home what that means, a nanosecond to a second is like a second to 32 years. So seconds on the battlefield, in that sense, literally are lifetimes. 
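The arithmetic behind that nanosecond comparison is easy to check for yourself. Here is a minimal back-of-the-envelope sketch in Python, using only the round numbers quoted above rather than anything specific to the systems being discussed:

```python
# Check the analogy: a nanosecond is to a second as a second is to ~32 years.
# Scaling 1 nanosecond up to 1 second is a factor of one billion (1e9);
# apply the same factor to 1 second and convert the result to years.

SECONDS_PER_NANOSECOND = 1e-9
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60      # about 31.6 million seconds

scale = 1.0 / SECONDS_PER_NANOSECOND          # 1e9
years = scale * 1.0 / SECONDS_PER_YEAR        # one billion seconds, in years

print(f"scale factor: {scale:.0e}")                         # 1e+09
print(f"one billion seconds is about {years:.1f} years")    # roughly 31.7
```

One billion seconds works out to roughly 31.7 years, which is where the quoted figure of 32 years comes from; human reaction times of a few tenths of a second sit on the slow end of that same ratio.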
And so if you can bring an autonomous or AI enabled capability that will enable the human to shrink, maybe you've heard the term the OODA loop. So this whole idea that a typical battlefield decision is characterized by observe. So information comes in, orient. How does that, what does that mean in the context? Decide, what do I do about it? And then act, take that action. If you can use these capabilities to compress that OODA loop to stay inside what your adversary is doing, that's an incredible powerful force on the battlefield. That's a really nice way to put it, that the role of AI and computing in general has a lot to benefit from just decreasing from 32 years to one second, as opposed to on the scale of seconds and minutes and hours making decisions that humans are better at making. And it actually goes the other way too. So that's on the short timescale. So humans kind of work in the one second, two seconds to eight hours. After eight hours, you get tired, you gotta go to the bathroom, whatever the case might be. So there's this whole range of other things. Think about surveillance and guarding facilities. Think about moving material, logistics, sustainment. A lot of these, what they call dull, dirty and dangerous things that you need to have sustained activity, but it's sort of beyond the length of time that a human can practically do as well. So there's this range of things that are critical in military and defense applications that AI and autonomy are particularly well suited to. Now, the interesting question that you brought up is, okay, how do you make sure that stays within human control? So that was the context for now the policy. And so there is a DOD directive called 3000.09 because that's the way we name stuff in this world. But I'd say it's well worth reading. It's only a couple of pages long, but it makes some key points. And it's really around making sure that there's human agency and control over use of semi autonomous and autonomous weapons systems, making sure that these systems are tested, verified and evaluated in realistic, real world type scenarios, making sure that the people are actually trained on how to use them, making sure that the systems have human machine interfaces that can show what state they're in and what kinds of decisions they're making, making sure that you've established doctrine and tactics and techniques and procedures for the use of these kinds of systems. And so, and by the way, I mean, none of this is easy, but I'm just trying to lay kind of the picture of how the US has said, this is the way we're gonna treat AI and autonomous systems, that it's not a free for all. And like there are rules of war and rules of engagement with other kinds of systems, think chemical weapons, biological weapons, we need to think about the same sorts of implications. And this is something that's really important for Lockheed Martin. I mean, obviously we are a hundred percent complying with our customer and the policies and regulations, but I mean, AI is an incredible enabler, say within the walls of Lockheed Martin in terms of improving production efficiency, doing helping engineers, doing generative design, improving logistics, driving down energy costs. I mean, there are so many applications, but we're also very interested in some of the elements of ethical application within Lockheed Martin. 
So we need to make sure that things like privacy is taken care of, that we do everything we can to drive out bias in AI enabled kinds of systems, that we make sure that humans are involved in decisions, that we're not just delegating accountability to algorithms. And so for us, it all comes back, I talked about culture before, and it comes back to sort of the Lockheed Martin culture and our core values. And so it's pretty simple for us and do what's right, respect others, perform with excellence. And now how do we tie that back to the ethical principles will govern how AI is used within Lockheed Martin. And we actually have a world, pretty, so you might not know this, but there are actually awards for ethics programs. Lockheed Martin's had a recognized ethics program for many years. And this is one of the things that our ethics team is working with our engineering team on. One of the miracles to me, perhaps a layman, again, I was born in the Soviet Union. So I have echoes, at least in my family history of World War II and the Cold War. Do you have a sense of why human civilization has not destroyed itself through nuclear war, so nuclear deterrence? And thinking about the future, does this technology have a role to play here? And what is the long term future of nuclear deterrence look like? Yeah, this is one of those hard, hard questions. And I should note that Lockheed Martin is both proud and privileged to play a part in multiple legs of our nuclear and strategic deterrent systems like the Trident submarine launch ballistic missiles. You talk about, is there still a possibility that the human race could destroy itself? I'd say that possibility is real. But interestingly, in some sense, I think the strategic deterrence have prevented the kinds of incredibly destructive world wars that we saw in the first half of the 20th century. Now, things have gotten more complicated since that time and since the Cold War. It is more of a multipolar great powers world today. Just to give you an example, back then, there were, in the Cold War timeframe, just a handful of nations that had ballistic missile capability by last count. And this is a few years old. There's over 70 nations today that have that. Similar kinds of numbers in terms of space based capabilities. So the world has gotten more complex and more challenging and the threats, I think, have proliferated in ways that we didn't expect. The nation today is in the middle of a recapitalization of our strategic deterrent. I look at that as one of the most important things that our nation can do. What is involved in deterrence? Is it being ready to attack or is it the defensive systems that catch attacks? A little bit of both. And so it's a complicated game theoretical kind of program. But ultimately, we are trying to prevent the use of any of these weapons. And the theory behind prevention is that even if an adversary uses a weapon against you, you have the capability to essentially strike back and do harm to them that's unacceptable. And so that will deter them from making use of these weapons systems. The deterrence calculus has changed, of course, with more nations now having these kinds of weapons. But I think from my perspective, it's very important to maintain a strategic deterrent. You have to have systems that you know will work when they're required to work. Now you know that they have to be adaptable to a variety of different scenarios in today's world. 
And so that's what this recapitalization of systems that were built over previous decades, making sure that they are appropriate, not just for today, but for the decades to come. So the other thing I'd really like to note is strategic deterrence has a very different character today. We used to think of weapons of mass destruction in terms of nuclear, chemical, biological. And today we have a cyber threat. We've seen examples of the use of cyber weaponry. And if you think about the possibilities of using cyber capabilities or an adversary attacking the US to take out things like critical infrastructure, electrical grids, water systems, those are scenarios that are strategic in nature to the survival of a nation as well. So that is the kind of world that we live in today. And part of my hope on this is one that we can also develop technical or technological systems, perhaps enabled by AI and autonomy, that will allow us to contain and to fight back against these kinds of new threats that were not conceived when we first developed our strategic deterrence. Yeah, I know that Lockheed is involved in cyber, so I saw that you mentioned that. It's an incredibly, nuclear almost seems easier than cyber because there's so many attack, there's so many ways that cyber can evolve in such an uncertain future. But talking about engineering with a mission, I mean, in this case that you're engineering systems that basically save the world. Well, like I said, we're privileged to work on some very challenging problems for very critical customers here in the US and with our allies abroad as well. Lockheed builds both military and nonmilitary systems. And perhaps the future of Lockheed may be more in nonmilitary applications if you talk about space and beyond. I say that as a preface to a difficult question. So President Eisenhower in 1961 in his farewell address talked about the military industrial complex and that it shouldn't grow beyond what is needed. So what are your thoughts on those words, on the military industrial complex, on the concern of growth of their developments beyond what may be needed? That where it may be needed is a critical phrase, of course. And I think it is worth pointing out, as you noted, that Lockheed Martin, we are in a number of commercial businesses from energy to space to commercial aircraft. And so I wouldn't neglect the importance of those parts of our business as well. I think the world is dynamic and there was a time, and it doesn't seem that long ago to me, it was while I was a graduate student here at MIT and we were talking about the peace dividend at the end of the Cold War. If you look at expenditure on military systems as a fraction of GDP, we're far below peak levels of the past. And to me, at least, it looks like a time where you're seeing global threats changing in a way that would warrant relevant investments in defensive capabilities. The other thing I'd note, for military and defensive systems, it's not quite a free market, right? We don't sell to people on the street. And that warrants a very close partnership between, I'd say, the customers and the people that design, build, and maintain these systems because of the very unique nature, the very difficult requirements, the very great importance on safety and on operating the way they're intended every time. 
And so that does create, and frankly, it's one of Lockheed Martin's great strengths is that we have this expertise built up over many years in partnership with our customers to be able to design and build these systems that meet these very unique mission needs. Yeah, because building those systems is very costly, there's very little room for mistake. I mean, it's, yeah, just Ben Rich's book and so on just tells the story. It's nerve wracking just reading it. If you're an engineer, it reads like a thriller. Okay, let me, let's go back to space for a second. I guess. I'm always happy to go back to space. So a few quick, maybe out there, maybe fun questions, maybe a little provocative. What are your thoughts on the efforts of the new folks, SpaceX and Elon Musk? What are your thoughts about what Elon is doing? Do you see him as competition? Do you enjoy competition? What are your thoughts? Yeah, first of all, certainly Elon, I'd say SpaceX and some of his other ventures are definitely a competitive force in the space industry. And do we like competition? Yeah, we do. And we think we're very strong competitors. I think it's, you know, competition is what the US is founded on in a lot of ways and always coming up with a better way. And I think it's really important to continue to have fresh eyes coming in, new innovation. I do think it's important to have level playing fields. And so you wanna make sure that you're not giving different requirements to different players. But, you know, I tell people, you know, I spent a lot of time at places like MIT. I'm gonna be at the MIT Beaverwork Summer Institute over the weekend here. And I tell people, this is the most exciting time to be in the space business in my entire life. And it is this explosion of new capabilities that have been driven by things like the, you know, the massive increase in computing power, things like the massive increase in comms capabilities, advanced and additive manufacturing are really bringing down the barriers to entry in this field and it's driving just incredible innovation. And it's happening at startups, but it's also happening at Lockheed Martin. You may not realize this, but Lockheed Martin, working with Stanford actually built the first CubeSat that was launched here out of the US that was called QuakeSat. And we did that with Stellar Solutions. This was right around just after 2000, I guess. And so we've been in that, you know, from the very beginning. And, you know, I talked about some of these, like, you know, Maya and Orion, but, you know, we're in the middle of what we call smartsats and software defined satellites that can essentially restructure and remap their purpose, their mission on orbit to give you almost, you know, unlimited flexibility for these satellites over their lifetimes. So those are just a couple of examples, but yeah, this is a great time to be in space. Absolutely. So Wright Brothers flew for the first time 116 years ago. So now we have supersonic stealth planes and all the technology we've talked about. What innovations, obviously you can't predict the future, but do you see Lockheed in the next 100 years? If you take that same leap, how will the world of technology and engineering change? I know it's an impossible question, but nobody could have predicted that we could even fly 120 years ago. So what do you think is the edge of possibility that we're going to be exploring in the next 100 years? I don't know that there is an edge. 
I, you know, we've been around for almost that entire time, right? The Lockheed brothers and Glenn L. Martin starting their companies in the basement of a church and an old service station. We're very different companies today than we were back then, right? And that's because we've continuously reinvented ourselves over all of those decades. I think it's fair to say, I know this for sure, the world of the future, it's gonna move faster, it's gonna be more connected, it's gonna be more autonomous, and it's gonna be more complex than it is today. And so this is the world, you know, as a CTO at Lockheed Martin that I think about, what are the technologies that we have to invest in? Whether it's things like AI and autonomy, you know, you can think about quantum computing, which is an area that we've invested in to try to stay ahead of these technological changes, and frankly, some of the threats that are out there. I believe that we're gonna be out there in the solar system, that we're gonna be defending and defending well against probably, you know, military threats that nobody has even thought about today. We are going to be, we're gonna use these capabilities to have far greater knowledge of our own planet, the depths of the oceans, you know, all the way to the upper reaches of the atmosphere and everything out to the sun and to the edge of the solar system. So that's what I look forward to, and I'm excited, I mean, just looking ahead in the next decade or so to the steps that I see ahead of us in that time. I don't think there's a better place to end, Keoki, thank you so much. Lex, it's been a real pleasure, and sorry it took so long to get up here, but I'm glad we were able to make it happen.
Keoki Jackson: Lockheed Martin | Lex Fridman Podcast #33
The following is a conversation with Pamela McCorduck. She's an author who has written on the history and the philosophical significance of artificial intelligence. Her books include Machines Who Think in 1979, The Fifth Generation in 1983 with Ed Feigenbaum, who's considered to be the father of expert systems, The Edge of Chaos, The Futures of Women, and many more books. I came across her work in an unusual way by stumbling on a quote from Machines Who Think that is something like, artificial intelligence began with the ancient wish to forge the gods. That was a beautiful way to draw a connecting line between our societal relationship with AI, from the grounded day to day science, math and engineering, to popular stories and science fiction and myths of automatons that go back for centuries. Through her literary work, she has spent a lot of time with the seminal figures of artificial intelligence, including the founding fathers of AI from the 1956 Dartmouth summer workshop where the field was launched. I reached out to Pamela for a conversation in hopes of getting a sense of what those early days were like, and how their dreams continue to reverberate through the work of our community today. I often don't know where the conversation may take us, but I jump in and see. Having no constraints, rules, or goals is a wonderful way to discover new ideas. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter, at Lex Fridman, spelled F R I D M A N. And now, here's my conversation with Pamela McCorduck. In 1979, your book Machines Who Think was published. In it, you interview some of the early AI pioneers and explore the idea that AI was born not out of maybe math and computer science, but out of myth and legend. So, tell me if you could the story of how you first arrived at the book, the journey of beginning to write it. I had been a novelist. I'd published two novels, and I was sitting under the portal at Stanford one day, the house we were renting for the summer. And I thought, I should write a novel about these weird people in AI, I know. And then I thought, ah, don't write a novel, write a history. Simple. Just go around, interview them, splice it together, voila, instant book. Ha, ha, ha. It was much harder than that. But nobody else was doing it. And so, I thought, well, this is a great opportunity. And there were people who, John McCarthy, for example, thought it was a nutty idea. The field had not evolved yet, so on. And he had some mathematical thing he thought I should write instead. And I said, no, John, I am not a woman in search of a project. This is what I want to do. I hope you'll cooperate. And he said, oh, mutter, mutter, well, okay, it's your time. What was the pitch for the, I mean, such a young field at that point. How do you write a personal history of a field that's so young? I said, this is wonderful. The founders of the field are alive and kicking and able to talk about what they're doing. Did they sound or feel like founders at the time? Did they know that they had founded something? Oh, yeah. They knew what they were doing was very important. Very. What I now see in retrospect is that they were at the height of their research careers. And it's humbling to me that they took time out from all the things that they had to do as a consequence of being there. And to talk to this woman who said, I think I'm going to write a book about you. No, it was amazing.
Just amazing. So who stands out to you? Maybe looking 63 years ago, the Dartmouth conference, so Marvin Minsky was there, McCarthy was there, Claude Shannon, Allen Newell, Herb Simon, some of the folks you've mentioned. Then there's other characters, right? One of your coauthors... He wasn't at Dartmouth. He wasn't at Dartmouth? No. He was, I think, an undergraduate then. And of course, Joe Traub. All of these are players, not at Dartmouth, but in that era. Right. CMU and so on. So who are the characters, if you could paint a picture, that stand out to you from memory? Those people you've interviewed and maybe not, people that were just in the... In the atmosphere. In the atmosphere. Of course, the four founding fathers were extraordinary guys. They really were. Who are the founding fathers? Allen Newell, Herbert Simon, Marvin Minsky, John McCarthy. They were the four who were not only at the Dartmouth conference, but Newell and Simon arrived there with a working program called The Logic Theorist. Everybody else had great ideas about how they might do it, but they weren't going to do it yet. And you mentioned Joe Traub, my husband. I was immersed in AI before I met Joe because I had been Ed Feigenbaum's assistant at Stanford. And before that, I had worked on a book edited by Feigenbaum and Julian Feldman called Computers and Thought. It was the first textbook of readings of AI. And they only did it because they were trying to teach AI to people at Berkeley. And there was nothing, you'd have to send them to this journal and that journal. This was not the internet where you could go look at an article. So I was fascinated from the get go by AI. I was an English major. What did I know? And yet I was fascinated. And that's why you saw that historical, that literary background, which I think is very much a part of the continuum of AI, that AI grew out of that same impulse. So with that traditional background, what drew you to AI? How did you even think of it back then? What were the possibilities, the dreams? What was interesting to you? The idea of intelligence outside the human cranium, this was a phenomenal idea. And even when I finished Machines Who Think, I didn't know if they were going to succeed. In fact, the final chapter is very wishy washy, frankly. Succeed, the field did. Yeah. So was there the idea that AI began with the wish to forge the gods? So the spiritual component that we crave to create this other thing greater than ourselves. For those guys, I don't think so. Newell and Simon were cognitive psychologists. What they wanted was to simulate aspects of human intelligence, and they found they could do it on the computer. Minsky just thought it was a really cool thing to do. Likewise, McCarthy. McCarthy had got the idea in 1949 when he was a Caltech student. And he listened to somebody's lecture. It's in my book. I forget who it was. And he thought, oh, that would be fun to do. How do we do that? And he took a very mathematical approach. Minsky was hybrid, and Newell and Simon were very much cognitive psychology. How can we simulate various things about human cognition? What happened over the many years is, of course, our definition of intelligence expanded tremendously. These days, biologists are comfortable talking about the intelligence of the cell, the intelligence of the brain, not just the human brain, but the intelligence of any kind of brain. Cephalopods, I mean, an octopus is really intelligent by any account. We wouldn't have thought of that in the 60s, even the 70s.
So all these things have worked in. And I did hear one behavioral primatologist, Frans de Waal, say, AI taught us the questions to ask. Yeah, this is what happens, right? When you try to build it is when you start to actually ask questions. It puts a mirror to ourselves. Yeah, right. So you were there in the middle of it. It seems like not many people were asking the questions that you were, or just trying to look at this field the way you were. I was solo. When I went to get funding for this, because I needed somebody to transcribe the interviews and I needed travel expenses, I went to everything you could think of, the NSF, DARPA. There was an Air Force place that doled out money. And each of them said, well, that's a very interesting idea. But we'll think about it. And the National Science Foundation actually said to me in plain English, hey, you're only a writer. You're not a historian of science. And I said, yeah, that's true. But the historians of science will be crawling all over this field. I'm writing for the general audience, so I thought. And they still wouldn't budge. I finally got a private grant, without knowing who it was from, from Ed Fredkin at MIT. He was a wealthy man, and he liked what he called crackpot ideas. And he considered this a crackpot idea, and he was willing to support it. I am ever grateful, let me say that. Some would say that a history of science approach to AI, or even just a history, or anything like the book that you've written, hasn't been written since. Maybe I'm not familiar, but it's certainly not many. If we think about bigger than just these couple of decades, few decades, what are the roots of AI? Oh, they go back so far. Yes, of course, there's all the legendary stuff, the Golem and the early robots of the 20th century. But they go back much further than that. If you read Homer, Homer has robots in the Iliad. And a classical scholar was pointing out to me just a few months ago, well, you said you just read the Odyssey. The Odyssey is full of robots. It is? I said. Yeah. How do you think Odysseus's ship gets from one place to another? He doesn't have the crew people to do that, the crewmen. Yeah, it's magic. It's robots. Oh, I thought, how interesting. So we've had this notion of AI for a long time. And then toward the end of the 19th century, the beginning of the 20th century, there were scientists who actually tried to make this happen some way or another, not successfully. They didn't have the technology for it. And of course, Babbage in the 1850s and 60s, he saw that what he was building was capable of intelligent behavior. And when he ran out of funding, the British government finally said, that's enough. He and Lady Lovelace decided, oh, well, why don't we play the ponies with this? He had other ideas for raising money too. But if we actually reach back once again, I think people don't actually really know that robots do appear, and ideas of robots, that far back. You talk about the Hellenic and the Hebraic points of view. Oh, yes. Can you tell me about each? I defined it this way. The Hellenic point of view is robots are great. They are party help. They help this guy Hephaestus, this god Hephaestus, in his forge. I presume he made them to help him, and so on and so forth. And they welcome the whole idea of robots. The Hebraic view has to do with, I think it's the second commandment, thou shalt not make any graven image. In other words, you better not start imitating humans because that's just forbidden. It's the second commandment.
And a lot of the reaction to artificial intelligence has been a sense that this is somehow wicked, this is somehow blasphemous. We shouldn't be going there. Now, you can say, yeah, but there are going to be some downsides. And I say, yes, there are, but blasphemy is not one of them. You know, there is a kind of fear that feels to be almost primal. Is there religious roots to that? Because so much of our society has religious roots. And so there is a feeling of, like you said, blasphemy of creating the other, of creating something, you know, it doesn't have to be artificial intelligence. It's creating life in general. It's the Frankenstein idea. There's the annotated Frankenstein on my coffee table. It's a tremendous novel. It really is just beautifully perceptive. Yes, we do fear this and we have good reason to fear it, but because it can get out of hand. Maybe you can speak to that fear, the psychology, if you've thought about it. You know, there's a practical set of fears, concerns in the short term. You can think if we actually think about artificial intelligence systems, you can think about bias of discrimination in algorithms. You can think about their social networks have algorithms that recommend the content you see, thereby these algorithms control the behavior of the masses. There's these concerns. But to me, it feels like the fear that people have is deeper than that. So have you thought about the psychology of it? I think in a superficial way I have. There is this notion that if we produce a machine that can think, it will outthink us and therefore replace us. I guess that's a primal fear of almost kind of a kind of mortality. So around the time you said you worked at Stanford with Ed Feigenbaum. So let's look at that one person. Throughout his history, clearly a key person, one of the many in the history of AI. How has he changed in general around him? How has Stanford changed in the last, how many years are we talking about here? Oh, since 65. 65. So maybe it doesn't have to be about him. It could be bigger. But because he was a key person in expert systems, for example, how is that, how are these folks who you've interviewed in the 70s, 79 changed through the decades? In Ed's case, I know him well. We are dear friends. We see each other every month or so. He told me that when Machines Who Think first came out, he really thought all the front matter was kind of bologna. And 10 years later, he said, no, I see what you're getting at. Yes, this is an impulse that has been a human impulse for thousands of years to create something outside the human cranium that has intelligence. I think it's very hard when you're down at the algorithmic level, and you're just trying to make something work, which is hard enough to step back and think of the big picture. It reminds me of when I was in Santa Fe, I knew a lot of archaeologists, which was a hobby of mine. And I would say, yeah, yeah, well, you can look at the shards and say, oh, this came from this tribe and this came from this trade route and so on. But what about the big picture? And a very distinguished archaeologist said to me, they don't think that way. No, they're trying to match the shard to where it came from. Where did the remainder of this corn come from? Was it grown here? Was it grown elsewhere? And I think this is part of any scientific field. You're so busy doing the hard work, and it is hard work, that you don't step back and say, oh, well, now let's talk about the general meaning of all this. Yes. 
So none of the even Minsky and McCarthy, they... Oh, those guys did. Yeah. The founding fathers did. Early on or later? Pretty early on. But in a different way from how I looked at it. The two cognitive psychologists, Newell and Simon, they wanted to imagine reforming cognitive psychology so that we would really, really understand the brain. Minsky was more speculative. And John McCarthy saw it as, I think I'm doing him right by this, he really saw it as a great boon for human beings to have this technology. And that was reason enough to do it. And he had wonderful, wonderful fables about how if you do the mathematics, you will see that these things are really good for human beings. And if you had a technological objection, he had an answer, a technological answer. But here's how we could get over that and then blah, blah, blah. And one of his favorite things was what he called the literary problem, which of course he presented to me several times. That is everything in literature, there are conventions in literature. One of the conventions is that you have a villain and a hero. And the hero in most literature is human, and the villain in most literature is a machine. And he said, that's just not the way it's going to be. But that's the way we're used to it. So when we tell stories about AI, it's always with this paradigm. I thought, yeah, he's right. Looking back, the classics RUR is certainly the machines trying to overthrow the humans. Frankenstein is different. Frankenstein is a creature. He never has a name. Frankenstein, of course, is the guy who created him, the human, Dr. Frankenstein. This creature wants to be loved, wants to be accepted. And it is only when Frankenstein turns his head, in fact, runs the other way. And the creature is without love, that he becomes the monster that he later becomes. So who's the villain in Frankenstein? It's unclear, right? Oh, it is unclear, yeah. It's really the people who drive him. By driving him away, they bring out the worst. That's right. They give him no human solace. And he is driven away, you're right. He becomes, at one point, the friend of a blind man. And he serves this blind man, and they become very friendly. But when the sighted people of the blind man's family come in, ah, you've got a monster here. So it's very didactic in its way. And what I didn't know is that Mary Shelley and Percy Shelley were great readers of the literature surrounding abolition in the United States, the abolition of slavery. And they picked that up wholesale. You are making monsters of these people because you won't give them the respect and love that they deserve. Do you have, if we get philosophical for a second, do you worry that once we create machines that are a little bit more intelligent, let's look at Roomba, the vacuums, the cleaner, that this darker part of human nature where we abuse the other, the somebody who's different, will come out? I don't worry about it. I could imagine it happening. But I think that what AI has to offer the human race will be so attractive that people will be won over. So you have looked deep into these people, had deep conversations, and it's interesting to get a sense of stories of the way they were thinking and the way it was changed, the way your own thinking about AI has changed. So you mentioned McCarthy. What about the years at CMU, Carnegie Mellon, with Joe? Sure. Joe was not in AI. He was in algorithmic complexity. Was there always a line between AI and computer science, for example? 
Is AI its own place of outcasts? Was that the feeling? There was a kind of outcast period for AI. For instance, in 1974, the new field was hardly 10 years old. The new field of computer science was asked by the National Science Foundation, I believe, but it may have been the National Academies, I can't remember, to tell your fellow scientists where computer science is and what it means. And they wanted to leave out AI. And they only agreed to put it in because Don Knuth said, hey, this is important. You can't just leave that out. Really? Don, dude? Don Knuth, yes. I talked to him recently, too. Out of all the people. Yes. But you see, an AI person couldn't have made that argument. He wouldn't have been believed. But Knuth was believed. Yes. So Joe Traub worked on the real stuff. Joe was working on algorithmic complexity. But he would say in plain English again and again, the smartest people I know are in AI. Really? Oh, yes. No question. Anyway, Joe loved these guys. What happened was that I guess it was as I started to write Machines Who Think, Herb Simon and I became very close friends. He would walk past our house on Northumberland Street every day after work. And I would just be putting my cover on my typewriter. And I would lean out the door and say, Herb, would you like a sherry? And Herb almost always would like a sherry. So he'd stop in and we'd talk for an hour, two hours. My journal says we talked this afternoon for three hours. What was on his mind at the time in terms of on the AI side of things? Oh, we didn't talk too much about AI. We talked about other things. Just life. We both love literature. And Herb had read Proust in the original French twice all the way through. I can't. I've read it in English in translation. So we talked about literature. We talked about languages. We talked about music because he loved music. We talked about art because he was actually enough of a painter that he had to give it up because he was afraid it was interfering with his research and so on. So no, it was really just chat, chat. But it was very warm. So one summer I said to Herb, my students have all the really interesting conversations. I was teaching at the University of Pittsburgh then in the English department. They get to talk about the meaning of life and that kind of thing. And what do I have? I have university meetings where we talk about the photocopying budget and whether the course on romantic poetry should be one semester or two. So Herb laughed. He said, yes, I know what you mean. He said, but you could do something about that. Dot, that was his wife, Dot and I used to have a salon at the University of Chicago every Sunday night. And we would have essentially an open house and people knew. It wasn't for a small talk. It was really for some topic of depth. He said, but my advice would be that you choose the topic ahead of time. Fine, I said. So we exchanged mail over the summer. That was US Post in those days because you didn't have personal email. And I decided I would organize it and there would be eight of us, Alan Noland, his wife, Herb Simon and his wife Dorothea. There was a novelist in town, a man named Mark Harris. He had just arrived and his wife Josephine. Mark was most famous then for a novel called Bang the Drum Slowly, which was about baseball. And Joe and me, so eight people. And we met monthly and we just sank our teeth into really hard topics and it was great fun. 
How have your own views around artificial intelligence changed through the process of writing Machines Who Think and afterwards, the ripple effects? I was a little skeptical that this whole thing would work out. It didn't matter. To me, it was so audacious. AI generally. And in some ways, it hasn't worked out the way I expected so far. That is to say, there's this wonderful lot of apps, thanks to deep learning and so on. But those are algorithmic. And in the part of symbolic processing, there's very little yet. And that's a field that lies waiting for industrious graduate students. Maybe you can tell me some figures that popped up in your life in the 80s with expert systems, where there was the symbolic AI possibilities of what most people think of as AI, if you dream of the possibilities of AI, it's really expert systems. And those hit a few walls and there were challenges there. And I think, yes, they will reemerge again with some new breakthroughs and so on. But what did that feel like, both the possibility and the winter that followed, the slowdown in research? Ah, you know, this whole thing about AI winter is to me a crock. Snow winters. Because I look at the basic research that was being done in the 80s, which is supposed to be, my God, it was really important. It was laying down things that nobody had thought about before, but it was basic research. You couldn't monetize it. Hence the winter. That's the winter. You know, research, scientific research goes in fits and starts. It isn't this nice smooth, oh, this follows this follows this. No, it just doesn't work that way. The interesting thing, the way winters happen, it's never the fault of the researchers. It's some source of hype, of over promising. Well, no, let me take that back. Sometimes it is the fault of the researchers. Sometimes certain researchers might over promise the possibilities. They themselves believe that we're just a few years away. Sort of just recently talked to Elon Musk and he believes he'll have an autonomous vehicle, we'll have autonomous vehicles in a year. And he believes it. A year? A year. Yeah. With mass deployment at that time. For the record, this is 2019 right now. So he's talking 2020. To do the impossible, you really have to believe it. And I think what's going to happen when you believe it, because there's a lot of really brilliant people around him, is some good stuff will come out of it. Some unexpected brilliant breakthroughs will come out of it when you really believe it, when you work that hard. I believe that. And I believe autonomous vehicles will come. I just don't believe it'll be in a year. I wish. But nevertheless, there's, autonomous vehicles is a good example. There's a feeling many companies have promised by 2021, by 2022, Ford, GM, basically every single automotive company has promised they'll have autonomous vehicles. So that kind of over promise is what leads to the winter. Because we'll come to those dates, there won't be autonomous vehicles. And there'll be a feeling, well, wait a minute, if we took your word at that time, that means we just spent billions of dollars and made no money, and there's a counter response to where everybody gives up on it. Sort of intellectually, at every level, the hope just dies. And all that's left is a few basic researchers. So you're uncomfortable with some aspects of this idea. Well, it's the difference between science and commerce. So you think science goes on the way it does?
Oh, science can really be killed by not getting proper funding or timely funding. I think Great Britain was a perfect example of that. The Lighthill report in, I can't remember the year, essentially said, there's no use Great Britain putting any money into this, it's going nowhere. And this was all about social factions in Great Britain. Edinburgh hated Cambridge and Cambridge hated Manchester. Somebody else can write that story. But it really did have a hard effect on research there. Now, they've come roaring back with DeepMind. But that's one guy and his visionaries around him. But just to push on that, it's kind of interesting. You have this dislike of the idea of an AI winter. Where's that coming from? Where were you? Oh, because I just don't think it's true. There was a particular period of time. It's a romantic notion, certainly. Yeah, well. No, I admire science, perhaps more than I admire commerce. Commerce is fine. Hey, you know, we all gotta live. But science has a much longer view than commerce and continues almost regardless. It can't continue totally regardless, but almost regardless of what's saleable and what's not, what's monetizable and what's not. So the winter is just something that happens on the commerce side, and the science marches on. That's a beautifully optimistic and inspiring message. I agree with you. I think if we look at the key people that work in AI, the key scientists in most disciplines, they continue working out of the love for science. You can always scrape up some funding to stay alive, and they continue working diligently. But there certainly is a huge amount of funding now, and there's a concern on the AI side and deep learning. There's a concern that we might, with over promising, hit another slowdown in funding, which does affect the number of students, you know, that kind of thing. Yeah, it does. So the kind of ideas you had in Machines Who Think, did you continue that curiosity through the decades that followed? Yes, I did. And what was your view, historical view, of how the AI community evolved, the conversations about it, the work? Has it persisted the same way from its birth? No, of course not. It's just as we were just talking, the symbolic AI really kind of dried up and it all became algorithmic. I remember a young AI student telling me what he was doing, and I had been away from the field long enough. I'd gotten involved with complexity at the Santa Fe Institute. I thought, algorithms, yeah, they're in the service of, but they're not the main event. No, they became the main event. That surprised me. And we all know the downside of this. We all know that if you're using an algorithm to make decisions based on a gazillion human decisions, baked into it are all the mistakes that humans make, the bigotries, the short sightedness, and so on and so on. So you mentioned the Santa Fe Institute. So you've written the novel Edge of Chaos, but it's inspired by the ideas of complexity, a lot of which have been extensively explored at the Santa Fe Institute. It's another fascinating topic, just sort of emergent complexity from chaos. Nobody knows how it happens really, but it seems to be where all the interesting stuff does happen. So how did, first not your novel, but just complexity in general and the work at Santa Fe, fit into the bigger puzzle of the history of AI? Or maybe even your personal journey through that?
One of the last projects I did concerning AI in particular was looking at the work of Harold Cohen, the painter. And Harold was deeply involved with AI. He was a painter first. And what his project, AARON, which was a lifelong project, did was reflect his own cognitive processes. Okay. Harold and I, even though I wrote a book about it, we had a lot of friction between us. And I went, I thought, this is it. The book died. It was published and fell into a ditch. This is it. I'm finished. It's time for me to do something different. By chance, this was a sabbatical year for my husband. And we spent two months at the Santa Fe Institute and two months at Caltech. And then the spring semester in Munich, Germany. Okay. Those two months at the Santa Fe Institute were so restorative for me. And I began to, the Institute was very small then. It was in some kind of office complex on Old Santa Fe Trail. Everybody kept their door open. So you could crack your head on a problem. And if you finally didn't get it, you could walk in to see Stuart Kauffman or any number of people and say, I don't get this. Can you explain? And one of the people that I was talking to about complex adaptive systems was Murray Gell-Mann. And I told Murray what Harold Cohen had done. And I said, you know, this sounds to me like a complex adaptive system. And he said, yeah, it is. Well, what do you know? Harold's AARON had all these kids and cousins all over the world in science and in economics and so on and so forth. I was so relieved. I thought, okay, your instincts are okay. You're doing the right thing. I didn't have the vocabulary. And that was one of the things that the Santa Fe Institute gave me. If I could have rewritten that book, no, it had just come out. I couldn't rewrite it. But I would have had a vocabulary to explain what AARON was doing. Okay. So I got really interested in what was going on at the Institute. The people were, again, bright and funny and willing to explain anything to this amateur. George Cowan, who was then the head of the Institute, said he thought it might be a nice idea if I wrote a book about the Institute. And I thought about it, and I had my eye on some other project, God knows what. And I said, I'm sorry, George. Yeah, I'd really love to do it, but it's just not going to work for me at this moment. He said, oh, too bad. I think it would make an interesting book. Well, he was right and I was wrong. I wish I'd done it. But that's interesting. I hadn't thought about that, that that was a road not taken that I wish I'd taken. Well, you know what? Just on that point, it's quite brave for you as a writer, sort of coming from a world of literature and literary thinking and historical thinking. I mean, just from that world, and bravely talking to quite, I assume, large egos in AI or in complexity. Yeah, in AI or in complexity and so on. How'd you do it? I mean, I suppose they could be intimidated by you as well because it's two different worlds coming together. I never picked up that anybody was intimidated by me. But how were you brave enough? Where did you find the guts to sort of... God, just dumb luck. I mean, this is an interesting rock to turn over. I'm going to write a book about it. And you know, people have enough patience with writers, if they think they're going to end up in a book, that they let you flail around and so on. Well, but they also look if the writer has, if there's a sparkle in their eye, if they get it. Yeah, sure. When were you at the Santa Fe Institute?
The time I'm talking about is 1990, 1991, 1992. But we then, because Joe was an external faculty member, were in Santa Fe every summer. We bought a house there, and I didn't have that much to do with the Institute anymore. I was writing my novels. I was doing whatever I was doing. But I loved the Institute and I loved, again, the audacity of the ideas. That really appeals to me. I think that there's this feeling, much like in great institutes of neuroscience, for example, that they're in it for the long game of understanding something fundamental about reality and nature. And that's really exciting. So if we start now to look a little bit more recently, how, you know, AI is really popular today. How is this world, you mentioned algorithmic, but in general, is the spirit of the people, the kind of conversations you hear through the grapevine and so on, is that different than the roots that you remember? No. The same kind of excitement, the same kind of, this is really going to make a difference in the world. And it will. It has. You know, a lot of folks, especially young, 20 years old or something, they think we've just found something special here. We're going to change the world tomorrow. On a time scale, do you have a sense of the time scale at which breakthroughs in AI happen? I really don't. Because look at deep learning. That was, Geoffrey Hinton came up with the algorithm in 86. But it took all these years for the technology to be good enough to actually be applicable. So no, I can't predict that at all. I can't. I wouldn't even try. Well, let me ask you to, not to try to predict, but to speak to the, you know, I'm sure in the 60s, as it continues now, there's people that think, let's call it, we can call it this fun word, the singularity. When there's a phase shift, there's some profound feeling where we're all really surprised by what's able to be achieved. I'm sure those dreams are there. I remember reading quotes in the 60s and those continued. How have your own views, maybe if you look back, about the timeline of a singularity changed? Well, I'm not a big fan of the singularity as Ray Kurzweil has presented it. How would you define the Ray Kurzweil singularity? How do you think of the singularity in those terms? If I understand Kurzweil's view, it's sort of, there's going to be this moment when machines are smarter than humans and, you know, game over. However the game over plays out. I mean, do they put us on a reservation? Do they, et cetera, et cetera. And first of all, machines are smarter than humans in some ways all over the place. And they have been since adding machines were invented. So it's not, it's not going to come like some great Oedipal crossroads, you know, where they meet each other and our offspring, like Oedipus, says, you're dead. It's just not going to happen. Yeah. So it's already game over with calculators, right? They're already able to do much better at basic arithmetic than us. But you know, there's a human like intelligence. And it's not the ones that destroy us, but you know, somebody that you can have as a friend, you can have deep connections with, that kind of passing the Turing test and beyond, those kinds of ideas. Have you dreamt of those? Oh yes, yes, yes. Those possibilities. In a book I wrote with Ed Feigenbaum, there's a little story called the geriatric robot. And how I came up with the geriatric robot is a story in itself. But here's what the geriatric robot does.
It doesn't just clean you up and feed you and wheel you out into the sun. It's great advantages. It listens. It says, tell me again about the great coup of 73. Tell me again about how awful or how wonderful your grandchildren are and so on and so forth. And it isn't hanging around to inherit your money. It isn't hanging around because it can't get any other job. This is his job. And so on and so forth. Well, I would love something like that. Yeah. I mean, for me, that deeply excites me. So I think there's a lot of us. Lex, you gotta know, it was a joke. I dreamed it up because I needed to talk to college students and I needed to give them some idea of what AI might be. And they were rolling in the aisles as I elaborated and elaborated and elaborated. When it went into the book, they took my hide off in the New York Review of Books. This is just what we have thought about these people in AI. They're inhuman. Come on, get over it. Don't you think that's a good thing for the world that AI could potentially do? I do. Absolutely. And furthermore, I'm pushing 80 now. By the time I need help like that, I also want it to roll itself in a corner and shut the fuck up. Let me linger on that point. Do you really though? Yeah, I do. Here's why. Don't you want it to push back a little bit? A little. But I have watched my friends go through the whole issue around having help in the house. And some of them have been very lucky and had fabulous help. And some of them have had people in the house who want to keep the television going on all day, who want to talk on their phones all day. No. Just roll yourself in the corner and shut the fuck up. Unfortunately, us humans, when we're assistants, we're still, even when we're assisting others, we care about ourselves more. Of course. And so you create more frustration. And a robot AI assistant can really optimize the experience for you. I was just speaking to the point, you actually bring up a very, very good point. But I was speaking to the fact that us humans are a little complicated, that we don't necessarily want a perfect servant. I don't, maybe you disagree with that, but there's a, I think there's a push and pull with humans. You're right. A little tension, a little mystery that, of course, that's really difficult for AI to get right. But I do sense, especially today with social media, that people are getting more and more lonely, even young folks, and sometimes especially young folks, that loneliness, there's a longing for connection and AI can help alleviate some of that loneliness. Some, just somebody who listens, like in person. So to speak. So to speak, yeah. So to speak. Yeah, that to me is really exciting. That is really exciting. But so if we look at that, that level of intelligence, which is exceptionally difficult to achieve actually, as the singularity or whatever, that's the human level bar, that people have dreamt of that too. Turing dreamt of it. He had a date timeline. Do you have, how have your own timeline evolved on past? I don't even think about it. You don't even think? No. Just this field has been so full of surprises for me. You're just taking in and see the fun about the basic science. Yeah. I just can't. Maybe that's because I've been around the field long enough to think, you know, don't go that way. Herb Simon was terrible about making these predictions of when this and that would happen. And he was a sensible guy. His quotes are often used, right? As a legend, yeah. Yeah. 
Do you have concerns about AI, the existential threats that many people like Elon Musk and Sam Harris and others are thinking about? Yeah. That takes up half a chapter in my book. I call it the male gaze. Hear me out. The male gaze is actually a term from film criticism. And I'm blocking on the woman who dreamed this up. But she pointed out how most movies were made from the male point of view, that women were objects, not subjects. They didn't have any agency and so on and so forth. So when Elon and his pals Hawking and so on came out saying AI is going to eat our lunch and our dinner and our midnight snack too, I thought, what? And I said to Ed Feigenbaum, these guys have always been the smartest guys on the block. And here comes something that might be smarter. Oh, let's stamp it out before it takes over. And Ed laughed. He said, I didn't think about it that way. But I did. I did. And it is the male gaze. Okay, suppose these things do have agency. Well, let's wait and see what happens. Can we imbue them with ethics? Can we imbue them with a sense of empathy? Or are they just going to be, I don't know, we've had centuries of guys like that. That's interesting, that the ego, the male gaze, is immediately threatened, and so you can't think in a patient, calm way of how the tech could evolve. Speaking of which, your '96 book, The Futures of Women, I think at the time, and certainly now, I mean, I'm sorry, maybe at the time, but I'm more cognizant of it now, is extremely relevant. You and Nancy Ramsey talk about four possible futures of women in science and tech. So if we look at the decades before and after the book was released, can you tell a history, sorry, of women in science and tech and how it has evolved? How have things changed? Where do we stand? Not enough. They have not changed enough. The way that women are ground down in computing is simply unbelievable. But what are the four possible futures for women in tech from the book? What you're really looking at are various aspects of the present. So for each of those, you could say, oh yeah, we do have backlash. Look at what's happening with abortion and so on and so forth. We have one step forward, one step back. The golden age of equality was the hardest chapter to write. And I used something from the Santa Fe Institute, which is the sandpile effect, that you drop sand very slowly onto a pile and it grows and it grows and it grows until suddenly it just breaks apart. And in a way, Me Too has done that. That was the last drop of sand that broke everything apart. That was a perfect example of the sandpile effect. And that made me feel good. It didn't change all of society, but it really woke a lot of people up. But are you in general optimistic about, maybe after Me Too, I mean, Me Too is about a very specific kind of thing. Boy, solve that and you solve everything. But are you in general optimistic about the future? Yes. I'm a congenital optimist. I can't help it. What about AI? What are your thoughts about the future of AI? Of course, I get asked, what do you worry about? And the one thing I worry about is the things we can't anticipate. There's going to be something out of left field that we will just say, we weren't prepared for that. I am generally optimistic. When I first took up being interested in AI, like most people in the field, more intelligence was like more virtue. You know, what could be bad? And in a way, I still believe that. But I realize that my notion of intelligence has broadened. 
There are many kinds of intelligence, and we need to imbue our machines with those many kinds. So you've now just finished or in the process of finishing the book that you've been working on, the memoir, how have you changed? I know it's just writing, but how have you changed the process? If you look back, what kind of stuff did it bring up to you that surprised you, looking at the entirety of it all? The biggest thing, and it really wasn't a surprise, is how lucky I was. Oh, my. To have access to the beginning of a scientific field that is going to change the world. How did I luck out? And yes, of course, my view of things has widened a lot. If I can get back to one feminist part of our conversation. Without knowing it, it really was subconscious. I wanted AI to succeed because I was so tired of hearing that intelligence was inside the male cranium. And I thought if there was something out there that wasn't a male thinking and doing well, then that would put a lie to this whole notion of intelligence resides in the male cranium. I did not know that until one night Harold Cohen and I were having a glass of wine, maybe two, and he said, what drew you to AI? And I said, oh, you know, smartest people I knew, great project, blah, blah, blah. And I said, and I wanted something besides male smarts. And it just bubbled up out of me like, what? It's kind of brilliant, actually. So AI really humbles all of us and humbles the people that need to be humbled the most. Let's hope. Wow. That is so beautiful. Pamela, thank you so much for talking to me. It's really a huge honor. It's been a great pleasure. Thank you.
Pamela McCorduck: Machines Who Think and the Early Days of AI | Lex Fridman Podcast #34
The following is a conversation with Jeremy Howard. He's the founder of FastAI, a research institute dedicated to making deep learning more accessible. He's also a distinguished research scientist at the University of San Francisco, a former president of Kaggle, as well as a top ranking competitor there. And in general, he's a successful entrepreneur, educator, researcher, and an inspiring personality in the AI community. When someone asks me, how do I get started with deep learning? FastAI is one of the top places that point them to. It's free, it's easy to get started, it's insightful and accessible, and if I may say so, it has very little BS that can sometimes dilute the value of educational content on popular topics like deep learning. FastAI has a focus on practical application of deep learning and hands on exploration of the cutting edge that is incredibly both accessible to beginners and useful to experts. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. And now, here's my conversation with Jeremy Howard. What's the first program you ever written? First program I wrote that I remember would be at high school. I did an assignment where I decided to try to find out if there were some better musical scales than the normal 12 tone, 12 interval scale. So I wrote a program on my Commodore 64 in basic that searched through other scale sizes to see if it could find one where there were more accurate harmonies. Like mid tone? Like you want an actual exactly three to two ratio or else with a 12 interval scale, it's not exactly three to two, for example. So that's well tempered as they say in there. And basic on a Commodore 64. Where was the interest in music from? Or is it just technical? I did music all my life. So I played saxophone and clarinet and piano and guitar and drums and whatever. How does that thread go through your life? Where's music today? It's not where I wish it was. For various reasons, couldn't really keep it going, particularly because I had a lot of problems with RSI with my fingers. And so I had to kind of like cut back anything that used hands and fingers. I hope one day I'll be able to get back to it health wise. So there's a love for music underlying it all. Yeah. What's your favorite instrument? Saxophone. Sax. Or baritone saxophone. Well, probably bass saxophone, but they're awkward. Well, I always love it when music is coupled with programming. There's something about a brain that utilizes those that emerges with creative ideas. So you've used and studied quite a few programming languages. Can you give an overview of what you've used? What are the pros and cons of each? Well, my favorite programming environment, well, most certainly was Microsoft Access back in like the earliest days. So that was Visual Basic for applications, which is not a good programming language, but the programming environment was fantastic. It's like the ability to create, you know, user interfaces and tie data and actions to them and create reports and all that as I've never seen anything as good. There's things nowadays like Airtable, which are like small subsets of that, which people love for good reason, but unfortunately, nobody's ever achieved anything like that. What is that? If you could pause on that for a second. Oh, Access? Is it a database? 
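To make the scale-search idea Jeremy describes a moment earlier a little more concrete, here is a rough Python sketch of the same experiment: for each number of equal steps in an octave, check how closely the best step approximates a pure 3:2 fifth. The function name, the range of scale sizes, and the printed summary are all illustrative choices, not a reconstruction of his original Commodore 64 BASIC program.

```python
# Rough sketch of the scale-search idea: for each equal-tempered scale size,
# find the step that best approximates a just 3:2 fifth and report the error.
# This is an illustrative reconstruction, not the original BASIC program.

def best_fifth_error(n_steps: int) -> float:
    """Smallest relative error between any step of an n-step equal scale and 3/2."""
    target = 1.5  # a just perfect fifth
    ratios = [2 ** (k / n_steps) for k in range(1, n_steps + 1)]
    return min(abs(r - target) / target for r in ratios)

if __name__ == "__main__":
    for n in range(5, 54):
        err = best_fifth_error(n)
        print(f"{n:2d}-tone scale: best fifth off by {err:.4%}")
    # The familiar 12-tone scale comes out around 0.11% off a true 3:2;
    # larger divisions such as 53 do noticeably better, which is roughly
    # the kind of result such a search turns up.
```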
It was a database program that Microsoft produced, part of Office, and they kind of let it wither, you know, but basically it lets you in a totally graphical way create tables and relationships and queries and tie them to forms and set up, you know, event handlers and calculations. And it was a very complete, powerful system designed for not massive scalable things, but for like useful little applications that I loved. So what's the connection between Excel and Access? So very close. So Access kind of was the relational database equivalent, if you like. So people still do a lot of that stuff that should be in Access in Excel as they know it. Excel's great as well. But it's just not as rich a programming model as VBA combined with a relational database. And so I've always loved relational databases, but today programming on top of a relational database is just a lot more of a headache. You know, you generally either need to kind of, you know, you need something that connects to, that runs some kind of database server, unless you use SQLite, which has its own issues. Then you kind of often, if you want to get a nice programming model, you'll need to like create, add an ORM on top. And then, I don't know, there's all these pieces to tie together and it's just a lot more awkward than it should be. There are people that are trying to make it easier. So in particular, I think of F#, you know, Don Syme, who, him and his team have done a great job of making something like a database appear in the type system. So you actually get like tab completion for fields and tables and stuff like that. Anyway, so that whole VBA Office thing, I guess, was a starting point, which I still miss. And I got into standard Visual Basic, which... That's interesting, just to pause on that for a second. It's interesting that you're connecting programming languages to the ease of management of data. Yeah. So in your use of programming languages, you always had a love and a connection with data. I've always been interested in doing useful things for myself and for others, which generally means getting some data and doing something with it and putting it out there again. So that's been my interest throughout. So I also did a lot of stuff with AppleScript back in the early days. So it's kind of nice being able to get the computer and computers to talk to each other and to do things for you. And then I think that the programming language I most loved then would have been Delphi, which was Object Pascal, created by Anders Hejlsberg, who previously did Turbo Pascal and then went on to create .NET and then went on to create TypeScript. Delphi was amazing because it was like a compiled, fast language that was as easy to use as Visual Basic. Delphi, what is it similar to in more modern languages? Visual Basic. Visual Basic. Yeah, but a compiled, fast version. So I'm not sure there's anything quite like it anymore. If you took like C# or Java and got rid of the virtual machine and replaced it with something you could compile to a small, tight binary. I feel like it's where Swift could get to with the new SwiftUI and the cross platform development going on. Like that's one of my dreams, that we'll hopefully get back to where Delphi was. There is actually a Free Pascal project nowadays called Lazarus, which is also attempting to kind of recreate Delphi. So they're making good progress. So, okay, Delphi, that's one of your favorite programming languages. Well, it's programming environments. 
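For readers who have not programmed against a relational database, the following is a minimal sketch of the workflow being contrasted with Access above, using Python's built-in sqlite3 module (SQLite being the "no server required" option mentioned there). The table, columns, and data are made up purely for illustration.

```python
# A minimal, self-contained sketch of working with a relational database from code,
# using Python's built-in sqlite3 module. Table and column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, country TEXT)")
conn.executemany(
    "INSERT INTO customers (name, country) VALUES (?, ?)",
    [("Ada", "AU"), ("Grace", "US"), ("Alan", "UK")],
)

# The query itself is pleasant enough; the awkward part is that nothing here is
# typed or tab-completable the way an Access form or an F# type provider would be.
for row in conn.execute("SELECT name FROM customers WHERE country = ?", ("AU",)):
    print(row[0])

conn.close()
```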
Again, I'd say Pascal's not a nice language. If you wanted to know specifically about what languages I like, I would definitely pick J as being an amazingly wonderful language. What's J? J, are you aware of APL? I am not, except from doing a little research on the work you've done. Okay, so not at all surprising you're not familiar with it, because it's not well known, but it's actually one of the main families of programming languages going back to the late 50s, early 60s. So there were a couple of major directions. One was the kind of lambda calculus, Alonzo Church direction, which I guess kind of Lisp and Scheme and whatever, which has a history going back to the early days of computing. The second was the kind of imperative slash OO, Algol-style direction going on to C, C++ and so forth. There was a third, which are called array oriented languages, which started with a paper by a guy called Ken Iverson, which was actually a math theory paper, not a programming paper. It was called Notation as a Tool for Thought. And it was the development of a new way, a new type of math notation. And the idea is that this math notation was much more flexible, expressive, and also well defined than traditional math notation, which is none of those things. Math notation is awful. And so he actually turned that into a programming language, and because this was the early 50s, or sorry, late 50s, all the names were available. So he called his language A Programming Language, or APL. APL. So APL is an implementation of notation as a tool for thought, by which he means math notation. And Ken and his son went on to do many things, but eventually they actually produced a new language that was built on top of all the learnings of APL. And that was called J. And J is the most expressive, composable, beautifully designed language I've ever seen. Does it have object oriented components? Does it have that kind of thing? Not really, it's an array oriented language. It's the third path. Are you saying array? Array oriented, yeah. What does it mean to be array oriented? So array oriented means that you generally don't use any loops, but the whole thing is done with kind of an extreme version of broadcasting, if you're familiar with that NumPy slash Python concept. So you do a lot with one line of code. It looks a lot like math notation, highly compact. And the idea is that, because you can do so much with one line of code, a single screen of code is very unlikely to, you very rarely need more than that to express your program. And so you can kind of keep it all in your head and you can kind of clearly communicate it. It's interesting that APL created two main branches, K and J. J is this kind of like open source, niche community of crazy enthusiasts like me. And then the other path, K, was fascinating. It's an astonishingly expensive programming language, which many of the world's most ludicrously rich hedge funds use. So the entire K machine is so small it sits inside level three cache on your CPU. And it easily wins every benchmark I've ever seen in terms of data processing speed. But you don't come across it very much because it's like $100,000 per CPU to run it. It's like this path of programming languages is just so much, I don't know, so much more powerful in every way than the ones that almost anybody uses every day. So it's all about computation. It's really focused on computation. It's pretty heavily focused on computation. I mean, so much of programming is data processing by definition. 
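J syntax is hard to convey in a sentence, but the loop-free, array-oriented flavour can be hinted at with the NumPy broadcasting concept Jeremy name-checks above. A small illustrative sketch follows; the arrays and numbers are arbitrary.

```python
# A small taste of the array-oriented style: no explicit loops, one expression
# per idea, relying on NumPy broadcasting rather than J itself.
import numpy as np

prices = np.array([3.0, 4.5, 10.0, 7.25])       # one price per product
quantities = np.array([[2, 0, 1, 3],             # one row per order
                       [0, 5, 0, 1]])

# Revenue per order: an elementwise multiply broadcast across rows, then a row sum.
revenue = (quantities * prices).sum(axis=1)
print(revenue)                                   # [37.75 29.75]

# Pairwise differences between two vectors in one line, by broadcasting a column
# against a row, the kind of thing that takes nested loops in scalar-style code.
a, b = np.array([1, 2, 3]), np.array([10, 20])
print(a[:, None] - b[None, :])                   # shape (3, 2)
```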
So there's a lot of things you can do with it. But yeah, there's not much work being done on making like user interface toolkits or whatever. I mean, there's some, but they're not great. At the same time, you've done a lot of stuff with Perl and Python. So where does that fit into the picture of J and K and APL? Well, it's just much more pragmatic. Like in the end, you kind of have to end up where the libraries are, you know? Like, cause to me, my focus is on productivity. I just want to get stuff done and solve problems. So Perl was great. I created an email company called FastMail and Perl was great cause back in the late nineties, early two thousands, it just had a lot of stuff it could do. I still had to write my own monitoring system and my own web framework, my own whatever, cause like none of that stuff existed. But it was a super flexible language to do that in. And you used Perl for FastMail, you used it as a backend? Like so everything was written in Perl? Yeah, yeah, everything, everything was Perl. Why do you think Perl hasn't succeeded or hasn't dominated the market where Python really takes over a lot of the tasks? Well, I mean, Perl did dominate. It was everything, everywhere, but then the guy that ran Perl, Larry Wall, kind of just didn't put the time in anymore. And no project can be successful if there isn't, you know, particularly one that started with a strong leader that loses that strong leadership. So then Python has kind of replaced it. You know, Python is a lot less elegant language in nearly every way, but it has the data science libraries and a lot of them are pretty great. So I kind of use it cause it's the best we have, but it's definitely not good enough. But what do you think the future of programming looks like? What do you hope the future of programming looks like if we zoom in on the computational fields, on data science, on machine learning? I hope Swift is successful because the goal of Swift, the way Chris Lattner describes it, is to be infinitely hackable. And that's what I want. I want something where me and the people I do research with and my students can look at and change everything from top to bottom. There's nothing mysterious and magical and inaccessible. Unfortunately with Python, it's the opposite of that, because Python is so slow. It's extremely unhackable. You get to a point where it's like, okay, from here on down, it's C. So your debugger doesn't work in the same way. Your profiler doesn't work in the same way. Your build system doesn't work in the same way. It's really not very hackable at all. What's the part you'd like to be hackable? Is it for the objective of optimizing training of neural networks, inference of neural networks? Is it performance of the system or is there some non performance related, just? It's everything. I mean, in the end, I want to be productive as a practitioner. So that means that, so like at the moment, our understanding of deep learning is incredibly primitive. There's very little we understand. Most things don't work very well, even though it works better than anything else out there. There's so many opportunities to make it better. So you look at any domain area, like, I don't know, speech recognition with deep learning or natural language processing classification with deep learning or whatever. Every time I look at an area with deep learning, I always see like, oh, it's terrible. There's lots and lots of obviously stupid ways to do things that need to be fixed. 
So then I want to be able to jump in there and quickly experiment and make them better. You think the programming language has a role in that? Huge role, yeah. So currently, Python has a big gap in terms of our ability to innovate, particularly around recurrent neural networks and natural language processing. Because it's so slow, the actual loop where we actually loop through words, we have to do that whole thing in CUDA C. So we actually can't innovate with the kernel, the heart of that most important algorithm. And it's just a huge problem. And this happens all over the place. So we hit research limitations. Another example, convolutional neural networks, which are actually the most popular architecture for lots of things, maybe most things in deep learning. We almost certainly should be using sparse convolutional neural networks, but only like two people are, because to do it, you have to rewrite all of that CUDA C level stuff. And yeah, just researchers and practitioners don't. So there's just big gaps in what people actually research on, what people actually implement, because of the programming language problem. So you think it's just too difficult to write in CUDA C, and a higher level programming language like Swift should enable easier fooling around, creative stuff with RNNs or with sparse convolutional neural networks? Kind of. Who's at fault? Who's in charge of making it easy for a researcher to play around? I mean, no one's at fault, just nobody's got around to it yet, or it's just, it's hard, right? And I mean, part of the fault is that we ignored that whole APL kind of direction. Nearly everybody did, for 60 years, 50 years. But recently people have been starting to reinvent pieces of that and kind of create some interesting new directions in the compiler technology. So the place where that's particularly happening right now is something called MLIR, which is something that, again, Chris Lattner, the Swift guy, is leading. And yeah, because it's actually not gonna be Swift on its own that solves this problem, because the problem is that currently writing an acceptably fast, you know, GPU program is too complicated, regardless of what language you use. Right. And that's just because if you have to deal with the fact that I've got, you know, 10,000 threads and I have to synchronize between them all and I have to put my thing into grid blocks and think about warps and all this stuff, it's just so much boilerplate that to do that well, you have to be a specialist at that, and it's gonna be a year's work to, you know, optimize that algorithm in that way. But with things like Tensor Comprehensions and Tile and MLIR and TVM, there's all these various projects which are all about saying, let's let people create like domain specific languages for tensor computations. These are the kinds of things we do generally on the GPU for deep learning, and then have a compiler which can optimize that tensor computation. A lot of this work is actually sitting on top of a project called Halide, which is a mind blowing project where they came up with such a domain specific language. In fact, two: one domain specific language for expressing what my tensor computation is, and another domain specific language for expressing the way I want you to structure the compilation of that, like do it block by block and do these bits in parallel. And they were able to show how you can compress the amount of code by 10X compared to optimized GPU code and get the same performance. 
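As a rough illustration of the bottleneck described above, here is what the per-timestep loop of a recurrent network looks like when written naively in Python with PyTorch. The sizes are arbitrary placeholders; the point is that this loop is easy to hack on but launches many small kernels, which is why the fast versions end up hand-written in CUDA C instead.

```python
# A deliberately naive sketch of the "loop through words" discussed above:
# stepping a recurrent cell over a sequence one step at a time in plain Python.
# Flexible and easy to modify, but slow, since every pass launches separate kernels.
import torch
import torch.nn as nn

batch, seq_len, n_in, n_hidden = 8, 50, 32, 64
cell = nn.RNNCell(n_in, n_hidden)
x = torch.randn(seq_len, batch, n_in)

h = torch.zeros(batch, n_hidden)
outputs = []
for t in range(seq_len):          # the per-timestep Python loop
    h = cell(x[t], h)             # trivial to tweak (add attention, change the cell...)
    outputs.append(h)
outputs = torch.stack(outputs)
print(outputs.shape)              # torch.Size([50, 8, 64])
```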
So that's like, so these other things are kind of sitting on top of that kind of research, and MLIR is pulling a lot of those best practices together. And now we're starting to see work done on making all of that directly accessible through Swift, so that I could use Swift to kind of write those domain specific languages, and hopefully we'll get then Swift CUDA kernels written in a very expressive and concise way that looks a bit like J and APL, and then Swift layers on top of that, and then a Swift UI on top of that. And it'll be so nice if we can get to that point. Now does it all eventually boil down to CUDA and NVIDIA GPUs? Unfortunately at the moment it does, but one of the nice things about MLIR, if AMD ever gets their act together, which they probably won't, is that they or others could write MLIR backends for other GPUs, or rather tensor computation devices, of which today there are an increasing number, like Graphcore or Vertex AI or whatever. So yeah, being able to target lots of backends would be another benefit of this, and the market really needs competition, because at the moment NVIDIA is massively overcharging for their kind of enterprise class cards, because there is no serious competition, because nobody else is doing the software properly. In the cloud there is some competition, right? But... Not really, other than TPUs perhaps, but TPUs are almost unprogrammable at the moment. So TPUs have the same problem that you can't? It's even worse. So TPUs, Google actually made an explicit decision to make them almost entirely unprogrammable, because they felt that there was too much IP in there, and if they gave people direct access to program them, people would learn their secrets. So you can't actually directly program the memory in a TPU. You can't even directly create code that runs on, and that you look at on, the machine that has the TPU; it all goes through a virtual machine. So all you can really do is this kind of cookie cutter thing of like plug in high level stuff together, which is just super tedious and annoying and totally unnecessary. So tell me, if you could, the origin story of fast AI. What is the motivation, its mission, its dream? So I guess the founding story is heavily tied to my previous startup, which is a company called Enlitic, which was the first company to focus on deep learning for medicine, and I created that because I saw that was a huge opportunity. There's about a 10X shortage of the number of doctors in the world, in the developing world, that we need. I expected it would take about 300 years to train enough doctors to meet that gap. But I guess that maybe if we used deep learning for some of the analytics, we could maybe make it so you don't need as highly trained doctors. For diagnosis. For diagnosis and treatment planning. Just before we get to fast AI, where's the biggest benefit of AI in medicine that you see today? And maybe next time. Not much happening today in terms of like stuff that's actually out there, it's very early. 
But in terms of the opportunity, it's to take markets like India and China and Indonesia, which have big populations, Africa, small numbers of doctors, and provide diagnostic, particularly treatment planning and triage, kind of on device, so that if you do a test for malaria or tuberculosis or whatever, you immediately get something such that even a healthcare worker that's had a month of training can get a very high quality assessment of whether the patient might be at risk, and tell, okay, we'll send them off to a hospital. So for example, in Africa, outside of South Africa, there's only five pediatric radiologists for the entire continent. So most countries don't have any. So if your kid is sick and they need something diagnosed through medical imaging, the person, even if you're able to get medical imaging done, the person that looks at it will be a nurse at best. But actually in India, for example, and China, almost no x rays are read by anybody, by any trained professional, because they don't have enough. So if instead we had an algorithm that could take the most likely high risk 5% and triage, basically say, okay, someone needs to look at this, it would massively change what's possible with medicine in the developing world. And remember, they have, increasingly they have money. They're the developing world, they're not the poor world, they're the developing world. So they have the money. So they're building the hospitals, they're getting the diagnostic equipment, but there's no way for a very long time that they'll be able to have the expertise. Shortage of expertise, okay. And that's where the deep learning systems can step in and magnify the expertise they do have. Exactly, yeah. So you do see, just to linger a little bit longer on the interaction, do you still see the human experts at the core of these systems? Yeah, absolutely. Is there something in medicine that could be automated almost completely? I don't see the point of even thinking about that, because we have such a shortage of people. Why would we want to find a way not to use them? We have people. So the idea of, like, even from an economic point of view, if you can make them 10X more productive, getting rid of the person doesn't impact your unit economics at all. And it totally ignores the fact that there are things people do better than machines. So to me, that's not a useful way of framing the problem. I guess, just to clarify, I guess I meant there may be some problems where you can avoid even going to the expert ever, sort of maybe preventative care or some basic stuff, allowing the expert to focus on the things that really matter, you know. Well, that's what the triage would do, right? So the triage would say, okay, we're 99% sure there's nothing here. So that can be done on device and they can just say, okay, go home. So the experts are being used to look at the stuff which has some chance it's worth looking at, which most things, it's not, it's fine. Why do you think that is? Why do you think we haven't quite made progress on that yet, in terms of the scale of how much AI is applied in the medical field? Oh, there's a lot of reasons. I mean, one is it's pretty new. I only started Enlitic in like 2014. And before that, it's hard to express to what degree the medical world was not aware of the opportunities here. So I went to RSNA, which is the world's largest radiology conference. 
And I told everybody I could, you know, like I'm doing this thing with deep learning, please come and check it out. And no one had any idea what I was talking about and no one had any interest in it. So like we've come from absolute zero, which is hard. And then the whole regulatory framework, education system, everything is just set up to think of doctoring in a very different way. So today there is a small number of people who are deep learning practitioners and doctors at the same time. And we're starting to see the first ones come out of their PhD programs. So Zak Kohane over in Boston, Cambridge has a number of students now who are data science experts, deep learning experts, and actual medical doctors. Quite a few doctors have completed our fast AI course now and are publishing papers and creating journal reading groups in the American College of Radiology. And like, it's just starting to happen, but it's gonna be a long time coming. It's gonna happen, but it's gonna be a long process. The regulators have to learn how to regulate this. They have to build guidelines. And then the lawyers at hospitals have to develop a new way of understanding that sometimes it makes sense for data to be looked at in raw form in large quantities in order to create world changing results. Yeah, so the regulation around data, all that, it sounds probably the hardest problem, but sounds reminiscent of autonomous vehicles as well. Many of the same regulatory challenges, many of the same data challenges. Yeah, I mean, funnily enough, the problem is less the regulation and more the interpretation of that regulation by lawyers in hospitals. So HIPAA, actually, does not stand for privacy. It stands for portability. It's actually meant to be a way that data can be used. And it was created with lots of gray areas, because the idea is that would be more practical and it would help people to use this legislation to actually share data in a more thoughtful way. Unfortunately, it's done the opposite, because when a lawyer sees a gray area, they say, oh, if we don't know we won't get sued, then we can't do it. So HIPAA is not exactly the problem. The problem is more that hospital lawyers are not incented to make bold decisions about data portability. Or even to embrace technology that saves lives. They more want to not get in trouble for embracing that technology. It also saves lives in a very abstract way, which is like, oh, we've been able to release these 100,000 anonymized records. I can't point to the specific person whose life that saved. I can say like, oh, we ended up with this paper which found this result, which diagnosed a thousand more people than we would have otherwise, but it's like, which ones were helped? It's very abstract. And on the counter side of that, you may be able to point to a life that was taken because of something that was. Yeah, or a person whose privacy was violated. It's like, oh, this specific person was de-identified. Or rather, re-identified. Just a fascinating topic. We're jumping around. We'll get back to fast AI, but on the question of privacy, data is the fuel for so much innovation in deep learning. What's your sense on privacy? Whether we're talking about Twitter, Facebook, YouTube, just the technologies like in the medical field that rely on people's data in order to create impact. How do we get that right, respecting people's privacy and yet creating technology that is learning from data? 
One of my areas of focus is on doing more with less data. More with less data, which, so most vendors, unfortunately, are strongly incented to find ways to require more data and more computation. So, Google and IBM being the most obvious. IBM. Yeah, so Watson. So, Google and IBM both strongly push the idea that you have to be, that they have more data and more computation and more intelligent people than anybody else. And so you have to trust them to do things because nobody else can do it. And Google's very upfront about this, like Jeff Dean has gone out there and given talks and said, our goal is to require a thousand times more computation, but less people. Our goal is to use the people that you have better and the data you have better and the computation you have better. So, one of the things that we've discovered is, or at least highlighted, is that you very, very, very often don't need much data at all. And so the data you already have in your organization will be enough to get state of the art results. So, like my starting point would be to kind of say around privacy is a lot of people are looking for ways to share data and aggregate data, but I think often that's unnecessary. They assume that they need more data than they do because they're not familiar with the basics of transfer learning, which is this critical technique for needing orders of magnitude less data. Is your sense, one reason you might wanna collect data from everyone is like in the recommender system context, where your individual, Jeremy Howard's individual data is the most useful for providing a product that's impactful for you. So, for giving you advertisements, for recommending to you movies, for doing medical diagnosis, is your sense we can build with a small amount of data, general models that will have a huge impact for most people that we don't need to have data from each individual? On the whole, I'd say yes. I mean, there are things like, you know, recommender systems have this cold start problem where, you know, Jeremy is a new customer, we haven't seen him before, so we can't recommend him things based on what else he's bought and liked with us. And there's various workarounds to that. Like in a lot of music programs, we'll start out by saying, which of these artists do you like? Which of these albums do you like? Which of these songs do you like? Netflix used to do that, nowadays they tend not to. People kind of don't like that because they think, oh, we don't wanna bother the user. So, you could work around that by having some kind of data sharing where you get my marketing record from Axiom or whatever, and try to guess from that. To me, the benefit to me and to society of saving me five minutes on answering some questions versus the negative externalities of the privacy issue doesn't add up. So, I think like a lot of the time, the places where people are invading our privacy in order to provide convenience is really about just trying to make them more money and they move these negative externalities to places that they don't have to pay for them. So, when you actually see regulations appear that actually cause the companies that create these negative externalities to have to pay for it themselves, they say, well, we can't do it anymore. So, the cost is actually too high. But for something like medicine, yeah, I mean, the hospital has my medical imaging, my pathology studies, my medical records, and also I own my medical data. So, you can, so I help a startup called Doc.ai. 
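To make the "more with less data" point above concrete, here is a minimal, generic PyTorch sketch of transfer learning: reuse an ImageNet-pretrained backbone, freeze it, and train only a small new head on whatever modest dataset you have. The model choice, class count, and training loop below are placeholders for illustration, not fast.ai's actual code.

```python
# A minimal sketch of transfer learning: start from a pretrained ImageNet backbone,
# freeze it, and train only a small new head on your own (much smaller) dataset.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
for p in model.parameters():                     # freeze the pretrained weights
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 5)    # new head for, say, 5 classes

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_one_epoch(train_loader):
    """train_loader is assumed to yield (images, labels) from your own small dataset."""
    model.train()
    for xb, yb in train_loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
```

Because only the final layer's weights are being learned, a few hundred or a few thousand labelled examples is often enough, which is the practical content of the claim that organizations usually already have all the data they need.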
One of the things Doc.ai does is that it has an app. You can connect to, you know, Sutter Health and LabCorp and Walgreens and download your medical data to your phone and then upload it again at your discretion to share it as you wish. So, with that kind of approach, we can share our medical information with the people we want to. Yeah, so control. I mean, really being able to control who you share it with and so on. Yeah. So, that has a beautiful, interesting tangent to return back to the origin story of Fast.ai. Right, so before I started Fast.ai, I spent a year researching where are the biggest opportunities for deep learning? Because I knew from my time at Kaggle in particular that deep learning had kind of hit this threshold point where it was rapidly becoming the state of the art approach in every area that looked at it. And I'd been working with neural nets for over 20 years. I knew that from a theoretical point of view, once it hit that point, it would do that in kind of just about every domain. And so I kind of spent a year researching what are the domains that's gonna have the biggest low hanging fruit in the shortest time period. I picked medicine, but there were so many I could have picked. And so there was a kind of level of frustration for me of like, okay, I'm really glad we've opened up the medical deep learning world. And today it's huge, as you know, but we can't do, I can't do everything. I don't even know, like in medicine, it took me a really long time to even get a sense of like what kind of problems do medical practitioners solve? What kind of data do they have? Who has that data? So I kind of felt like I need to approach this differently if I wanna maximize the positive impact of deep learning. Rather than me picking an area and trying to become good at it and building something, I should let people who are already domain experts in those areas and who already have the data do it themselves. So that was the reason for Fast.ai is to basically try and figure out how to get deep learning into the hands of people who could benefit from it and help them to do so in as quick and easy and effective a way as possible. Got it, so sort of empower the domain experts. Yeah, and like partly it's because like, unlike most people in this field, my background is very applied and industrial. Like my first job was at McKinsey & Company. I spent 10 years in management consulting. I spend a lot of time with domain experts. So I kind of respect them and appreciate them. And I know that's where the value generation in society is. And so I also know how most of them can't code and most of them don't have the time to invest three years in a graduate degree or whatever. So I was like, how do I upskill those domain experts? I think that would be a super powerful thing, the biggest societal impact I could have. So yeah, that was the thinking. So much of Fast.ai students and researchers and the things you teach are pragmatically minded, practically minded, figuring out ways how to solve real problems and fast. So from your experience, what's the difference between theory and practice of deep learning? Well, most of the research in the deep learning world is a total waste of time. Right, that's what I was getting at. Yeah. It's a problem in science in general. Scientists need to be published, which means they need to work on things that their peers are extremely familiar with and can recognize in advance in that area. So that means that they all need to work on the same thing. 
And so it really, and the thing they work on, there's nothing to encourage them to work on things that are practically useful. So you get just a whole lot of research, which is minor advances and stuff that's been very highly studied and has no significant practical impact. Whereas the things that really make a difference, like I mentioned transfer learning, like if we can do better at transfer learning, then it's this like world changing thing where suddenly like lots more people can do world class work with less resources and less data. But almost nobody works on that. Or another example, active learning, which is the study of like, how do we get more out of the human beings in the loop? That's my favorite topic. Yeah, so active learning is great, but it's almost nobody working on it because it's just not a trendy thing right now. You know what somebody, sorry to interrupt, you're saying that nobody is publishing on active learning, but there's people inside companies, anybody who actually has to solve a problem, they're going to innovate on active learning. Yeah, everybody kind of reinvents active learning when they actually have to work in practice because they start labeling things and they think, gosh, this is taking a long time and it's very expensive. And then they start thinking, well, why am I labeling everything? I'm only, the machine's only making mistakes on those two classes. They're the hard ones. Maybe I'll just start labeling those two classes. And then you start thinking, well, why did I do that manually? Why can't I just get the system to tell me which things are going to be hardest? It's an obvious thing to do, but yeah, it's just like transfer learning. It's understudied and the academic world just has no reason to care about practical results. The funny thing is, like I've only really ever written one paper. I hate writing papers. And I didn't even write it. It was my colleague, Sebastian Ruder, who actually wrote it. I just did the research for it, but it was basically introducing transfer learning, successful transfer learning to NLP for the first time. The algorithm is called ULM fit. And it actually, I actually wrote it for the course, for the Fast AI course. I wanted to teach people NLP and I thought, I only want to teach people practical stuff. And I think the only practical stuff is transfer learning. And I couldn't find any examples of transfer learning in NLP. So I just did it. And I was shocked to find that as soon as I did it, which, you know, the basic prototype took a couple of days, smashed the state of the art on one of the most important data sets in a field that I knew nothing about. And I just thought, well, this is ridiculous. And so I spoke to Sebastian about it and he kindly offered to write it up, the results. And so it ended up being published in ACL, which is the top computational linguistics conference. So like people do actually care once you do it, but I guess it's difficult for maybe like junior researchers or like, I don't care whether I get citations or papers or whatever. There's nothing in my life that makes that important, which is why I've never actually bothered to write a paper myself. But for people who do, I guess they have to pick the kind of safe option, which is like, yeah, make a slight improvement on something that everybody's already working on. Yeah, nobody does anything interesting or succeeds in life with the safe option. 
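The "label the hard ones" loop described above can be written down in a few lines. Below is a bare-bones uncertainty-sampling sketch using scikit-learn on synthetic data; the model, the batch size of new labels, and the number of rounds are arbitrary stand-ins for whatever a real project would use.

```python
# A bare-bones sketch of active learning by uncertainty sampling: train on the few
# labels you have, then "label" the examples the model is least confident about.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(int)                 # synthetic ground-truth labels

labelled = list(rng.choice(len(X), size=20, replace=False))
unlabelled = [i for i in range(len(X)) if i not in set(labelled)]

for round_ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labelled], y[labelled])
    probs = clf.predict_proba(X[unlabelled])
    uncertainty = 1 - probs.max(axis=1)          # low confidence = most informative
    pick = np.argsort(uncertainty)[-20:]         # request labels for the 20 hardest
    newly = [unlabelled[i] for i in pick]
    labelled += newly
    unlabelled = [i for i in unlabelled if i not in set(newly)]
    print(f"round {round_}: {len(labelled)} labels, accuracy {clf.score(X, y):.3f}")
```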
Although, I mean, the nice thing is, nowadays everybody is now working on NLP transfer learning because since that time we've had GPT and GPT2 and BERT, and, you know, it's like, it's, so yeah, once you show that something's possible, everybody jumps in, I guess, so. I hope to be a part of, and I hope to see more innovation and active learning in the same way. I think transfer learning and active learning are fascinating, public, open work. I actually helped start a startup called Platform AI, which is really all about active learning. And yeah, it's been interesting trying to kind of see what research is out there and make the most of it. And there's basically none. So we've had to do all our own research. Once again, and just as you described. Can you tell the story of the Stanford competition, Dawn Bench, and FastAI's achievement on it? Sure, so something which I really enjoy is that I basically teach two courses a year, the Practical Deep Learning for Coders, which is kind of the introductory course, and then Cutting Edge Deep Learning for Coders, which is the kind of research level course. And while I teach those courses, I basically have a big office at the University of San Francisco, big enough for like 30 people. And I invite anybody, any student who wants to come and hang out with me while I build the course. And so generally it's full. And so we have 20 or 30 people in a big office with nothing to do but study deep learning. So it was during one of these times that somebody in the group said, oh, there's a thing called Dawn Bench that looks interesting. And I was like, what the hell is that? And they set out some competition to see how quickly you can train a model. Seems kind of, not exactly relevant to what we're doing, but it sounds like the kind of thing which you might be interested in. And I checked it out and I was like, oh crap, there's only 10 days till it's over. It's too late. And we're kind of busy trying to teach this course. But we're like, oh, it would make an interesting case study for the course. It's like, it's all the stuff we're already doing. Why don't we just put together our current best practices and ideas? So me and I guess about four students just decided to give it a go. And we focused on this small one called Cifar 10, which is little 32 by 32 pixel images. Can you say what Dawn Bench is? Yeah, so it's a competition to train a model as fast as possible. It was run by Stanford. And it's cheap as possible too. That's also another one for as cheap as possible. And there was a couple of categories, ImageNet and Cifar 10. So ImageNet is this big 1.3 million image thing that took a couple of days to train. Remember a friend of mine, Pete Warden, who's now at Google. I remember he told me how he trained ImageNet a few years ago when he basically like had this little granny flat out the back that he turned into his ImageNet training center. And he figured, you know, after like a year of work, he figured out how to train it in like 10 days or something. It's like, that was a big job. Whereas Cifar 10, at that time, you could train in a few hours. You know, it's much smaller and easier. So we thought we'd try Cifar 10. And yeah, I've really never done that before. Like I'd never really, like things like using more than one GPU at a time was something I tried to avoid. Cause to me, it's like very against the whole idea of accessibility is should better do things with one GPU. 
I mean, have you asked in the past before, after having accomplished something, how do I do this faster, much faster? Oh, always, but it's always, for me, it's always how do I make it much faster on a single GPU that a normal person could afford in their day to day life. It's not how could I do it faster by, you know, having a huge data center. Cause to me, it's all about like, as many people should better use something as possible without fussing around with infrastructure. So anyways, in this case it's like, well, we can use eight GPUs just by renting a AWS machine. So we thought we'd try that. And yeah, basically using the stuff we were already doing, we were able to get, you know, the speed, you know, within a few days we had the speed down to, I don't know, a very small number of minutes. I can't remember exactly how many minutes it was, but it might've been like 10 minutes or something. And so, yeah, we found ourselves at the top of the leaderboard easily for both time and money, which really shocked me cause the other people competing in this were like Google and Intel and stuff who I like know a lot more about this stuff than I think we do. So then we were emboldened. We thought let's try the ImageNet one too. I mean, it seemed way out of our league, but our goal was to get under 12 hours. And we did, which was really exciting. But we didn't put anything up on the leaderboard, but we were down to like 10 hours. But then Google put in like five hours or something and we're just like, oh, we're so screwed. But we kind of thought, we'll keep trying. You know, if Google can do it in five, I mean, Google did on five hours on something on like a TPU pod or something, like a lot of hardware. But we kind of like had a bunch of ideas to try. Like a really simple thing was why are we using these big images? They're like 224 or 256 by 256 pixels. You know, why don't we try smaller ones? And just to elaborate, there's a constraint on the accuracy that your trained model is supposed to achieve, right? Yeah, you gotta achieve 93%, I think it was, for ImageNet, exactly. Which is very tough, so you have to. Yeah, 93%, like they picked a good threshold. It was a little bit higher than what the most commonly used ResNet 50 model could achieve at that time. So yeah, so it's quite a difficult problem to solve. But yeah, we realized if we actually just use 64 by 64 images, it trained a pretty good model. And then we could take that same model and just give it a couple of epochs to learn 224 by 224 images. And it was basically already trained. It makes a lot of sense. Like if you teach somebody, like here's what a dog looks like and you show them low res versions, and then you say, here's a really clear picture of a dog, they already know what a dog looks like. So that like just, we jumped to the front and we ended up winning parts of that competition. We actually ended up doing a distributed version over multiple machines a couple of months later and ended up at the top of the leaderboard. We had 18 minutes. ImageNet. Yeah, and it was, and people have just kept on blasting through again and again since then, so. So what's your view on multi GPU or multiple machine training in general as a way to speed code up? I think it's largely a waste of time. Both of them. I think it's largely a waste of time. Both multi GPU on a single machine and. Yeah, particularly multi machines, cause it's just clunky. 
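The small-images-first trick described above is usually called progressive resizing. A hedged sketch of it using the fastai library's high-level API follows; the function names are from fastai's documentation as I understand it, the dataset is the small Imagenette subset Jeremy mentions shortly after this, and the epoch counts are placeholders rather than the actual DawnBench recipe.

```python
# A sketch of progressive resizing: train quickly on small 64px images, then swap in
# 224px dataloaders and fine-tune briefly with the same weights. Illustrative only.
from fastai.vision.all import *

path = untar_data(URLs.IMAGENETTE_160)           # any ImageNet-style folder works

small_dls = ImageDataLoaders.from_folder(path, valid='val', item_tfms=Resize(64))
learn = vision_learner(small_dls, resnet50, metrics=accuracy)
learn.fit_one_cycle(5)                           # most of the learning happens here, cheaply

big_dls = ImageDataLoaders.from_folder(path, valid='val', item_tfms=Resize(224))
learn.dls = big_dls                              # same model, bigger images
learn.fit_one_cycle(2)                           # a couple of epochs to adapt to full size
```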
Multi GPUs is less clunky than it used to be, but to me anything that slows down your iteration speed is a waste of time. So you could maybe do your very last, you know, perfecting of the model on multi GPUs if you need to, but. So for example, I think doing stuff on ImageNet is generally a waste of time. Why test things on 1.3 million images? Most of us don't use 1.3 million images. And we've also done research that shows that doing things on a smaller subset of images gives you the same relative answers anyway. So from a research point of view, why waste that time? So actually I released a couple of new data sets recently. One is called Imagenette, the French ImageNet, which is a small subset of ImageNet which is designed to be easy to classify. How do you spell Imagenette? It's got an extra T and E at the end, cause it's very French. And then another one called Imagewoof, which is a subset of ImageNet that only contains dog breeds. And that's a hard one, right? That's a hard one. And I've discovered that if you just look at these two subsets, you can train things on a single GPU in 10 minutes. And the results you get are directly transferable to ImageNet nearly all the time. And so now I'm starting to see some researchers start to use these much smaller data sets. I so deeply love the way you think, because I think you might've written a blog post saying that sort of going after these big data sets is encouraging people to not think creatively. Absolutely. So it sort of constrains you to train on large resources. And because you have these resources, you think more resources will be better. And then somehow you kill the creativity. Yeah, and even worse than that, Lex, I keep hearing from people who say, I decided not to get into deep learning because I don't believe it's accessible to people outside of Google to do useful work. So like I see a lot of people make an explicit decision to not learn this incredibly valuable tool because they've drunk the Google Kool-Aid, which is that only Google's big enough and smart enough to do it. And I just find that so disappointing and it's so wrong. And I think all of the major breakthroughs in AI in the next 20 years will be doable on a single GPU. Like I would say, my sense is all the big sort of. Well, let's put it this way. None of the big breakthroughs of the last 20 years have required multiple GPUs. So like batch norm, ReLU, dropout. To demonstrate that there's something to them. Every one of them, none of them has required multiple GPUs. GANs, the original GANs, didn't require multiple GPUs. Well, and we've actually recently shown that you don't even need GANs. So we've developed GAN level outcomes without needing GANs. And we can now do it with, again, by using transfer learning, we can do it in a couple of hours on a single GPU. You're just using a generator model without the adversarial part? Yeah, so we've found loss functions that work super well without the adversarial part. And then one of our students, a guy called Jason Antic, has created a system called DeOldify, which uses this technique to colorize old black and white movies. You can do it on a single GPU, colorize a whole movie in a couple of hours. And one of the things that Jason and I did together was we figured out how to add a little bit of GAN at the very end, which it turns out for colorization makes it just a bit brighter and nicer. 
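The "loss functions that work without the adversarial part" mentioned above are, broadly speaking, perceptual or feature losses: compare the generated image and the target in the feature space of a frozen pretrained network rather than asking a discriminator. The sketch below shows only that generic flavour; the actual loss used in fast.ai's super-resolution and DeOldify work combines several such terms, so treat this as illustrative.

```python
# A generic sketch of a perceptual / feature loss: no GAN, just a pixel loss plus a
# distance between activations of a frozen pretrained network. Sizes are arbitrary.
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad = False                      # the loss network stays fixed

def feature_loss(generated, target):
    """Pixel-space L1 plus an L1 distance between mid-level VGG activations."""
    return F.l1_loss(generated, target) + F.l1_loss(vgg(generated), vgg(target))

# Usage inside any image-to-image training loop (colourisation, super-resolution...):
fake = torch.rand(4, 3, 224, 224, requires_grad=True)   # stand-in for a generator output
real = torch.rand(4, 3, 224, 224)
loss = feature_loss(fake, real)
loss.backward()
```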
And then Jason did masses of experiments to figure out exactly how much to do, but it's still all done on his home machine on a single GPU in his lounge room. And if you think about colorizing Hollywood movies, that sounds like something a huge studio would have to do, but he has the world's best results on this. There's this problem of microphones. We're just talking to microphones now. It's such a pain in the ass to have these microphones to get good quality audio. And I tried to see if it's possible to plop down a bunch of cheap sensors and reconstruct higher quality audio from multiple sources. Because right now I haven't seen the work from, okay, we can say even expensive mics, automatically combining audio from multiple sources to improve the combined audio. People haven't done that. And that feels like a learning problem. So hopefully somebody can. Well, I mean, it's evidently doable and it should have been done by now. I felt the same way about computational photography four years ago. Why are we investing in big lenses when three cheap lenses, plus actually a little bit of intentional movement, so like take a few frames, gives you enough information to get excellent subpixel resolution, which particularly with deep learning, you would know exactly what you meant to be looking at. We can totally do the same thing with audio. I think it's madness that it hasn't been done yet. Is there progress on the photography side? Yeah, photography is basically standard now. So the Google Pixel Night Sight, I don't know if you've ever tried it, but it's astonishing. You take a picture in almost pitch black and you get back a very high quality image. And it's not because of the lens. Same stuff with like adding the bokeh to the background blurring, it's done computationally. This is the Pixel right here. Yeah, basically everybody now is doing most of the fanciest stuff on their phones with computational photography, and also increasingly people are putting more than one lens on the back of the camera. So the same will happen for audio, for sure. And there's applications in the audio side. If you look at an Alexa type device, most people I've seen, especially I worked at Google before, when you look at noise background removal, you don't think of multiple sources of audio. You don't play with that as much as I would hope people would. But I mean, you can still do it even with one. Like again, not much work's been done in this area. So we're actually gonna be releasing an audio library soon, which hopefully will encourage development of this, because it's so underused. The basic approach we used for our super resolution, and which Jason uses for DeOldify, of generating high quality images, the exact same approach would work for audio. No one's done it yet, but it would be a couple of months' work. Okay, also learning rate in terms of DawnBench. There's some magic on learning rate that you played around with that's kind of interesting. Yeah, so this is all work that came from a guy called Leslie Smith. Leslie's a researcher who, like us, cares a lot about just the practicalities of training neural networks quickly and accurately, which I think is what everybody should care about, but almost nobody does. And he discovered something very interesting, which he calls super convergence, which is that there are certain networks that with certain settings of hyperparameters could suddenly be trained 10 times faster by using a 10 times higher learning rate. 
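The way super convergence is usually packaged in practice is Leslie Smith's one-cycle policy: warm the learning rate up from a small value to an unusually large peak, then anneal it back down. An off-the-shelf version ships with PyTorch, and a minimal sketch follows, with the model, data, and specific numbers as placeholders only.

```python
# A minimal sketch of the one-cycle learning-rate schedule associated with super
# convergence: ramp up to a high peak rate, then decay. All numbers are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 50), nn.ReLU(), nn.Linear(50, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

epochs, steps_per_epoch = 5, 100
sched = torch.optim.lr_scheduler.OneCycleLR(
    opt, max_lr=1.0,                      # a "too high" peak rate that one-cycle makes usable
    epochs=epochs, steps_per_epoch=steps_per_epoch,
    pct_start=0.3,                        # fraction of training spent ramping up
)

loss_fn = nn.CrossEntropyLoss()
for epoch in range(epochs):
    for step in range(steps_per_epoch):   # stand-in for iterating a real DataLoader
        xb, yb = torch.randn(32, 10), torch.randint(0, 2, (32,))
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
        sched.step()                      # the schedule advances every batch, not every epoch
```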
Now, no one published that paper because it's not an area of kind of active research in the academic world. No academics recognize that this is important. And also deep learning in academia is not considered a experimental science. So unlike in physics where you could say like, I just saw a subatomic particle do something which the theory doesn't explain, you could publish that without an explanation. And then in the next 60 years, people can try to work out how to explain it. We don't allow this in the deep learning world. So it's literally impossible for Leslie to publish a paper that says, I've just seen something amazing happen. This thing trained 10 times faster than it should have. I don't know why. And so the reviewers were like, well, you can't publish that because you don't know why. So anyway. That's important to pause on because there's so many discoveries that would need to start like that. Every other scientific field I know of works that way. I don't know why ours is uniquely disinterested in publishing unexplained experimental results, but there it is. So it wasn't published. Having said that, I read a lot more unpublished papers than published papers because that's where you find the interesting insights. So I absolutely read this paper. And I was just like, this is astonishingly mind blowing and weird and awesome. And like, why isn't everybody only talking about this? Because like, if you can train these things 10 times faster, they also generalize better because you're doing less epochs, which means you look at the data less, you get better accuracy. So I've been kind of studying that ever since. And eventually Leslie kind of figured out a lot of how to get this done. And we added minor tweaks. And a big part of the trick is starting at a very low learning rate, very gradually increasing it. So as you're training your model, you would take very small steps at the start and you gradually make them bigger and bigger until eventually you're taking much bigger steps than anybody thought was possible. There's a few other little tricks to make it work, but basically we can reliably get super convergence. And so for the Dawn Bench thing, we were using just much higher learning rates than people expected to work. What do you think the future of, I mean, it makes so much sense for that to be a critical hyperparameter learning rate that you vary. What do you think the future of learning rate magic looks like? Well, there's been a lot of great work in the last 12 months in this area. And people are increasingly realizing that optimize, like we just have no idea really how optimizers work. And the combination of weight decay, which is how we regularize optimizers, and the learning rate, and then other things like the epsilon we use in the Adam optimizer, they all work together in weird ways. And different parts of the model, this is another thing we've done a lot of work on is research into how different parts of the model should be trained at different rates in different ways. So we do something we call discriminative learning rates, which is really important, particularly for transfer learning. So really, I think in the last 12 months, a lot of people have realized that all this stuff is important. There's been a lot of great work coming out and we're starting to see algorithms appear, which have very, very few dials, if any, that you have to touch. So I think what's gonna happen is the idea of a learning rate, well, it almost already has disappeared in the latest research. 
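The schedule being described, start with a tiny learning rate, ramp it up, then anneal it back down, is what Leslie Smith called the one-cycle policy, and it is what makes the unusually high peak learning rates workable. Here is a minimal sketch using PyTorch's built-in OneCycleLR scheduler; the toy model, the synthetic data, and the peak learning rate of 1e-2 are placeholder assumptions you would tune (for example with a learning rate finder), not values taken from DawnBench.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Tiny placeholder problem: 1,000 random 20-d points, 3 classes.
data = TensorDataset(torch.randn(1000, 20), torch.randint(0, 3, (1000,)))
train_loader = DataLoader(data, batch_size=64, shuffle=True)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))

epochs = 5
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
sched = torch.optim.lr_scheduler.OneCycleLR(
    opt, max_lr=1e-2,                        # peak LR, higher than the "safe" default
    steps_per_epoch=len(train_loader), epochs=epochs)

loss_fn = nn.CrossEntropyLoss()
for _ in range(epochs):
    for x, y in train_loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        sched.step()    # LR ramps up early in training, then anneals back down
```

The key design choice is that the scheduler is stepped every batch, not every epoch, so the warm-up and annealing happen smoothly across the whole run.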
And instead, it's just like we know enough about how to interpret the gradients and the change of gradients we see to know how to set every parameter in an optimal way. So you see the future of deep learning where really, where's the input of a human expert needed? Well, hopefully the input of a human expert will be almost entirely unneeded from the deep learning point of view. So again, like Google's approach to this is to try and use thousands of times more compute to run lots and lots of models at the same time and hope that one of them is good. AutoML kind of thing? Yeah, AutoML kind of stuff, which I think is insane. When you better understand the mechanics of how models learn, you don't have to try a thousand different models to find which one happens to work the best. You can just jump straight to the best one, which means that it's more accessible in terms of compute, cheaper, and also with less hyperparameters to set, it means you don't need deep learning experts to train your deep learning model for you, which means that domain experts can do more of the work, which means that now you can focus the human time on the kind of interpretation, the data gathering, identifying model errors and stuff like that. Yeah, the data side. How often do you work with data these days in terms of the cleaning, looking at it? Like Darwin looked at different species while traveling about. Do you look at data? Have you in your roots in Kaggle? Always, yeah. Look at data. Yeah, I mean, it's a key part of our course. It's like before we train a model in the course, we see how to look at the data. And then the first thing we do after we train our first model, which we fine tune an ImageNet model for five minutes. And then the thing we immediately do after that is we learn how to analyze the results of the model by looking at examples of misclassified images and looking at a classification matrix, and then doing research on Google to learn about the kinds of things that it's misclassifying. So to me, one of the really cool things about machine learning models in general is that when you interpret them, they tell you about things like what are the most important features, which groups are you misclassifying, and they help you become a domain expert more quickly because you can focus your time on the bits that the model is telling you is important. So it lets you deal with things like data leakage, for example, if it says, oh, the main feature I'm looking at is customer ID. And you're like, oh, customer ID should be predictive. And then you can talk to the people that manage customer IDs and they'll tell you like, oh yes, as soon as a customer's application is accepted, we add a one on the end of their customer ID or something. So yeah, looking at data, particularly from the lens of which parts of the data the model says is important is super important. Yeah, and using the model to almost debug the data to learn more about the data. Exactly. What are the different cloud options for training your own networks? Last question related to DawnBench. Well, it's part of a lot of the work you do, but from a perspective of performance, I think you've written this in a blog post. There's AWS, there's TPU from Google. What's your sense? What the future holds? What would you recommend now in terms of training? So from a hardware point of view, Google's TPUs and the best Nvidia GPUs are similar. I mean, maybe the TPUs are like 30% faster, but they're also much harder to program. 
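As a concrete version of the fine-tune-and-inspect workflow described a moment ago: fine-tune an ImageNet-pretrained model on a small dataset such as Imagenette, then study the confusion matrix and the most confidently wrong predictions. This is a sketch with the fastai library; exact names (vision_learner versus the older cnn_learner, the URLs constants) shift between fastai versions, so treat it as an approximation of the course workflow rather than verbatim course code.

```python
from fastai.vision.all import *

# Imagenette: the small, easy-to-classify ImageNet subset mentioned earlier.
path = untar_data(URLs.IMAGENETTE_160)
dls = ImageDataLoaders.from_folder(path, valid='val', item_tfms=Resize(160))

learn = vision_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(5)                    # minutes, not hours, on one GPU
# learn.fit_one_cycle(3, lr_max=slice(1e-5, 1e-3))  # discriminative LRs across layer groups

# Study what the model gets wrong before trusting it.
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
interp.plot_top_losses(9)             # the most confidently wrong examples
```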
There isn't a clear leader in terms of hardware right now, although much more importantly, the Nvidia GPUs are much more programmable. They've got much more written for all of them. So like that's the clear leader for me and where I would spend my time as a researcher and practitioner. But then in terms of the platform, I mean, we're super lucky now with stuff like Google GCP, Google Cloud, and AWS that you can access a GPU pretty quickly and easily. But I mean, for AWS, it's still too hard. Like you have to find an AMI and get the instance running and then install the software you want and blah, blah, blah. GCP is currently the best way to get started on a full server environment because they have a fantastic fast AI in PyTorch ready to go instance, which has all the courses preinstalled. It has Jupyter Notebook pre running. Jupyter Notebook is this wonderful interactive computing system, which everybody basically should be using for any kind of data driven research. But then even better than that, there are platforms like Salamander, which we own and Paperspace, where literally you click a single button and it pops up a Jupyter Notebook straight away without any kind of installation or anything. And all the course notebooks are all preinstalled. So like for me, this is one of the things we spent a lot of time kind of curating and working on. Because when we first started our courses, the biggest problem was people dropped out of lesson one because they couldn't get an AWS instance running. So things are so much better now. And like we actually have, if you go to course.fast.ai, the first thing it says is here's how to get started with your GPU. And there's like, you just click on the link and you click start and you're going. You'll go GCP. I have to confess, I've never used the Google GCP. Yeah, GCP gives you $300 of compute for free, which is really nice. But as I say, Salamander and Paperspace are even easier still. Okay. So from the perspective of deep learning frameworks, you work with fast.ai, if you go to this framework, and PyTorch and TensorFlow. What are the strengths of each platform in your perspective? So in terms of what we've done our research on and taught in our course, we started with Theano and Keras, and then we switched to TensorFlow and Keras, and then we switched to PyTorch, and then we switched to PyTorch and fast.ai. And that kind of reflects a growth and development of the ecosystem of deep learning libraries. Theano and TensorFlow were great, but were much harder to teach and to do research and development on because they define what's called a computational graph upfront, a static graph, where you basically have to say, here are all the things that I'm gonna eventually do in my model, and then later on you say, okay, do those things with this data. And you can't like debug them, you can't do them step by step, you can't program them interactively in a Jupyter notebook and so forth. PyTorch was not the first, but PyTorch was certainly the strongest entrant to come along and say, let's not do it that way, let's just use normal Python. And everything you know about in Python is just gonna work, and we'll figure out how to make that run on the GPU as and when necessary. That turned out to be a huge leap in terms of what we could do with our research and what we could do with our teaching. Because it wasn't limiting. Yeah, I mean, it was critical for us for something like DawnBench to be able to rapidly try things. 
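The practical difference being described is that in a define-by-run framework the forward pass is ordinary Python, so you can stop, print, and inspect tensors mid-computation instead of declaring a whole graph up front. A trivial PyTorch illustration, with arbitrary layer sizes:

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(4, 10)

h = model[0](x)                  # call the first layer like any Python function
print(h.shape, h.mean().item())  # inspect intermediate values immediately
out = model[2](torch.relu(h))
loss = out.pow(2).mean()
loss.backward()                  # the graph was built as the code ran
print(model[0].weight.grad.norm().item())
```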
It's just so much harder to be a researcher and practitioner when you have to do everything upfront and you can't inspect it. Problem with PyTorch is it's not at all accessible to newcomers because you have to like write your own training loop and manage the gradients and all this stuff. And it's also like not great for researchers because you're spending your time dealing with all this boilerplate and overhead rather than thinking about your algorithm. So we ended up writing this very multi layered API that at the top level, you can train a state of the art neural network in three lines of code. And which kind of talks to an API, which talks to an API, which talks to an API, which like you can dive into at any level and get progressively closer to the machine kind of levels of control. And this is the fast AI library. That's been critical for us and for our students and for lots of people that have won deep learning competitions with it and written academic papers with it. It's made a big difference. We're still limited though by Python. And particularly this problem with things like recurrent neural nets say where you just can't change things unless you accept it going so slowly that it's impractical. So in the latest incarnation of the course and with some of the research we're now starting to do, we're starting to do stuff, some stuff in Swift. I think we're three years away from that being super practical, but I'm in no hurry. I'm very happy to invest the time to get there. But with that, we actually already have a nascent version of the fast AI library for vision running on Swift and TensorFlow. Cause a Python for TensorFlow is not gonna cut it. It's just a disaster. What they did was they tried to replicate the bits that people were saying they like about PyTorch, this kind of interactive computation, but they didn't actually change their foundational runtime components. So they kind of added this like syntax sugar they call TF Eager, TensorFlow Eager, which makes it look a lot like PyTorch, but it's 10 times slower than PyTorch to actually do a step. So because they didn't invest the time in like retooling the foundations, cause their code base is so horribly complex. Yeah, I think it's probably very difficult to do that kind of retooling. Yeah, well, particularly the way TensorFlow was written, it was written by a lot of people very quickly in a very disorganized way. So like when you actually look in the code, as I do often, I'm always just like, Oh God, what were they thinking? It's just, it's pretty awful. So I'm really extremely negative about the potential future for Python for TensorFlow. But Swift for TensorFlow can be a different beast altogether. It can be like, it can basically be a layer on top of MLIR that takes advantage of, you know, all the great compiler stuff that Swift builds on with LLVM and yeah, I think it will be absolutely fantastic. Well, you're inspiring me to try. I haven't truly felt the pain of TensorFlow 2.0 Python. It's fine by me, but of... Yeah, I mean, it does the job if you're using like predefined things that somebody has already written. But if you actually compare, you know, like I've had to do, cause I've been having to do a lot of stuff with TensorFlow recently, you actually compare like, okay, I want to write something from scratch and you're like, I just keep finding it's like, Oh, it's running 10 times slower than PyTorch. So is the biggest cost, let's throw running time out the window. How long it takes you to program? 
That's not too different now, thanks to TensorFlow Eager, that's not too different. But because so many things take so long to run, you wouldn't run it at 10 times slower. Like you just go like, oh, this is taking too long. And also there's a lot of things which are just less programmable, like tf.data, which is the way data processing works in TensorFlow, is just this big mess. It's incredibly inefficient. And they kind of had to write it that way because of the TPU problems I described earlier. So I just, you know, I just feel like they've got this huge technical debt, which they're not going to solve without starting from scratch. So here's an interesting question then, if there's a new student starting today, what would you recommend they use? Well, I mean, we obviously recommend fast.ai and PyTorch because we teach new students and that's what we teach with. So we would very strongly recommend that because it will let you get on top of the concepts much more quickly. So then you'll become an actual, and you'll also learn the actual state of the art techniques, you know, so you actually get world class results. Honestly, it doesn't much matter what library you learn because switching from Chainer to MXNet to TensorFlow to PyTorch is gonna be a couple of days work as long as you understand the foundation as well. But do you think Swift will creep in there as a thing that people start using? Not for a few years, particularly because like Swift has no data science community, libraries, tooling. And the Swift community has a total lack of appreciation and understanding of numeric computing. So like they keep on making stupid decisions, you know, for years, they've just done dumb things around performance and prioritization. That's clearly changing now because the developer of Swift, Chris Lattner, is working at Google on Swift for TensorFlow. So like that's a priority. It'll be interesting to see what happens with Apple because like Apple hasn't shown any sign of caring about numeric programming in Swift. So I mean, hopefully they'll get off their ass and start appreciating this because currently all of their low level libraries are not written in Swift. They're not particularly Swifty at all, stuff like CoreML, they're really pretty rubbish. So yeah, so there's a long way to go. But at least one nice thing is that Swift for TensorFlow can actually directly use Python code and Python libraries, and literally the entire lesson one notebook of fast.ai runs in Swift right now in Python mode. So that's a nice intermediate thing. How long does it take? If you look at the two fast.ai courses, how long does it take to get from point zero to completing both courses? It varies a lot. Somewhere between two months and two years generally. So for two months, how many hours a day on average? So like somebody who is a very competent coder can do 70 hours per course. 70, seven zero? That's it, okay. But a lot of people I know take a year off to study fast.ai full time and say at the end of the year, they feel pretty competent because generally there's a lot of other things you do, like generally they'll be entering Kaggle competitions, they might be reading Ian Goodfellow's book, they might, they'll be doing a bunch of stuff and often, particularly if they are a domain expert, their coding skills might be a little on the pedestrian side. So part of it's just like doing a lot more writing. What do you find is the bottleneck for people usually, except getting started and setting stuff up?
I would say coding. Yeah, I would say the best, the people who are strong coders pick it up the best. Although another bottleneck is people who have a lot of experience of classic statistics can really struggle because the intuition is so the opposite of what they're used to. They're very used to like trying to reduce the number of parameters in their model and looking at individual coefficients and stuff like that. So I find people who have a lot of coding background and know nothing about statistics are generally gonna be the best off. So you taught several courses on deep learning and as Feynman says, best way to understand something is to teach it. What have you learned about deep learning from teaching it? A lot. That's a key reason for me to teach the courses. I mean, obviously it's gonna be necessary to achieve our goal of getting domain experts to be familiar with deep learning, but it was also necessary for me to achieve my goal of being really familiar with deep learning. I mean, to see so many domain experts from so many different backgrounds, it's definitely, I wouldn't say taught me, but convinced me something that I liked to believe was true, which was anyone can do it. So there's a lot of kind of snobbishness out there about only certain people can learn to code. Only certain people are gonna be smart enough like do AI, that's definitely bullshit. I've seen so many people from so many different backgrounds get state of the art results in their domain areas now. It's definitely taught me that the key differentiator between people that succeed and people that fail is tenacity. That seems to be basically the only thing that matters. A lot of people give up. But of the ones who don't give up, pretty much everybody succeeds. Even if at first I'm just kind of like thinking like, wow, they really aren't quite getting it yet, are they? But eventually people get it and they succeed. So I think that's been, I think they're both things I liked to believe was true, but I don't feel like I really had strong evidence for them to be true, but now I can say I've seen it again and again. I've seen it again and again. So what advice do you have for someone who wants to get started in deep learning? Train lots of models. That's how you learn it. So I think, it's not just me, I think our course is very good, but also lots of people independently have said it's very good. It recently won the COGx award for AI courses as being the best in the world. So I'd say come to our course, course.fast.ai. And the thing I keep on hopping on in my lessons is train models, print out the inputs to the models, print out to the outputs to the models, like study, change the inputs a bit, look at how the outputs vary, just run lots of experiments to get an intuitive understanding of what's going on. To get hooked, do you think, you mentioned training, do you think just running the models inference, like if we talk about getting started? No, you've got to fine tune the models. So that's the critical thing, because at that point you now have a model that's in your domain area. So there's no point running somebody else's model because it's not your model. So it only takes five minutes to fine tune a model for the data you care about. And in lesson two of the course, we teach you how to create your own data set from scratch by scripting Google image search. So, and we show you how to actually create a web application running online. 
So I create one in the course that differentiates between a teddy bear, a grizzly bear and a brown bear. And it does it with basically 100% accuracy, took me about four minutes to scrape the images from Google search with the script. There are little graphical widgets we have in the notebook that help you clean up the data set. There's other widgets that help you study the results to see where the errors are happening. And so now we've got over a thousand replies in our share your work here thread of students saying, here's the thing I built. And so there's people who, like, a lot of them are state of the art. Like somebody said, oh, I tried looking at Devanagari characters and I couldn't believe it. The thing that came out was more accurate than the best academic paper after lesson one. And then there's others which are just more kind of fun, like somebody who's doing Trinidad and Tobago hummingbirds. She said that's kind of their national bird and she's got something that can now classify Trinidad and Tobago hummingbirds. So yeah, train models, fine tune models with your data set and then study their inputs and outputs. How much do the fast.ai courses cost? Free. Everything we do is free. We have no revenue sources of any kind. It's just a service to the community. You're a saint. Okay, once a person understands the basics, trains a bunch of models, if we look at the scale of years, what advice do you have for someone wanting to eventually become an expert? Train lots of models. But specifically train lots of models in your domain area. So an expert in what, right? We don't need more experts, like, creating slightly evolutionary research in areas that everybody's studying. We need experts at using deep learning to diagnose malaria. Or we need experts at using deep learning to analyze language to study media bias. So we need experts in analyzing fisheries to identify problem areas in the ocean. That's what we need. So become the expert in your passion area. And this is a tool which you can use for just about anything and you'll be able to do that thing better than other people, particularly by combining it with your passion and domain expertise. So that's really interesting. Even if you do wanna innovate on transfer learning or active learning, your thought is, I mean, it's one I certainly share, is you also need to find a domain or data set that you actually really care for. If you're not working on a real problem that you understand, how do you know if you're doing it any good? How do you know if your results are good? How do you know if you're getting bad results? Why are you getting bad results? Is it a problem with the data? Like, how do you know you're doing anything useful? Yeah, to me, the only really interesting research is, not the only, but the vast majority of interesting research is like, try and solve an actual problem and solve it really well. So both understanding sufficient tools on the deep learning side and becoming a domain expert in a particular domain are really things within reach for anybody. Yeah, I mean, to me, I would compare it to like studying self driving cars, having never looked at a car or been in a car or turned a car on, which is like the way it is for a lot of people, they'll study some academic data set where they literally have no idea about that. By the way, I'm not sure how familiar you are with autonomous vehicles, but that is literally, you've described a large percentage of robotics folks working in self driving cars, they actually haven't considered driving.
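For reference, the fine-tune-on-your-own-images pattern from the bear-classifier exercise looks roughly like this once the scraped images sit in one folder per class. The folder name and class names here are made-up placeholders, the data-cleaning widget lives in a separate notebook tool, and helper names differ a little across fastai versions, so this is a sketch of the pattern rather than the actual course notebook.

```python
from fastai.vision.all import *

# Assume ./bears/ holds subfolders teddy/, grizzly/, brown/ of scraped images.
path = Path('bears')
failed = verify_images(get_image_files(path))    # drop images that failed to download properly
failed.map(Path.unlink)

dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, seed=42,
                                   item_tfms=Resize(224))
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(4)

learn.export('bears.pkl')                        # reload with load_learner() behind a web app
print(learn.predict(get_image_files(path)[0]))   # (label, label index, class probabilities)
```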
They haven't actually looked at what driving looks like. They haven't driven. And it's a problem because you know, when you've actually driven, you know, like these are the things that happened to me when I was driving. There's nothing that beats the real world examples of just experiencing them. You've created many successful startups. What does it take to create a successful startup? Same thing as becoming a successful deep learning practitioner, which is not giving up. So you can run out of money or run out of time or run out of something, you know, but if you keep costs super low and try and save up some money beforehand so you can afford to have some time, then just sticking with it is one important thing. Doing something you understand and care about is important. By something, I don't mean, the biggest problem I see with deep learning people is they do a PhD in deep learning and then they try and commercialize their PhD. It is a waste of time because that doesn't solve an actual problem. You picked your PhD topic because it was an interesting kind of engineering or math or research exercise. But yeah, if you've actually spent time as a recruiter and you know that most of your time was spent sifting through resumes and you know that most of the time you're just looking for certain kinds of things and you can try doing that with a model for a few minutes and see whether that's something which a model seems to be able to do as well as you could, then you're on the right track to creating a startup. And then I think just, yeah, being, just be pragmatic and try and stay away from venture capital money as long as possible, preferably forever. So yeah, on that point, do you venture capital? So did you, were you able to successfully run startups with self funded for quite a while? Yeah, so my first two were self funded and that was the right way to do it. Is that scary? No, VC startups are much more scary because you have these people on your back who do this all the time and who have done it for years telling you grow, grow, grow, grow. And they don't care if you fail. They only care if you don't grow fast enough. So that's scary. Whereas doing the ones myself, well, with partners who were friends was nice because like we just went along at a pace that made sense and we were able to build it to something which was big enough that we never had to work again but was not big enough that any VC would think it was impressive. And that was enough for us to be excited, you know? So I thought that's a much better way to do things than most people. In generally speaking, not for yourself but how do you make money during that process? Do you cut into savings? So yeah, so for, so I started Fast Mail and Optimal Decisions at the same time in 1999 with two different friends. And for Fast Mail, I guess I spent $70 a month on the server. And when the server ran out of space I put a payments button on the front page and said, if you want more than 10 mega space you have to pay $10 a year. And. So run low, like keep your costs down. Yeah, so I kept my costs down. And once, you know, once I needed to spend more money I asked people to spend the money for me. And that, that was that. Basically from then on, we were making money and I was profitable from then. For Optimal Decisions, it was a bit harder because we were trying to sell something that was more like a $1 million sale. But what we did was we would sell scoping projects. 
So kind of like prototypy projects, but rather than doing it for free, we would sell them for 50 to $100,000. So again, we were covering our costs and also making the client feel like we were doing something valuable. So in both cases, we were profitable from six months in. Ah, nevertheless, it's scary. I mean, yeah, sure. I mean, it's, it's scary before you jump in and I just, I guess I was comparing it to the scariness of VC. I felt like with VC stuff, it was more scary. Kind of much more in somebody else's hands, will they fund you or not? And what do they think of what you're doing? I also found it very difficult with VC backed startups to actually do the thing which I thought was important for the company rather than doing the thing which I thought would make the VC happy. And VCs always tell you not to do the thing that makes them happy. But then if you don't do the thing that makes them happy they get sad, so. And do you think optimizing for the, whatever they call it, the exit is a good thing to optimize for? I mean, it can be, but not at the VC level because the VC exit needs to be, you know, a thousand X. Whereas the lifestyle exit, if you can sell something for $10 million, then you've made it, right? So I don't, it depends. If you want to build something that's gonna, you're kind of happy to do forever, then fine. If you want to build something you want to sell in three years time, that's fine too. I mean, they're both perfectly good outcomes. So you're learning Swift now, in a way. I mean, you've already. I'm trying to. And I read that you use, at least in some cases, spaced repetition as a mechanism for learning new things. I use Anki quite a lot myself. Me too. I actually never talk to anybody about it. Don't know how many people do it, but it works incredibly well for me. Can you talk to your experience? Like how did you, what do you? First of all, okay, let's back it up. What is spaced repetition? So spaced repetition is an idea created by a psychologist named Ebbinghaus. I don't know, must be a couple of hundred years ago or something, 150 years ago. He did something which sounds pretty damn tedious. He wrote down random sequences of letters on cards and tested how well he would remember those random sequences a day later, a week later, whatever. He discovered that there was this kind of a curve where his probability of remembering one of them would be dramatically smaller the next day and then a little bit smaller the next day and a little bit smaller the next day. What he discovered is that if he revised those cards after a day, the probabilities would decrease at a smaller rate. And then if you revise them again a week later, they would decrease at a smaller rate again. And so he basically figured out a roughly optimal equation for when you should revise something you wanna remember. So spaced repetition learning is using this simple algorithm, just something like revise something after a day and then three days and then a week and then three weeks and so forth. And so if you use a program like Anki, as you know, it will just do that for you. And it will say, did you remember this? And if you say no, it will reschedule it back to appear again like 10 times faster than it otherwise would have. It's a kind of a way of being guaranteed to learn something because by definition, if you're not learning it, it will be rescheduled to be revised more quickly. Unfortunately though, it's also like, it doesn't let you fool yourself.
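The scheduling idea itself fits in a few lines: each successful review stretches the next interval, and a failure shrinks it right back so the card comes around again quickly. A deliberately simplified Python sketch, not Anki's actual algorithm, which also tracks a per-card ease factor:

```python
from datetime import date, timedelta

def next_interval(days, remembered, growth=2.5):
    """Days until a card should be reviewed again."""
    if not remembered:
        return 1                              # forgot it: see it again tomorrow
    return max(1, int(days * growth + 0.5))   # remembered: stretch the gap

# Successful reviews drift out roughly 1 -> 3 -> 8 -> 20 days; a miss resets to 1.
interval, today = 1, date.today()
for i, ok in enumerate([True, True, True, False, True]):
    interval = next_interval(interval, ok)
    print(f"review {i}: next due {today + timedelta(days=interval)}")
```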
If you're not learning something, you know, like your revisions will just pile up more and more. So you have to find ways to learn things productively and effectively, like treat your brain well. So using like mnemonics and stories and context and stuff like that. So yeah, it's a super great technique. It's like learning how to learn is something which everybody should learn before they actually learn anything. But almost nobody does. So what have you, so it certainly works well for learning new languages, for, I mean, for learning like small projects almost. But do you, you know, I started using it for, I forget who wrote a blog post about this inspired me. It might've been you, I'm not sure. I started, when I read papers, taking concepts and ideas and putting them in. Was it Michael Nielsen? It was Michael Nielsen. So Michael started doing this recently and has been writing about it. So the kind of today's Ebbinghaus is a guy called Piotr Wozniak who developed a system called SuperMemo. And he's been basically trying to become like the world's greatest Renaissance man over the last few decades. He's basically lived his life with spaced repetition learning for everything. I, and sort of like, Michael's only very recently got into this, but he started really getting excited about doing it for a lot of different things. For me personally, I actually don't use it for anything except Chinese. And the reason for that is that Chinese is specifically a thing I made a conscious decision that I want to continue to remember, even if I don't get much of a chance to exercise it, cause like I'm not often in China, so I don't. Or else something like programming languages or papers. I have a very different approach, which is I try not to learn anything from them, but instead I try to identify the important concepts and like actually ingest them. So like really understand that concept deeply and study it carefully. I will decide if it really is important, if it is like incorporated into our library, you know, incorporated into how I do things or decide it's not worth it, say. So I find, I find I then remember the things that I care about because I'm using it all the time. So I've, for the last 25 years, I've committed to spending at least half of every day learning or practicing something new, which all my colleagues have always hated because it always looks like I'm not working on what I'm meant to be working on, but it always means I do everything faster because I've been practicing a lot of stuff. So I kind of give myself a lot of opportunity to practice new things. And so I find now I don't, yeah, I don't often kind of find myself wishing I could remember something because if it's something that's useful, then I've been using it a lot. It's easy enough to look it up on Google, but speaking Chinese, you can't look it up on Google. Do you have advice for people learning new things? So if you, what have you learned as a process, as a, I mean, it all starts with just making the hours in the day available. Yeah, you got to stick with it, which is again, the number one thing that 99% of people don't do. So the people I started learning Chinese with, none of them were still doing it 12 months later. I'm still doing it 10 years later. I tried to stay in touch with them, but they just, no one did it. For something like Chinese, like study how human learning works. So every one of my Chinese flashcards is associated with a story. And that story is specifically designed to be memorable.
And we find things memorable, which are like funny or disgusting or sexy or related to people that we know or care about. So I try to make sure all of the stories that are in my head have those characteristics. Yeah, so you have to, you know, you won't remember things well if they don't have some context. And yeah, you won't remember them well if you don't regularly practice them, whether it be just part of your day to day life or the Chinese and me flashcards. I mean, the other thing is, I'll let yourself fail sometimes. So like I've had various medical problems over the last few years. And basically my flashcards just stopped for about three years. And there've been other times I've stopped for a few months and it's so hard because you get back to it and it's like, you have 18,000 cards due. It's like, and so you just have to go, all right, well, I can either stop and give up everything or just decide to do this every day for the next two years until I get back to it. The amazing thing has been that even after three years, I, you know, the Chinese were still in there. Like it was so much faster to relearn than it was to learn the first time. Yeah, absolutely. It's in there. I have the same with guitar, with music and so on. It's sad because the work sometimes takes away and then you won't play for a year. But really, if you then just get back to it every day, you're right there again. What do you think is the next big breakthrough in artificial intelligence? What are your hopes in deep learning or beyond that people should be working on or you hope there'll be breakthroughs? I don't think it's possible to predict. I think what we already have is an incredibly powerful platform to solve lots of societally important problems that are currently unsolved. So I just hope that people will, lots of people will learn this toolkit and try to use it. I don't think we need a lot of new technological breakthroughs to do a lot of great work right now. And when do you think we're going to create a human level intelligence system? Do you think? Don't know. How hard is it? How far away are we? Don't know. Don't know. I have no way to know. I don't know why people make predictions about this because there's no data and nothing to go on. And it's just like, there's so many societally important problems to solve right now. I just don't find it a really interesting question to even answer. So in terms of societally important problems, what's the problem that is within reach? Well, I mean, for example, there are problems that AI creates, right? So more specifically, labor force displacement is going to be huge and people keep making this frivolous econometric argument of being like, oh, there's been other things that aren't AI that have come along before and haven't created massive labor force displacement, therefore AI won't. So that's a serious concern for you? Oh yeah. Andrew Yang is running on it. Yeah, it's, I'm desperately concerned. And you see already that the changing workplace has led to a hollowing out of the middle class. You're seeing that students coming out of school today have a less rosy financial future ahead of them than their parents did, which has never happened in recent, in the last few hundred years. You know, we've always had progress before. And you see this turning into anxiety and despair and even violence. So I very much worry about that. You've written quite a bit about ethics too. 
I do think that every data scientist working with deep learning needs to recognize they have an incredibly high leverage tool that they're using that can influence society in lots of ways. And if they're doing research, that that research is gonna be used by people doing this kind of work. And they have a responsibility to consider the consequences and to think about things like how will humans be in the loop here? How do we avoid runaway feedback loops? How do we ensure an appeals process for humans that are impacted by my algorithm? How do I ensure that the constraints of my algorithm are adequately explained to the people that end up using them? There's all kinds of human issues which only data scientists are actually in the right place to educate people are about, but data scientists tend to think of themselves as just engineers and that they don't need to be part of that process, which is wrong. Well, you're in the perfect position to educate them better, to read literature, to read history, to learn from history. Well, Jeremy, thank you so much for everything you do for inspiring huge amount of people, getting them into deep learning and having the ripple effects, the flap of a butterfly's wings that will probably change the world. So thank you very much. Thank you, thank you, thank you, thank you.
Jeremy Howard: fast.ai Deep Learning Courses and Research | Lex Fridman Podcast #35
The following is a conversation with Yann LeCun. He's considered to be one of the fathers of deep learning, which, if you've been hiding under a rock, is the recent revolution in AI that has captivated the world with the possibility of what machines can learn from data. He's a professor at New York University, a vice president and chief AI scientist at Facebook, and co recipient of the Turing Award for his work on deep learning. He's probably best known as the founding father of convolutional neural networks, in particular their application to optical character recognition and the famed MNIST dataset. He is also an outspoken personality, unafraid to speak his mind in a distinctive French accent and explore provocative ideas, both in the rigorous medium of academic research and the somewhat less rigorous medium of Twitter and Facebook. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. And now, here's my conversation with Yann LeCun. You said that 2001 Space Odyssey is one of your favorite movies. Hal 9000 decides to get rid of the astronauts for people who haven't seen the movie, spoiler alert, because he, it, she believes that the astronauts, they will interfere with the mission. Do you see Hal as flawed in some fundamental way or even evil, or did he do the right thing? Neither. There's no notion of evil in that context, other than the fact that people die, but it was an example of what people call value misalignment, right? You give an objective to a machine, and the machine strives to achieve this objective. And if you don't put any constraints on this objective, like don't kill people and don't do things like this, the machine, given the power, will do stupid things just to achieve this objective, or damaging things to achieve this objective. It's a little bit like, I mean, we're used to this in the context of human society. We put in place laws to prevent people from doing bad things, because spontaneously, they would do those bad things, right? So we have to shape their cost function, their objective function, if you want, through laws to kind of correct, and education, obviously, to sort of correct for those. So maybe just pushing a little further on that point, how, you know, there's a mission, there's this fuzziness around, the ambiguity around what the actual mission is, but, you know, do you think that there will be a time, from a utilitarian perspective, where an AI system, where it is not misalignment, where it is alignment, for the greater good of society, that an AI system will make decisions that are difficult? Well, that's the trick. I mean, eventually we'll have to figure out how to do this. And again, we're not starting from scratch, because we've been doing this with humans for millennia. So designing objective functions for people is something that we know how to do. And we don't do it by, you know, programming things, although the legal code is called code. So that tells you something. And it's actually the design of an objective function. That's really what legal code is, right? It tells you, here is what you can do, here is what you can't do. If you do it, you pay that much, that's an objective function. So there is this idea somehow that it's a new thing for people to try to design objective functions that are aligned with the common good. 
But no, we've been writing laws for millennia and that's exactly what it is. So that's where, you know, the science of lawmaking and computer science will. Come together. Will come together. So there's nothing special about HAL or AI systems, it's just the continuation of tools used to make some of these difficult ethical judgments that laws make. Yeah, and we have systems like this already that make many decisions for ourselves in society that need to be designed in a way that they, like rules about things that sometimes have bad side effects and we have to be flexible enough about those rules so that they can be broken when it's obvious that they shouldn't be applied. So you don't see this on the camera here, but all the decoration in this room is all pictures from 2001: A Space Odyssey. Wow, is that by accident or is that... No, not by accident, it's by design. Oh, wow. So if you were to build HAL 10,000, so an improvement of HAL 9,000, what would you improve? Well, first of all, I wouldn't ask it to hold secrets and tell lies because that's really what breaks it in the end, that's the fact that it's asking itself questions about the purpose of the mission and it's, you know, pieces things together that it's heard, you know, all the secrecy of the preparation of the mission and the fact that it was the discovery on the lunar surface that really was kept secret and one part of HAL's memory knows this and the other part does not know it and is supposed to not tell anyone and that creates internal conflict. So you think there never should be a set of things that an AI system should not be allowed to share, like a set of facts that should not be shared with the human operators? Well, I think, no, I think it should be a bit like in the design of autonomous AI systems, there should be the equivalent of, you know, the Hippocratic oath that doctors sign up to, right? So there's certain things, certain rules that you have to abide by and we can sort of hardwire this into our machines to kind of make sure they don't go. So I'm not, you know, an advocate of the three laws of robotics, you know, the Asimov kind of thing because I don't think it's practical, but, you know, some level of limits. But to be clear, these are not questions that are kind of really worth asking today because we just don't have the technology to do this. We don't have autonomous intelligent machines, we have intelligent machines. Some are intelligent machines that are very specialized, but they don't really sort of satisfy an objective. They're just, you know, kind of trained to do one thing. So until we have some idea for the design of a full fledged autonomous intelligent system, asking the question of how we design its objective, I think is a little too abstract. It's a little too abstract. There's useful elements to it in that it helps us understand our own ethical codes, humans. So even just as a thought experiment, if you imagine that an AGI system is here today, how would we program it is a kind of nice thought experiment of constructing how should we have a law, have a system of laws for us humans. It's just a nice practical tool. And I think there's echoes of that idea too in the AI systems we have today that don't have to be that intelligent. Yeah. Like autonomous vehicles. These things start creeping in that are worth thinking about, but certainly they shouldn't be framed as HAL. Yeah.
Looking back, what is the most, I'm sorry if it's a silly question, but what is the most beautiful or surprising idea in deep learning or AI in general that you've ever come across? Sort of personally, when you sat back and just had this kind of, oh, that's pretty cool moment. That's nice. That's surprising. I don't know if it's an idea rather than a sort of empirical fact. The fact that you can build gigantic neural nets, train them on relatively small amounts of data, relatively, with stochastic gradient descent and that it actually works, breaks everything you read in every textbook, right? Every pre deep learning textbook that told you, you need to have fewer parameters than you have data samples. If you have a non convex objective function, you have no guarantee of convergence. All those things that you read in textbooks, and they tell you to stay away from this, and they're all wrong. A huge number of parameters, non convex, and somehow, with relatively little data compared to the number of parameters, it's able to learn anything. Right. Does that still surprise you today? Well, it was kind of obvious to me before I knew anything that this is a good idea. And then it became surprising that it worked because I started reading those textbooks. Okay. Okay. So can you talk through the intuition of why it was obvious to you, if you remember? Well, okay. So the intuition was, it's sort of like those people in the late 19th century who proved that heavier than air flight was impossible. And of course you have birds, right? They do fly. And so on the face of it, it's obviously wrong as an empirical question, right? And so we have the same kind of thing, that we know that the brain works. We don't know how, but we know it works. And we know it's a large network of neurons in interaction and that learning takes place by changing the connections. So kind of getting this level of inspiration without copying the details, but sort of trying to derive basic principles, and that kind of gives you a clue as to which direction to go. There's also the idea somehow that I've been convinced of since I was an undergrad that, even before, that intelligence is inseparable from learning. So the idea somehow that you can create an intelligent machine by basically programming, for me it was a non starter from the start. Every intelligent entity that we know about arrives at this intelligence through learning. So machine learning was a completely obvious path. Also because I'm lazy, so, you know, kind of. Let's automate basically everything, and learning is the automation of intelligence. So do you think, so what is learning then? What falls under learning? Because do you think of reasoning as learning? Well, reasoning is certainly a consequence of learning as well, just like other functions of the brain. The big question about reasoning is, how do you make reasoning compatible with gradient based learning? Do you think neural networks can be made to reason? Yes, there is no question about that. Again, we have a good example, right? The question is how? So the question is how much prior structure do you have to put in the neural net so that something like human reasoning will emerge from it, you know, from learning? Another question is, all of our kind of models of what reasoning is that are based on logic are discrete and are therefore incompatible with gradient based learning. And I'm a very strong believer in this idea of gradient based learning.
I don't believe in other types of learning that don't use kind of gradient information, if you want. So you don't like discrete mathematics? You don't like anything discrete? Well, that's, it's not that I don't like it, it's just that it's incompatible with learning and I'm a big fan of learning, right? So in fact, that's perhaps one reason why deep learning has been kind of looked at with suspicion by a lot of computer scientists, because the math is very different. The math that you use for deep learning, you know, it kind of has more to do with, you know, cybernetics, the kind of math you do in electrical engineering, than the kind of math you do in computer science. And, you know, nothing in machine learning is exact, right? Computer science is all about sort of, you know, obsessive compulsive attention to details of like, you know, every index has to be right. And you can prove that an algorithm is correct, right? Machine learning is the science of sloppiness, really. That's beautiful. So, okay, maybe let's feel around in the dark of what is a neural network that reasons, or a system that works with continuous functions that's able to build knowledge, however we think about reasoning, build on previous knowledge, build on extra knowledge, create new knowledge, generalize outside of any training set ever built. What does that look like? If, yeah, maybe give inklings of thoughts of what that might look like. Yeah, I mean, yes and no. If I had precise ideas about this, I think, you know, we'd be building it right now. And there are people working on this whose main research interest is actually exactly that, right? So what you need to have is a working memory. So you need to have some device, if you want, some subsystem that can store a relatively large number of factual episodic information for, you know, a reasonable amount of time. So, you know, in the brain, for example, there are kind of three main types of memory. One is the sort of memory of the state of your cortex. And that sort of disappears within 20 seconds. You can't remember things for more than about 20 seconds or a minute if you don't have any other form of memory. The second type of memory, which is longer term, but still short term, is the hippocampus. So you can, you know, you came into this building, you remember where the exit is, where the elevators are. You have some map of that building that's stored in your hippocampus. You might remember something about what I said, you know, a few minutes ago. I forgot it all already. Of course, it's been erased, but, you know, but that would be in your hippocampus. And then the longer term memory is in the synapse, the synapses, right? So what you need if you want a system that's capable of reasoning is that you want the hippocampus like thing, right? And that's what people have tried to do with memory networks and, you know, neural Turing machines and stuff like that, right? And now with transformers, which have sort of a memory in there, kind of a self attention system. You can think of it this way. So that's one element you need. Another thing you need is some sort of network that can access this memory, get information back, and then kind of crunch on it and then do this iteratively multiple times, because a chain of reasoning is a process by which you update your knowledge about the state of the world, about, you know, what's going to happen, et cetera. And that has to be this sort of recurrent operation, basically.
And you think that kind of, if we think about a transformer, so that seems to be too small to contain the knowledge that's, to represent the knowledge that's contained in Wikipedia, for example. Well, a transformer doesn't have this idea of recurrence. It's got a fixed number of layers and that's the number of steps that, you know, limits basically its representation. But recurrence would build on the knowledge somehow. I mean, it would evolve the knowledge and expand the amount of information perhaps, or useful information within that knowledge. But is this something that just can emerge with size? Because it seems like everything we have now is too small. Not just, no, it's not clear. I mean, how you access and write into an associative memory in an efficient way. I mean, sort of the original memory network maybe had something like the right architecture, but if you try to scale up a memory network so that the memory contains all of Wikipedia, it doesn't quite work. Right. So there's a need for new ideas there, okay. But it's not the only form of reasoning. So there's another form of reasoning, which is very classical also in some types of AI. And it's based on, let's call it energy minimization. Okay, so you have some sort of objective, some energy function that represents the quality or the negative quality, okay. Energy goes up when things get bad and it gets low when things get good. So let's say you want to figure out, what gestures do I need to do to grab an object or walk out the door? If you have a good model of your own body, a good model of the environment, using this kind of energy minimization, you can do planning. And in optimal control, it's called model predictive control. You have a model of what's gonna happen in the world as a consequence of your actions. And that allows you to, by energy minimization, figure out the sequence of actions that optimizes a particular objective function, which measures, minimizes the number of times you're gonna hit something and the energy you're gonna spend doing the gesture and et cetera. So that's a form of reasoning. Planning is a form of reasoning. And perhaps what led to the ability of humans to reason is the fact that, or species that appeared before us had to do some sort of planning to be able to hunt and survive and survive the winter in particular. And so it's the same capacity that you need to have. So in your intuition, if we look at expert systems and encoding knowledge as logic systems, as graphs, in this kind of way, is not a useful way to think about knowledge? Graphs are a little brittle, or logic representations. So basically, variables that have values and then constraints between them that are represented by rules, it's a little too rigid and too brittle, right? So some of the early efforts in that respect were to put probabilities on them. So a rule, if you have this and that symptom, you have this disease with that probability and you should prescribe that antibiotic with that probability, right? That's the MYCIN system from the 70s. And that's what that branch of AI led to, Bayesian networks and graphical models and causal inference and variational methods. So there is certainly a lot of interesting work going on in this area. The main issue with this is knowledge acquisition. How do you reduce a bunch of data to a graph of this type? Yeah, it relies on the expert, on the human being, to encode, to add knowledge. And that's essentially impractical. Yeah, it's not scalable. That's a big question.
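A toy version of the planning-as-energy-minimization idea mentioned a moment ago: if the world model is differentiable, a whole sequence of actions can be treated as parameters and the cost, the energy, pushed down by gradient descent. In this sketch the "world" is just a point that moves by whatever action is applied, with a penalty on effort; it is a stand-in assumption to show the mechanics, not a model of any particular system.

```python
import torch

goal = torch.tensor([5.0, 2.0])
actions = torch.zeros(10, 2, requires_grad=True)   # a 10-step plan, optimized directly
opt = torch.optim.SGD([actions], lr=0.2)

for _ in range(200):
    opt.zero_grad()
    state = torch.zeros(2)
    for a in actions:                  # differentiable "world model": state += action
        state = state + a
    energy = ((state - goal) ** 2).sum() + 0.1 * (actions ** 2).sum()
    energy.backward()                  # low energy: end near the goal with little effort
    opt.step()

print(state.detach(), energy.item())   # the optimized plan ends close to the goal
```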
The second question is, do you want to represent knowledge as symbols and do you want to manipulate them with logic? And again, that's incompatible with learning. So one suggestion, which Geoff Hinton has been advocating for many decades, is replace symbols by vectors. Think of it as patterns of activity in a bunch of neurons or units or whatever you want to call them. And replace logic by continuous functions. Okay, and that becomes now compatible. There's a very good set of ideas written in a paper about 10 years ago by Léon Bottou, who is here at Facebook. The title of the paper is, From Machine Learning to Machine Reasoning. And his idea is that a learning system should be able to manipulate objects that are in a space and then put the result back in the same space. So it's this idea of working memory, basically. And it's very enlightening. And in a sense, that might learn something like the simple expert systems. I mean, you can learn basic logic operations there. Yeah, quite possibly. There's a big debate on sort of how much prior structure you have to put in for this kind of stuff to emerge. That's the debate I have with Gary Marcus and people like that. Yeah, yeah, so, and the other person, so I just talked to Judea Pearl, from the causal inference world you mentioned. So his worry is that the current neural networks are not able to learn what causes what, causal inference between things. So I think he's right and wrong about this. If he's talking about the sort of classic type of neural nets, people sort of didn't worry too much about this. But there's a lot of people now working on causal inference. And there's a paper that just came out last week by Léon Bottou, among others, David Lopez-Paz, and a bunch of other people, exactly on that problem of how do you kind of get a neural net to sort of pay attention to real causal relationships, which may also solve issues of bias in data and things like this, so. I'd like to read that paper because ultimately that challenge also seems to fall back on the human expert to ultimately decide causality between things. People are not very good at establishing causality, first of all. So first of all, you talk to physicists and physicists actually don't believe in causality because, look, all the basic laws of microphysics are time reversible, so there's no causality. The arrow of time is not real, yeah. As soon as you start looking at macroscopic systems where there is unpredictable randomness, where there is clearly an arrow of time, but it's a big mystery in physics, actually, how that emerges. Is it emergent or is it part of the fundamental fabric of reality? Or is it a bias of intelligent systems that because of the second law of thermodynamics, we perceive a particular arrow of time, but in fact, it's kind of arbitrary, right? So yeah, physicists, mathematicians, they don't care about, I mean, the math doesn't care about the flow of time. Well, certainly, macrophysics doesn't. People themselves are not very good at establishing causal relationships. If you ask, I think it was in one of Seymour Papert's books on children learning. He studied with Jean Piaget. He's the guy who coauthored the book Perceptrons with Marvin Minsky that kind of killed the first wave of neural nets, but he was actually a learning person, in the sense of studying learning in humans and machines. That's why he got interested in the perceptron.
And he wrote that if you ask a little kid about what is the cause of the wind, a lot of kids will say, they will think for a while and they'll say, oh, it's the branches in the trees, they move and that creates wind, right? So they get the causal relationship backwards. And it's because their understanding of the world and intuitive physics is not that great, right? I mean, these are like, you know, four or five year old kids. You know, it gets better, and then you understand that this can't be, right? But there are many things which we can, because of our common sense understanding of things, what people call common sense, and our understanding of physics, we can, there's a lot of stuff for which we can figure out causality. Even with diseases, we can figure out what's not causing what, often. There's a lot of mystery, of course, but the idea is that you should be able to encode that into systems, because it seems unlikely they'd be able to figure that out themselves. Well, whenever we can do an intervention. But, you know, all of humanity has been completely deluded for millennia, probably since its existence, about a very, very wrong causal relationship, where whatever you can't explain, you attribute it to, you know, some deity, some divinity, right? And that's a cop out, that's a way of saying like, I don't know the cause, so you know, God did it, right? So you mentioned Marvin Minsky, and the irony of, you know, maybe causing the first AI winter. You were there in the 90s, you were there in the 80s, of course. In the 90s, why do you think people lost faith in deep learning, in the 90s, and found it again, a decade later, over a decade later? Yeah, it wasn't called deep learning yet, it was just called neural nets, but yeah, they lost interest. I mean, I think I would put that around 1995, at least the machine learning community, there was always a neural net community, but it became kind of disconnected from sort of mainstream machine learning, if you want. There were, it was basically electrical engineering that kept at it, and computer science gave up on neural nets. I don't know, you know, I was too close to it to really sort of analyze it with sort of an unbiased eye, if you want, but I would make a few guesses. So the first one is, at the time, neural nets were, it was very hard to make them work, in the sense that you would implement backprop in your favorite language, and that favorite language was not Python, it was not MATLAB, it was not any of those things, because they didn't exist, right? You had to write it in Fortran or C, or something like this, right? So you would experiment with it, you would probably make some very basic mistakes, like, you know, badly initialize your weights, make the network too small, because you read in the textbook, you know, you don't want too many parameters, right? And of course, you know, and you would train on XOR, because you didn't have any other data set to train on. And of course, you know, it works half the time. So you would say, I give up. Also, you would train it with batch gradient, which, you know, isn't that efficient. So there's a lot of, there's a bag of tricks that you had to know to make those things work, or you had to reinvent, and a lot of people just didn't, and they just couldn't make it work. So that's one thing.
The investment in software platforms to be able to kind of, you know, display things, figure out why things don't work, kind of get a good intuition for how to get them to work, have enough flexibility so you can create, you know, network architectures like convolutional nets and stuff like that. It was hard. I mean, you had to write everything from scratch. And again, you didn't have any Python or MATLAB or anything, right? I read that, sorry to interrupt, but I read that you wrote in Lisp the first versions of LeNet with convolutional networks, which, by the way, is one of my favorite languages. That's how I knew you were legit. Turing award, whatever. You programmed in Lisp, that's... It's still my favorite language, but it's not that we programmed in Lisp, it's that we had to write our Lisp interpreter, okay? Because it's not like we used one that existed. So we wrote a Lisp interpreter that we hooked up to, you know, a backend library that we wrote also for sort of neural net computation. And then after a few years around 1991, we invented this idea of basically having modules that know how to forward propagate and back propagate gradients, and then interconnecting those modules in a graph. Léon Bottou had made proposals on this, about this, in the late eighties, and we were able to implement this using our Lisp system. Eventually we wanted to use that system to build production code for character recognition at Bell Labs. So we actually wrote a compiler for that Lisp interpreter. Patrice Simard, who is now at Microsoft, kind of did the bulk of it with Léon and me. And so we could write our system in Lisp and then compile to C, and then we'd have a self contained, complete system that could kind of do the entire thing. Neither PyTorch nor TensorFlow can do this today. Yeah, okay, it's coming. Yeah. I mean, there's something like that in PyTorch called TorchScript. And so, you know, we had to write our Lisp interpreter, we had to write our Lisp compiler, we had to invest a huge amount of effort to do this. And not everybody, if you don't completely believe in the concept, you're not going to invest the time to do this. Now at the time also, you know, or today, this would turn into Torch or PyTorch or TensorFlow or whatever, we'd put it in open source, everybody would use it and, you know, realize it's good. Back before 1995, working at AT&T, there's no way the lawyers would let you release anything in open source of this nature. And so we could not distribute our code really. And on that point, and sorry to go on a million tangents, but on that point, I also read that there was some, almost like a patent on convolutional neural networks at Bell Labs. So that, first of all, I mean, just. There's two actually. That ran out. Thankfully, in 2007. In 2007. So I'm gonna, what, can we just talk about that for a second? I know you're at Facebook, but you're also at NYU. And what does it mean to patent ideas like these software ideas, essentially? Or are they mathematical ideas? Or what are they? Okay, so they're not mathematical ideas. They are, you know, algorithms. And there was a period where the US Patent Office would allow the patenting of software as long as it was embodied. The Europeans are very different. They don't quite accept that. They have a different concept. But, you know, I don't, I no longer, I mean, I never actually strongly believed in this, but I don't believe in this kind of patent. Facebook basically doesn't believe in this kind of patent.
Google files patents because they've been burned by Apple. And so now they do this for defensive purposes, but usually they say, we're not gonna sue you if you infringe. Facebook has a similar policy. They say, you know, we file patents on certain things for defensive purposes. We're not gonna sue you if you infringe, unless you sue us. So the industry does not believe in patents. They are there because of, you know, the legal landscape and various things. But I don't really believe in patents for this kind of stuff. So that's a great thing. So I... I'll tell you a worse story, actually. So what happened was the first patent about convolutional nets was about kind of the early version of convolutional net that didn't have separate pooling layers. It had convolutional layers with strides larger than one, if you want, right? And then there was a second one on convolutional nets with separate pooling layers, trained with backprop. And they were filed in '89 and 1990 or something like this. At the time, the life of a patent was 17 years. So what happened over the next few years is that we started developing character recognition technology around convolutional nets. And in 1994, a check reading system was deployed in ATM machines. In 1995, it was for large check reading machines in back offices, et cetera. And those systems were developed by an engineering group that we were collaborating with at AT&T. And they were commercialized by NCR, which at the time was a subsidiary of AT&T. Now AT&T split up in 1996, early 1996. And the lawyers just looked at all the patents and they distributed the patents among the various companies. They gave the convolutional net patent to NCR because they were actually selling products that used it. But nobody at NCR had any idea what a convolutional net was. Yeah. Okay. So between 1996 and 2007, so there's a whole period until 2002 where I didn't actually work on machine learning or convolutional nets. I resumed working on this around 2002. And between 2002 and 2007, I was working on them, crossing my fingers that nobody at NCR would notice. Nobody noticed. Yeah, and I hope that this kind of somewhat, as you said, lawyers aside, relative openness of the community now will continue. It accelerates the entire progress of the industry. And the problems that Facebook and Google and others are facing today is not whether Facebook or Google or Microsoft or IBM or whoever is ahead of the other. It's that we don't have the technology to build the things we want to build. We want to build intelligent virtual assistants that have common sense. We don't have a monopoly on good ideas for this. We don't believe we do. Maybe others believe they do, but we don't. Okay. If a startup tells you they have the secret to human level intelligence and common sense, don't believe them, they don't. And it's gonna take the entire work of the world research community for a while to get to the point where you can go off and each of those companies kind of start to build things on this. We're not there yet. Absolutely, and this speaks to the gap between the space of ideas and the rigorous testing of those ideas in practical application that you often speak to. You've written advice saying don't get fooled by people who claim to have a solution to artificial general intelligence, who claim to have an AI system that works just like the human brain or who claim to have figured out how the brain works. Ask them what error rate they get on MNIST or ImageNet.
So this is a little dated, by the way. 2000, I mean five years, who's counting? Okay, but I think your opinion is still, MNIST and ImageNet, yes, may be dated, there may be new benchmarks, right? But I think that philosophy is one you still somewhat hold, that benchmarks and the practical testing, the practical application is where you really get to test the ideas. Well, it may not be completely practical. Like for example, it could be a toy data set, but it has to be some sort of task that the community as a whole has accepted as some sort of standard kind of benchmark if you want. It doesn't need to be real. So for example, many years ago here at FAIR, people, Jason Weston and Antoine Bordes and a few others proposed the bAbI tasks, which were kind of a toy problem to test the ability of machines to reason, actually, to access working memory and things like this. And it was very useful even though it wasn't a real task. MNIST is kind of a halfway-real task. So toy problems can be very useful. It's just that I was really struck by the fact that a lot of people, particularly a lot of people with money to invest, would be fooled by people telling them, oh, we have the algorithm of the cortex and you should give us 50 million. Yes, absolutely. So there's a lot of people who try to take advantage of the hype for business reasons and so on. But let me sort of talk to this idea that sort of new ideas, the ideas that push the field forward may not yet have a benchmark or it may be very difficult to establish a benchmark. I agree. That's part of the process. Establishing benchmarks is part of the process. So what are your thoughts about, so we have these benchmarks around stuff we can do with images, from classification to captioning to just every kind of information you can pull off from images at the surface level. There's audio data sets, there's some video. What can we start, natural language, what kind of stuff, what kind of benchmarks do you see that start creeping onto something more like intelligence, like reasoning, like, maybe you don't like the term, but AGI, echoes of that kind of formulation. A lot of people are working on interactive environments in which you can train and test intelligent systems. So there, for example, the classical paradigm of supervised learning is that you have a data set, you partition it into a training set, validation set, test set, and there's a clear protocol, right? But that assumes that the samples are statistically independent, you can exchange them, the order in which you see them shouldn't matter, things like that. But what if the answer you give determines the next sample you see, which is the case, for example, in robotics, right? Your robot does something and then it gets exposed to a new room, and depending on where it goes, the room would be different. So that creates the exploration problem. So that creates also a dependency between samples, right? If you move, if you can only move in space, the next sample you're gonna see is probably gonna be in the same building, most likely, right? So all the assumptions about the validity of this training set, test set hypothesis break. Whenever a machine can take an action that has an influence on the world and on what it's gonna see. So people are setting up artificial environments where that takes place, right? The robot runs around a 3D model of a house and can interact with objects and things like this.
So you do robotics based simulation, you have those OpenAI Gym-type things or MuJoCo kind of simulated robots and you have games, things like that. So that's where the field is going really, this kind of environment. Now, back to the question of AGI. I don't like the term AGI because it implies that human intelligence is general and human intelligence is nothing like general. It's very, very specialized. We think it's general. We'd like to think of ourselves as having general intelligence. We don't, we're very specialized. We're only slightly more general than. Why does it feel general? So you kind of, the term general. I think what's impressive about humans is the ability to learn, as we were talking about learning, to learn in just so many different domains. It's perhaps not arbitrarily general, but just you can learn in many domains and integrate that knowledge somehow. Okay. The knowledge persists. So let me take a very specific example. Yes. It's not an example. It's more like a quasi mathematical demonstration. So you have about 1 million fibers coming out of one of your eyes. Okay, 2 million total, but let's talk about just one of them. It's 1 million nerve fibers, your optic nerve. Let's imagine that they are binary. So they can be active or inactive, right? So the input to your visual cortex is 1 million bits. Mm hmm. Now they're connected to your brain in a particular way, and your brain has connections that are kind of a little bit like a convolutional net, they're kind of local, you know, in space and things like this. Now, imagine I play a trick on you. It's a pretty nasty trick, I admit. I cut your optic nerve, and I put a device that makes a random permutation of all the nerve fibers. So now what comes to your brain is a fixed but random permutation of all the pixels. There's no way in hell that your visual cortex, even if I do this to you in infancy, will actually learn vision to the same level of quality that you can. Got it, and you're saying there's no way you would learn that? No, because now two pixels that are nearby in the world will end up in very different places in your visual cortex, and your neurons there have no connections with each other because they're only connected locally. So this whole, our entire, the hardware is built in many ways to support? The locality of the real world. Yes, that's specialization. Yeah, but it's still pretty damn impressive, so it's not perfect generalization, it's not even close. No, no, it's not that it's not even close, it's not at all. Yeah, it's not, it's specialized, yeah. So how many Boolean functions? So let's imagine you want to train your visual system to recognize particular patterns of those one million bits. Okay, so that's a Boolean function, right? Either the pattern is here or not here, this is a two way classification with one million binary inputs. How many such Boolean functions are there? Okay, you have two to the one million combinations of inputs, for each of those you have an output bit, and so you have two to the two to the one million Boolean functions of this type, okay? Which is an unimaginably large number. How many of those functions can actually be computed by your visual cortex? And the answer is a tiny, tiny, tiny, tiny, tiny, tiny sliver. Like an enormously tiny sliver. Yeah, yeah. So we are ridiculously specialized. Okay. But, okay, that's an argument against the word general.
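A back-of-the-envelope version of that counting argument, in my own notation rather than the transcript's: with n binary inputs there are 2^n input patterns, and a Boolean function independently assigns one output bit to each pattern, so the number of functions is doubly exponential.

```latex
% Counting Boolean functions on n binary inputs:
\[
  \#\{\text{Boolean functions on } n \text{ bits}\} \;=\; 2^{\,2^{n}},
  \qquad
  n = 10^{6} \;\Rightarrow\; 2^{\,2^{10^{6}}} \text{ such functions,}
\]
% of which a locally wired visual cortex can realize only a vanishing sliver.
```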
I think there's a, I agree with your intuition, but I'm not sure it's, it seems the brain is impressively capable of adjusting to things, so. It's because we can't imagine tasks that are outside of our comprehension, right? So we think we're general because we're general to all the things that we can apprehend. But there is a huge world out there of things that we have no idea. We call that heat, by the way. Heat. Heat. So, at least physicists call that heat, or they call it entropy, which is kind of. You have a thing full of gas, right? Closed system for gas. Right? Closed or not closed. It has pressure, it has temperature, it has, you know, and you can write equations, PV = nRT, you know, things like that, right? When you reduce the volume, the temperature goes up, the pressure goes up, you know, things like that, right? For a perfect gas, at least. Those are the things you can know about that system. And it's a tiny, tiny number of bits compared to the complete information of the state of the entire system. Because the state of the entire system will give you the position and momentum of every molecule of the gas. And what you don't know about it is the entropy, and you interpret it as heat. The energy contained in that thing is what we call heat. Now, it's very possible that, in fact, there is some very strong structure in how those molecules are moving. It's just that they are in a way that we are just not wired to perceive. Yeah, we're ignorant to it. And there's an infinite amount of things we're not wired to perceive. And you're right, that's a nice way to put it. We're general to all the things we can imagine, which is a very tiny subset of all things that are possible. So it's like Kolmogorov complexity, or the Kolmogorov-Chaitin-Solomonoff complexity. Yeah. You know, every bit string or every integer is random, except for all the ones that you can actually write down. Yeah, okay. So beautifully put. But, you know, so we can just call it artificial intelligence. We don't need to have a general. Or human level. Human level intelligence is good. You know, you'll start, anytime you touch human, it gets interesting because, you know, it's because we attach ourselves to human and it's difficult to define what human intelligence is. Yeah. Nevertheless, my definition is maybe damn impressive intelligence, okay? Damn impressive demonstration of intelligence, whatever. And so on that topic, most successes in deep learning have been in supervised learning. What is your view on unsupervised learning? Is there a hope to reduce involvement of human input and still have successful systems that have practical use? Yeah, I mean, there's definitely a hope. It's more than a hope, actually. There's mounting evidence for it. And that's basically all I do. Like, the only thing I'm interested in at the moment is, I call it self supervised learning, not unsupervised. Because unsupervised learning is a loaded term. People who know something about machine learning, you know, tell you, so you're doing clustering or PCA, which is not the case. And the wider public, you know, when you say unsupervised learning, oh my God, machines are gonna learn by themselves without supervision. You know, they see this as... Where's the parents?
Yeah, so I call it self supervised learning because, in fact, the underlying algorithms that are used are the same algorithms as the supervised learning algorithms, except that what we train them to do is not predict a particular set of variables, like the category of an image, and not to predict a set of variables that have been provided by human labelers. But what you're training the machine to do is basically reconstruct a piece of its input that is being masked out, essentially. You can think of it this way, right? So show a piece of video to a machine and ask it to predict what's gonna happen next. And of course, after a while, you can show what happens and the machine will kind of train itself to do better at that task. Like, all the latest, most successful models in natural language processing use self supervised learning. You know, sort of BERT style systems, for example, right? You show it a window of a dozen words on a text corpus, you take out 15% of the words, and then you train the machine to predict the words that are missing, that's self supervised learning. It's not predicting the future, it's just predicting things in the middle, but you could have it predict the future, that's what language models do. So you construct, so in an unsupervised way, you construct a model of language. Do you think... Or video or the physical world or whatever, right? How far do you think that can take us? Do you think BERT understands anything? To some level, it has a shallow understanding of text, but it needs to, I mean, to have kind of true human level intelligence, I think you need to ground language in reality. So some people are attempting to do this, right? Having systems that kind of have some visual representation of what is being talked about, which is one reason you need those interactive environments actually. But this is like a huge technical problem that is not solved, and that explains why self supervised learning works in the context of natural language, but does not work in the context, or at least not well, in the context of image recognition and video, although it's making progress quickly. And the reason, that reason, is the fact that it's much easier to represent uncertainty in the prediction in the context of natural language than it is in the context of things like video and images. So for example, if I ask you to predict what words are missing, 15% of the words that I've taken out. The possibilities are small. That means... It's small, right? There are 100,000 words in the lexicon, and what the machine spits out is a big probability vector, right? It's a bunch of numbers between zero and one that sum to one. And we know how to do this with computers. So there, representing uncertainty in the prediction is relatively easy, and that's, in my opinion, why those techniques work for NLP. For images, if you ask... If you block a piece of an image, and you ask the system, reconstruct that piece of the image, there are many possible answers. They are all perfectly legit, right? And how do you represent this set of possible answers? You can't train a system to make one prediction. You can't train a neural net to say, here it is, that's the image, because there's a whole set of things that are compatible with it. So how do you get the machine to represent not a single output, but a whole set of outputs? And similarly with video prediction, there's a lot of things that can happen in the future of video. You're looking at me right now.
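A minimal sketch of the BERT-style objective described a moment ago: hide roughly 15% of the tokens and ask the model for a probability distribution over the vocabulary at each hidden position, where the label is the input itself. The tiny vocabulary, the sentence, and the "predictor" (random logits standing in for a trained network) are all made up for the example.

```python
# Self-supervised masked-word prediction: labels come from the input itself.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "trophy", "does", "not", "fit", "in", "suitcase", "[MASK]"]
sentence = [0, 1, 2, 3, 4, 5, 6]          # token ids for a toy sentence

# Choose ~15% of positions to mask (at least one).
mask_positions = rng.choice(len(sentence),
                            size=max(1, len(sentence) * 15 // 100),
                            replace=False)
corrupted = [7 if i in mask_positions else t for i, t in enumerate(sentence)]

def predict_distribution(_tokens, _position):
    # Placeholder for a trained net: ignores its inputs, returns random probs
    # over the 7 real words; a real model would condition on the context.
    logits = rng.normal(size=len(vocab) - 1)
    return np.exp(logits) / np.exp(logits).sum()   # sums to 1 over the vocab

for pos in mask_positions:
    probs = predict_distribution(corrupted, pos)
    target = sentence[pos]                 # the "label" is the hidden word
    loss = -np.log(probs[target])          # cross-entropy at that position
    print(pos, round(float(loss), 3))
```

The point made in the conversation is that this probability vector over a finite vocabulary is an easy way to represent uncertainty, which is exactly what is missing for images and video.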
I'm not moving my head very much, but I might turn my head to the left or to the right. If you don't have a system that can predict this, and you train it with least squares to minimize the error between the prediction and what I'm doing, what you get is a blurry image of myself in all possible future positions that I might be in, which is not a good prediction. So there might be other ways to do the self supervision for visual scenes. Like what? I mean, if I knew, I wouldn't tell you, I'd publish it first, I don't know. No, there might be. So I mean, these are kind of, there might be artificial ways of like self play in games, the way you can simulate part of the environment. Oh, that doesn't solve the problem. It's just a way of generating data. But because you have more of a control, like maybe you can control, yeah, it's a way to generate data. That's right. And because you can do huge amounts of data generation, that doesn't, you're right. Well, it creeps up on the problem from the side of data, and you don't think that's the right way to creep up. It doesn't solve this problem of handling uncertainty in the world, right? So if you have a machine learn a predictive model of the world in a game that is deterministic or quasi deterministic, it's easy, right? Just give a few frames of the game to a ConvNet, put a bunch of layers, and then have it generate the next few frames. And if the game is deterministic, it works fine. And that includes feeding the system with the action that your little character is gonna take. The problem comes from the fact that the real world and most games are not entirely predictable. And so there you get those blurry predictions and you can't do planning with blurry predictions, right? So if you have a perfect model of the world, you can, in your head, run this model with a hypothesis for a sequence of actions, and you're going to predict the outcome of that sequence of actions. But if your model is imperfect, how can you plan? Yeah, it quickly explodes. What are your thoughts on the extension of this, which topic I'm super excited about, it's connected to something you were talking about in terms of robotics, is active learning. So as opposed to sort of completely unsupervised or self supervised learning, you ask the system for human help for selecting parts you want annotated next. So if you think about a robot exploring a space or a baby exploring a space or a system exploring a data set, every once in a while asking for human input, do you see value in that kind of work? I don't see transformative value. It's going to make things that we can already do more efficient or they will learn slightly more efficiently, but it's not going to make machines sort of significantly more intelligent. I think, and by the way, there is no opposition, there's no conflict between self supervised learning, reinforcement learning and supervised learning or imitation learning or active learning. I see self supervised learning as a preliminary to all of the above. Yes. So the example I use very often is how is it that, so if you use classical reinforcement learning, deep reinforcement learning, if you want, the best methods today, so called model free reinforcement learning to learn to play Atari games, take about 80 hours of training to reach the level that any human can reach in about 15 minutes. They get better than humans, but it takes them a long time.
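A small numerical illustration of the blurry-prediction point made at the top of this exchange, with numbers I made up for the example: when the future is genuinely multimodal (head turns left or right with equal probability), the single output that minimizes squared error is the average of the modes, a blur that matches neither actual outcome.

```python
# Least-squares training on multimodal futures yields the mean, i.e. a blur.
import numpy as np

rng = np.random.default_rng(0)
futures = rng.choice([-1.0, +1.0], size=10_000)   # two equally likely outcomes

candidates = np.linspace(-1.5, 1.5, 301)
scores = [(np.mean((futures - c) ** 2), c) for c in candidates]
best_error, best_prediction = min(scores)

print(round(float(best_prediction), 3))  # ~0.0: the mean of the modes, not a real future
print(round(float(best_error), 3))       # ~1.0: large residual error no single guess can remove
```

This is the argument for predictors that output a representation of a whole set of futures rather than one point estimate.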
AlphaStar, okay, you know, Oriol Vinyals and his team's system to play StarCraft plays, you know, a single map, a single type of player. A single player, and can reach better than human level with about the equivalent of 200 years of training playing against itself. That's 200 years, right? It's not something that any human can ever do. I mean, I'm not sure what lesson to take away from that. Okay, now take those algorithms, the best algorithms we have today, to train a car to drive itself. It would probably have to drive millions of hours. It will have to kill thousands of pedestrians. It will have to run into thousands of trees. It will have to run off cliffs. And it would have to run off a cliff multiple times before it figures out that it's a bad idea, first of all. And second of all, before it figures out how not to do it. And so, I mean, this type of learning obviously does not reflect the kind of learning that animals and humans do. There is something missing that's really, really important there. And my hypothesis, which I've been advocating for like five years now, is that we have predictive models of the world that include the ability to predict under uncertainty. And what allows us to not run off a cliff when we learn to drive, most of us can learn to drive in about 20 or 30 hours of training without ever crashing, causing any accident. And if we drive next to a cliff, we know that if we turn the wheel to the right, the car is gonna run off the cliff and nothing good is gonna come out of this. Because we have a pretty good model of intuitive physics that tells us the car is gonna fall. We know about gravity. Babies learn this around the age of eight or nine months, that objects don't float, they fall. And we have a pretty good idea of the effect of turning the wheel on the car and we know we need to stay on the road. So there's a lot of things that we bring to the table, which is basically our predictive model of the world. And that model allows us to not do stupid things. And to basically stay within the context of things we need to do. We still face unpredictable situations and that's how we learn. But that allows us to learn really, really, really quickly. So that's called model based reinforcement learning. There's some imitation and supervised learning because we have a driving instructor that tells us occasionally what to do. But most of the learning is learning the model, learning physics that we've done since we were babies. That's where all, almost all the learning is. And the physics is somewhat transferable from, it's transferable from scene to scene. Stupid things are the same everywhere. Yeah, I mean, if you have experience of the world, you don't need to be from a particularly intelligent species to know that if you spill water from a container, the rest is gonna get wet, you might get wet. So cats know this, right? Yeah. Right, so the main problem we need to solve is how do we learn models of the world? That's what I'm interested in. That's what self supervised learning is all about. If you were to try to construct a benchmark for, let's look at MNIST. I love that data set. Do you think it's useful, interesting, slash possible to perform well on MNIST with just one example of each digit and how would we solve that problem? The answer is probably yes. The question is what other type of learning are you allowed to do? So if what you're allowed to do is train on some gigantic data set of labeled digits, that's called transfer learning. And we know that works, okay?
We do this at Facebook, like in production, right? We train large convolutional nets to predict hashtags that people type on Instagram and we train on billions of images, literally billions. And then we chop off the last layer and fine tune on whatever task we want. That works really well. You can beat the ImageNet record with this. We actually open sourced the whole thing like a few weeks ago. Yeah, that's still pretty cool. But yeah, so what would be impressive? What's useful and impressive? What kind of transfer learning would be useful and impressive? Is it Wikipedia, that kind of thing? No, no, so I don't think transfer learning is really where we should focus. We should try to do, you know, have a kind of scenario for a benchmark where you have unlabeled data, and it's a very large amount of unlabeled data. It could be video clips. It could be where you do, you know, frame prediction. It could be images where you could choose to, you know, mask a piece of it, could be whatever, but they're unlabeled and you're not allowed to label them. So you do some training on this, and then you train on a particular supervised task, ImageNet or MNIST, and you measure how your test error or validation error decreases as you increase the number of labeled training samples. Okay, and what you'd like to see is that, you know, your error decreases much faster than if you train from scratch from random weights. So that to reach the same level of performance that a completely supervised, purely supervised system would reach, you would need way fewer samples. So that's the crucial question because it will answer the question for, like, you know, people interested in medical image analysis. Okay, you know, if I want to get to a particular level of error rate for this task, I know I need a million samples. Can I do, you know, self supervised pre training to reduce this to about 100 or something? And you think the answer there is self supervised pre training? Yeah, some form, some form of it. I'm telling you active learning, but you disagree. No, it's not useless. It's just not gonna lead to a quantum leap. It's just gonna make things that we already do more efficient. So you're way smarter than me. I just disagree with you. But I don't have anything to back that. It's just intuition. So I've worked with a lot of large scale data sets and there's something that might be magic in active learning, but okay. And at least I said it publicly. At least I'm being an idiot publicly. Okay. It's not being an idiot. It's, you know, working with the data you have. I mean, certainly people are doing things like, okay, I have 3000 hours of, you know, imitation learning for self-driving cars, but most of those are incredibly boring. What I'd like is to select, you know, 10% of them that are kind of the most informative. And with just that, I would probably reach the same. So it's a weak form of active learning if you want. Yes, but there might be a much stronger version. Yeah, that's right. And that's an open question, whether it exists. The question is how much stronger can you get? Elon Musk is confident. Talked to him recently. He's confident that large scale data and deep learning can solve the autonomous driving problem. What are your thoughts on the limits, possibilities of deep learning in this space? It's obviously part of the solution. I mean, I don't think we'll ever have a self-driving system, or at least not in the foreseeable future, that does not use deep learning. Let me put it this way. Now, how much of it?
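A hedged sketch of the "chop off the last layer and fine-tune" recipe mentioned at the start of this answer. It uses a publicly available ImageNet-pretrained ResNet from torchvision as a stand-in for the much larger hashtag-pretrained Instagram model; the target task, class count, and training loop are placeholders, and a recent torchvision is assumed.

```python
# Transfer learning: reuse a pretrained backbone, replace the last layer, fine-tune.
import torch
import torch.nn as nn
from torchvision import models

num_target_classes = 10   # hypothetical downstream task

backbone = models.resnet50(weights="IMAGENET1K_V1")               # pretrained features
backbone.fc = nn.Linear(backbone.fc.in_features, num_target_classes)  # new head

# Cheap variant: freeze everything except the new head.
for name, p in backbone.named_parameters():
    p.requires_grad = name.startswith("fc.")

optimizer = torch.optim.SGD(
    [p for p in backbone.parameters() if p.requires_grad], lr=1e-2, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# usage: loss = finetune_step(batch_images, batch_labels) inside a training loop
```

The benchmark he goes on to describe measures how much faster error drops with labeled samples after such pre-training (self-supervised rather than hashtag-supervised) compared with training from random weights.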
So in the history of sort of engineering, particularly sort of AI like systems, there's generally a first phase where everything is built by hand. Then there is a second phase. And that was the case for autonomous driving 20, 30 years ago. There's a phase where a little bit of learning is used, but there's a lot of engineering that's involved in kind of taking care of corner cases and putting limits, et cetera, because the learning system is not perfect. And then as technology progresses, we end up relying more and more on learning. That's the history of character recognition, it's the history of speech recognition, now computer vision, natural language processing. And I think the same is going to happen with autonomous driving, that currently the methods that are closest to providing some level of autonomy, some decent level of autonomy where you don't expect a driver to kind of do anything, is where you constrain the world. So you only run within 100 square kilometers or square miles in Phoenix where the weather is nice and the roads are wide, which is what Waymo is doing. You completely overengineer the car with tons of LIDARs and sophisticated sensors that are too expensive for consumer cars, but they're fine if you just run a fleet. And you engineer the hell out of everything else. You map the entire world. So you have a complete 3D model of everything. So the only thing that the perception system has to take care of is moving objects and construction and sort of things that weren't in your map. And you can engineer a good SLAM system and all that stuff. So that's kind of the current approach that's closest to some level of autonomy. But I think eventually the longterm solution is going to rely more and more on learning and possibly using a combination of self supervised learning and model based reinforcement learning or something like that. But ultimately learning will be not just at the core, but really the fundamental part of the system. Yeah, it already is, but it will become more and more. What do you think it takes to build a system with human level intelligence? You talked about the AI system in the movie Her being way out of reach, out of our current reach. This might be outdated as well, but. It's still way out of reach. It's still way out of reach. What would it take to build her, do you think? So I can tell you the first two obstacles that we have to clear, but I don't know how many obstacles there are after this. So the image I usually use is that there is a bunch of mountains that we have to climb and we can see the first one, but we don't know if there are 50 mountains behind it or not. And this might be a good sort of metaphor for why AI researchers in the past have been overly optimistic about the result of AI. You know, for example, Newell and Simon wrote the General Problem Solver, and they called it the General Problem Solver. General problem solver. And of course, the first thing you realize is that all the problems you want to solve are exponential. And so you can't actually use it for anything useful, but you know. Yeah, so yeah, all you see is the first peak. So in general, what are the first couple of peaks for Her? So the first peak, which is precisely what I'm working on, is self supervised learning. How do we get machines to learn models of the world by observation, kind of like babies and like young animals? So we've been working with, you know, cognitive scientists.
So Emmanuel Dupoux, who's at FAIR in Paris half time, is also a researcher at a French university. And he has this chart that shows at how many months of life baby humans kind of learn different concepts. And you can measure this in sort of various ways. So things like distinguishing animate objects from inanimate objects, you can tell the difference at age two, three months. Whether an object is going to stay stable, is going to fall, you know, about four months, you can tell. You know, there are various things like this. And then things like gravity, the fact that objects are not supposed to float in the air, but are supposed to fall, you learn this around the age of eight or nine months. If you look at the data, eight or nine months, if you look at a lot of, you know, eight month old babies, you give them a bunch of toys on their high chair. First thing they do is they throw them on the ground and they look at them. It's because, you know, they're learning about, actively learning about gravity. Gravity, yeah. Okay, so they're not trying to annoy you, but they, you know, they need to do the experiment, right? Yeah. So, you know, how do we get machines to learn like babies, mostly by observation with a little bit of interaction and learning those models of the world? Because I think that's really a crucial piece of an intelligent autonomous system. So if you think about the architecture of an intelligent autonomous system, it needs to have a predictive model of the world. So something that says, here is the state of the world at time T, here is the state of the world at time T plus one, if I take this action. And it's not a single answer, it can be a... Yeah, it can be a distribution, yeah. Yeah, well, but we don't know how to represent distributions in high dimensional continuous spaces. So it's gotta be something weaker than that, okay? But with some representation of uncertainty. If you have that, then you can do what optimal control theorists call model predictive control, which means that you can run your model with a hypothesis for a sequence of actions and then see the result. Now, what you need, the other thing you need is some sort of objective that you want to optimize. Am I reaching the goal of grabbing this object? Am I minimizing energy? Am I whatever, right? So there is some sort of objective that you have to minimize. And so in your head, if you have this model, you can figure out the sequence of actions that will optimize your objective. That objective is something that ultimately is rooted in your basal ganglia, at least in the human brain. That's what the basal ganglia computes, your level of contentment or miscontentment. I don't know if that's a word. Unhappiness, okay? Yeah, yeah. Discontentment. Discontentment, maybe. And so your entire behavior is driven towards kind of minimizing that objective, which is maximizing your contentment, computed by your basal ganglia. And what you have is an objective function, which is basically a predictor of what your basal ganglia is going to tell you. So you're not going to put your hand in the fire because you know it's going to burn and you're going to get hurt. And you're predicting this because of your model of the world and your sort of predictor of this objective, right? So if you have those three components, you have four components, you have the hardwired objective, hardwired contentment objective computer, if you want, calculator. And then you have the three components.
One is the objective predictor, which basically predicts your level of contentment. One is the model of the world. And there's a third module I didn't mention, which is the module that will figure out the best course of action to optimize an objective given your model, okay? Yeah. And you can call this a policy network or something like that, right? Now, you need those three components to act autonomously intelligently. And you can be stupid in three different ways. You can be stupid because your model of the world is wrong. You can be stupid because your objective is not aligned with what you actually want to achieve, okay? In humans, that would be a psychopath. And then the third way you can be stupid is that you have the right model, you have the right objective, but you're unable to figure out a course of action to optimize your objective given your model. Okay. Some people who are in charge of big countries actually have all three that are wrong. All right. Which countries? I don't know. Okay, so if we think about this agent, if we think about the movie Her, you've criticized the art project that is Sophia the Robot. And what that project essentially does is uses our natural inclination to anthropomorphize things that look human and give them more. Do you think that could be used by AI systems like in the movie Her? So do you think that body is needed to create a feeling of intelligence? Well, if Sophia was just an art piece, I would have no problem with it, but it's presented as something else. Let me, on that comment real quick, if the creators of Sophia could change something about their marketing or behavior in general, what would it be? Just about everything. I mean, don't you think, here's a tough question. Let me, so I agree with you. So Sophia is not, the general public feels that Sophia can do way more than she actually can. That's right. And the people who created Sophia are not honestly publicly communicating, trying to teach the public. Right. But here's a tough question. Don't you think the same thing is happening, that scientists in industry and research are taking advantage of the same misunderstanding in the public when they create AI companies or publish stuff? Some companies, yes. I mean, there is no sense of, there's no desire to delude. There's no desire to kind of overclaim when something is done, right? You publish a paper on AI that has this result on ImageNet, it's pretty clear. I mean, it's not even interesting anymore, but I don't think there is that. I mean, the reviewers are generally not very forgiving of unsupported claims of this type. And, but there are certainly quite a few startups that have had a huge amount of hype around this that I find extremely damaging and I've been calling it out when I've seen it. So yeah, but to go back to your original question, like the necessity of embodiment, I think, I don't think embodiment is necessary. I think grounding is necessary. So I don't think we're gonna get machines that really understand language without some level of grounding in the real world. And it's not clear to me that language is a high enough bandwidth medium to communicate how the real world works. So I think for this. Can you talk to what grounding means? So grounding means that, so there is this classic problem of common sense reasoning, you know, the Winograd schema, right? And so I tell you the trophy doesn't fit in the suitcase because it's too big, or the trophy doesn't fit in the suitcase because it's too small.
And the "it" in the first case refers to the trophy, in the second case to the suitcase. And the reason you can figure this out is because you know where the trophy and the suitcase are, you know, one is supposed to fit in the other one and you know the notion of size and a big object doesn't fit in a small object, unless it's a TARDIS, you know, things like that, right? So you have this knowledge of how the world works, of geometry and things like that. I don't believe you can learn everything about the world by just being told in language how the world works. I think you need some low level perception of the world, you know, be it visual, touch, you know, whatever, but some higher bandwidth perception of the world. By reading all the world's text, you still might not have enough information. That's right. There's a lot of things that just will never appear in text and that you can't really infer. So I think common sense will emerge from, you know, certainly a lot of language interaction, but also with watching videos or perhaps even interacting in virtual environments and possibly, you know, robots interacting in the real world. But I don't actually believe necessarily that this last one is absolutely necessary. But I think that there's a need for some grounding. But the final product doesn't necessarily need to be embodied, you're saying. No. It just needs to have an awareness, a grounding to. Right, but it needs to know how the world works to have, you know, to not be frustrating to talk to. And you talked about emotions being important. That's a whole nother topic. Well, so, you know, I talked about this, the basal ganglia as the thing that calculates your level of miscontentment. And then there is this other module that sort of tries to do a prediction of whether you're going to be content or not. That's the source of some emotion. So fear, for example, is an anticipation of bad things that can happen to you, right? You have this inkling that there is some chance that something really bad is going to happen to you and that creates fear. When you know for sure that something bad is going to happen to you, you kind of give up, right? It's not fear anymore. It's uncertainty that creates fear. So the punchline is, we're not going to have autonomous intelligence without emotions. Whatever the heck emotions are. So you mentioned very practical things of fear, but there's a lot of other mess around it. But they are kind of the results of, you know, drives. Yeah, there's deeper biological stuff going on. And I've talked to a few folks on this. There's fascinating stuff that ultimately connects to our brain. If we create an AGI system, sorry. Human level intelligence. Human level intelligence system. And you get to ask her one question. What would that question be? You know, I think the first one we'll create would probably not be that smart. They'd be like a four year old. Okay. So you would have to ask her a question to know she's not that smart. Yeah. Well, what's a good question to ask, you know, to be impressed? What is the cause of wind? And if she answers, oh, it's because the leaves of the tree are moving and that creates wind, she's onto something. And if she says that's a stupid question, she's really onto something. No, and then you tell her, actually, you know, here is the real thing. She says, oh yeah, that makes sense. So questions that reveal the ability to do common sense reasoning about the physical world. Yeah. And you'll sum it up with causal inference. Causal inference.
Well, it was a huge honor. Congratulations on your Turing Award. Thank you so much for talking today. Thank you. Thank you for having me.
Yann LeCun: Deep Learning, ConvNets, and Self-Supervised Learning | Lex Fridman Podcast #36
The following is a conversation with Vijay Kumar. He's one of the top roboticists in the world, a professor at the University of Pennsylvania, the dean of Penn Engineering, former director of the GRASP Lab, or the General Robotics, Automation, Sensing and Perception Laboratory at Penn, that was established back in 1979, that's 40 years ago. Vijay is perhaps best known for his work in multi robot systems, robot swarms, and micro aerial vehicles, robots that elegantly cooperate in flight under all the uncertainty and challenges that the real world conditions present. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. And now, here's my conversation with Vijay Kumar. What is the first robot you've ever built or were a part of building? Way back when I was in graduate school, I was part of a fairly big project that involved building a very large hexapod. It weighed close to 7,000 pounds, and it was powered by hydraulic actuation, or it was actuated by hydraulics with 18 motors, hydraulic motors, each controlled by an Intel 8085 processor and an 8086 co-processor. And so imagine this huge monster that had 18 joints, each controlled by an independent computer, and there was a 19th computer that actually did the coordination between these 18 joints. So I was part of this project, and my thesis work was how do you coordinate the 18 legs? And in particular, the pressures in the hydraulic cylinders to get efficient locomotion. It sounds like a giant mess. So how difficult is it to make all the motors communicate? Presumably, you have to send signals hundreds of times a second, or at least. So this was not my work, but the folks who worked on this wrote what I believe to be the first multiprocessor operating system. This was in the 80s, and you had to make sure that obviously messages got across from one joint to another. You have to remember the clock speeds on those computers were about half a megahertz. Right, the 80s. So not to romanticize the notion, but how did it make you feel to see that robot move? It was amazing. In hindsight, it looks like, well, we built this thing which really should have been much smaller. And of course, today's robots are much smaller. You look at Boston Dynamics or Ghost Robotics, a spinoff from Penn. But back then, you were stuck with the substrate you had, the compute you had, so things were unnecessarily big. But at the same time, and this is just human psychology, somehow bigger means grander. People never had the same appreciation for nanotechnology or nanodevices as they do for the Space Shuttle or the Boeing 747. Yeah, you've actually done quite a good job at illustrating that small is beautiful in terms of robotics. So, on that topic, what is the most beautiful or elegant robot in motion that you've ever seen? Not to pick favorites or whatever, but something that just inspires you that you remember. Well, I think the thing that I'm most proud of that my students have done is really think about small UAVs that can maneuver in constrained spaces and in particular, their ability to coordinate with each other and form three dimensional patterns. So once you can do that, you can essentially create 3D objects in the sky and you can deform these objects on the fly. So in some sense, your toolbox of what you can create has suddenly gotten enhanced. And before that, we did the two dimensional version of this.
So we had ground robots forming patterns and so on. So that was not as impressive, that was not as beautiful. But if you do it in 3D, suspended in midair, and you've got to go back to 2011 when we did this, now it's actually pretty standard to do these things eight years later. But back then it was a big accomplishment. So the distributed cooperation is where beauty emerges in your eyes? Well, I think beauty to an engineer is very different from beauty to someone who's looking at robots from the outside, if you will. But what I meant there, so before we said that grand is associated with size. And another way of thinking about this is just the physical shape and the idea that you can get physical shapes in midair and have them deform, that's beautiful. But the individual components, the agility is beautiful too, right? That is true too. So then how quickly can you actually manipulate these three dimensional shapes and the individual components? Yes, you're right. But by the way, you said UAV, unmanned aerial vehicle. What's a good term for drones, UAVs, quadcopters? Is there a term that's being standardized? I don't know if there is. Everybody wants to use the word drones. And I've often said this, drones to me is a pejorative word. It signifies something that's dumb, that's pre programmed, that does one little thing and robots are anything but drones. So I actually don't like that word, but that's what everybody uses. You could call it unpiloted. Unpiloted. But even unpiloted could be radio controlled, could be remotely controlled in many different ways. And I think the right word is, thinking about it as an aerial robot. You also say agile, autonomous, aerial robot, right? Yeah, so agility is an attribute, but they don't have to be. So what biological systems, because you've also drawn a lot of inspiration from those. I've seen bees and ants that you've talked about. What living creatures have you found to be most inspiring as an engineer, instructive in your work in robotics? To me, so ants are really quite incredible creatures, right? So you, I mean, the individuals arguably are very simple in how they're built and yet they're incredibly resilient as a population. And as individuals, they're incredibly robust. So, if you take an ant, it's got six legs, you remove one leg, it still works just fine. And it moves along. And I don't know that it even realizes it's lost a leg. So that's the robustness at the individual ant level. But then you look at this instinct for self preservation of the colonies and they adapt in so many amazing ways. You know, transcending gaps by just chaining themselves together when you have a flood, being able to recruit other teammates to carry big morsels of food, and then going out in different directions looking for food, and then being able to demonstrate consensus, even though they don't communicate directly with each other the way we communicate with each other. In some sense, they also know how to do democracy, probably better than what we do. Yeah, somehow even democracy is emergent. It seems like all of the phenomena that we see is all emergent. It seems like there's no centralized communicator. There is, so I think a lot is made about that word, emergent, and it means lots of things to different people. But you're absolutely right.
I think as an engineer, you think about what element, elemental behaviors were primitives you could synthesize so that the whole looks incredibly powerful, incredibly synergistic, the whole definitely being greater than some of the parts, and ants are living proof of that. So when you see these beautiful swarms where there's biological systems of robots, do you sometimes think of them as a single individual living intelligent organism? So it's the same as thinking of our human beings are human civilization as one organism, or do you still, as an engineer, think about the individual components and all the engineering that went into the individual components? Well, that's very interesting. So again, philosophically as engineers, what we wanna do is to go beyond the individual components, the individual units, and think about it as a unit, as a cohesive unit, without worrying about the individual components. If you start obsessing about the individual building blocks and what they do, you inevitably will find it hard to scale up. Just mathematically, just think about individual things you wanna model, and if you want to have 10 of those, then you essentially are taking Cartesian products of 10 things, and that makes it really complicated. Then to do any kind of synthesis or design in that high dimension space is really hard. So the right way to do this is to think about the individuals in a clever way so that at the higher level, when you look at lots and lots of them, abstractly, you can think of them in some low dimensional space. So what does that involve? For the individual, do you have to try to make the way they see the world as local as possible? And the other thing, do you just have to make them robust to collisions? Like you said with the ants, if something fails, the whole swarm doesn't fail. Right, I think as engineers, we do this. I mean, you think about, we build planes, or we build iPhones, and we know that by taking individual components, well engineered components with well specified interfaces that behave in a predictable way, you can build complex systems. So that's ingrained, I would claim, in most engineers thinking, and it's true for computer scientists as well. I think what's different here is that you want the individuals to be robust in some sense, as we do in these other settings, but you also want some degree of resiliency for the population. And so you really want them to be able to reestablish communication with their neighbors. You want them to rethink their strategy for group behavior. You want them to reorganize. And that's where I think a lot of the challenges lie. So just at a high level, what does it take for a bunch of, what should we call them, flying robots, to create a formation? Just for people who are not familiar with robotics in general, how much information is needed? How do you even make it happen without a centralized controller? So, I mean, there are a couple of different ways of looking at this. If you are a purist, you think of it as a way of recreating what nature does. So nature forms groups for several reasons, but mostly it's because of this instinct that organisms have of preserving their colonies, their population, which means what? You need shelter, you need food, you need to procreate, and that's basically it. So the kinds of interactions you see are all organic. They're all local. 
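To make the state space point above concrete, here is a tiny illustrative calculation, not drawn from Kumar's own papers: modeling every robot explicitly makes the joint state grow with the number of robots, while one possible abstraction, a centroid plus a shape matrix for the whole group, stays the same size no matter how large the swarm gets. The 12-dimensional per-robot state and the particular abstraction are assumptions made purely for illustration.

```python
# Toy illustration, not Prof. Kumar's actual formulation: modeling every robot
# explicitly makes the joint state blow up with swarm size, while an abstraction
# stays low dimensional.

def joint_state_dim(num_robots, per_robot_dim=12):
    """Dimension of the joint state if every robot is modeled explicitly
    (12 here stands for 3D position, velocity, orientation, angular velocity)."""
    return num_robots * per_robot_dim

def abstract_state_dim():
    """One possible swarm abstraction: a centroid (3 numbers) plus a 3x3 shape
    matrix (9 numbers) describing the spread of the group, 12 numbers total
    regardless of how many robots are in the swarm."""
    return 3 + 9

for n in (1, 10, 100, 1000):
    print(f"{n:5d} robots: joint model = {joint_state_dim(n):6d} dims, "
          f"abstraction = {abstract_state_dim()} dims")
```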
And the only information that they share, and mostly it's indirectly, is to, again, preserve the herd or the flock, or the swarm, and either by looking for new sources of food or looking for new shelters, right? Right. As engineers, when we build swarms, we have a mission. And when you think of a mission, and it involves mobility, most often it's described in some kind of a global coordinate system. As a human, as an operator, as a commander, or as a collaborator, I have my coordinate system, and I want the robots to be consistent with that. So I might think of it slightly differently. I might want the robots to recognize that coordinate system, which means not only do they have to think locally in terms of who their immediate neighbors are, but they have to be cognizant of what the global environment is. They have to be cognizant of what the global environment looks like. So if I say, surround this building and protect this from intruders, well, they're immediately in a building centered coordinate system, and I have to tell them where the building is. And they're globally collaborating on the map of that building. They're maintaining some kind of global, not just in the frame of the building, but there's information that's ultimately being built up explicitly as opposed to kind of implicitly, like nature might. Correct, correct. So in some sense, nature is very, very sophisticated, but the tasks that nature solves or needs to solve are very different from the kind of engineered tasks, artificial tasks that we are forced to address. And again, there's nothing preventing us from solving these other problems, but ultimately it's about impact. You want these swarms to do something useful. And so you're kind of driven into this very unnatural, if you will. Unnatural, meaning not like how nature does, setting. And it's probably a little bit more expensive to do it the way nature does, because nature is less sensitive to the loss of the individual. And cost wise in robotics, I think you're more sensitive to losing individuals. I think that's true, although if you look at the price to performance ratio of robotic components, it's coming down dramatically, right? It continues to come down. So I think we're asymptotically approaching the point where we would get, yeah, the cost of individuals would really become insignificant. So let's step back at a high level view, the impossible question of what kind of, as an overview, what kind of autonomous flying vehicles are there in general? I think the ones that receive a lot of notoriety are obviously the military vehicles. Military vehicles are controlled by a base station, but have a lot of human supervision. But they have limited autonomy, which is the ability to go from point A to point B. And even the more sophisticated now, sophisticated vehicles can do autonomous takeoff and landing. And those usually have wings and they're heavy. Usually they're wings, but then there's nothing preventing us from doing this for helicopters as well. There are many military organizations that have autonomous helicopters in the same vein. And by the way, you look at autopilots and airplanes and it's actually very similar. In fact, one interesting question we can ask is, if you look at all the air safety violations, all the crashes that occurred, would they have happened if the plane were truly autonomous? And I think you'll find that in many of the cases, because of pilot error, we made silly decisions. 
And so in some sense, even in air traffic, commercial air traffic, there's a lot of applications, although we only see autonomy being enabled at very high altitudes when the plane is an autopilot. The plane is an autopilot. There's still a role for the human and that kind of autonomy is, you're kind of implying, I don't know what the right word is, but it's a little dumber than it could be. Right, so in the lab, of course, we can afford to be a lot more aggressive. And the question we try to ask is, can we make robots that will be able to make decisions without any kind of external infrastructure? So what does that mean? So the most common piece of infrastructure that airplanes use today is GPS. GPS is also the most brittle form of information. If you have driven in a city, try to use GPS navigation, in tall buildings, you immediately lose GPS. And so that's not a very sophisticated way of building autonomy. I think the second piece of infrastructure they rely on is communications. Again, it's very easy to jam communications. In fact, if you use wifi, you know that wifi signals drop out, cell signals drop out. So to rely on something like that is not good. The third form of infrastructure we use, and I hate to call it infrastructure, but it is that, in the sense of robots, is people. So you could rely on somebody to pilot you. And so the question you wanna ask is, if there are no pilots, there's no communications with any base station, if there's no knowledge of position, and if there's no a priori map, a priori knowledge of what the environment looks like, a priori model of what might happen in the future, can robots navigate? So that is true autonomy. So that's true autonomy, and we're talking about, you mentioned like military application of drones. Okay, so what else is there? You talk about agile, autonomous flying robots, aerial robots, so that's a different kind of, it's not winged, it's not big, at least it's small. So I use the word agility mostly, or at least we're motivated to do agile robots, mostly because robots can operate and should be operating in constrained environments. And if you want to operate the way a global hawk operates, I mean, the kinds of conditions in which you operate are very, very restrictive. If you wanna go inside a building, for example, for search and rescue, or to locate an active shooter, or you wanna navigate under the canopy in an orchard to look at health of plants, or to look for, to count fruits, to measure the tree trunks. These are things we do, by the way. There's some cool agriculture stuff you've shown in the past, it's really awesome. So in those kinds of settings, you do need that agility. Agility does not necessarily mean you break records for the 100 meters dash. What it really means is you see the unexpected and you're able to maneuver in a safe way, and in a way that gets you the most information about the thing you're trying to do. By the way, you may be the only person who, in a TED Talk, has used a math equation, which is amazing, people should go see one of your TED Talks. Actually, it's very interesting, because the TED curator, Chris Anderson, told me, you can't show math. And I thought about it, but that's who I am. I mean, that's our work. And so I felt compelled to give the audience a taste for at least some math. So on that point, simply, what does it take to make a thing with four motors fly, a quadcopter, one of these little flying robots? How hard is it to make it fly? How do you coordinate the four motors? 
How do you convert those motors into actual movement? So this is an interesting question. We've been trying to do this since 2000. It is a commentary on the sensors that were available back then, the computers that were available back then. And a number of things happened between 2000 and 2007. One is the advances in computing, which is, so we all know about Moore's Law, but I think 2007 was a tipping point, the year of the iPhone, the year of the cloud. Lots of things happened in 2007. But going back even further, inertial measurement units as a sensor really matured. Again, lots of reasons for that. Certainly, there's a lot of federal funding, particularly DARPA in the US, but they didn't anticipate this boom in IMUs. But if you look, subsequently what happened is that every car manufacturer had to put an airbag in, which meant you had to have an accelerometer on board. And so that drove down the price to performance ratio. Wow, I should know this. That's very interesting. That's very interesting, the connection there. And that's why research is very, it's very hard to predict the outcomes. And again, the federal government spent a ton of money on things that they thought were useful for resonators, but it ended up enabling these small UAVs, which is great, because I could have never raised that much money and sold this project, hey, we want to build these small UAVs. Can you actually fund the development of low cost IMUs? So why do you need an IMU on an IMU? So I'll come back to that. So in 2007, 2008, we were able to build these. And then the question you're asking was a good one. How do you coordinate the motors to develop this? But over the last 10 years, everything is commoditized. A high school kid today can pick up a Raspberry Pi kit and build this. All the low levels functionality is all automated. But basically at some level, you have to drive the motors at the right RPMs, the right velocity, in order to generate the right amount of thrust, in order to position it and orient it in a way that you need to in order to fly. The feedback that you get is from onboard sensors, and the IMU is an important part of it. The IMU tells you what the acceleration is, as well as what the angular velocity is. And those are important pieces of information. In addition to that, you need some kind of local position or velocity information. For example, when we walk, we implicitly have this information because we kind of know what our stride length is. We also are looking at images fly past our retina, if you will, and so we can estimate velocity. We also have accelerometers in our head, and we're able to integrate all these pieces of information to determine where we are as we walk. And so robots have to do something very similar. You need an IMU, you need some kind of a camera or other sensor that's measuring velocity, and then you need some kind of a global reference frame if you really want to think about doing something in a world coordinate system. And so how do you estimate your position with respect to that global reference frame? That's important as well. So coordinating the RPMs of the four motors is what allows you to, first of all, fly and hover, and then you can change the orientation and the velocity and so on. Exactly, exactly. So it's a bunch of degrees of freedom that you're complaining about. There's six degrees of freedom, but you only have four inputs, the four motors. And it turns out to be a remarkably versatile configuration. 
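As a concrete illustration of the sensor fusion just described, here is a minimal sketch, not the lab's actual estimator, of a complementary filter that blends the two IMU signals mentioned above: the gyro's angular velocity, which is smooth but drifts when integrated, and the accelerometer's noisy estimate of tilt from gravity. All signals, noise levels, and the blend factor are made-up values for illustration.

```python
import numpy as np

# Complementary filter sketch: fuse a biased, noisy gyro with a noisy tilt
# measurement from the accelerometer to track one attitude angle over time.
dt = 0.01
alpha = 0.98                 # trust the integrated gyro 98%, the accelerometer 2%
true_angle = 0.0
estimate = 0.0

for step in range(500):
    true_rate = 0.5 * np.sin(0.02 * step)                  # pretend the vehicle rocks
    true_angle += true_rate * dt
    gyro = true_rate + 0.01 * np.random.randn() + 0.02     # noisy, biased angular rate
    accel_angle = true_angle + 0.05 * np.random.randn()    # noisy tilt from gravity
    estimate = alpha * (estimate + gyro * dt) + (1 - alpha) * accel_angle
    if step % 100 == 0:
        print(f"t={step*dt:4.2f}s  true={true_angle:+.3f}  est={estimate:+.3f}")
```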
You think at first, well, I only have four motors, how do I go sideways? But it's not too hard to say, well, if I tilt myself, I can go sideways, and then you have four motors pointing up, how do I rotate in place about a vertical axis? Well, you rotate them at different speeds and that generates reaction moments and that allows you to turn. So it's actually a pretty, it's an optimal configuration from an engineer standpoint. It's very simple, very cleverly done, and very versatile. So if you could step back to a time, so I've always known flying robots as, to me, it was natural that a quadcopter should fly. But when you first started working with it, how surprised are you that you can make, do so much with the four motors? How surprising is it that you can make this thing fly, first of all, that you can make it hover, that you can add control to it? Firstly, this is not, the four motor configuration is not ours. You can, it has at least a hundred year history. And various people, various people try to get quadrotors to fly without much success. As I said, we've been working on this since 2000. Our first designs were, well, this is way too complicated. Why not we try to get an omnidirectional flying robot? So our early designs, we had eight rotors. And so these eight rotors were arranged uniformly on a sphere, if you will. So you can imagine a symmetric configuration. And so you should be able to fly anywhere. But the real challenge we had is the strength to weight ratio is not enough. And of course, we didn't have the sensors and so on. So everybody knew, or at least the people who worked with rotorcrafts knew, four rotors will get it done. So that was not our idea. But it took a while before we could actually do the onboard sensing and the computation that was needed for the kinds of agile maneuvering that we wanted to do in our little aerial robots. And that only happened between 2007 and 2009 in our lab. Yeah, and you have to send the signal maybe a hundred times a second. So the compute there, everything has to come down in price. And what are the steps of getting from point A to point B? So we just talked about like local control. But if all the kind of cool dancing in the air that I've seen you show, how do you make it happen? How do you make a trajectory? First of all, okay, figure out a trajectory. So plan a trajectory. And then how do you make that trajectory happen? Yeah, I think planning is a very fundamental problem in robotics. I think 10 years ago it was an esoteric thing, but today with self driving cars, everybody can understand this basic idea that a car sees a whole bunch of things and it has to keep a lane or maybe make a right turn or switch lanes. It has to plan a trajectory. It has to be safe. It has to be efficient. So everybody's familiar with that. That's kind of the first step that you have to think about when you say autonomy. And so for us, it's about finding smooth motions, motions that are safe. So we think about these two things. One is optimality, one is safety. Clearly you cannot compromise safety. So you're looking for safe, optimal motions. The other thing you have to think about is can you actually compute a reasonable trajectory in a small amount of time? Cause you have a time budget. So the optimal becomes suboptimal, but in our lab we focus on synthesizing smooth trajectory that satisfy all the constraints. In other words, don't violate any safety constraints and is as efficient as possible. 
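To give a flavor of the smooth trajectory synthesis described above, here is a small sketch along a single axis. Kumar's group is known for minimum snap polynomial trajectories; this simpler minimum jerk quintic, with zero velocity and acceleration at both endpoints and a fixed time budget, is only meant to show the idea, and the distances and timing are placeholders.

```python
import numpy as np

# Minimum-jerk quintic: a smooth 0-to-1 blend whose velocity and acceleration
# are zero at both ends, stretched over a fixed time budget T.
def minimum_jerk(p0, p1, T, t):
    """Position at time t for a quintic moving from p0 to p1 over T seconds."""
    tau = np.clip(t / T, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return p0 + (p1 - p0) * s

T = 2.0                                   # time budget in seconds (placeholder)
times = np.linspace(0.0, T, 9)
path = [minimum_jerk(0.0, 1.5, T, t) for t in times]   # fly 1.5 m along one axis
print(np.round(path, 3))
```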
And when I say efficient, it could mean I want to get from point A to point B as quickly as possible, or I want to get to it as gracefully as possible, or I want to consume as little energy as possible. But always staying within the safety constraints. But yes, always finding a safe trajectory. So there's a lot of excitement and progress in the field of machine learning and reinforcement learning and the neural network variant of that with deep reinforcement learning. Do you see a role of machine learning in, so a lot of the success of flying robots did not rely on machine learning, except for maybe a little bit of the perception on the computer vision side. On the control side and the planning, do you see there's a role in the future for machine learning? So let me disagree a little bit with you. I think we never perhaps called out in my work, called out learning, but even this very simple idea of being able to fly through a constrained space. The first time you try it, you'll invariably, you might get it wrong if the task is challenging. And the reason is to get it perfectly right, you have to model everything in the environment. And flying is notoriously hard to model. There are aerodynamic effects that we constantly discover. Even just before I was talking to you, I was talking to a student about how blades flap when they fly. And that ends up changing how a rotorcraft is accelerated in the angular direction. Does he use like micro flaps or something? It's not micro flaps. So we assume that each blade is rigid, but actually it flaps a little bit. It bends. Interesting, yeah. And so the models rely on the fact, on the assumption that they're not rigid. On the assumption that they're actually rigid, but that's not true. If you're flying really quickly, these effects become significant. If you're flying close to the ground, you get pushed off by the ground, right? Something which every pilot knows when he tries to land or she tries to land, this is called a ground effect. Something very few pilots think about is what happens when you go close to a ceiling or you get sucked into a ceiling. There are very few aircrafts that fly close to any kind of ceiling. Likewise, when you go close to a wall, there are these wall effects. And if you've gone on a train and you pass another train that's traveling in the opposite direction, you feel the buffeting. And so these kinds of microclimates affect our UAV significantly. So if you want... And they're impossible to model, essentially. I wouldn't say they're impossible to model, but the level of sophistication you would need in the model and the software would be tremendous. Plus, to get everything right would be awfully tedious. So the way we do this is over time, we figure out how to adapt to these conditions. So early on, we use the form of learning that we call iterative learning. So this idea, if you want to perform a task, there are a few things that you need to change and iterate over a few parameters that over time you can figure out. So I could call it policy gradient reinforcement learning, but actually it was just iterative learning. Iterative learning. And so this was there way back. I think what's interesting is, if you look at autonomous vehicles today, learning occurs, could occur in two pieces. One is perception, understanding the world. Second is action, taking actions. Everything that I've seen that is successful is on the perception side of things. So in computer vision, we've made amazing strides in the last 10 years. 
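Here is a toy sketch of the iterative learning idea mentioned above: repeat the same maneuver, measure the tracking error, and fold a fraction of it back into a feedforward correction, so that a repeatable unmodeled effect, standing in for things like ground effect or blade flapping, gets compensated over trials. The plant model and gains are invented for illustration and are not from the lab's controllers.

```python
import numpy as np

# Iterative learning sketch: the same maneuver is flown repeatedly, and the
# feedforward term absorbs whatever repeatable error the model did not capture.
steps = 50
reference = np.sin(np.linspace(0, np.pi, steps))      # desired trajectory
disturbance = 0.3 * np.ones(steps)                    # repeatable unmodeled effect
feedforward = np.zeros(steps)                         # learned correction
gain = 0.5                                            # learning rate

for trial in range(10):
    actual = reference + disturbance - feedforward    # simplistic stand-in "plant"
    error = reference - actual
    feedforward -= gain * error                       # fold the error back in
    print(f"trial {trial}: max tracking error = {np.max(np.abs(error)):.4f}")
```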
So recognizing objects, actually detecting objects, classifying them and tagging them in some sense, annotating them. This is all done through machine learning. On the action side, on the other hand, I don't know of any examples where there are fielded systems where we actually learn the right behavior, outside of single demonstrations. In the laboratory, this is the holy grail. Can you do end to end learning? Can you go from pixels to motor currents? This is really, really hard. And I think if you go forward, the right way to think about these things is data driven approaches, learning based approaches, in concert with model based approaches, which is the traditional way of doing things. So I think there's a piece, there's a role for each of these methodologies. So what do you think, just jumping off topic since you mentioned autonomous vehicles, what do you think are the limits on the perception side? So I've talked to Elon Musk and there on the perception side, they're using primarily computer vision to perceive the environment. In your work, because you work with the real world a lot and the physical world, what are the limits of computer vision? Do you think we can solve autonomous vehicles on the perception side, focusing on vision alone and machine learning? So, we also have a spinoff company, Exyn Technologies, that works underground in mines. So you go into mines, they're dark, they're dirty. You fly in a dirty area, there's stuff you kick up by the propellers, the downwash kicks up dust. I challenge you to get a computer vision algorithm to work there. So we use LIDARs in that setting. Indoors and even outdoors when we fly through fields, I think there's a lot of potential for just solving the problem using computer vision alone. But I think the bigger question is, can you actually solve or can you actually identify all the corner cases using a single sensing modality and using learning alone? So what's your intuition there? So look, if you have a corner case and your algorithm doesn't work, your instinct is to go get data about the corner case and patch it up, learn how to deal with that corner case. But at some point, this is gonna saturate, this approach is not viable. So today, computer vision algorithms can detect 90% of the objects or can detect objects 90% of the time, classify them 90% of the time. Cats on the internet probably can do 95%, I don't know. But to get from 90% to 99%, you need a lot more data. And then I tell you, well, that's not enough because I have a safety critical application, I wanna go from 99% to 99.9%. That's even more data. So I think if you look at the accuracy you want on the X axis and look at the amount of data on the Y axis, I believe that curve is an exponential curve. Wow, okay, it's even hard if it's linear. It's hard if it's linear, totally, but I think it's exponential. And the other thing you have to think about is that this process is a very, very power hungry process to run data farms or servers. Power, do you mean literally power? Literally power, literally power. So in 2014, five years ago, and I don't have more recent data, 2% of US electricity consumption was from data farms. So we think about this as an information science and information processing problem. Actually, it is an energy processing problem. And so unless we figure out better ways of doing this, I don't think this is viable.
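The exponential data argument above can be made concrete with a toy model: assume, purely for illustration, that each additional nine of accuracy multiplies the required labeled data by ten. The base number is made up; only the shape of the growth matters.

```python
# Toy model of the "exponential data" point: each extra nine of accuracy is
# assumed (for illustration only) to need 10x the labeled data.
base_examples = 1_000_000          # assumed data to reach 90% accuracy
for nines, accuracy in enumerate(["90%", "99%", "99.9%", "99.99%"]):
    print(f"{accuracy:>7}: ~{base_examples * 10**nines:>14,} examples")
```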
So talking about driving, which is a safety critical application, and some aspect of flight is safety critical, maybe a philosophical question, maybe an engineering one, what problem do you think is harder to solve, autonomous driving or autonomous flight? That's a really interesting question. I think autonomous flight has several advantages that autonomous driving doesn't have. So look, if I want to go from point A to point B, I have a very, very safe trajectory. Go vertically up to a maximum altitude, fly horizontally to just about the destination, and then come down vertically. This is preprogrammed. The equivalent of that is very hard to find in the self driving car world because you're on the ground, you're on a two dimensional surface, and the trajectories on the two dimensional surface are more likely to encounter obstacles. I mean this in an intuitive sense, but is it also mathematically true? That's mathematically as well, that's true. There's the other option, in the 2D space, of platooning, or because there's so many obstacles, you can connect with those obstacles, and all these kinds of options. Sure, but those exist in the three dimensional space as well. So they do. So the question also implies, how difficult are obstacles in the three dimensional space in flight? So that's the downside. I think in three dimensional space, you're modeling the three dimensional world, not just because you want to avoid it, but because you want to reason about it, and you want to work in the three dimensional environment, and that's significantly harder. So that's one disadvantage. I think the second disadvantage is, of course, anytime you fly, you have to put up with the peculiarities of aerodynamics and their complicated environments. How do you negotiate that? So that's always a problem. Do you see a time in the future where there is, you mentioned there's agriculture applications, so there's a lot of applications of flying robots, but do you see a time in the future where there's tens of thousands, or maybe hundreds of thousands of delivery drones that fill the sky, delivery flying robots? I think there's a lot of potential for last mile delivery. And so in crowded cities, I don't know, if you go to a place like Hong Kong, just crossing the river can take half an hour, while a drone can just do it in five minutes at most. I think you look at delivery of supplies to remote villages. I work with a nonprofit called WeRobotics. They work in the Peruvian Amazon, where the only highways that are available are rivers. And to get from point A to point B may take five hours, while with a drone, you can get there in 30 minutes. So just delivering drugs, retrieving samples for testing vaccines, I think there's huge potential here. So I think the challenges are not technological, but the challenge is economical. The one thing I'll tell you that nobody thinks about is the fact that we've not made huge strides in battery technology. Yes, it's true, batteries are becoming less expensive because we have these mega factories that are coming up, but they're all based on lithium based technologies. And if you look at the energy density and the power density, those are two fundamentally limiting numbers. So power density is important because for a UAV to take off vertically into the air, which most drones do, they don't have a runway, you consume roughly 200 watts per kilo at the small size. That's a lot, right? In contrast, the human brain consumes less than 80 watts, the whole of the human brain.
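The 200 watts per kilo figure quoted above supports a quick back-of-envelope estimate of hover time. The vehicle mass, battery fraction, and the roughly 250 Wh per kg pack energy density are assumptions, not numbers from the conversation.

```python
# Back-of-envelope hover time using the ~200 W/kg figure from the conversation
# and an assumed lithium pack energy density (~250 Wh/kg, a rough ballpark).
vehicle_mass_kg = 1.5          # total takeoff mass, battery included (assumed)
battery_mass_kg = 0.5          # battery share of that mass (assumed)
hover_power_w = 200 * vehicle_mass_kg          # ~200 W per kg to hover
battery_energy_wh = 250 * battery_mass_kg      # assumed pack energy

flight_time_min = battery_energy_wh / hover_power_w * 60
print(f"hover power : {hover_power_w:.0f} W")
print(f"pack energy : {battery_energy_wh:.0f} Wh")
print(f"hover time  : {flight_time_min:.0f} minutes")
```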
So just imagine just lifting yourself into the air is like two or three light bulbs, which makes no sense to me. Yeah, so you're going to have to at scale solve the energy problem then, charging the batteries, storing the energy and so on. And then the storage is the second problem, but storage limits the range. But you have to remember that you have to burn a lot of it per given time. So the burning is another problem. Which is a power question. Yes, and do you think just your intuition, there are breakthroughs in batteries on the horizon? How hard is that problem? Look, there are a lot of companies that are promising flying cars that are autonomous and that are clean. I think they're over promising. The autonomy piece is doable. The clean piece, I don't think so. There's another company that I work with called JetOptra. They make small jet engines. And they can get up to 50 miles an hour very easily and lift 50 kilos. But they're jet engines, they're efficient, they're a little louder than electric vehicles, but they can build flying cars. So your sense is that there's a lot of pieces that have come together. So on this crazy question, if you look at companies like Kitty Hawk, working on electric, so the clean, talking to Sebastian Thrun, right? It's a crazy dream, you know? But you work with flight a lot. You've mentioned before that manned flights or carrying a human body is very difficult to do. So how crazy is flying cars? Do you think there'll be a day when we have vertical takeoff and landing vehicles that are sufficiently affordable that we're going to see a huge amount of them? And they would look like something like we dream of when we think about flying cars. Yeah, like the Jetsons. The Jetsons, yeah. So look, there are a lot of smart people working on this and you never say something is not possible when you have people like Sebastian Thrun working on it. So I totally think it's viable. I question, again, the electric piece. The electric piece, yeah. And again, for short distances, you can do it. And there's no reason to suggest that these all just have to be rotorcrafts. You take off vertically, but then you morph into a forward flight. I think there are a lot of interesting designs. The question to me is, are these economically viable? And if you agree to do this with fossil fuels, it instantly immediately becomes viable. That's a real challenge. Do you think it's possible for robots and humans to collaborate successfully on tasks? So a lot of robotics folks that I talk to and work with, I mean, humans just add a giant mess to the picture. So it's best to remove them from consideration when solving specific tasks. It's very difficult to model. There's just a source of uncertainty. In your work with these agile flying robots, do you think there's a role for collaboration with humans? Or is it best to model tasks in a way that doesn't have a human in the picture? Well, I don't think we should ever think about robots without human in the picture. Ultimately, robots are there because we want them to solve problems for humans. But there's no general solution to this problem. I think if you look at human interaction and how humans interact with robots, you know, we think of these in sort of three different ways. One is the human commanding the robot. The second is the human collaborating with the robot. So for example, we work on how a robot can actually pick up things with a human and carry things. That's like true collaboration. 
And third, we think about humans as bystanders, self driving cars, what's the human's role and how do self driving cars acknowledge the presence of humans? So I think all of these things are different scenarios. It depends on what kind of humans, what kind of task. And I think it's very difficult to say that there's a general theory that we all have for this. But at the same time, it's also silly to say that we should think about robots independent of humans. So to me, human robot interaction is almost a mandatory aspect of everything we do. Yes, but to which degree, so your thoughts, if we jump to autonomous vehicles, for example, there's a big debate between what's called level two and level four. So semi autonomous and autonomous vehicles. And so the Tesla approach currently at least has a lot of collaboration between human and machine. So the human is supposed to actively supervise the operation of the robot. Part of the safety definition of how safe a robot is in that case is how effective is the human in monitoring it. Do you think that's ultimately not a good approach in sort of having a human in the picture, not as a bystander or part of the infrastructure, but really as part of what's required to make the system safe? This is harder than it sounds. I think, you know, if you, I mean, I'm sure you've driven before in highways and so on. It's really very hard to have to relinquish control to a machine and then take over when needed. So I think Tesla's approach is interesting because it allows you to periodically establish some kind of contact with the car. Toyota, on the other hand, is thinking about shared autonomy or collaborative autonomy as a paradigm. If I may argue, these are very, very simple ways of human robot collaboration, because the task is pretty boring. You sit in a vehicle, you go from point A to point B. I think the more interesting thing to me is, for example, search and rescue. I've got a human first responder, robot first responders. I gotta do something. It's important. I have to do it in two minutes. The building is burning. There's been an explosion. It's collapsed. How do I do it? I think to me, those are the interesting things where it's very, very unstructured. And what's the role of the human? What's the role of the robot? Clearly, there's lots of interesting challenges and there's a field. I think we're gonna make a lot of progress in this area. Yeah, it's an exciting form of collaboration. You're right. In autonomous driving, the main enemy is just boredom of the human. Yes. As opposed to in rescue operations, it's literally life and death. And the collaboration enables the effective completion of the mission. So it's exciting. In some sense, we're also doing this. You think about the human driving a car and almost invariably, the human's trying to estimate the state of the car, they estimate the state of the environment and so on. But what if the car were to estimate the state of the human? So for example, I'm sure you have a smartphone and the smartphone tries to figure out what you're doing and send you reminders and oftentimes telling you to drive to a certain place, although you have no intention of going there because it thinks that that's where you should be because of some Gmail calendar entry or something like that. And it's trying to constantly figure out who you are, what you're doing. 
If a car were to do that, maybe that would make the driver safer because the car is trying to figure out is the driver paying attention, looking at his or her eyes, looking at circadian movements. So I think the potential is there, but from the reverse side, it's not robot modeling, but it's human modeling. It's more on the human, right. And I think the robots can do a very good job of modeling humans if you really think about the framework that you have a human sitting in a cockpit, surrounded by sensors, all staring at him, in addition to be staring outside, but also staring at him. I think there's a real synergy there. Yeah, I love that problem because it's the new 21st century form of psychology, actually AI enabled psychology. A lot of people have sci fi inspired fears of walking robots like those from Boston Dynamics. If you just look at shows on Netflix and so on, or flying robots like those you work with, how would you, how do you think about those fears? How would you alleviate those fears? Do you have inklings, echoes of those same concerns? You know, anytime we develop a technology meaning to have positive impact in the world, there's always the worry that, you know, somebody could subvert those technologies and use it in an adversarial setting. And robotics is no exception, right? So I think it's very easy to weaponize robots. I think we talk about swarms. One thing I worry a lot about is, so, you know, for us to get swarms to work and do something reliably, it's really hard. But suppose I have this challenge of trying to destroy something, and I have a swarm of robots, where only one out of the swarm needs to get to its destination. So that suddenly becomes a lot more doable. And so I worry about, you know, this general idea of using autonomy with lots and lots of agents. I mean, having said that, look, a lot of this technology is not very mature. My favorite saying is that if somebody had to develop this technology, wouldn't you rather the good guys do it? So the good guys have a good understanding of the technology, so they can figure out how this technology is being used in a bad way, or could be used in a bad way and try to defend against it. So we think a lot about that. So we have, we're doing research on how to defend against swarms, for example. That's interesting. There's in fact a report by the National Academies on counter UAS technologies. This is a real threat, but we're also thinking about how to defend against this and knowing how swarms work. Knowing how autonomy works is, I think, very important. So it's not just politicians? Do you think engineers have a role in this discussion? Absolutely. I think the days where politicians can be agnostic to technology are gone. I think every politician needs to be literate in technology. And I often say technology is the new liberal art. Understanding how technology will change your life, I think is important. And every human being needs to understand that. And maybe we can elect some engineers to office as well on the other side. What are the biggest open problems in robotics? And you said we're in the early days in some sense. What are the problems we would like to solve in robotics? I think there are lots of problems, right? But I would phrase it in the following way. If you look at the robots we're building, they're still very much tailored towards doing specific tasks and specific settings. 
I think the question of how do you get them to operate in much broader settings where things can change in unstructured environments is up in the air. So think of self driving cars. Today, we can build a self driving car in a parking lot. We can do level five autonomy in a parking lot. But can you do a level five autonomy in the streets of Napoli in Italy or Mumbai in India? No. So in some sense, when we think about robotics, we have to think about where they're functioning, what kind of environment, what kind of a task. We have no understanding of how to put both those things together. So we're in the very early days of applying it to the physical world. And I was just in Naples actually. And there's levels of difficulty and complexity depending on which area you're applying it to. I think so. And we don't have a systematic way of understanding that. Everybody says, just because a computer can now beat a human at any board game, we certainly know something about intelligence. That's not true. A computer board game is very, very structured. It is the equivalent of working in a Henry Ford factory where things, parts come, you assemble, move on. It's a very, very, very structured setting. That's the easiest thing. And we know how to do that. So you've done a lot of incredible work at the UPenn, University of Pennsylvania, GraspLab. You're now Dean of Engineering at UPenn. What advice do you have for a new bright eyed undergrad interested in robotics or AI or engineering? Well, I think there's really three things. One is you have to get used to the idea that the world will not be the same in five years or four years whenever you graduate, right? Which is really hard to do. So this thing about predicting the future, every one of us needs to be trying to predict the future always. Not because you'll be any good at it, but by thinking about it, I think you sharpen your senses and you become smarter. So that's number one. Number two, it's a corollary of the first piece, which is you really don't know what's gonna be important. So this idea that I'm gonna specialize in something which will allow me to go in a particular direction, it may be interesting, but it's important also to have this breadth so you have this jumping off point. I think the third thing, and this is where I think Penn excels. I mean, we teach engineering, but it's always in the context of the liberal arts. It's always in the context of society. As engineers, we cannot afford to lose sight of that. So I think that's important. But I think one thing that people underestimate when they do robotics is the importance of mathematical foundations, the importance of representations. Not everything can just be solved by looking for Ross packages on the internet or to find a deep neural network that works. I think the representation question is key, even to machine learning, where if you ever hope to achieve or get to explainable AI, somehow there need to be representations that you can understand. So if you wanna do robotics, you should also do mathematics. And you said liberal arts, a little literature. If you wanna build a robot, it should be reading Dostoyevsky. I agree with that. Very good. So Vijay, thank you so much for talking today. It was an honor. Thank you. It was just a very exciting conversation. Thank you.
Vijay Kumar: Flying Robots | Lex Fridman Podcast #37
The following is a conversation with Francois Chollet. He's the creator of Keras, which is an open source deep learning library that is designed to enable fast, user friendly experimentation with deep neural networks. It serves as an interface to several deep learning libraries, most popular of which is TensorFlow, and it was integrated into the TensorFlow main code base a while ago. Meaning, if you want to create, train, and use neural networks, probably the easiest and most popular option is to use Keras inside TensorFlow. Aside from creating an exceptionally useful and popular library, Francois is also a world class AI researcher and software engineer at Google. And he's definitely an outspoken, if not controversial, personality in the AI world, especially in the realm of ideas around the future of artificial intelligence. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. And now, here's my conversation with Francois Chollet. You're known for not sugarcoating your opinions and speaking your mind about ideas in AI, especially on Twitter. It's one of my favorite Twitter accounts. So what's one of the more controversial ideas you've expressed online and gotten some heat for? How do you pick? How do I pick? Yeah, no, I think if you go through the trouble of maintaining a Twitter account, you might as well speak your mind, you know? Otherwise, what's even the point of having a Twitter account? It's like having a nice car and just leaving it in the garage. Yeah, so what's one thing for which I got a lot of pushback? Perhaps, you know, that time I wrote something about the idea of intelligence explosion, and I was questioning the idea and the reasoning behind this idea. And I got a lot of pushback on that. I got a lot of flak for it. So yeah, so intelligence explosion, I'm sure you're familiar with the idea, but it's the idea that if you were to build general AI problem solving algorithms, well, the problem of building such an AI, that itself is a problem that could be solved by your AI, and maybe it could be solved better than what humans can do. So your AI could start tweaking its own algorithm, could start making a better version of itself, and so on iteratively in a recursive fashion. And so you would end up with an AI with exponentially increasing intelligence. That's right. And I was basically questioning this idea, first of all, because the notion of intelligence explosion uses an implicit definition of intelligence that doesn't sound quite right to me. It considers intelligence as a property of a brain that you can consider in isolation, like the height of a building, for instance. But that's not really what intelligence is. Intelligence emerges from the interaction between a brain, a body, like embodied intelligence, and an environment. And if you're missing one of these pieces, then you cannot really define intelligence anymore. So just tweaking a brain to make it smarter and smarter doesn't actually make any sense to me. So first of all, you're crushing the dreams of many people, right? So there's a, let's look at like Sam Harris. Actually, a lot of physicists, Max Tegmark, people who think the universe is an information processing system, our brain is kind of an information processing system. So what's the theoretical limit?
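To ground the remark at the top of this conversation that Keras inside TensorFlow is the easiest way to create and train a neural network, here is a minimal sketch of that workflow, with random data standing in for a real dataset.

```python
import numpy as np
import tensorflow as tf

# Minimal Keras-inside-TensorFlow workflow: define a model, compile it, train it.
x = np.random.rand(256, 20).astype("float32")
y = np.random.randint(0, 2, size=(256, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, verbose=1)
```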
Like, it doesn't make sense that there should be some, it seems naive to think that our own brain is somehow the limit of the capabilities of this information system. I'm playing devil's advocate here. This information processing system. And then if you just scale it, if you're able to build something that's on par with the brain, you just, the process that builds it just continues and it'll improve exponentially. So that's the logic that's used actually by almost everybody that is worried about super human intelligence. So you're trying to make, so most people who are skeptical of that are kind of like, this doesn't, their thought process, this doesn't feel right. Like that's for me as well. So I'm more like, it doesn't, the whole thing is shrouded in mystery where you can't really say anything concrete, but you could say this doesn't feel right. This doesn't feel like that's how the brain works. And you're trying to with your blog posts and now making it a little more explicit. So one idea is that the brain isn't exist alone. It exists within the environment. So you can't exponentially, you would have to somehow exponentially improve the environment and the brain together almost. Yeah, in order to create something that's much smarter in some kind of, of course we don't have a definition of intelligence. That's correct, that's correct. I don't think, you should look at very smart people today, even humans, not even talking about AIs. I don't think their brain and the performance of their brain is the bottleneck to their expressed intelligence, to their achievements. You cannot just tweak one part of this system, like of this brain, body, environment system and expect that capabilities like what emerges out of this system to just explode exponentially. Because anytime you improve one part of a system with many interdependencies like this, there's a new bottleneck that arises, right? And I don't think even today for very smart people, their brain is not the bottleneck to the sort of problems they can solve, right? In fact, many very smart people today, you know, they are not actually solving any big scientific problems, they're not Einstein. They're like Einstein, but you know, the patent clerk days. Like Einstein became Einstein because this was a meeting of a genius with a big problem at the right time, right? But maybe this meeting could have never happened and then Einstein would have just been a patent clerk, right? And in fact, many people today are probably like genius level smart, but you wouldn't know because they're not really expressing any of that. Wow, that's brilliant. So we can think of the world, Earth, but also the universe as just as a space of problems. So all these problems and tasks are roaming it of various difficulty. And there's agents, creatures like ourselves and animals and so on that are also roaming it. And then you get coupled with a problem and then you solve it. But without that coupling, you can't demonstrate your quote unquote intelligence. Exactly, intelligence is the meeting of great problem solving capabilities with a great problem. And if you don't have the problem, you don't really express any intelligence. All you're left with is potential intelligence, like the performance of your brain or how high your IQ is, which in itself is just a number, right? So you mentioned problem solving capacity. Yeah. What do you think of as problem solving capacity? Can you try to define intelligence? Like what does it mean to be more or less intelligent? 
Is it completely coupled to a particular problem or is there something a little bit more universal? Yeah, I do believe all intelligence is specialized intelligence. Even human intelligence has some degree of generality. Well, all intelligent systems have some degree of generality but they're always specialized in one category of problems. So the human intelligence is specialized in the human experience. And that shows at various levels, that shows in some prior knowledge that's innate that we have at birth. Knowledge about things like agents, goal driven behavior, visual priors about what makes an object, priors about time and so on. That shows also in the way we learn. For instance, it's very, very easy for us to pick up language. It's very, very easy for us to learn certain things because we are basically hard coded to learn them. And we are specialized in solving certain kinds of problem and we are quite useless when it comes to other kinds of problems. For instance, we are not really designed to handle very long term problems. We have no capability of seeing the very long term. We don't have very much working memory. So how do you think about long term? Do you think long term planning, are we talking about scale of years, millennia? What do you mean by long term? We're not very good. Well, human intelligence is specialized in the human experience. And human experience is very short. One lifetime is short. Even within one lifetime, we have a very hard time envisioning things on a scale of years. It's very difficult to project yourself at a scale of five years, at a scale of 10 years and so on. We can solve only fairly narrowly scoped problems. So when it comes to solving bigger problems, larger scale problems, we are not actually doing it on an individual level. So it's not actually our brain doing it. We have this thing called civilization, right? Which is itself a sort of problem solving system, a sort of artificially intelligent system, right? And it's not running on one brain, it's running on a network of brains. In fact, it's running on much more than a network of brains. It's running on a lot of infrastructure, like books and computers and the internet and human institutions and so on. And that is capable of handling problems on a much greater scale than any individual human. If you look at computer science, for instance, that's an institution that solves problems and it is superhuman, right? It operates on a greater scale. It can solve much bigger problems than an individual human could. And science itself, science as a system, as an institution, is a kind of artificially intelligent problem solving algorithm that is superhuman. Yeah, it's, at least computer science is like a theorem prover at a scale of thousands, maybe hundreds of thousands of human beings. At that scale, what do you think is an intelligent agent? So there's us humans at the individual level, there is millions, maybe billions of bacteria in our skin. There is, that's at the smaller scale. You can even go to the particle level as systems that behave, you can say intelligently in some ways. And then you can look at the earth as a single organism, you can look at our galaxy and even the universe as a single organism. Do you think, how do you think about scale in defining intelligent systems? And we're here at Google, there is millions of devices doing computation just in a distributed way. How do you think about intelligence versus scale? You can always characterize anything as a system. 
I think people who talk about things like intelligence explosion tend to focus on one agent, which is basically one brain, like one brain considered in isolation, like a brain in a jar that's controlling a body in a very like top to bottom kind of fashion. And that body is pursuing goals in an environment. So it's a very hierarchical view. You have the brain at the top of the pyramid, then you have the body just plainly receiving orders. And then the body is manipulating objects in the environment and so on. So everything is subordinate to this one thing, this epicenter, which is the brain. But in real life, intelligent agents don't really work like this, right? There is no strong delimitation between the brain and the body to start with. You have to look not just at the brain, but at the nervous system. But then the nervous system and the body are not really two separate entities. So you have to look at an entire animal as one agent. But then you start realizing as you observe an animal over any length of time, that a lot of the intelligence of an animal is actually externalized. That's especially true for humans. A lot of our intelligence is externalized. When you write down some notes, that is externalized intelligence. When you write a computer program, you are externalizing cognition. So it's externalized in books, it's externalized in computers, the internet, in other humans. It's externalized in language and so on. So there is no hard delimitation of what makes an intelligent agent. It's all about context. Okay, but AlphaGo is better at Go than the best human player. There's levels of skill here. So do you think there's such an ability, such a concept as intelligence explosion in a specific task? And then, well, yeah. Do you think it's possible to have a category of tasks on which you do have something like an exponential growth of ability to solve that particular problem? I think if you consider a specific vertical, it's probably possible to some extent. I also don't think we have to speculate about it because we have real world examples of recursively self improving intelligent systems, right? So for instance, science is a problem solving system, a knowledge generation system, like a system that experiences the world in some sense and then gradually understands it and can act on it. And that system is superhuman and it is clearly recursively self improving because science feeds into technology. Technology can be used to build better tools, better computers, better instrumentation and so on, which in turn can make science faster, right? So science is probably the closest thing we have today to a recursively self improving superhuman AI. And you can just observe it, is science, is scientific progress today exploding, which itself is an interesting question. You can use that as a basis to try to understand what will happen with a superhuman AI that has a science like behavior. Let me linger on it a little bit more. What is your intuition why an intelligence explosion is not possible? Like taking the scientific, all the scientific revolutions, why can't we slightly accelerate that process? So you can absolutely accelerate any problem solving process. So recursive self improvement is absolutely a real thing. But what happens with a recursively self improving system is typically not explosion, because no system exists in isolation. And so tweaking one part of the system means that suddenly another part of the system becomes a bottleneck.
And if you look at science, for instance, which is clearly a recursively self improving, clearly a problem solving system, scientific progress is not actually exploding. If you look at science, what you see is the picture of a system that is consuming an exponentially increasing amount of resources, but it's having a linear output in terms of scientific progress. And maybe that will seem like a very strong claim. Many people are actually saying that, scientific progress is exponential, but when they're claiming this, they're actually looking at indicators of resource consumption by science. For instance, the number of papers being published, the number of patents being filed and so on, which are just completely correlated with how many people are working on science today. So it's actually an indicator of resource consumption, but what you should look at is the output, is progress in terms of the knowledge that science generates, in terms of the scope and significance of the problems that we solve. And some people have actually been trying to measure that. Like Michael Nielsen, for instance, he had a very nice paper, I think that was last year about it. So his approach to measure scientific progress was to look at the timeline of scientific discoveries over the past, you know, 100, 150 years. And for each major discovery, ask a panel of experts to rate the significance of the discovery. And if the output of science as an institution were exponential, you would expect the temporal density of significance to go up exponentially. Maybe because there's a faster rate of discoveries, maybe because the discoveries are, you know, increasingly more important. And what actually happens if you plot this temporal density of significance measured in this way, is that you see very much a flat graph. You see a flat graph across all disciplines, across physics, biology, medicine, and so on. And it actually makes a lot of sense if you think about it, because think about the progress of physics 110 years ago, right? It was a time of crazy change. Think about the progress of technology, you know, 170 years ago, when we started having, you know, replacing horses with cars, when we started having electricity and so on. It was a time of incredible change. And today is also a time of very, very fast change, but it would be an unfair characterization to say that today technology and science are moving way faster than they did 50 years ago or 100 years ago. And if you do try to rigorously plot the temporal density of the significance, yeah, of significance, sorry, you do see very flat curves. And you can check out the paper that Michael Nielsen had about this idea. And so the way I interpret it is, as you make progress in a given field, or in a given subfield of science, it becomes exponentially more difficult to make further progress. Like the very first person to work on information theory. If you enter a new field, and it's still the very early years, there's a lot of low hanging fruit you can pick. That's right, yeah. But the next generation of researchers is gonna have to dig much harder, actually, to make smaller discoveries, probably larger number of smaller discoveries, and to achieve the same amount of impact, you're gonna need a much greater head count. And that's exactly the picture you're seeing with science, that the number of scientists and engineers is in fact increasing exponentially. The amount of computational resources that are available to science is increasing exponentially and so on. 
So the resource consumption of science is exponential, but the output in terms of progress, in terms of significance, is linear. And the reason why is because, even though science is recursively self improving, meaning that scientific progress turns into technological progress, which in turn helps science. If you look at computers, for instance, which are products of science, computers are tremendously useful in speeding up science. The internet, same thing, the internet is a technology that's made possible by very recent scientific advances. And itself, because it enables scientists to network, to communicate, to exchange papers and ideas much faster, it is a way to speed up scientific progress. So even though you're looking at a recursively self improving system, it is consuming exponentially more resources to produce the same amount of problem solving, very much. So that's a fascinating way to paint it, and certainly that holds for the deep learning community. If you look at the temporal, what did you call it, the temporal density of significant ideas, if you look at it in deep learning, I think, I'd have to think about that, but if you really look at significant ideas in deep learning, they might even be decreasing. So I do believe the per paper significance is decreasing, but the amount of papers is still today exponentially increasing. So I think if you look at an aggregate, my guess is that you would see a linear progress. If you were to sum the significance of all papers, you would see roughly linear progress. And in my opinion, it is not a coincidence that you're seeing linear progress in science despite exponential resource consumption. I think the resource consumption is dynamically adjusting itself to maintain linear progress because we as a community expect linear progress, meaning that if we start investing less and seeing less progress, it means that suddenly there are some lower hanging fruits that become available and someone's gonna step up and pick them, right? So it's very much like a market for discoveries and ideas. But there's another fundamental part which you're highlighting, which is the hypothesis that in science, or like the space of ideas, any one path you travel down, it gets exponentially more difficult to develop new ideas. And your sense is that's gonna hold across our mysterious universe. Yes, well, exponential progress triggers exponential friction. So that if you tweak one part of the system, suddenly some other part becomes a bottleneck, right? For instance, let's say you develop some device that measures its own acceleration and then it has some engine and it outputs even more acceleration in proportion to its own acceleration and you drop it somewhere, it's not gonna reach infinite speed because it exists in a certain context. So the air around it is gonna generate friction and it's gonna block it at some top speed. And even if you were to consider the broader context and lift the bottleneck there, like the bottleneck of friction, then some other part of the system would start stepping in and creating exponential friction, maybe the speed of light or whatever. And this definitely holds true when you look at the problem solving algorithm that is being run by science as an institution, science as a system. As you make more and more progress, despite having this recursive self improvement component, you are encountering exponential friction.
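The self accelerating device analogy can be made concrete in a few lines of simulation. This is only an illustrative sketch with arbitrary coefficients: a system whose acceleration grows with its own speed still settles at a finite terminal velocity once a friction term that grows faster with speed is included.

```python
# Illustrative sketch of the analogy above (coefficients are arbitrary):
# a self-amplifying system whose "boost" grows with its current speed,
# plus a friction term that grows even faster with speed.
# Despite the recursive self-amplification, speed plateaus instead of exploding.
dt = 0.01
v = 1.0          # initial speed
boost = 0.5      # self-amplification: acceleration proportional to speed
drag = 0.05      # friction: deceleration proportional to speed squared

for step in range(20000):
    a = boost * v - drag * v**2   # net acceleration
    v += a * dt

# Terminal velocity is where boost*v == drag*v^2, i.e. v = boost/drag = 10.
print(f"speed after simulation: {v:.2f}")
```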
The more researchers you have working on different ideas, the more overhead you have in terms of communication across researchers. If you look at, you were mentioning quantum mechanics, right? Well, if you want to start making significant discoveries today, significant progress in quantum mechanics, there is an amount of knowledge you have to ingest, which is huge. So there's a very large overhead to even start to contribute. There's a large amount of overhead to synchronize across researchers and so on. And of course, the significant practical experiments are going to require exponentially expensive equipment because the easier ones have already been run, right? So your sense is there's no way of escaping this kind of friction with artificial intelligence systems. Yeah, no, I think science is a very good way to model what would happen with a superhuman recursively self improving AI. That's your sense, I mean, the... That's my intuition. It's not like a mathematical proof of anything. That's not my point. Like, I'm not trying to prove anything. I'm just trying to make an argument to question the narrative of intelligence explosion, which is quite a dominant narrative. And you do get a lot of pushback if you go against it. Because, so for many people, right, AI is not just a subfield of computer science. It's more like a belief system. Like this belief that the world is headed towards an event, the singularity, past which, you know, AI will become... will go exponential very much, and the world will be transformed, and humans will become obsolete. And if you go against this narrative, because it is not really a scientific argument, but more of a belief system, it is part of the identity of many people. If you go against this narrative, it's like you're attacking the identity of people who believe in it. It's almost like saying God doesn't exist, or something. So you do get a lot of pushback if you try to question these ideas. First of all, I believe most people, they might not be as eloquent or explicit as you're being, but most people in computer science, or most people who actually have built anything that you could call AI, quote, unquote, would agree with you. They might not be describing it in the same kind of way. It's more, so the pushback you're getting is from people who get attached to the narrative from, not from a place of science, but from a place of imagination. That's correct, that's correct. So why do you think that's so appealing? Because the usual dreams that people have when you create a superintelligent system past the singularity, that what people imagine is somehow always destructive. Do you have, if you were to put on your psychology hat, what's, why is it so appealing to imagine the ways that all of human civilization will be destroyed? I think it's a good story. You know, it's a good story. And very interestingly, it mirrors religious stories, right, religious mythology. If you look at the mythology of most civilizations, it's about the world being headed towards some final event in which the world will be destroyed and some new world order will arise that will be mostly spiritual, like the apocalypse followed by a paradise probably, right? It's a very appealing story on a fundamental level. And we all need stories. We all need stories to structure the way we see the world, especially at timescales that are beyond our ability to make predictions, right?
So on a more serious, non exponential explosion question: do you think there will be a time when we'll create something like human level intelligence or intelligent systems that will make you sit back and be just surprised at damn how smart this thing is? That doesn't require exponential growth or an exponential improvement, but what's your sense of the timeline and so on that you'll be really surprised at certain capabilities? And we'll talk about limitations and deep learning. So do you think in your lifetime, you'll be really damn surprised? Around 2013, 2014, I was many times surprised by the capabilities of deep learning actually. That was before we had assessed exactly what deep learning could do and could not do. And it felt like a time of immense potential. And then we started narrowing it down, but I was very surprised. I would say it has already happened. Was there a moment, there must've been a day in there where your surprise was almost bordering on the belief of the narrative that we just discussed. Was there a moment, because you've written quite eloquently about the limits of deep learning, was there a moment that you thought that maybe deep learning is limitless? No, I don't think I've ever believed this. What was really shocking is that it worked. It worked at all, yeah. But there's a big jump between being able to do really good computer vision and human level intelligence. So I don't think at any point I was under the impression that the results we got in computer vision meant that we were very close to human level intelligence. I don't think we're very close to human level intelligence. I do believe that there's no reason why we won't achieve it at some point. I also believe that the problem with talking about human level intelligence is that implicitly you're considering like an axis of intelligence with different levels, but that's not really how intelligence works. Intelligence is very multi dimensional. And so there's the question of capabilities, but there's also the question of being human like, and it's two very different things. Like you can build potentially very advanced intelligent agents that are not human like at all. And you can also build very human like agents. And these are two very different things, right? Right. Let's go from the philosophical to the practical. Can you give me a history of Keras and all the major deep learning frameworks that you kind of remember in relation to Keras and in general, TensorFlow, Theano, the old days. Can you give a brief overview Wikipedia style history and your role in it before we return to AGI discussions? Yeah, that's a broad topic. So I started working on Keras. It didn't have the name Keras at the time. I actually picked the name like just the day I was going to release it. So I started working on it in February, 2015. And so at the time there weren't too many people working on deep learning, maybe like fewer than 10,000. The software tooling was not really developed. So the main deep learning library was Caffe, which was mostly C++. Why do you say Caffe was the main one? Caffe was vastly more popular than Theano in late 2014, early 2015. Caffe was the one library that everyone was using for computer vision. And computer vision was the most popular problem in deep learning at the time. Absolutely. Like ConvNets was like the subfield of deep learning that everyone was working on. So myself, so in late 2014, I was actually interested in RNNs, in recurrent neural networks, which was a very niche topic at the time, right?
It really took off around 2016. And so I was looking for good tools. I had used Torch 7, I had used Theano, used Theano a lot in Kaggle competitions. I had used Caffe. And there was no like good solution for RNNs at the time. Like there was no reusable open source implementation of an LSTM, for instance. So I decided to build my own. And at first, the pitch for that was, it was gonna be mostly around LSTM recurrent neural networks. It was gonna be in Python. An important decision at the time that was kind of not obvious is that the models would be defined via Python code, which was kind of like going against the mainstream at the time because Caffe, PyLearn2, and so on, like all the big libraries were actually going with the approach of setting configuration files in YAML to define models. So some libraries were using code to define models, like Torch 7, obviously, but that was not Python. Lasagne was like a Theano based very early library that was, I think, developed, I don't remember exactly, probably late 2014. It's Python as well. It's Python as well. It was like on top of Theano. And so I started working on something and the value proposition at the time was that not only what I think was the first reusable open source implementation of LSTM, you could combine RNNs and convnets with the same library, which was not really possible before; Caffe was only doing convnets. And it was kind of easy to use because, so before I was using Theano, I was actually using scikit-learn and I loved scikit-learn for its usability. So I drew a lot of inspiration from scikit-learn when I made Keras. It's almost like scikit-learn for neural networks. The fit function. Exactly, the fit function, like reducing a complex training loop to a single function call, right? And of course, some people will say, this is hiding a lot of details, but that's exactly the point, right? The magic is the point. So it's magical, but in a good way. It's magical in the sense that it's delightful. Yeah, yeah. I'm actually quite surprised. I didn't know that it was born out of a desire to implement RNNs and LSTMs. It was. That's fascinating. So you were actually one of the first people to really try to attempt to get the major architectures together. And it's also interesting. You made me realize that it was a design decision at all, defining the model in code. Just, I'm putting myself in your shoes, whether to go with YAML, especially since Caffe was the most popular. It was the most popular by far. If I were, yeah, I didn't like the YAML thing, but it seems to make more sense that you would put the definition of a model in a configuration file. That's an interesting gutsy move to stick with defining it in code. Just if you look back. Other libraries were doing it as well, but it was definitely the more niche option. Yeah. Okay, Keras and then. So I released Keras in March, 2015, and it got users pretty much from the start. So the deep learning community was very, very small at the time. Lots of people were starting to be interested in LSTM. So it was released at the right time because it was offering an easy to use LSTM implementation. Exactly at the time where lots of people started to be intrigued by the capabilities of RNNs, RNNs for NLP. So it grew from there. Then I joined Google about six months later, and that was actually completely unrelated to Keras. So I actually joined a research team working on image classification, mostly like computer vision. So I was doing computer vision research at Google initially.
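To make concrete the workflow described a moment ago, defining the model directly in Python code, combining convnets and RNNs in one library, and reducing training to a single fit call, here is a minimal illustrative sketch using today's Keras API; the layer sizes and data are invented, and this is not Chollet's original code.

```python
# Illustrative sketch of the Keras workflow described above: the model is
# defined directly in Python code, and training is reduced to a single
# fit() call, much like scikit-learn's estimator API. Shapes and data are
# made up for demonstration.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Combining a convnet front end with an LSTM in one library was part of the
# original value proposition, e.g. classifying short sequences of frames.
model = keras.Sequential([
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu"),
                           input_shape=(10, 32, 32, 1)),
    layers.TimeDistributed(layers.GlobalAveragePooling2D()),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy data: 8 sequences of 10 grayscale 32x32 frames.
x = np.random.rand(8, 10, 32, 32, 1).astype("float32")
y = np.random.randint(0, 2, size=(8, 1))

# The "magic" single call that replaces a hand-written training loop.
model.fit(x, y, epochs=1, batch_size=4)
```

The point of the sketch is the shape of the API: the model is ordinary Python, and fit hides the training loop.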
And immediately when I joined Google, I was exposed to the early internal version of TensorFlow. And the way it appeared to me at the time, and it was definitely the way it was at the time, is that this was an improved version of Theano. So I immediately knew I had to port Keras to this new TensorFlow thing. And I was actually very busy as a Noogler, as a new Googler. So I had no time to work on that. But then in November, I think it was November, 2015, TensorFlow got released. And it was kind of like my wake up call that, hey, I had to actually go and make it happen. So in December, I ported Keras to run on top of TensorFlow, but it was not exactly a port. It was more like a refactoring where I was abstracting away all the backend functionality into one module so that the same code base could run on top of multiple backends. So on top of TensorFlow or Theano. And for the next year, Theano stayed as the default option. It was easier to use, somewhat less buggy. It was much faster, especially when it came to RNNs. But eventually, TensorFlow overtook it. And TensorFlow, the early TensorFlow, had similar architectural decisions as Theano, right? So it was a natural transition. Yeah, absolutely. So at that point, I mean, Keras is still a side, almost fun project, right? Yeah, so it was not my job assignment. It was not. I was doing it on the side. And even though it grew to have a lot of users for a deep learning library at the time, like throughout 2016, but I wasn't doing it as my main job. So things started changing in, I think it must have been maybe October, 2016. So one year later. So Rajat, who was the lead on TensorFlow, basically showed up one day in our building where I was doing research. So I did a lot of computer vision research, also collaborations with Christian Szegedy on deep learning for theorem proving. It was a really interesting research topic. And so Rajat was saying, hey, we saw Keras, we like it. We saw that you're at Google. Why don't you come over for like a quarter and work with us? And I was like, yeah, that sounds like a great opportunity. Let's do it. And so I started working on integrating the Keras API into TensorFlow more tightly. So what followed up is a sort of like temporary TensorFlow only version of Keras that was in tf.contrib for a while. And finally moved to TensorFlow Core. And I've never actually gotten back to my old team doing research. Well, it's kind of funny that somebody like you who dreams of, or at least sees the power of AI systems that reason, and theorem proving, which we'll talk about, has also created a system that makes the most basic kind of Lego building that is deep learning super accessible, super easy. So beautifully so. It's a funny irony that you're responsible for both things, but so TensorFlow 2.0 is kind of, there's a sprint. I don't know how long it'll take, but there's a sprint towards the finish. What are you working on these days? What are you excited about? What are you excited about in 2.0? I mean, eager execution. There's so many things that just make it a lot easier to work. What are you excited about and what's also really hard? What are the problems you have to kind of solve? So I've spent the past year and a half working on TensorFlow 2.0 and it's been a long journey. I'm actually extremely excited about it. I think it's a great product. It's a delightful product compared to TensorFlow 1.0. We've made huge progress.
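The refactoring described above, abstracting backend functionality into one module so the same code base could run on Theano or TensorFlow, follows a simple dispatch pattern. Below is a toy, hypothetical illustration of that pattern; the class and function names are invented and this is not Keras's actual backend API. The two "backends" here are just NumPy variants, and the layer code only ever talks to the backend object it is given.

```python
# Toy illustration of the backend-abstraction pattern described above:
# all low-level tensor ops go through one backend object, so the same
# high-level code can run on top of different engines. The two "backends"
# are just NumPy variants, standing in for Theano and TensorFlow.
import numpy as np

class NumpyBackend:
    def dot(self, a, b): return np.dot(a, b)
    def relu(self, x):   return np.maximum(x, 0.0)

class LoopBackend:  # a deliberately naive second backend
    def dot(self, a, b):
        out = np.zeros((a.shape[0], b.shape[1]))
        for i in range(a.shape[0]):
            for j in range(b.shape[1]):
                out[i, j] = sum(a[i, k] * b[k, j] for k in range(a.shape[1]))
        return out
    def relu(self, x):   return np.where(x > 0, x, 0.0)

def dense_layer(K, x, w):
    # Backend-agnostic "layer": it only ever calls methods on the backend K.
    return K.relu(K.dot(x, w))

x = np.random.rand(2, 4)
w = np.random.rand(4, 3)
for K in (NumpyBackend(), LoopBackend()):
    print(dense_layer(K, x, w))
```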
So on the Keras side, what I'm really excited about is that, so previously Keras has been this very easy to use high level interface to do deep learning. But if you wanted a lot of flexibility, the Keras framework was probably not the optimal way to do things compared to just writing everything from scratch. So in some way, the framework was getting in the way. And in TensorFlow 2.0, you don't have this at all, actually. You have the usability of the high level interface, but you have the flexibility of this lower level interface. And you have this spectrum of workflows where you can get more or less usability and flexibility trade offs depending on your needs, right? You can write everything from scratch and you get a lot of help doing so by subclassing models and writing your own training loops using eager execution. It's very flexible, it's very easy to debug, it's very powerful. But all of this integrates seamlessly with higher level features up to the classic Keras workflows, which are very scikit-learn like and are ideal for a data scientist, machine learning engineer type of profile. So now you can have the same framework offering the same set of APIs that enable a spectrum of workflows that are more or less low level, more or less high level, that are suitable for profiles ranging from researchers to data scientists and everything in between. Yeah, so that's super exciting. I mean, it's not just that, it's connected to all kinds of tooling. You can go on mobile, you can go with TensorFlow Lite, you can go to the cloud, to serving, and so on. It all is connected together. Now some of the best software written ever is often done by one person, sometimes two. So with Google, you're now seeing sort of Keras having to be integrated in TensorFlow, which I'm sure has a ton of engineers working on it. And there's, I'm sure, a lot of tricky design decisions to be made. How does that process usually happen, from at least your perspective? What are the debates like? Is there a lot of thinking, considering different options and so on? Yes. So a lot of the time I spend at Google is actually spent on design discussions, right? Writing design docs, participating in design review meetings and so on. This is as important as actually writing code. Right. So there's a lot of thought, there's a lot of thought and a lot of care that is taken in coming up with these decisions and taking into account all of our users because TensorFlow has this extremely diverse user base, right? It's not like just one user segment where everyone has the same needs. We have small scale production users, large scale production users. We have startups, we have researchers, you know, it's all over the place. And we have to cater to all of their needs. If I just look at the standard debates of C++ or Python, there's some heated debates. Do you have those at Google? I mean, they're not heated in terms of emotionally, but there's probably multiple ways to do it, right? So how do you arrive through those design meetings at the best way to do it? Especially in deep learning where the field is evolving as you're doing it. Is there some magic to it? Is there some magic to the process? I don't know if there's magic to the process, but there definitely is a process. So making design decisions is about satisfying a set of constraints, but also trying to do so in the simplest way possible, because this is what can be maintained, this is what can be expanded in the future.
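As a small, hedged sketch of the spectrum of workflows described above: the same subclassed model can be trained with a hand written eager training loop at the flexible end, or handed to the classic compile and fit workflow at the scikit-learn like end. The layer sizes and data below are invented.

```python
# Sketch of the "spectrum of workflows" idea in TensorFlow 2.x: the same
# subclassed model can be trained with a hand-written eager training loop
# (maximum flexibility) or with the classic compile()/fit() workflow
# (maximum usability). Sizes and data are made up.
import numpy as np
import tensorflow as tf

class TinyClassifier(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(16, activation="relu")
        self.out = tf.keras.layers.Dense(1, activation="sigmoid")
    def call(self, x):
        return self.out(self.hidden(x))

x = np.random.rand(64, 8).astype("float32")
y = np.random.randint(0, 2, size=(64, 1)).astype("float32")

# Low-level end of the spectrum: explicit gradients, easy to step through.
model = TinyClassifier()
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.BinaryCrossentropy()
for step in range(10):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

# High-level end of the spectrum: the same model, scikit-learn style.
model2 = TinyClassifier()
model2.compile(optimizer="adam", loss="binary_crossentropy")
model2.fit(x, y, epochs=1, batch_size=16)
```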
So you don't want to naively satisfy the constraints by just, you know, for each capability you need available, you're gonna come up with one argument in your API and so on. You want to design APIs that are modular and hierarchical so that they have an API surface that is as small as possible, right? And you want this modular hierarchical architecture to reflect the way that domain experts think about the problem. Because as a domain expert, when you are reading about a new API, you're reading a tutorial or some docs pages, you already have a way that you're thinking about the problem. You already have like certain concepts in mind and you're thinking about how they relate together. And when you're reading docs, you're trying to build as quickly as possible a mapping between the concepts featured in your API and the concepts in your mind. So you're trying to map your mental model as a domain expert to the way things work in the API. So you need an API and an underlying implementation that are reflecting the way people think about these things. So in minimizing the time it takes to do the mapping. Yes, minimizing the time, the cognitive load there is in ingesting this new knowledge about your API. An API should not be self referential or referring to implementation details. It should only be referring to domain specific concepts that people already understand. Brilliant. So what's the future of Keras and TensorFlow look like? What does TensorFlow 3.0 look like? So that's kind of too far in the future for me to answer, especially since I'm not even the one making these decisions. Okay. But so from my perspective, which is just one perspective among many different perspectives on the TensorFlow team, I'm really excited by developing even higher level APIs, higher level than Keras. I'm really excited by hyperparameter tuning, by automated machine learning, AutoML. I think the future is not just, you know, defining a model like you were assembling Lego blocks and then collect fit on it. It's more like an automagical model that would just look at your data and optimize the objective you're after, right? So that's what I'm looking into. Yeah, so you put the baby into a room with the problem and come back a few hours later with a fully solved problem. Exactly, it's not like a box of Legos. It's more like the combination of a kid that's really good at Legos and a box of Legos. It's just building the thing on its own. Very nice. So that's an exciting future. I think there's a huge amount of applications and revolutions to be had under the constraints of the discussion we previously had. But what do you think of the current limits of deep learning? If we look specifically at these function approximators that tries to generalize from data. You've talked about local versus extreme generalization. You mentioned that neural networks don't generalize well and humans do. So there's this gap. And you've also mentioned that extreme generalization requires something like reasoning to fill those gaps. So how can we start trying to build systems like that? Right, yeah, so this is by design, right? Deep learning models are like huge parametric models, differentiable, so continuous, that go from an input space to an output space. And they're trained with gradient descent. So they're trained pretty much point by point. They are learning a continuous geometric morphing from an input vector space to an output vector space. 
And because this is done point by point, a deep neural network can only make sense of points in experience space that are very close to things that it has already seen in the training data. At best, it can do interpolation across points. But that means in order to train your network, you need a dense sampling of the input cross output space, almost a point by point sampling, which can be very expensive if you're dealing with complex real world problems, like autonomous driving, for instance, or robotics. It's doable if you're looking at a subset of the visual space. But even then, it's still fairly expensive. You still need millions of examples. And it's only going to be able to make sense of things that are very close to what it has seen before. And in contrast to that, well, of course, you have human intelligence. But even if you're not looking at human intelligence, you can look at very simple rules, algorithms. If you have a symbolic rule, it can actually apply to a very, very large set of inputs because it is abstract. It is not obtained by doing a point by point mapping. For instance, if you try to learn a sorting algorithm using a deep neural network, well, you're very much limited to learning point by point what the sorted representation of this specific list is like. But instead, you could have a very, very simple sorting algorithm written in a few lines. Maybe it's just two nested loops. And it can process any list at all because it is abstract, because it is a set of rules. So deep learning is really like point by point geometric morphings, trained with gradient descent. And meanwhile, abstract rules can generalize much better. And I think the future is we need to combine the two. So how do we, do you think, combine the two? How do we combine good point by point functions with programs, which is what symbolic AI type systems do? At which level does the combination happen? I mean, obviously we're jumping into the realm of where there's no good answers. It's just kind of ideas and intuitions and so on. Well, if you look at the really successful AI systems today, I think they are already hybrid systems that are combining symbolic AI with deep learning. For instance, successful robotics systems are already mostly model based, rule based, things like planning algorithms and so on. At the same time, they're using deep learning as perception modules. Sometimes they're using deep learning as a way to inject fuzzy intuition into a rule based process. If you look at a system like a self driving car, it's not just one big end to end neural network. You know, that wouldn't work at all. Precisely because in order to train that, you would need a dense sampling of experience space when it comes to driving, which is completely unrealistic, obviously. Instead, the self driving car is mostly symbolic, you know, it's software, it's programmed by hand. So it's mostly based on explicit models. In this case, mostly 3D models of the environment around the car, but it's interfacing with the real world using deep learning modules, right? So the deep learning there serves as a way to convert the raw sensory information to something usable by symbolic systems. Okay, well, let's linger on that a little more. So dense sampling from input to output. You said it's obviously very difficult. Is it possible? In the case of self driving, you mean? Let's say self driving, right? Self driving for many people, let's not even talk about self driving, let's talk about steering, so staying inside the lane.
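Returning for a moment to the sorting example above, the contrast is easy to see in code: the rule based version below is just two nested loops and handles any list of any length, whereas a network trained point by point would only cover inputs close to its training data. A minimal sketch:

```python
# The abstract, rule-based version: two nested loops, works on any list,
# regardless of length or contents, because it encodes a rule rather than a
# mapping memorized point by point from examples.
def sort_list(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

print(sort_list([3, 1, 2]))             # [1, 2, 3]
print(sort_list([42, -7, 0, 3.5, 99]))  # generalizes to unseen lengths and values
```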
Lane following, yeah, it's definitely a problem you can solve with an end to end deep learning model, but that's like one small subset. Hold on a second. Yeah, I don't know why you're jumping from the extreme so easily, because I disagree with you on that. I think, well, it's not obvious to me that you can solve lane following. No, it's not obvious, I think it's doable. I think in general, there are no hard limitations to what you can learn with a deep neural network, as long as the search space is rich enough, is flexible enough, and as long as you have this dense sampling of the input cross output space. The problem is that this dense sampling could mean anything from 10,000 examples to like trillions and trillions. So that's my question. So what's your intuition? And if you could just give it a chance and think what kind of problems can be solved by getting huge amounts of data and thereby creating a dense mapping. So let's think about natural language dialogue, the Turing test. Do you think the Turing test can be solved with a neural network alone? Well, the Turing test is all about tricking people into believing they're talking to a human. And I don't think that's actually very difficult because it's more about exploiting human perception and not so much about intelligence. There's a big difference between mimicking intelligent behavior and actual intelligent behavior. So, okay, let's look at maybe the Alexa Prize and so on. The different formulations of the natural language conversation that are less about mimicking and more about maintaining a fun conversation that lasts for 20 minutes. That's a little less about mimicking and that's more about, I mean, it's still mimicking, but it's more about being able to carry forward a conversation with all the tangents that happen in dialogue and so on. Do you think that problem is learnable with a neural network that does the point to point mapping? So I think it would be very, very challenging to do this with deep learning. I don't think it's out of the question either. I wouldn't rule it out. The space of problems that can be solved with a large neural network. What's your sense about the space of those problems? So useful problems for us. In theory, it's infinite, right? You can solve any problem. In practice, well, deep learning is a great fit for perception problems. In general, any problem which is not amenable to explicit handcrafted rules or to rules that you can generate by exhaustive search over some program space. So perception, artificial intuition, as long as you have a sufficient training dataset. And that's the question, I mean, perception, there's interpretation and understanding of the scene, which seems to be outside the reach of current perception systems. So do you think larger networks will be able to start to understand the physics of the scene, the three dimensional structure and relationships of objects in the scene and so on? Or really that's where symbolic AI has to step in? Well, it's always possible to solve these problems with deep learning. It's just extremely inefficient. An explicit, rule based, abstract model would be a far better, more compressed representation of physics than learning just this mapping of, in this situation, this thing happens; if you change the situation slightly, then this other thing happens, and so on. Do you think it's possible to automatically generate the programs that would require that kind of reasoning?
Or does it have to be done by hand? So, the way expert systems failed, there were so many facts about the world that had to be hand coded in. Do you think it's possible to learn those logical statements that are true about the world and their relationships? Do you think, I mean, that's kind of what theorem proving at a basic level is trying to do, right? Yeah, except it's much harder to formulate statements about the world compared to formulating mathematical statements. Statements about the world tend to be subjective. So can you learn rule based models? Yes, definitely. That's the field of program synthesis. However, today we just don't really know how to do it. So it's very much a graph search or tree search problem. And so we are limited to the sort of tree search and graph search algorithms that we have today. Personally, I think genetic algorithms are very promising. So almost like genetic programming. Genetic programming, exactly. Can you discuss the field of program synthesis? Like how many people are working on and thinking about it? Where we are in the history of program synthesis and what are your hopes for it? Well, if you compare it to deep learning, this is like the 90s. So meaning that we already have existing solutions. We are starting to have some basic understanding of what this is about. But it's still a field that is in its infancy. There are very few people working on it. There are very few real world applications. So the one real world application I'm aware of is Flash Fill in Excel. It's a way to automatically learn very simple programs to format cells in an Excel spreadsheet from a few examples. For instance, learning a way to format a date, things like that. Oh, that's fascinating. Yeah. You know, OK, that's a fascinating topic. I always wonder when I provide a few samples to Excel, what it's able to figure out. Like just giving it a few dates, what are you able to figure out from the pattern I just gave you? That's a fascinating question. And it's fascinating whether those are learnable patterns. And you're saying they're working on that. How big is the toolbox currently? Are we completely in the dark? So if you said the 90s. In terms of program synthesis? No. So I would say, so maybe 90s is even too optimistic. Because by the 90s, we already understood back prop. We already understood the engine of deep learning, even though we couldn't really see its full potential quite yet. Today, I don't think we have found the engine of program synthesis. So we're in the winter before back prop. Yeah. In a way, yes. So I do believe program synthesis and general discrete search over rule based models is going to be a cornerstone of AI research in the next century. And that doesn't mean we are going to drop deep learning. Deep learning is immensely useful. Like, being able to learn a very flexible, adaptable, parametric model with gradient descent, that's actually immensely useful. All it's doing is pattern recognition. But being good at pattern recognition, given lots of data, is just extremely powerful. So we are still going to be working on deep learning. We are going to be working on program synthesis. We are going to be combining the two in increasingly automated ways. So let's talk a little bit about data. You've tweeted: about 10,000 deep learning papers have been written about how hard coding priors about a specific task in a neural network architecture works better than a lack of a prior. Basically, summarizing all these efforts, they put a name to an architecture.
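Here is a deliberately tiny, hypothetical sketch of the Flash Fill style, search based program synthesis mentioned a moment ago: a brute force search over a few string primitives, accepted only if the composed program reproduces the user's input output examples. Real systems are far more sophisticated; this only shows the shape of the idea, and the primitive names below are invented.

```python
# Toy program synthesis by exhaustive search, in the spirit of the Flash Fill
# example above: find a short composition of string primitives that maps the
# given input examples to the given outputs, then apply it to new inputs.
from itertools import product

PRIMITIVES = {
    "lower":      str.lower,
    "upper":      str.upper,
    "strip":      str.strip,
    "first_word": lambda s: s.split()[0],
    "last_word":  lambda s: s.split()[-1],
}

def synthesize(examples, max_len=2):
    """Search over all programs up to max_len primitives long."""
    for length in range(1, max_len + 1):
        for names in product(PRIMITIVES, repeat=length):
            def program(s, names=names):
                for name in names:
                    s = PRIMITIVES[name](s)
                return s
            if all(program(x) == y for x, y in examples):
                return names, program
    return None, None

# Two demonstrations are enough to pin down "take the last word, uppercased".
names, program = synthesize([("Ada Lovelace", "LOVELACE"), ("Alan Turing", "TURING")])
print(names)                    # the first composition found that fits the examples
print(program("Grace Hopper"))  # HOPPER
```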
But really, what they're doing is hard coding some priors that improve the performance of the system. But that gets straight to the point, and it's probably true. So you say that you can always buy performance, in quotes, by either training on more data, better data, or by injecting task information into the architecture or the preprocessing. However, this isn't informative about the generalization power of the technique used, the fundamental ability to generalize. Do you think we can go far by coming up with better methods for this kind of cheating, for better methods of large scale annotation of data? So building better priors. If you automate it, it's not cheating anymore. Right. I'm joking about the cheating, but large scale. So basically, I'm asking about something that, from my perspective, hasn't been researched too much: exponential improvement in annotation of data. Do you often think about that? I think it's actually been researched quite a bit. You just don't see publications about it. Because people who publish papers are going to publish about known benchmarks. Sometimes they're going to release a new benchmark. People who actually have real world, large scale deep learning problems, they're going to spend a lot of resources on data annotation and good data annotation pipelines, but you don't see any papers about it. That's interesting. So do you think, certainly resources, but do you think there's innovation happening? Oh, yeah. To clarify the point in the tweet. So machine learning in general is the science of generalization. You want to generate knowledge that can be reused across different data sets, across different tasks. And if instead you're looking at one data set and then you are hard coding knowledge about this task into your architecture, this is no more useful than training a network and then saying, oh, I found these weight values perform well. So David Ha, I don't know if you know David, he had a paper the other day about weight agnostic neural networks. And this is a very interesting paper because it really illustrates the fact that an architecture, even without weights, an architecture is knowledge about a task. It encodes knowledge. And when it comes to architectures that are handcrafted by researchers, in some cases, it is very, very clear that all they are doing is artificially re-encoding the template that corresponds to the proper way to solve the task encoded in a given dataset. For instance, I don't know if you've looked at the bAbI dataset, which is about natural language question answering. It is generated by an algorithm, so these are question answer pairs generated by an algorithm following a certain template. Turns out, if you craft a network that literally encodes this template, you can solve this dataset with nearly 100% accuracy. But that doesn't actually tell you anything about how to solve question answering in general, which is the point. The question is just to linger on it, whether it's from the data side or from the size of the network. I don't know if you've read the blog post by Rich Sutton, The Bitter Lesson, where he says, the biggest lesson that we can read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective. So as opposed to figuring out methods that can generalize effectively, do you think we can get pretty far by just having something that leverages computation and the improvement of computation?
Yeah, so I think Rich is making a very good point, which is that a lot of these papers, which are actually all about manually hardcoding prior knowledge about a task into some system, it doesn't have to be a deep learning architecture, but into some system. These papers are not actually making any impact. Instead, what's making really long term impact is very simple, very general systems that are really agnostic to all these tricks. Because these tricks do not generalize. And of course, the one general and simple thing that you should focus on is that which leverages computation. Because computation, the availability of large scale computation, has been increasing exponentially following Moore's law. So if your algorithm is all about exploiting this, then your algorithm is suddenly exponentially improving. So I think Rich is definitely right. However, he's right about the past 70 years. He's like assessing the past 70 years. I am not sure that this assessment will still hold true for the next 70 years. It might to some extent. I suspect it will not. Because the truth of his assessment is a function of the context in which this research took place. And the context is changing. Moore's law might not be applicable anymore, for instance, in the future. And I do believe that when you tweak one aspect of a system, when you exploit one aspect of a system, some other aspect starts becoming the bottleneck. Let's say you have unlimited computation. Well, then data is the bottleneck. And I think we are already starting to be in a regime where our systems are so large in scale and so data hungry that data today, the quality of data and the scale of data, is the bottleneck. And in this environment, the bitter lesson from Rich is not going to be true anymore. So I think we are going to move from a focus on computation scale to a focus on data efficiency. Data efficiency. So that's getting to the question of symbolic AI. But to linger on the deep learning approaches, do you have hope for either unsupervised learning or reinforcement learning, which are ways of being more data efficient in terms of the amount of data they need that requires human annotation? So unsupervised learning and reinforcement learning are frameworks for learning, but they are not like any specific technique. So usually when people say reinforcement learning, what they really mean is deep reinforcement learning, which is like one approach which is actually very questionable. The question I was asking was unsupervised learning with deep neural networks and deep reinforcement learning. Well, these are not really data efficient because you're still training these huge parametric models point by point with gradient descent. It is more efficient in terms of the number of annotations, the density of annotations you need. So the idea being to learn the latent space around which the data is organized and then map the sparse annotations into it. And sure, I mean, that's clearly a very good idea. It's not really a topic I would be working on, but it's clearly a good idea. So it would get us to solve some problems that? It will get us to incremental improvements in labeled data efficiency. Do you have concerns about short term or long term threats from AI, from artificial intelligence? Yes, definitely to some extent. And what's the shape of those concerns? This is actually something I've briefly written about.
But the capabilities of deep learning technology can be used in many ways that are concerning, from mass surveillance with things like facial recognition. In general, tracking lots of data about everyone and then being able to make sense of this data to do identification, to do prediction. That's concerning. That's something that's being very aggressively pursued by totalitarian states like China. One thing I am very much concerned about is that our lives are increasingly online, are increasingly digital, made of information, made of information consumption and information production, our digital footprint, I would say. And if you absorb all of this data and you are in control of where you consume information, social networks and so on, recommendation engines, then you can build a sort of reinforcement loop for human behavior. You can observe the state of your mind at time t. You can predict how you would react to different pieces of content, how to get you to move your mind in a certain direction. And then you can serve up the specific piece of content that would move you in a specific direction. And you can do this at scale in terms of doing it continuously in real time. You can also do it at scale in terms of scaling this to many, many people, to entire populations. So potentially, artificial intelligence, even in its current state, if you combine it with the internet, with the fact that all of our lives are moving to digital devices and digital information consumption and creation, what you get is the possibility to achieve mass manipulation of behavior and mass psychological control. And this is a very real possibility. Yeah, so you're talking about any kind of recommender system. Let's look at the YouTube algorithm, Facebook, anything that recommends content you should watch next. And it's fascinating to think that there's some aspects of human behavior that you can pose as a problem of, does this person hold Republican beliefs or Democratic beliefs? And that's a trivial objective function. And you can optimize, and you can measure, and you can turn everybody into a Republican or everybody into a Democrat. I do believe it's true. So the human mind is very, if you look at the human mind as a kind of computer program, it has a very large exploit surface. It has many, many vulnerabilities. Exploit surfaces, yeah. Ways you can control it. For instance, when it comes to your political beliefs, this is very much tied to your identity. So for instance, if I'm in control of your news feed on your favorite social media platforms, this is actually where you're getting your news from. And of course, I can choose to only show you news that will make you see the world in a specific way. But I can also create incentives for you to post about some political beliefs. And then when I get you to express a statement, if it's a statement that I, as the controller, want to reinforce, I can just show it to people who will agree, and they will like it. And that will reinforce the statement in your mind. If this is a statement I want you to abandon, a belief I want you to abandon, I can, on the other hand, show it to opponents who will attack you. And because they attack you, at the very least, next time you will think twice about posting it. But maybe you will even start believing this because you got pushback. So there are many ways in which social media platforms can potentially control your opinions. And today, so all of these things are already being controlled by AI algorithms.
These algorithms do not have any explicit political goal today. Well, potentially they could, like if some totalitarian government takes over social media platforms and decides that now we are going to use this not just for mass surveillance, but also for mass opinion control and behavior control. Very bad things could happen. But what's really fascinating and actually quite concerning is that even without an explicit intent to manipulate, you're already seeing very dangerous dynamics in terms of how these content recommendation algorithms behave. Because right now, the goal, the objective function of these algorithms is to maximize engagement, which seems fairly innocuous at first. However, it is not, because content that will maximally engage people, get people to react in an emotional way, get people to click on something, is very often content that is not healthy to the public discourse. For instance, fake news are far more likely to get you to click on them than real news simply because they are not constrained to reality. So they can be as outrageous, as surprising, as good stories as you want because they're artificial. To me, that's an exciting world because so much good can come. So there's an opportunity to educate people. You can balance people's worldview with other ideas. So there's so many objective functions. The space of objective functions that create better civilizations is large, arguably infinite. But there's also a large space that creates division and destruction, civil war, a lot of bad stuff. And the worry is, naturally, probably that space is bigger, first of all. And if we don't explicitly think about what kind of effects are going to be observed from different objective functions, then we're going to get into trouble. But the question is, how do we get into rooms and have discussions, so inside Google, inside Facebook, inside Twitter, and think about, OK, how can we drive up engagement and, at the same time, create a good society? Is it even possible to have that kind of philosophical discussion? I think you can definitely try. So from my perspective, I would feel rather uncomfortable with companies that are in control of these newsfeed algorithms making explicit decisions to manipulate people's opinions or behaviors, even if the intent is good, because that's a very totalitarian mindset. So instead, what I would like to see, and it's probably never going to happen because it's not super realistic, but that's actually something I really care about: I would like all these algorithms to present configuration settings to their users, so that the users can actually make the decision about how they want to be impacted by these information recommendation, content recommendation algorithms. For instance, as a user of something like YouTube or Twitter, maybe I want to maximize learning about a specific topic. So I want the algorithm to feed my curiosity, which is in itself a very interesting problem. So instead of maximizing my engagement, it will maximize how fast and how much I'm learning. And it will also take into account the accuracy, hopefully, of the information I'm learning. So yeah, the user should be able to determine exactly how these algorithms are affecting their lives. I don't want actually any entity making decisions about in which direction they're going to try to manipulate me. I want technology. So AI, these algorithms are increasingly going to be our interface to a world that is increasingly made of information.
And I want everyone to be in control of this interface, to interface with the world on their own terms. So if someone wants these algorithms to serve their own personal growth goals, they should be able to configure these algorithms in such a way. Yeah, but so I know it's painful to have explicit decisions. But there are underlying explicit decisions, which is some of the most beautiful fundamental philosophy that we have before us, which is personal growth. If I want to watch videos from which I can learn, what does that mean? So if I have a checkbox that wants to emphasize learning, there's still an algorithm with explicit decisions in it that would promote learning. What does that mean for me? For example, I've watched a documentary on flat Earth theory, I guess. I learned a lot. I'm really glad I watched it. A friend recommended it to me. Because I don't have such an allergic reaction to crazy people, as my fellow colleagues do. But it was very eye opening. And for others, it might not be. For others, they might just get turned off by that, same with Republican and Democrat. And it's a non trivial problem. And first of all, if it's done well, I don't think it's something that wouldn't happen, that YouTube or Twitter wouldn't be willing to promote. It's just a really difficult problem, how to give people control. Well, it's mostly an interface design problem. The way I see it, you want to create technology that's like a mentor, or a coach, or an assistant, so that it's not your boss. You are in control of it. You are telling it what to do for you. And if you feel like it's manipulating you, it's not actually doing what you want. You should be able to switch to a different algorithm. So that's fine tuned control. You kind of learn that you're trusting the human collaboration. I mean, that's how I see autonomous vehicles too, is giving as much information as possible, and you learn that dance yourself. Yeah, Adobe, I don't know if you use Adobe products, like Photoshop. They're trying to see if they can inject YouTube into their interface, but basically show you all these videos, because everybody's confused about what to do with all the features. So basically teach people by linking to videos; in that way, it's an assistant that uses videos as a basic element of information. Okay, so what practically should people do to try to fight against abuses of these algorithms, or algorithms that manipulate us? Honestly, it's a very, very difficult problem, because to start with, there is very little public awareness of these issues. Very few people would think there's anything wrong with the newsfeed algorithm, even though there is actually something wrong already, which is that it's trying to maximize engagement most of the time, which has very negative side effects. So ideally, so the very first thing is to stop trying to purely maximize engagement, to stop trying to propagate content based purely on popularity, right? Instead, take into account the goals and the profiles of each user. So you will be, one example is, for instance, when I look at topic recommendations on Twitter, it's like, you know, they have this news tab with recommendations. It's always the worst coverage, because it's content that appeals to the lowest common denominator of all Twitter users, because they're trying to optimize. They're purely trying to optimize popularity. They're purely trying to optimize engagement. But that's not what I want.
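A tiny, hypothetical sketch of the kind of user chosen objective being argued for here; the item fields, weights, and scoring formulas below are invented purely for illustration.

```python
# Hypothetical sketch: the user, not the platform, picks the objective that
# ranks their feed. Item fields and scoring formulas are invented.
items = [
    {"title": "Outrage headline",        "predicted_clicks": 0.9, "novelty": 0.1, "accuracy": 0.2},
    {"title": "Intro to linear algebra", "predicted_clicks": 0.3, "novelty": 0.8, "accuracy": 0.9},
    {"title": "Celebrity gossip",        "predicted_clicks": 0.8, "novelty": 0.2, "accuracy": 0.5},
]

OBJECTIVES = {
    # the default most platforms optimize today
    "engagement": lambda it: it["predicted_clicks"],
    # what a user who wants to "maximize learning" might pick instead
    "learning":   lambda it: 0.6 * it["novelty"] + 0.4 * it["accuracy"],
}

def rank_feed(items, objective_name):
    score = OBJECTIVES[objective_name]
    return sorted(items, key=score, reverse=True)

for name in OBJECTIVES:
    print(name, "->", [it["title"] for it in rank_feed(items, name)])
```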
So they should put me in control of some setting so that I define what's the objective function that Twitter is going to be following to show me this content. And honestly, so this is all about interface design. And it's not realistic to give users control of a bunch of knobs that define the algorithm. Instead, we should purely put them in charge of defining the objective function. Like, let the user tell us what they want to achieve, how they want this algorithm to impact their lives. So do you think it is that, or do they provide an individual, article by article reward structure where you give a signal, I'm glad I saw this, or I'm glad I didn't? So like a Spotify type feedback mechanism, it works to some extent. I'm kind of skeptical about it, because the algorithm will only attempt to relate your choices with the choices of everyone else, which might, you know, if you have an average profile that works fine, I'm sure Spotify recommendations work fine if you just like mainstream stuff. If you don't, it can be, it's not optimal at all actually. It'll be an inefficient search for the part of the Spotify world that represents you. So it's a tough problem, but do note that even a feedback system like what Spotify has does not give me control over what the algorithm is trying to optimize for. Well, public awareness, which is what we're doing now, is a good place to start. Do you have concerns about longterm existential threats of artificial intelligence? Well, as I was saying, our world is increasingly made of information. AI algorithms are increasingly going to be our interface to this world of information, and somebody will be in control of these algorithms. And that can put us in all kinds of bad situations, right? It has risks. It has risks coming from potentially large companies wanting to optimize their own goals, maybe profit, maybe something else. Also from governments who might want to use these algorithms as a means of control of the population. Do you think there's an existential threat that could arise from that? So existential threat. So maybe you're referring to the singularity narrative where robots just take over. Well, I don't mean Terminator robots, and I don't believe it has to be a singularity. We're just talking about, just like you said, the algorithms controlling masses of populations. The existential threat being that we hurt ourselves, much like a nuclear war would hurt us. That kind of thing. I don't think that requires a singularity. That just requires a loss of control over AI algorithms. Yes. So I do agree there are concerning trends. Honestly, I wouldn't want to make any longterm predictions. I don't think today we really have the capability to see what the dangers of AI are going to be in 50 years, in 100 years. I do see that we are already faced with concrete and present dangers surrounding the negative side effects of content recommendation systems, of newsfeed algorithms, concerning algorithmic bias as well. So we are delegating more and more decision processes to algorithms. Some of these algorithms are handcrafted, some are learned from data, but we are delegating control. Sometimes it's a good thing, sometimes not so much. And there is in general very little supervision of this process, right? So we are still in this period of very fast change, even chaos, where society is restructuring itself, turning into an information society, which itself is turning into an increasingly automated information processing society.
And well, yeah, I think the best we can do today is try to raise awareness around some of these issues. And I think we're actually making good progress. If you look at algorithmic bias, for instance, three years ago, even two years ago, very, very few people were talking about it. And now all the big companies are talking about it. Often not in a very serious way, but at least it is part of the public discourse. You see people in Congress talking about it. And it all started from raising awareness. Right. So in terms of the alignment problem, trying to teach, as we allow algorithms, just even recommender systems on Twitter, to encode human values and morals, decisions that touch on ethics, how hard do you think that problem is? How do we have loss functions in neural networks that have some component, some fuzzy components, of human morals? Well, I think this is really all about objective function engineering, which is probably going to be increasingly a topic of concern in the future. Like for now, we're just using very naive loss functions because the hard part is not actually what you're trying to minimize. It's everything else. But as everything else is going to be increasingly automated, we're going to be focusing our human attention on increasingly high level components, like what's actually driving the whole learning system, like the objective function. So loss function engineering is going to be, loss function engineer is probably going to be a job title in the future. And then the tooling you're creating with Keras essentially takes care of all the details underneath. And basically the human expert is needed for exactly that. That's the idea. Keras is the interface between the data you're collecting and the business goals. And your job as an engineer is going to be to express your business goals and your understanding of your business or your product, your system, as a kind of loss function or a kind of set of constraints. Does the possibility of creating an AGI system excite you or scare you or bore you? So intelligence can never really be general. You know, at best it can have some degree of generality like human intelligence. It also always has some specialization in the same way that human intelligence is specialized in a certain category of problems, is specialized in the human experience. And when people talk about AGI, I'm never quite sure if they're talking about very, very smart AI, so smart that it's even smarter than humans, or they're talking about human like intelligence, because these are different things. Let's say, presumably I'm impressing you today with my humanness. So imagine that I was in fact a robot. So what does that mean? That I'm impressing you with natural language processing. Maybe if you weren't able to see me, maybe this is a phone call. So that kind of system. Companion. So that's very much about building human like AI. And you're asking me, you know, is this an exciting perspective? Yes. I think so, yes. Not so much because of what artificial human like intelligence could do, but, you know, from an intellectual perspective, I think if you could build truly human like intelligence, that means you could actually understand human intelligence, which is fascinating, right? Human like intelligence is going to require emotions. It's going to require consciousness, which are not things that would normally be required by an intelligent system.
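Circling back to the loss function engineering point above, here is a small, hedged sketch of what expressing a constraint inside the objective can look like in Keras: a standard crossentropy plus an invented penalty term, standing in for whatever a product actually needs.

```python
# Hedged sketch of "loss function engineering" in Keras: the objective is the
# usual crossentropy plus an invented business constraint, here a penalty
# against overconfident predictions, standing in for a real product need.
import numpy as np
import tensorflow as tf

def engineered_loss(y_true, y_pred):
    base = tf.keras.losses.binary_crossentropy(y_true, y_pred)
    # Hypothetical constraint: discourage overconfident outputs.
    overconfidence = tf.reduce_mean(tf.square(y_pred - 0.5), axis=-1)
    return base + 0.1 * overconfidence

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss=engineered_loss)

x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 2, size=(32, 1)).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
```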
If you look at, you know, we were mentioning earlier like science as a superhuman problem solving agent or system, it does not have consciousness, it doesn't have emotions. In general, so emotions, I see consciousness as being on the same spectrum as emotions. It is a component of the subjective experience that is meant very much to guide behavior generation, right? It's meant to guide your behavior. In general, human intelligence and animal intelligence has evolved for the purpose of behavior generation, right? Including in a social context. So that's why we actually need emotions. That's why we need consciousness. An artificial intelligence system developed in a different context may well never need them, may well never be conscious, like science. Well, on that point, I would argue it's possible to imagine that there are echoes of consciousness in science when viewed as an organism, that science is conscious. So, I mean, how would you go about testing this hypothesis? How do you probe the subjective experience of an abstract system like science? Well, probing any subjective experience is impossible because I'm not science, I'm Lex. So I can't probe another entity, any more than I can probe the bacteria on my skin. You're Lex, I can ask you questions about your subjective experience and you can answer me, and that's how I know you're conscious. Yes, but that's because we speak the same language. Perhaps we would have to speak the language of science in order to ask it. Honestly, I don't think consciousness, just like emotions of pain and pleasure, is something that inevitably arises from any sort of sufficiently intelligent information processing. It is a feature of the mind, and if you've not implemented it explicitly, it is not there. So you think it's an emergent feature of a particular architecture. So do you think... It's a feature in the same sense. So, again, the subjective experience is all about guiding behavior. If the problems you're trying to solve don't really involve an embodied agent, maybe in a social context, generating behavior and pursuing goals like this, and if you look at science, that's not really what's happening, even though it is a form of artificial intelligence, in the sense that it is solving problems, it is accumulating knowledge, accumulating solutions and so on. So if you're not explicitly implementing a subjective experience, implementing certain emotions and implementing consciousness, it's not going to just spontaneously emerge. Yeah. But so for a system like a human like intelligence system that has consciousness, do you think it needs to have a body? Yes, definitely. I mean, it doesn't have to be a physical body, right? And there's not that much difference between a realistic simulation and the real world. So there has to be something you have to preserve kind of thing. Yes, but human like intelligence can only arise in a human like context. Intelligence needs other humans in order for you to demonstrate that you have human like intelligence, essentially. Yes. So what kind of tests and demonstration would be sufficient for you to demonstrate human like intelligence? Yeah. Just out of curiosity, you've talked about in terms of theorem proving and program synthesis, I think you've written that there are no good benchmarks for this. Yeah. That's one of the problems. So let's talk about program synthesis. So what do you imagine is a good... I think they're related questions for human like intelligence and for program synthesis.
What's a good benchmark for either or both? Right. So I mean, you're actually asking two questions, which is one is about quantifying intelligence and comparing the intelligence of an artificial system to the intelligence of a human. And the other is about the degree to which this intelligence is human like. They're actually two different questions. So you mentioned earlier the Turing test. Well, I actually don't like the Turing test because it's very lazy. It's all about completely bypassing the problem of defining and measuring intelligence and instead delegating to a human judge or a panel of human judges. So it's a total copout, right? If you want to measure how human like an agent is, I think you have to make it interact with other humans. Maybe it's not necessarily a good idea to have these other humans be the judges. Maybe you should just observe behavior and compare it to what a human would actually have done. When it comes to measuring how smart, how clever an agent is and comparing that to the degree of human intelligence. So we're already talking about two things, right? The degree, kind of like the magnitude of an intelligence and its direction, right? Like the norm of a vector and its direction. And the direction is like human likeness and the magnitude, the norm, is intelligence. You could call it intelligence, right? So the direction, in your sense, the space of directions that are human like is very narrow. Yeah. So how would you measure the magnitude of intelligence in a system in a way that also enables you to compare it to that of a human? Well, if you look at different benchmarks for intelligence today, they're all too focused on skill at a given task. Like skill at playing chess, skill at playing Go, skill at playing Dota. And I think that's not the right way to go about it because you can always beat a human at one specific task. The reason why our skill at playing Go or juggling or anything is impressive is because we are expressing this skill within a certain set of constraints. If you remove the constraints, the constraints that we have one lifetime, that we have this body and so on, if you remove the context, if you have unlimited training data, if you can have access to, you know, for instance, if you look at juggling, if you have no restriction on the hardware, then achieving arbitrary levels of skill is not very interesting and says nothing about the amount of intelligence you've achieved. So if you want to measure intelligence, you need to rigorously define what intelligence is, which in itself, you know, is a very challenging problem. And do you think that's possible? To define intelligence? Yes, absolutely. I mean, you can provide, many people have provided, you know, some definition. I have my own definition. Where does your definition begin? Well, I think intelligence is essentially the efficiency with which you turn experience into generalizable programs. So what that means is it's the efficiency with which you turn a sampling of experience space into the ability to process a larger chunk of experience space. So measuring skill across many different tasks can be one proxy for measuring intelligence. But if you want to use skill as a measure, you should control for two things. You should control for the amount of experience that your system has and the priors that your system has.
But if you look at two agents and you give them the same priors and you give them the same amount of experience, there is one of the agents that is going to learn programs, representations, something, a model that will perform well on a larger chunk of experience space than the other. And that is the smarter agent. Yeah. So if you fix the experience, it's which agent generates better programs, better meaning more generalizable. That's really interesting. That's a very nice, clean definition of... Oh, by the way, in this definition, it is already very obvious that intelligence has to be specialized because you're talking about experience space and you're talking about segments of experience space. You're talking about priors and you're talking about experience. All of these things define the context in which intelligence emerges. And you can never look at the totality of experience space, right? So intelligence has to be specialized. But it can be sufficiently large, the experience space, even though it's specialized. There's a certain point when the experience space is large enough to where it might as well be general. It feels general. It looks general. Sure. I mean, it's very relative. Like, for instance, many people would say human intelligence is general. In fact, it is quite specialized. We can definitely build systems that start from the same innate priors as what humans have at birth. Because we already understand fairly well what sort of priors we have as humans. Like many people have worked on this problem. Most notably, Elizabeth Spelke from Harvard. I don't know if you know her. She's worked a lot on what she calls core knowledge. And it is very much about trying to determine and describe what priors we are born with. Like language skills and so on, all that kind of stuff. Exactly. So we have some pretty good understanding of what priors we are born with. So we could... So I've actually been working on a benchmark for the past couple years, you know, on and off. I hope to be able to release it at some point. That's exciting. The idea is to measure the intelligence of systems by controlling for priors, controlling for amount of experience, and by assuming the same priors as what humans are born with. So that you can actually compare these scores to human intelligence. You can actually have humans pass the same test in a way that's fair. Yeah. And so importantly, such a benchmark should be such that any amount of practicing does not increase your score. So try to picture a game where no matter how much you play this game, that does not change your skill at the game. Can you picture that? As a person who deeply appreciates practice, I cannot actually. There's actually a very simple trick. So in order to come up with a task, so the only thing you can measure is skill at the task. Yes. All tasks are going to involve priors. Yes. The trick is to know what they are and to describe that. And then you make sure that this is the same set of priors as what humans start with. So you create a task that assumes these priors, that exactly documents these priors, so that the priors are made explicit and there are no other priors involved. And then you generate a certain number of samples in experience space for this task, right? And this, for one task, assuming that the task is new for the agent passing it, that's one test of this definition of intelligence that we set up. And now you can scale that to many different tasks, where each task should be new to the agent passing it, right?
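A hedged sketch of what this comparison could look like in code, under the framing just described: fix the priors (the same freshly constructed agent and task representation), fix the experience budget (the same small number of training samples), and score only on held-out samples of tasks the agent has never seen. The Agent interface and the make_task generator are invented for illustration; this is not the actual benchmark being discussed, just the general protocol.

```python
import random

def evaluate_generalization(agent_factory, make_task, n_tasks=20,
                            n_train=10, n_test=50, seed=0):
    """Score an agent by how well it generalizes from a fixed, small amount
    of experience on tasks it has never encountered before."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_tasks):
        task = make_task(rng)                  # a brand-new task every time
        agent = agent_factory()                # fresh agent: same priors on every run
        train = [task.sample(rng) for _ in range(n_train)]  # fixed experience budget
        test = [task.sample(rng) for _ in range(n_test)]    # held-out experience space
        agent.learn(train)
        correct = sum(agent.predict(x) == y for x, y in test)
        scores.append(correct / n_test)
    return sum(scores) / len(scores)

# Two agents given identical priors and identical experience; the higher score
# belongs to the one that produced more generalizable programs from the same data.
# score_a = evaluate_generalization(AgentA, make_some_task)
# score_b = evaluate_generalization(AgentB, make_some_task)
```

The key property is that practicing cannot inflate the score, because every task presented is new to the agent; only the ability to generalize from the fixed training samples counts.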
And also it should be human interpretable and understandable so that you can actually have a human pass the same test. And then you can compare the score of your machine and the score of your human. Which could be a lot of stuff. You could even start with a task like MNIST, just as long as you start with the same set of priors. So the problem with MNIST is that humans have already been trained to recognize digits, right? But let's say we're considering objects that are not digits, some completely arbitrary patterns. Well, humans already come with visual priors about how to process that. So in order to make the game fair, you would have to isolate these priors and describe them and then express them as computational rules. Having worked a lot with vision science people, that's exceptionally difficult. A lot of progress has been made. There have been a lot of good tests, basically reducing all of human vision into some good priors. We're still probably far away from doing that perfectly, but as a start for a benchmark, that's an exciting possibility. Yeah, so Elizabeth Spelke actually lists objectness as one of the core knowledge priors. Objectness, cool. Objectness, yeah. So we have priors about objectness, like about the visual space, about time, about agents, about goal oriented behavior. We have many different priors, but what's interesting is that, sure, we have this pretty diverse and rich set of priors, but it's also not that diverse, right? We are not born into this world with a ton of knowledge about the world; we are born with only a small set of core knowledge. Yeah, sorry, do you have a sense of how large that set is? It feels to us humans that that set is not that large. But just even the nature of time that we kind of integrate pretty effectively through all of our perception, all of our reasoning, maybe how, you know, do you have a sense of how easy it is to encode those priors? Maybe it requires building a universe and then the human brain in order to encode those priors. Or do you have a hope that they can be listed like axioms? I don't think so. So you have to keep in mind that any knowledge about the world that we are born with is something that has to have been encoded into our DNA by evolution at some point. Right. And DNA is a very, very low bandwidth medium. Like it's extremely long and expensive to encode anything into DNA because first of all, you need some sort of evolutionary pressure to guide this writing process. And then, you know, the higher level of information you're trying to write, the longer it's going to take. And the thing in the environment that you're trying to encode knowledge about has to be stable over this duration. So you can only encode into DNA things that constitute an evolutionary advantage. So this is actually a very small subset of all possible knowledge about the world. You can only encode things that are stable, that are true, over very, very long periods of time, typically millions of years. For instance, we might have some visual prior about the shape of snakes, right? Or about what makes a face, the difference between a face and a non-face. But consider this interesting question. Do we have any innate sense of the visual difference between a male face and a female face? What do you think? For a human, I mean. I would have to look back into evolutionary history when the genders emerged. But yeah, most... I mean, the faces of humans are quite different from the faces of great apes. Great apes, right? Yeah. That's interesting.
Yeah, you couldn't tell the face of a female chimpanzee from the face of a male chimpanzee, probably. Yeah, and I don't think most humans have that ability. So we do have innate knowledge of what makes a face, but it's actually impossible for us to have any DNA encoded knowledge of the difference between a female human face and a male human face because that knowledge, that information came up into the world actually very recently, if you look at the slowness of the process of encoding knowledge into DNA. Yeah, so that's interesting. That's a really powerful argument that DNA is low bandwidth and it takes a long time to encode. That naturally creates a very efficient encoding. But one important consequence of this is that, so yes, we are born into this world with a bunch of knowledge, sometimes high level knowledge about the world, like the rough shape of a snake, the rough shape of a face. But importantly, because this knowledge takes so long to write, almost all of this innate knowledge is shared with our cousins, with great apes, right? So it is not actually this innate knowledge that makes us special. But to throw it right back at you from earlier in our discussion, that encoding might also include the entirety of the environment of Earth. To some extent. So it can include things that are important to survival and reproduction, so for which there is some evolutionary pressure, and things that are stable, constant over very, very, very long time periods. And honestly, it's not that much information. There's also, besides the bandwidth constraints and the constraints of the writing process, there's also memory constraints, like DNA, the part of DNA that deals with the human brain, it's actually fairly small. It's like, you know, on the order of megabytes, right? There's not that much high level knowledge about the world you can encode. That's quite brilliant, and hopeful for the benchmark that you're referring to of encoding priors. I actually look forward to it. I'm skeptical whether you can do it in the next couple of years, but hopefully. I've been working on it. So honestly, it's a very simple benchmark, and it's not like a big breakthrough or anything. It's more like a fun side project, right? But these fun, so was ImageNet, these fun side projects could launch entire groups of efforts towards creating reasoning systems and so on. And I think... Yeah, that's the goal. It's trying to measure strong generalization, to measure the strength of abstraction in our minds, well, in our minds and in artificial intelligence agents. And if there's anything true about this science organism, it's that its individual cells love competition. And benchmarks encourage competition. So that's an exciting possibility. So do you think an AI winter is coming? And how do we prevent it? Not really. So an AI winter is something that would occur when there's a big mismatch between how we are selling the capabilities of AI and the actual capabilities of AI. And today, deep learning is creating a lot of value. And it will keep creating a lot of value in the sense that these models are applicable to a very wide range of problems that are relevant today. And we are only just getting started with applying these algorithms to every problem they could be solving. So deep learning will keep creating a lot of value for the time being. What's concerning, however, is that there's a lot of hype around deep learning and around AI.
There are lots of people overselling the capabilities of these systems, not just the capabilities, but also overselling the fact that they might be more or less, you know, brain like, giving a kind of mystical aspect to these technologies, and also overselling the pace of progress, which, you know, it might look fast in the sense that we have this exponentially increasing number of papers. But again, that's just a simple consequence of the fact that we have ever more people coming into the field. It doesn't mean the progress is actually exponentially fast. Let's say you're trying to raise money for your startup or your research lab. You might want to tell, you know, a grandiose story to investors about how deep learning is just like the brain and how it can solve all these incredible problems like self driving and robotics and so on. And maybe you can tell them that the field is progressing so fast and we are going to have AGI within 15 years or even 10 years. And none of this is true. And every time you're like saying these things and an investor or, you know, a decision maker believes them, well, this is like the equivalent of taking on credit card debt, but for trust, right? And maybe this will, you know, this will be what enables you to raise a lot of money, but ultimately you are creating damage, you are damaging the field. So that's the concern, is that debt. That's what happened with the other AI winters. The concern is, you actually tweeted about this with autonomous vehicles, right? Almost every single company now has promised that they will have full autonomous vehicles by 2021, 2022. That's a good example of the consequences of over hyping the capabilities of AI and the pace of progress. So because I work, especially a lot recently, in this area, I have a deep concern of what happens when all of these companies, after they've invested billions, have a meeting and say, how much do we actually, first of all, do we have an autonomous vehicle? The answer will definitely be no. And second will be, wait a minute, we've invested one, two, three, four billion dollars into this and we made no profit. And the reaction to that may be going very hard in other directions that might impact even other industries. And that's what we call an AI winter, when there is backlash where no one believes any of these promises anymore because they've turned out to be big lies the first time around. And this will definitely happen to some extent for autonomous vehicles because the public and decision makers have been convinced, around 2015, by these people who were trying to raise money for their startups and so on, that L5 driving was coming in maybe 2016, maybe 2017, maybe 2018. Now we're in 2019, we're still waiting for it. And so I don't believe we are going to have a full on AI winter because we have these technologies that are producing a tremendous amount of real value. But there is also too much hype. So there will be some backlash, especially there will be backlash against some startups that are trying to sell the dream of AGI and the fact that AGI is going to create infinite value. Like AGI is like a free lunch. Like if you can develop an AI system that passes a certain threshold of IQ or something, then suddenly you have infinite value. And well, there are actually lots of investors buying into this idea and they will wait maybe 10, 15 years and nothing will happen. And the next time around, well, maybe there will be a new generation of investors.
No one will care. Human memory is fairly short after all. I don't know about you, but because I've spoken about AGI sometimes poetically, I get a lot of emails from people, usually like large manifestos, where they say to me that they have created an AGI system or they know how to do it. And there's a long write up of how to do it. I get a lot of these emails, yeah. They feel a little bit like they're generated by an AI system actually, but there's usually no diagram. Maybe you have a transformer generating crank papers about AGI. So the question is, because you have a good radar for crank papers, how do we know they're not onto something? How do I, so when you start to talk about AGI or anything like the reasoning benchmarks and so on, so something that doesn't have a benchmark, it's really difficult to know. I mean, I talked to Jeff Hawkins, who's really looking at neuroscience approaches to how, and there's some, there's echoes of really interesting ideas in at least Jeff's case, which he's showing. How do you usually think about this? Like preventing yourself from being too narrow minded and elitist about deep learning, it has to work on these particular benchmarks, otherwise it's trash. Well, you know, the thing is, intelligence does not exist in the abstract. Intelligence has to be applied. So if you don't have a benchmark, if you have an improvement in some benchmark, maybe it's a new benchmark, right? Maybe it's not something we've been looking at before, but you do need a problem that you're trying to solve. You're not going to come up with a solution without a problem. So you, general intelligence, I mean, you've clearly highlighted generalization. If you want to claim that you have an intelligence system, it should come with a benchmark. It should, yes, it should display capabilities of some kind. It should show that it can create some form of value, even if it's a very artificial form of value. And that's also the reason why you don't actually need to care about telling which papers have actually some hidden potential and which do not. Because if there is a new technique that's actually creating value, this is going to be brought to light very quickly because it's actually making a difference. So it's the difference between something that is ineffectual and something that is actually useful. And ultimately usefulness is our guide, not just in this field, but if you look at science in general, maybe there are many, many people over the years that have had some really interesting theories of everything, but they were just completely useless. And you don't actually need to tell the interesting theories from the useless theories. All you need is to see, is this actually having an effect on something else? Is this actually useful? Is this making an impact or not? That's beautifully put. I mean, the same applies to quantum mechanics, to string theory, to the holographic principle. We are doing deep learning because it works. Before it started working, people considered people working on neural networks as cranks very much. No one was working on this anymore. And now it's working, which is what makes it valuable. It's not about being right. It's about being effective. And nevertheless, the individual entities of this scientific mechanism, just like Yoshua Bengio or Yann LeCun, they, while being called cranks, stuck with it. Right? Yeah. And so us individual agents, even if everyone's laughing at us, just stick with it.
If you believe you have something, you should stick with it and see it through. That's a beautiful inspirational message to end on. Francois, thank you so much for talking today. That was amazing. Thank you.
François Chollet: Keras, Deep Learning, and the Progress of AI | Lex Fridman Podcast #38
The following is a conversation with Colin Angle. He's the CEO and co founder of iRobot, a robotics company that for 29 years has been creating robots that operate successfully in the real world. Not as a demo or on a scale of dozens, but on a scale of thousands and millions. As of this year, iRobot has sold more than 25 million robots to consumers, including the Roomba vacuum cleaning robot, the Braava floor mopping robot, and soon the Terra lawn mowing robot. 25 million robots successfully operating autonomously in real people's homes, to me is an incredible accomplishment of science, engineering, logistics, and all kinds of general entrepreneurial innovation. Most robotics companies fail. iRobot has survived and succeeded for 29 years. I spent all day at iRobot, including a long tour and conversation with Colin about the history of iRobot, and then sat down for this podcast conversation that would have been much longer if I didn't spend all day learning about and playing with the various robots and the company's history. I'll release the video of the tour separately. Colin, iRobot, its founding team, its current team, and its mission has been and continues to be an inspiration to me and thousands of engineers who are working hard to create AI systems that help real people. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. And now, here's my conversation with Colin Angle. In his 1942 short story, Runaround, from his I, Robot collection, Asimov proposed the three laws of robotics in order, don't harm humans, obey orders, protect yourself. So two questions. First, does the Roomba follow these three laws? And also, more seriously, what role do you hope to see robots take in modern society and in the future world? So the three laws are very thought provoking and require such a profound understanding of the world a robot lives in, the ramifications of its action and its own sense of self that it's not a relevant bar, at least it won't be a relevant bar for decades to come. So does Roomba follow the three laws? I believe it does: it is designed to help humans, not hurt them, it's designed to be inherently safe, and we designed it to last a long time. It's not through any AI or intent on the robot's part. It's because following the three laws is aligned with being a good robot product. So I guess it does, but not by explicit design. So then the bigger picture, what role do you hope to see robotics, robots take in what's currently mostly a world of humans? We need robots to help us continue to improve our standard of living. We need robots because the average age of humanity is increasing very quickly, and simply the number of people young enough and spry enough to care for the growing elderly demographic is inadequate. And so what is the role of robots? Today, the role is to make our lives a little easier, a little cleaner, maybe a little healthier. But in time, robots are going to be the difference between real gut wrenching declines in our ability to live independently and maintain our standard of living, and a future that is a bright one where we have more control over our lives, can spend more of our time focused on activities we choose. And I'm so honored and excited to be playing a role in that journey. So you've given me a tour.
You showed me some of the long history, now 29 years, that iRobot has been at it, creating some incredible robots. You showed me PackBot. You showed me a bunch of other stuff that led up to Roomba, that led to Braava and Terra. So let's skip that incredible history in the interest of time, because we already talked about it. I'll show this incredible footage. You mentioned elderly and robotics in society. I think the home is a fascinating place for robots to be. So where do you see robots in the home? Currently, I would say, once again, probably most homes in the world don't have a robot. So how do you see that changing? What do you think is the big initial value add that robots can do? So iRobot has sort of, over the years, narrowed in on the home, the consumer's home, as the place where we want to innovate and deliver tools that will help a home be a more automatically maintained place, a healthier place, a safer place, and perhaps even a more efficient place to be. And today, we vacuum, we mop, soon we'll be mowing your lawn. But where things are going is, when do we get to the point where the home, not just the robots that live in your home, but the home itself becomes part of a system that maintains itself and plays an active role in caring for and helping the people who live in that home. And I see everything that we're doing as steps along the path toward that future. So what are the steps? So if we can summarize some of the history of Roomba, you've mentioned, and maybe you can elaborate on it, but you mentioned that the early days were really taking a robot from something that works either in the lab or something that works in the field that helps soldiers do the difficult work they do, to actually be in the hands of consumers, in tens of thousands, hundreds of thousands of robots that don't break down, and that people love, over months of very extensive use. So that was the big first step. And then the second big step was the ability to sense the environment, to build a map, to localize, to be able to build a picture of the home that the human can then attach labels to, in terms of giving some semantic knowledge to the robot about its environment. Okay, so those are two big, huge steps. Maybe you can comment on them, but also what is the next step of making a robot part of the home? Sure, so the goal is to make a home that takes care of itself, takes care of the people in the home, and gives the user an experience of just living their life and the home is somehow doing the right thing, turning on and off lights when you leave, cleaning up the environment. And we went from robots that were great in the lab, but were both too expensive and not sufficiently capable to ever do an acceptable job of anything other than being a toy or a curio in your home, to something that was both affordable and sufficiently effective to be above threshold and drive purchase intent. Now we've disrupted the entire vacuuming industry. The number one selling vacuums, for example, in the US are Roombas, so not robot vacuums, but vacuums, and that's really crazy and weird. We need to pause on that. I mean, that's incredible. That's incredible that a robot is the number one selling thing that does something. Yep. Something as essential as vacuuming. Yep. Congratulations. Thank you. It's still kind of fun to say, but just because this was a crazy idea that just started, you know, in a room here, we're like, do you think we can do this? So, hey, let's give it a try.
But now the robots are starting to understand their environment. And if you think about the next step, there's two dimensions. I've been working so hard since the beginning of iRobot to make robots autonomous, so that, you know, they're smart enough and understand their task well enough that they can just go do it without human involvement. Now what I'm really excited about and working on is how do I make them less autonomous? Meaning that the robot is supposed to be your partner, not this automaton that just goes and does what a robot does. And so that if you tell it, hey, I just dropped some flour by the fridge in the kitchen, can you deal with it? Wouldn't it be awesome if the right thing just happened based on that utterance? And to some extent, that's less autonomous because it's actually listening to you, understanding the context and intent of the sentence, mapping it against its understanding of the home it lives in and knowing what to do. And so that's an area of research. It's an area where we're starting to roll out features. You can now tell your robot to clean up the kitchen and it knows what the kitchen is and can do that. And that's sort of 1.0 of where we're going. The other cool thing is that we're starting to know where stuff is. And why is that important? Well, robots are supposed to have arms, right? Data had an arm, Rosie had an arm, Robby the Robot had an arm. I mean, robots are, you know, they are physical things that move around in an environment and they're supposed to like do work. And if you think about it, if a robot doesn't know where anything is, why should it have an arm? But with this new dawn of home understanding that we're starting to enjoy, I know where the kitchen is. I might in the future know where the refrigerator is. I might, if I had an arm, be able to find the handle, open it and even get myself a beer. Obviously, that's one of the true dreams of robotics, to have robots bringing us a beer while we watch television. But, you know, I think that that new category of tasks where physical manipulation, robot arms come in, is just a potpourri of new opportunity and excitement. And you see humans as a crucial part of that. So you kind of mentioned that. And I personally find that a really compelling idea. I think full autonomy can only take us so far, especially in the home. So you see humans as helping the robot understand or give deeper meaning to the spatial information. Right. It's a partnership. The robot is supposed to operate according to descriptors that you would use to describe your own home. The robot is supposed to, in lieu of better direction, kind of go about its routine, which ought to be basically right, and lead to a home maintained in a way that it's learned you like, but also be perpetually ready to take direction that would activate a different set of behaviors or actions to meet a current need, to the extent it could actually perform that task. So I got to ask you, I think this is a fundamental and a fascinating question, because iRobot has been a successful company and a rare successful robotics company. So Anki, Jibo, Mayfield Robotics with their robot Kuri, CyPhy Works, Rethink Robotics, these are robotics companies that were founded and run by brilliant people. But all, very unfortunately, at least for us roboticists, all went out of business recently. So why do you think they didn't last longer? Why do you think it is so hard to keep a robotics company alive?
You know, I say this only partially in jest that back in the day before Roomba, you know, I was a high tech entrepreneur building robots. But it wasn't until I became a vacuum cleaner salesman that we had any success. So, I mean, the point is technology alone doesn't equal a successful business. We need to go and find the compelling need where the robot that we're creating can deliver clearly more value to the end user than it costs. And this is not a marginal thing where you're looking at the scale and you're like, yeah, it's close. Maybe we can hold our breath and make it work. It's clearly more value than the cost of the robot to bring, you know, in the store. And I think that the challenge has been finding those businesses where that's true in a sustainable fashion. You know, when you get into entertainment style things, you could be the cat's meow one year, but 85% of toys, regardless of their merit, fail to make it to their second season. It's just super hard to do so. And so that's just a tough business. And there has been a lot of experimentation around what is the right type of social companion, what is the right robot in the home that is doing something other than tasks people do every week that they'd rather not do. And I'm not sure we've got it all figured out right. And so that you get brilliant roboticists with super interesting robots that ultimately don't quite have that magical user experience and thus that value benefit equation remains ambiguous. So you as somebody who dreams of robots changing the world, what's your estimate? How big is the space of applications that fit the criteria that you just described where you can really demonstrate an obvious significant value over the alternative non robotic solution? Well, I think that we're just about none of the way to achieving the potential of robotics at home. But we have to do it in a really eyes wide open, honest fashion. And so another way to put that is the potential is infinite because we did take a few steps, but you're saying those steps are just very initial steps. So the Roomba is a hugely successful product, but you're saying that's just the very, very beginning. That's just the very, very beginning. It's the foot in the door. And I think I was lucky that in the early days of robotics, people would ask me, when are you going to clean my floor? It was something that I grew up saying, I got all these really good ideas, but everyone seems to want their floor clean. And so maybe we should do that. Yeah, your good ideas. Earn the right to do the next thing after that. So the good ideas have to match with the desire of the people and then the actual cost has to like the business, the financial aspect has to all match together. Yeah, during our partnership back a number of years ago with Johnson Wax, they would explain to me that they would go into homes and just watch how people lived and try to figure out what were they doing that they really didn't really like to do, but they had to do it frequently enough that it was top of mind and understood as a burden. Hey, let's make a product or come up with a solution to make that pain point less challenging. And sometimes we do certain burdens so often as a society that we actually don't even realize, like it's actually hard to see that that burden is something that could be removed. So it does require just going into the home and staring at, wait, how do I actually live life? What are the pain points? 
Yeah, and getting those insights is a lot harder than it would seem it should be in retrospect. So how hard, on that point, I mean, one of the big challenges of robotics is driving the cost down to something that consumers, people, would afford. So people would be less likely to buy a Roomba if it cost $500,000, which is probably sort of what a Roomba would cost several decades ago. So how do you drive, which I mentioned is very difficult, how do you drive the cost of a Roomba or a robot down such that people would want to buy it? When I started building robots, the cost of the robot had a lot to do with the amount of time it took to build it. We built our robots out of aluminum, and I would go spend my time in the machine shop on the milling machine, cutting out the parts and so forth. And then when we got into the toy industry, I realized that if we were building at scale, I could determine the cost of the Roomba not by adding up all the hours to mill out the parts, but by weighing it. And that's liberating. You can say, wow, the world has just changed as I think about construction in a different way. The 3D CAD tools that are available to us today, the operating at scale where I can do tooling and injection mold an arbitrarily complicated part, and the cost is going to be basically the weight of the plastic in that part, is incredibly exciting and liberating and opens up all sorts of opportunities. And for the sensing part of it, where we are today is instead of trying to build skin, which is really hard. For a long time, I spent time creating strategies and ideas around how we could duplicate the skin on the human body because it's such an amazing sensor. Instead of going down that path, why don't we focus on vision? And how many of the problems that face a robot trying to do real work could be solved with a cheap camera and a big ass computer? Moore's law continues to work. The cell phone industry, the mobile industry, is giving us better and better tools that can run on these embedded computers. And I think we passed an important moment maybe two years ago where you could put machine vision capable processors on robots at consumer price points. And I was waiting for it to happen. We avoided putting lasers on our robots to do navigation and instead spent years researching how to do vision based navigation because you could just see where these technology trends were going. And between injection molded plastic and a camera with a computer capable of running machine learning and visual object recognition, I could build an incredibly affordable, incredibly capable robot. And that's going to be the future. So on that point, with a small tangent, but I think an important one, another industry, I would say the only other industry, in which there is automation actually touching people's lives today is autonomous vehicles. With the vision you just described of using computer vision and using cheap camera sensors, there's a debate on that of LIDAR versus computer vision. And Elon Musk famously said that LIDAR is a crutch, that really, in the long term, camera only is the right solution, which echoes some of the ideas you're expressing. Of course, the domain in terms of its safety criticality is different. But what do you think about that approach in the autonomous vehicle space? And in general, do you see a connection between the incredible real world challenges you have to solve in the home with Roomba?
And I saw a demonstration of some of them, corner cases literally, and autonomous vehicles. So there's absolutely a tremendous overlap between both the problems a robot vacuum and an autonomous vehicle are trying to solve and the tools and the types of sensors that are being applied in the pursuit of the solutions. In my world, my environment is actually much harder than the environment an automobile travels in. We don't have roads. We have t shirts. We have steps. We have a near infinite number of patterns and colors and surface textures on the floor. Especially from a visual perspective. So the way the world looks is infinitely variable. On the other hand, safety is way easier on the inside. My robots, they're not very heavy. They're not very fast. If they bump into your foot, you think it's funny. And autonomous vehicles kind of have the inverse problem. And so that for me saying vision is the future, I can say that without reservation. For autonomous vehicles, I think I believe what Elon's saying about the future is ultimately going to be vision. Maybe if we put a cheap LIDAR on there as a backup sensor, it might not be the worst idea in the world. So the stakes are much higher. The stakes are much higher. You have to be much more careful thinking through how far away that future is. Right. But I think that the primary environmental understanding sensor is going to be a visual system. Visual system. So on that point, well, let me ask, do you hope there's an iRobot robot in every home in the world one day? I expect there to be at least one iRobot robot in every home. We've sold 25 million robots. So we're in about 10% of US homes, which is a great start. But I think that when we think about the numbers of things that robots can do, today I can vacuum your floor, mop your floor, and soon we'll be able to cut your lawn. But there are more things that we could do in the home. And I hope that we continue using the techniques I described around exploiting computer vision and low cost manufacturing, that we'll be able to create these solutions at affordable price points. So let me ask, on that point of a robot in every home, that's my dream as well. I'd love to see that. I think the possibilities there are indeed infinite positive possibilities. But in our current culture, no thanks to science fiction and so on, there's a serious kind of hesitation, anxiety, concern about robots, and also a concern about privacy. And it's a fascinating question to me why that concern, amongst a certain group of people, is as intense as it is. So you have to think about it because it's a serious concern. But I wonder how you address it best. So from a perspective of vision sensors, so robots that move about the home and sense the world, how do you alleviate people's privacy concerns? How do you make sure that they can trust iRobot and the robots that they share their home with? I think that's a great question. And we've really leaned way forward on this because given our vision as to the role the company intends to play in the home, really for us, make or break is whether our approach can be trusted to protect the data and the privacy of the people who have our robots. And so we've gone out publicly with a privacy manifesto stating we'll never sell your data. We've adopted GDPR not just where GDPR is required, but globally. We have ensured that images don't leave the robot. So processing data from the visual sensors happens locally on the robot.
And only semantic knowledge of the home, with the consumer's consent, is sent up. We show you what we know and are trying to use data as an enabler for the performance of the robots, with the informed consent and understanding of the people who own those robots. We take it very seriously. And ultimately, we think that by showing a customer that if you let us build a semantic map of your home and know where the rooms are, well, then you can say clean the kitchen. If you don't want the robot to do that, don't make the map. It'll do its best job cleaning your home, but it won't be able to do that. And if you ever want us to forget that we know that it's your kitchen, you can have confidence that we will do that for you. So we're trying to go and be a data 2.0 perspective company, where we treat the data that the robots have of the consumer's home as if it were the consumer's data and they have rights to it. So we think by being the good guys on this front, we can build the trust and thus be entrusted to enable robots to do more things that are thoughtful. You think people's worries will diminish over time? As a society, broadly speaking, do you think you can win over trust, not just for the company, but just the comfort that people have with AI in their home enriching their lives in some way? I think we're in an interesting place today where it's less about winning them over and more about finding a way to talk about privacy in a way that more people can understand. I would tell you that today, when there's a privacy breach, people get very upset and then go to the store and buy the cheapest thing, paying no attention to whether or not the products that they're buying honor privacy standards or not. In fact, if I put on the package of my Roomba the privacy commitments that we have, I would sell less than I would if I did nothing at all. And that needs to change. So it's not a question about earning trust. I think that's necessary but not sufficient. We need to figure out how to have a comfortable set of standards, what is the grade A meat standard applied to privacy, that customers can trust and understand and then use in their buying decisions. That will reward companies for good behavior and that will ultimately be how this moves forward. And maybe be part of the conversation between regular people about what it means, what privacy means. If you have some standards, you can start talking about who's following them and who is not. Because most people are actually quite clueless about all aspects of artificial intelligence, the data collection, and so on. It would be nice to change that, for people to understand the good that AI can do. And it's not some system that's trying to steal all the most sensitive data. Do you think, do you dream of a Roomba with human level intelligence one day? So you've mentioned a very successful localization and mapping of the environment, being able to do some basic communication to say, go clean the kitchen. Do you see, in your maybe more bored moments, once you get the beer, sitting back with that beer and having a chat on a Friday night with a Roomba about how your day went? So to your latter question, absolutely. To your former question, as to whether a Roomba can have human level intelligence, not in my lifetime. I think you can have a great conversation, a meaningful conversation with a Roomba without it having anything that resembles human level intelligence.
And I think that as long as you realize that conversation is not about the robot and making the robot feel good. That conversation is about you learning interesting things that make you feel like the conversation that you had with the robot is a pretty awesome way of learning something. And it could be about what kind of day your pet had. It could be about how can I make my home more energy efficient. It could be about if I'm thinking about climbing Mount Everest, what should I know? And that's a very doable thing. But if I think that that conversation I'm going to have with the robot is I'm going to be rewarded by making the robot happy, well, I could just put a button on the robot that you could push and the robot would smile. And that sort of thing. So I think you need to think about the question in the right way. And robots can be awesomely effective at helping people feel less isolated, learn more about the home that they live in, and fill some of those lonely gaps that we wish we were engaged learning cool stuff about our world. Last question. If you could hang out for a day with a robot from science fiction, movies, books, and safely pick its brain for that day, who would you pick? Data. Data. From Star Trek. I think that A, data is really smart. Data has been through a lot trying to go and save the galaxy. And I'm really interested actually in emotion and robotics. And I think you'd have a lot to say about that. Because I believe actually that emotion plays an incredibly useful role in doing reasonable things in situations where we have imperfect understanding of what's going on. In social situations when there's imperfect information. In social situations, also in competitive or dangerous situations that we have emotion for a reason. And so that ultimately, my theory is that as robots get smarter and smarter, they're actually going to get more emotional. Because you can't actually survive on pure logic. Because only a very tiny fraction of the situations we find ourselves in can be resolved reasonably with logic. And so I think Data would have a lot to say about that. And so I could find out whether he agrees. If you could ask Data one question, you would get a deep, honest answer to what would you ask. What's Captain Picard really like? OK, I think that's the perfect way to end it. Colin, thank you so much for talking today. I really appreciate it. My pleasure.
Colin Angle: iRobot CEO | Lex Fridman Podcast #39
The following is a conversation with Regina Barzilay. She's a professor at MIT and a world class researcher in natural language processing and applications of deep learning to chemistry and oncology, or the use of deep learning for early diagnosis, prevention and treatment of cancer. She has also been recognized for her teaching of several successful AI related courses at MIT, including the popular Introduction to Machine Learning course. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon or simply connect with me on Twitter at Lex Friedman spelled F R I D M A N. And now here's my conversation with Regina Barzilay. In an interview you've mentioned that if there's one course you would take, it would be a literature course that a friend of yours teaches. Just out of curiosity, because I couldn't find anything on it, are there books or ideas that had profound impact on your life journey, books and ideas perhaps outside of computer science and the technical fields? I think because I'm spending a lot of my time at MIT and previously in other institutions where I was a student, I have limited ability to interact with people. So a lot of what I know about the world actually comes from books. And there were quite a number of books that had profound impact on me and how I view the world. Let me just give you one example of such a book. Maybe a year ago I read a book called The Emperor of All Maladies. It's a book about, it's kind of a history of science book on how the treatments and drugs for cancer were developed. And that book, despite the fact that I am in the business of science, really opened my eyes on how imprecise and imperfect the discovery process is, how imperfect our current solutions are, and what makes science succeed and be implemented. And sometimes it's actually not the strength of the idea, but the devotion of the person who wants to see it implemented. So this is one of the books that, you know, at least for the last year, quite changed the way I'm thinking about the scientific process, just from the historical perspective, and what do I need to do to make my ideas really implemented. Let me give you an example of another book, which is a fiction book. It's a book called Americanah. And this is a book about a young female student who comes from Africa to study in the United States. And it describes her path, you know, through her studies and her life transformation, you know, in a new country and kind of adaptation to a new culture. And when I read this book, I saw myself in many different points of it, but it also kind of gave me a lens on different events. And some of it I never actually paid attention to. One of the funny stories in this book is how she arrives at her new college and she starts speaking in English and she has this beautiful British accent because that's how she was educated in her country. This is not my case. And then she notices that the person who talks to her, you know, talks to her in a very funny way, in a very slow way. And she's thinking that this woman is disabled and she's also trying to kind of accommodate her. And then after a while, when she finishes her discussion with this officer from her college, she sees how she interacts with the other students, with American students. And she discovers that actually she talked to her this way because she saw that she doesn't understand English.
And I thought, wow, this is a funny experience. And literally within a few weeks, I went to LA to a conference and I asked somebody in the airport, you know, how to find like a cab or something. And then I noticed that this person is talking in a very strange way. And my first thought was that this person has some, you know, pronunciation issues or something. And I'm trying to talk very slowly to him, and I was with another professor, Ernest Fraenkel. And he's like laughing because it's funny that I don't get that the guy is talking in this way because he thinks that I cannot speak English. So it was really kind of a mirroring experience. And it led me to think a lot about my own experiences moving, you know, between different countries. So I think that books play a big role in my understanding of the world. On the science question, you mentioned that it made you discover that personalities of human beings are more important than perhaps ideas. Is that what I heard? It's not necessarily that they are more important than ideas, but I think that ideas on their own are not sufficient. And many times, at least at the local horizon, it's the personalities and their devotion to their ideas that really, locally, changes the landscape. Now, if you're looking at AI, like let's say 30 years ago, you know, the dark ages of AI or whatever, the symbolic times, you can use any word, you know, there were some people, now we're looking at a lot of that work and we're kind of thinking this was not really maybe relevant work, but you can see that some people managed to take it and to make it so shiny and dominate the academic world and make it the standard. If you look at the area of natural language processing, it is a well known fact that the reason that statistics in NLP took such a long time to become mainstream is because there were quite a number of personalities who didn't believe in this idea, and that did stop research progress in this area. So I do not think that, you know, kind of asymptotically maybe personalities matter, but I think locally it does make quite a bit of impact and it generally, you know, speeds up the rate of adoption of new ideas. Yeah, and the other interesting question is in the early days of a particular discipline. I think you mentioned that that book is ultimately a book about cancer. It's called The Emperor of All Maladies. Yeah, and those maladies, the medicine, what was it centered around? So it was actually centered on, you know, how people thought of curing cancer. Like for me, it was really a discovery how people, what was the science of chemistry behind drug development, that it actually grew out of the dyeing, like the coloring industry, that the people who developed chemistry in the 19th century in Germany and Britain to make, you know, really new dyes, they looked at the molecules and identified that they do certain things to cells. And from there, the process started. And, you know, historically speaking, yeah, this is fascinating that they managed to make the connection and look under the microscope and do all this discovery. But as you continue reading about it and you read about how the chemotherapy drugs were developed in Boston, some of them by Dr. Farber from Dana Farber, you know, how the experiments were done, that, you know, there was some miscalculation, let's put it this way. And they tried it on the patients, and those were children with leukemia, and they died. And then they tried another modification.
You look at the process, how imperfect is this process? And, you know, like, if we're again looking back like 60 years ago, 70 years ago, you can kind of understand it. But some of the stories in this book which were really shocking to me were really happening, you know, maybe decades ago. And we still don't have a vehicle to do it much more fast and effective and, you know, scientific the way I'm thinking computer science scientific. So from the perspective of computer science, you've gotten a chance to work the application to cancer and to medicine in general. From a perspective of an engineer and a computer scientist, how far along are we from understanding the human body, biology of being able to manipulate it in a way we can cure some of the maladies, some of the diseases? So this is very interesting question. And if you're thinking as a computer scientist about this problem, I think one of the reasons that we succeeded in the areas we as a computer scientist succeeded is because we don't have, we are not trying to understand in some ways. Like if you're thinking about like eCommerce, Amazon, Amazon doesn't really understand you. And that's why it recommends you certain books or certain products, correct? And, you know, traditionally when people were thinking about marketing, you know, they divided the population to different kind of subgroups, identify the features of this subgroup and come up with a strategy which is specific to that subgroup. If you're looking about recommendation system, they're not claiming that they're understanding somebody, they're just managing to, from the patterns of your behavior to recommend you a product. Now, if you look at the traditional biology, and obviously I wouldn't say that I at any way, you know, educated in this field, but you know what I see, there's really a lot of emphasis on mechanistic understanding. And it was very surprising to me coming from computer science, how much emphasis is on this understanding. And given the complexity of the system, maybe the deterministic full understanding of this process is, you know, beyond our capacity. And the same ways in computer science when we're doing recognition, when you do recommendation and many other areas, it's just probabilistic matching process. And in some way, maybe in certain cases, we shouldn't even attempt to understand or we can attempt to understand, but in parallel, we can actually do this kind of matchings that would help us to find key role to do early diagnostics and so on. And I know that in these communities, it's really important to understand, but I'm sometimes wondering, you know, what exactly does it mean to understand here? Well, there's stuff that works and, but that can be, like you said, separate from this deep human desire to uncover the mysteries of the universe, of science, of the way the body works, the way the mind works. It's the dream of symbolic AI, of being able to reduce human knowledge into logic and be able to play with that logic in a way that's very explainable and understandable for us humans. I mean, that's a beautiful dream. So I understand it, but it seems that what seems to work today and we'll talk about it more is as much as possible, reduce stuff into data, reduce whatever problem you're interested in to data and try to apply statistical methods, apply machine learning to that. On a personal note, you were diagnosed with breast cancer in 2014. What did facing your mortality make you think about? How did it change you? 
You know, this is a great question and I think that I was interviewed many times and nobody actually asked me this question. I think I was 43 at the time. And it was the first time I realized in my life that I may die, and I never thought about it before. And there is a long time from when you're diagnosed until you actually know what you have and how severe your disease is. For me, it was like maybe two and a half months. And I didn't know where I was during this time because I was getting different tests, and one would say it's bad and another would say, no, it is not. So until I knew where I was, I really was thinking about all these different possible outcomes. Were you imagining the worst or were you trying to be optimistic or? I don't really remember what my thinking was. It was really a mixture with many components, at the time, speaking in our terms. And one thing that I remember, every test comes and then you're saying, oh, it could be this, or it may not be this. And you're hopeful and then you're desperate. So it's like, there is a whole slew of emotions that goes through you. But what I remember is that when I came back to MIT, I was kind of coming to MIT the whole time through the treatment, but my brain was not really there. But when I came back, really finished my treatment and I was here teaching and everything, I looked back at what my group was doing, what other groups were doing. And I saw these trivialities. It's like people are building their careers on improving some parser by two or 3% or whatever. I was, it's like, seriously? I did work on how to decipher Ugaritic, like a language that nobody speaks, and whatever, like, what is the significance? Then all of a sudden, I walked out of MIT, which is where people really do care what happened to your ICLR paper, what is your next publication at ACL, to the world where you see a lot of suffering that I'm kind of totally shielded from on a daily basis. And it's like the first time I've seen like real life and real suffering. And I was thinking, why are we trying to improve the parser or deal with trivialities when we have the capacity to really make a change? And it was really challenging to me because on one hand, I have my graduate students who really want to do their papers and their work, and they want to continue to do what they were doing, which was great. And then it was me who really kind of reevaluated what is important. And also at that point, because I had to take some break, I looked back at my years in science and I was thinking, like 10 years ago, this was the biggest thing, I don't know, topic models. We have like millions of papers on topic models and variations of topic models. Now it's totally like irrelevant. And you start looking at this, what do you perceive as important at different points of time and how it fades over time. And since we have a limited time, all of us have limited time, it's really important to prioritize things that really matter to you, maybe matter to you at that particular point. But it's important to take some time and understand what matters to you, which may not necessarily be the same as what matters to the rest of your scientific community, and pursue that vision. So that moment, did it make you cognizant? You mentioned suffering, of just the general amount of suffering in the world. Is that what you're referring to? So as opposed to topic models and specific detailed problems in NLP, did you start to think about other people who have been diagnosed with cancer?
Is that the way you started to see the world perhaps? Oh, absolutely. And it actually creates, because like, for instance, there is parts of the treatment where you need to go to the hospital every day and you see the community of people that you see and many of them are much worse than I was at a time. And you all of a sudden see it all. And people who are happier someday just because they feel better. And for people who are in our normal realm, you take it totally for granted that you feel well, that if you decide to go running, you can go running and you're pretty much free to do whatever you want with your body. Like I saw like a community, my community became those people. And I remember one of my friends, Dina Katabi, took me to Prudential to buy me a gift for my birthday. And it was like the first time in months that I went to kind of to see other people. And I was like, wow, first of all, these people, they are happy and they're laughing and they're very different from these other my people. And second of thing, I think it's totally crazy. They're like laughing and wasting their money on some stupid gifts. And they may die. They already may have cancer and they don't understand it. So you can really see how the mind changes that you can see that, before that you can ask, didn't you know that you're gonna die? Of course I knew, but it was a kind of a theoretical notion. It wasn't something which was concrete. And at that point, when you really see it and see how little means sometimes the system has to have them, you really feel that we need to take a lot of our brilliance that we have here at MIT and translate it into something useful. Yeah, and you still couldn't have a lot of definitions, but of course, alleviating, suffering, alleviating, trying to cure cancer is a beautiful mission. So I of course know theoretically the notion of cancer, but just reading more and more about it's 1.7 million new cancer cases in the United States every year, 600,000 cancer related deaths every year. So this has a huge impact, United States globally. When broadly, before we talk about how machine learning, how MIT can help, when do you think we as a civilization will cure cancer? How hard of a problem is it from everything you've learned from it recently? I cannot really assess it. What I do believe will happen with the advancement in machine learning is that a lot of types of cancer we will be able to predict way early and more effectively utilize existing treatments. I think, I hope at least that with all the advancements in AI and drug discovery, we would be able to much faster find relevant molecules. What I'm not sure about is how long it will take the medical establishment and regulatory bodies to kind of catch up and to implement it. And I think this is a very big piece of puzzle that is currently not addressed. That's the really interesting question. So first a small detail that I think the answer is yes, but is cancer one of the diseases that when detected earlier that's a significantly improves the outcomes? So like, cause we will talk about there's the cure and then there is detection. And I think where machine learning can really help is earlier detection. So does detection help? Detection is crucial. For instance, the vast majority of pancreatic cancer patients are detected at the stage that they are incurable. That's why they have such a terrible survival rate. It's like just few percent over five years. It's pretty much today the sentence. 
But if you can discover this disease early, there are mechanisms to treat it. And in fact, I know a number of people who were diagnosed and saved just because they had food poisoning. They had terrible food poisoning. They went to ER, they got scan. There were early signs on the scan and that would save their lives. But this wasn't really an accidental case. So as we become better, we would be able to help to many more people that are likely to develop diseases. And I just want to say that as I got more into this field, I realized that cancer is of course terrible disease, but there are really the whole slew of terrible diseases out there like neurodegenerative diseases and others. So we, of course, a lot of us are fixated on cancer because it's so prevalent in our society. And you see these people where there are a lot of patients with neurodegenerative diseases and the kind of aging diseases that we still don't have a good solution for. And I felt as a computer scientist, we kind of decided that it's other people's job to treat these diseases because it's like traditionally people in biology or in chemistry or MDs are the ones who are thinking about it. And after kind of start paying attention, I think that it's really a wrong assumption and we all need to join the battle. So how it seems like in cancer specifically that there's a lot of ways that machine learning can help. So what's the role of machine learning in the diagnosis of cancer? So for many cancers today, we really don't know what is your likelihood to get cancer. And for the vast majority of patients, especially on the younger patients, it really comes as a surprise. Like for instance, for breast cancer, 80% of the patients are first in their families, it's like me. And I never saw that I had any increased risk because nobody had it in my family. And for some reason in my head, it was kind of inherited disease. But even if I would pay attention, the very simplistic statistical models that are currently used in clinical practice, they really don't give you an answer, so you don't know. And the same true for pancreatic cancer, the same true for non smoking lung cancer and many others. So what machine learning can do here is utilize all this data to tell us early who is likely to be susceptible and using all the information that is already there, be it imaging, be it your other tests, and eventually liquid biopsies and others, where the signal itself is not sufficiently strong for human eye to do good discrimination because the signal may be weak, but by combining many sources, machine which is trained on large volumes of data can really detect it early. And that's what we've seen with breast cancer and people are reporting it in other diseases as well. That really boils down to data, right? And in the different kinds of sources of data. And you mentioned regulatory challenges. So what are the challenges in gathering large data sets in this space? Again, another great question. So it took me after I decided that I want to work on it two years to get access to data. Any data, like any significant data set? Any significant amount, like right now in this country, there is no publicly available data set of modern mammograms that you can just go on your computer, sign a document and get it. It just doesn't exist. I mean, obviously every hospital has its own collection of mammograms. There are data that came out of clinical trials. 
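Purely as an illustration of the kind of model a researcher would want to run on such data, here is a minimal sketch of image-based risk prediction. It is an assumption-heavy example, not a description of her group's actual system: the backbone, the risk horizons, and the dummy input are all placeholders chosen for clarity.

```python
# A minimal, assumption-heavy sketch of image-based cancer risk prediction.
# There is no ImageNet-style public mammogram set, so the input below is a
# random tensor standing in for one preprocessed scan.
import torch
import torch.nn as nn
from torchvision import models

class RiskModel(nn.Module):
    def __init__(self, horizons: int = 5):
        super().__init__()
        backbone = models.resnet18(weights=None)                 # small CNN backbone
        backbone.fc = nn.Linear(backbone.fc.in_features, horizons)
        self.backbone = backbone

    def forward(self, x):
        # one logit per follow-up year (1..horizons), mapped to risk probabilities
        return torch.sigmoid(self.backbone(x))

model = RiskModel()
dummy_scan = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed mammogram
print(model(dummy_scan))                   # per-year risk estimates for this scan
```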
What we're talking about here is a computer scientist who just wants to run his or her model and see how it works. This data, like ImageNet, doesn't exist. And there is a set which is called, like, the Florida data set, which is film mammograms from the 90s, which are totally not representative of the current developments. Whatever you're learning on them doesn't scale up. This is the only resource that is available. And today there are many agencies that govern access to data. Like the hospital holds your data and the hospital decides whether they would give it to the researcher to work with this data or not. Individual hospital? Yeah. I mean, the hospital may, you know, assuming that you're doing a research collaboration, you can submit, you know, there is a proper approval process guided by the IRB, and if you go through all the processes, you can eventually get access to the data. But as you yourself know, in our AI community, there are not that many people who have actually ever gotten access to this data because it's a very challenging process. And sorry, just a quick comment, MGH or any kind of hospital, are they scanning the data? Are they digitally storing it? Oh, it is already digitally stored. You don't need to do any extra processing steps. It's already there in the right format. It's just that right now there are a lot of issues that govern access to the data because the hospital is legally responsible for the data. And, you know, they have a lot to lose if they give the data to the wrong person, but they may not have a lot to gain if they, as a hospital, as a legal entity, give it to you. And, you know, what I would imagine happening in the future is the same thing that happens when you're getting your driver's license: you can decide whether you want to donate your organs. You can imagine that whenever a person goes to the hospital, it should be easy for them to donate their data for research, and it can be at different levels: do they only give their test results, or only mammograms, or only imaging data, or the whole medical record? Because in the end, we all will benefit from all these insights. And it's not like you can say, I want to keep my data private, but I would really love to get it from other people, because other people are thinking the same way. So if there is a mechanism to do this donation and the patient has an ability to say how they want their data to be used for research, it would be really a game changer. People, when they think about this problem, there's a, it depends on the population, depends on the demographics, but there are some privacy concerns generally, not just with medical data, with any kind of data. It's what you said: my data, it should kind of belong to me. I'm worried how it's going to be misused. How do we alleviate those concerns? Because that seems like a problem that needs to be, that problem of trust, of transparency, needs to be solved before we build large data sets that help detect cancer, help save those very people in the future. So I think there are two things that could be done. There are technical solutions and there are societal solutions. So on the technical end, we today have the ability to improve disambiguation. Like, for instance, for imaging, you know, you can do it pretty well. What's disambiguation? Disambiguation, sorry, de-identification: removing the identifying information, removing the names of the people.
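To make the de-identification idea concrete, here is a toy sketch on a structured test record. The field names are hypothetical, chosen only for illustration; real de-identification pipelines, especially for free text, are far more involved, as she notes next.

```python
# Toy de-identification of a structured record (all field names are hypothetical).
record = {
    "patient_name": "Jane Doe",     # identifying
    "mrn": "1234567",               # identifying (medical record number)
    "exam_date": "2019-06-01",      # identifying
    "breast_density": 3,            # clinically useful
    "birads_score": 2,              # clinically useful
}

IDENTIFIERS = {"patient_name", "mrn", "exam_date"}

deidentified = {k: v for k, v in record.items() if k not in IDENTIFIERS}
print(deidentified)  # {'breast_density': 3, 'birads_score': 2}
```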
There are other data, like if it is raw text, you cannot really achieve 99.9%, but there are all these techniques, and actually some of them were developed at MIT, where you can do learning on the encoded data: you locally encode the image, you train a network which only works on the encoded images, and then you send the outcome back to the hospital and you can open it up. So those are the technical solutions. There are a lot of people who are working in this space where the learning happens in the encoded form. We are still early, but this is an interesting research area where I think we'll make more progress. There is a lot of work in the natural language processing community on how to do de-identification better. But even today, there is already a lot of data which can be de-identified perfectly, like your test data, for instance, correct, where you can just drop the name of the patient, you just want to extract the part with the numbers. The big problem here is again, hospitals don't see much incentive to give this data away on one hand, and then there is general concern. Now, when I'm talking about societal benefits and about the education, the public needs to understand that, I think that there are situations, and I still remember myself, when I really needed an answer. I had to make a choice. There was no information to make a choice, you're just guessing. And at that moment you feel that your life is at stake, but you just don't have information to make the choice. And many times when I give talks, I get emails from women who say, you know, I'm in this situation, can you please run statistics and see what are the outcomes? Almost every week we get a mammogram that comes by mail to my office at MIT, I'm serious, that people ask us to run because they need to make life changing decisions. And of course, I'm not planning to open a clinic here, but we do run it and give them the results for their doctors. But the point that I'm trying to make is that we all at some point, or our loved ones, will be in the situation where you need information to make the best choice. And if this information is not available, you would feel vulnerable and unprotected. And then the question is, you know, what do I care about more? Because in the end, everything is a trade off, correct? Yeah, exactly. Just out of curiosity, it seems like one possible solution, I'd like to see what you think of it, based on what you just said, based on wanting to know answers for when you're yourself in that situation. Is it possible for patients to own their data as opposed to hospitals owning their data? Of course, theoretically, I guess patients own their data, but can you walk out there with a USB stick containing everything or upload it to the cloud? Where a company, you know, I remember Microsoft had a service which I tried, I was really excited about it, and Google Health was there. I tried it, I was excited about it. Basically companies helping you upload your data to the cloud so that you can move from hospital to hospital, from doctor to doctor. Do you see a promise of that kind of possibility? I absolutely think this is, you know, the right way to exchange the data. I don't know now who's the biggest player in this field, but I can clearly see that even for totally selfish health reasons, when you are going to a new facility, and many of us are sent to some specialized treatment, they don't easily have access to your data.
And today, you know, we might want to send this mammogram, need to go to the hospital, find some small office which gives them the CD and they ship as a CD. So you can imagine we're looking at kind of decades old mechanism of data exchange. So I definitely think this is an area where hopefully all the right regulatory and technical forces will align and we will see it actually implemented. It's sad because unfortunately, and I need to research why that happened, but I'm pretty sure Google Health and Microsoft Health Vault or whatever it's called both closed down, which means that there was either regulatory pressure or there's not a business case or there's challenges from hospitals, which is very disappointing. So when you say you don't know what the biggest players are, the two biggest that I was aware of closed their doors. So I'm hoping, I'd love to see why and I'd love to see who else can come up. It seems like one of those Elon Musk style problems that are obvious needs to be solved and somebody needs to step up and actually do this large scale data collection. So I know there is an initiative in Massachusetts, I think, which you led by the governor to try to create this kind of health exchange system where at least to help people who kind of when you show up in emergency room and there is no information about what are your allergies and other things. So I don't know how far it will go. But another thing that you said and I find it very interesting is actually who are the successful players in this space and the whole implementation, how does it go? To me, it is from the anthropological perspective, it's more fascinating that AI that today goes in healthcare, we've seen so many attempts and so very little successes. And it's interesting to understand that I've by no means have knowledge to assess it, why we are in the position where we are. Yeah, it's interesting because data is really fuel for a lot of successful applications. And when that data acquires regulatory approval, like the FDA or any kind of approval, it seems that the computer scientists are not quite there yet in being able to play the regulatory game, understanding the fundamentals of it. I think that in many cases when even people do have data, we still don't know what exactly do you need to demonstrate to change the standard of care. Like let me give you an example related to my breast cancer research. So in traditional breast cancer risk assessment, there is something called density, which determines the likelihood of a woman to get cancer. And this pretty much says, how much white do you see on the mammogram? The whiter it is, the more likely the tissue is dense. And the idea behind density, it's not a bad idea. In 1967, a radiologist called Wolf decided to look back at women who were diagnosed and see what is special in their images. Can we look back and say that they're likely to develop? So he come up with some patterns. And it was the best that his human eye can identify. Then it was kind of formalized and coded into four categories. And that's what we are using today. And today this density assessment is actually a federal law from 2019, approved by President Trump and for the previous FDA commissioner, where women are supposed to be advised by their providers if they have high density, putting them into higher risk category. And in some states, you can actually get supplementary screening paid by your insurance because you're in this category. Now you can say, how much science do we have behind it? 
Whatever, biological science or epidemiological evidence. So it turns out that between 40 and 50% of women have dense breasts. So about 40% of patients are coming out of their screening and somebody tells them, you are at high risk. Now, what exactly does it mean if you, as half of the population, are at high risk? Aside from saying, maybe I'm not, what do I really need to do with it? Because the system doesn't provide me a lot of solutions, because there are so many people like me, we cannot really provide very expensive solutions for them. And the reason this whole density became this big deal is that it was actually advocated by the patients, who felt very unprotected because many women went and did the mammograms, which were normal, and then it turns out that they already had cancer, quite developed cancer. So they didn't have a way to know who is really at risk and what is the likelihood that, when the doctor tells you you're okay, you are not okay. So at the time, and it was 15 years ago, this maybe was the best piece of science that we had. And it took quite a while, 15, 16 years, to make it federal law. But now this is a standard. Now with a deep learning model, we can so much more accurately predict who is gonna develop breast cancer, just because you're training on the actual outcome. And instead of describing how much white and what kind of white there is, the machine can systematically identify the patterns, which was the original idea behind the thinking of that radiologist; machines can do it much more systematically and predict the risk, when you're training the machine to look at the image and to say what the risk is in one to five years. Now you can ask me how long it will take to substitute this density measure, which is broadly used across the country and really is not helping, and to bring in these new models. And I would say it's not a matter of the algorithm. The algorithms are already orders of magnitude better than what is currently in practice. I think it's really the question, who do you need to convince? How many hospitals do you need to run the experiment in? What, you know, is all this mechanism of adoption, and how do you explain to patients and to women across the country that this is really a better measure? And again, I don't think it's an AI question. We can work more and make the algorithm even better, but I don't think that this is the current, you know, barrier; the barrier is really this other piece that for some reason is not really explored. It's like the anthropological piece. And coming back to your question about books, there is a book that I'm reading. It's called An American Sickness by Elisabeth Rosenthal. And I got this book from my clinical collaborator, Dr. Connie Lehman. And I thought, I know everything that I need to know about the American health system, but, you know, every page doesn't fail to surprise me. And I think there are a lot of interesting and really deep lessons for people like us from computer science who are coming into this field, to really understand how complex the system of incentives is, to understand how you really need to play to drive adoption. You just said it's complex, but if we're trying to simplify it, who do you think most likely would be successful if we push on this group of people? Is it the doctors? Is it the hospitals? Is it the governments or policymakers? Is it the individual patients, consumers? Who needs to be inspired to most likely lead to adoption? Or is there no simple answer?
There's no simple answer, but I think there are a lot of good people in the medical system who do want to make a change. And I think a lot of power will come from us as consumers, because we all are consumers or future consumers of healthcare services. And I think we can do so much more in explaining the potential, and not in hype terms, and not saying that we have now cured Alzheimer's, and I'm really sick of reading these kinds of articles which make these claims, but really to show with some examples what this implementation does and how it changes the care. Because I can't imagine, it doesn't matter what kind of politician it is, we all are susceptible to these diseases. There is no one who is free. And eventually, we all are humans and we're looking for a way to alleviate the suffering. And this is one possible way which we are currently underutilizing, which I think can help. So it sounds like the biggest problems are outside of AI in terms of the biggest impact at this point. But are there any open problems in the application of ML to oncology in general? So improving the detection or any other creative methods, whether it's on the detection, segmentation, or the vision perception side, or some other clever inference? Yeah, what in general in your view are the open problems in this space? Yeah, I just want to mention that beside detection, another area where I am kind of quite active, and I think it's really an increasingly important area in healthcare, is drug design. Absolutely. Because it's fine if you detect something early, but you still need to get drugs and new drugs for these conditions. And today, in all of drug design, ML is pretty much nonexistent. We don't have any drug that was developed by an ML model, or even not developed, but where we at least know that an ML model played some significant role. I think this area, with all the new ability to generate molecules with desired properties, to do in silico screening, is really a big open area. To be totally honest with you, when we are doing diagnostics and imaging, we are primarily taking the ideas that were developed for other areas and applying them with some adaptation. The area of drug design is a really technically interesting and exciting area. You need to work a lot with graphs and capture various 3D properties. There are lots and lots of opportunities to be technically creative. And I think there are a lot of open questions in this area. We're already getting a lot of successes even with kind of the first generation of these models, but there are many more new creative things that you can do. And what's very nice to see is that actually the more powerful, the more interesting models actually do do better. So there is a place to innovate in machine learning in this area. And some of these techniques are really unique to, let's say, to graph generation and other things. So... What, just to interrupt really quick, I'm sorry, graph generation or graphs, drug discovery in general, how do you discover a drug? Is this chemistry? Is this trying to predict different chemical reactions? Or is it some kind of... What do graphs even represent in this space? Oh, sorry, sorry. And what's a drug? Okay, so there are many different types of drugs, but let's say we're gonna talk about small molecules, because I think today the majority of drugs are small molecules. So a small molecule is a graph. The molecule is just a graph where a node is an atom and then you have the bonds as the edges. So it's really a graph representation.
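As a concrete illustration of the molecule-as-graph representation she just described, here is a minimal sketch. RDKit is one common open-source toolkit for this, and the molecule (aspirin) is an arbitrary example, not something taken from the conversation.

```python
# A minimal sketch: a small molecule as a graph of atoms (nodes) and bonds (edges).
from rdkit import Chem

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, as an arbitrary example

atoms = [(a.GetIdx(), a.GetSymbol()) for a in mol.GetAtoms()]             # nodes
bonds = [(b.GetBeginAtomIdx(), b.GetEndAtomIdx(), str(b.GetBondType()))
         for b in mol.GetBonds()]                                          # edges

print(atoms)  # e.g. [(0, 'C'), (1, 'C'), (2, 'O'), ...]
print(bonds)  # e.g. [(0, 1, 'SINGLE'), (1, 2, 'DOUBLE'), ...]
```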
If you look at it in 2D, correct, you can do it in 3D, but let's say, let's keep it simple and stick in 2D. So pretty much my understanding today, how it is done at scale in the companies, without machine learning, is you have high throughput screening. So you know that you are interested in getting a certain biological activity of the compound. So you scan a lot of compounds, like maybe hundreds of thousands, some really big number of compounds. You identify some compounds which have the right activity, and then at this point the chemists come and they're now trying to optimize this original hit for different properties: you want it to be maybe soluble, you want to decrease toxicity, you want to decrease the side effects. Are those, sorry again to interrupt, can that be done in simulation or just by looking at the molecules, or do you need to actually run reactions in real labs with lab coats and stuff? So when you do high throughput screening, you really do screening. It's in the lab. It's really lab screening. You screen the molecules, correct? I don't know what screening is. The screening just checks them for a certain property. Like in the physical space, in the physical world, like actually there's a machine probably that's actually running the reaction. Actually running the reactions, yeah. So there is a process where you can run it, and that's why it's called high throughput, because it has become cheaper and faster to do it on a very big number of molecules. You run the screening, you identify potential good starts, and then the chemists come in, who have done it many times, and they can try to look at it and say, how can you change the molecule to get the desired profile in terms of all the other properties? So maybe, how do I make it more bioactive and so on? And there the creativity of the chemists really is the one that determines the success of this design, because again, they have a lot of domain knowledge of what works, how do you decrease the CCD and so on, and that's what they do. So all the drugs that are currently FDA approved, or even drugs that are in clinical trials, are designed using these domain experts, who go through this combinatorial space of molecules or graphs or whatever and find the right one or adjust it to be the right one. It sounds like it has the same echoes as the breast density heuristic from '67. It's not necessarily that. It's really driven by deep understanding. It's not like they just observe it. I mean, they do deeply understand chemistry and they do understand how different groups change the properties. So there is a lot of science that gets into it and a lot of kind of simulation, how do you want it to behave? It's very, very complex. So they're quite effective at this design, obviously. Now effective, yeah, we have drugs. It depends on how you measure effective. If you measure it in terms of cost, it's prohibitive. If you measure it in terms of time, we have lots of diseases for which we don't have any drugs and we don't even know how to approach them, not to mention the drugs for neurodegenerative diseases that fail. So there are lots of trials that fail in later stages, which is really catastrophic from the financial perspective. So is it the effective, the most effective mechanism? Absolutely not, but this is the only one that currently works. And I was closely interacting with people in the pharmaceutical industry.
I was really fascinated on how sharp and what a deep understanding of the domain do they have. It's not observation driven. There is really a lot of science behind what they do. But if you ask me, can machine learning change it, I firmly believe yes, because even the most experienced chemists cannot hold in their memory and understanding everything that you can learn from millions of molecules and reactions. And the space of graphs is a totally new space. I mean, it's a really interesting space for machine learning to explore, graph generation. Yeah, so there are a lot of things that you can do here. So we do a lot of work. So the first tool that we started with was the tool that can predict properties of the molecules. So you can just give the molecule and the property. It can be by activity property, or it can be some other property. And you train the molecules and you can now take a new molecule and predict this property. Now, when people started working in this area, it is something very simple. They do kind of existing fingerprints, which is kind of handcrafted features of the molecule. When you break the graph to substructures and then you run it in a feed forward neural network. And what was interesting to see that clearly, this was not the most effective way to proceed. And you need to have much more complex models that can induce a representation, which can translate this graph into the embeddings and do these predictions. So this is one direction. Then another direction, which is kind of related is not only to stop by looking at the embedding itself, but actually modify it to produce better molecules. So you can think about it as machine translation that you can start with a molecule and then there is an improved version of molecule. And you can again, with encoder translate it into the hidden space and then learn how to modify it to improve the in some ways version of the molecules. So that's, it's kind of really exciting. We already have seen that the property prediction works pretty well. And now we are generating molecules and there is actually labs which are manufacturing this molecule. So we'll see where it will get us. Okay, that's really exciting. There's a lot of promise. Speaking of machine translation and embeddings, I think you have done a lot of really great research in NLP, natural language processing. Can you tell me your journey through NLP? What ideas, problems, approaches were you working on? Were you fascinated with, did you explore before this magic of deep learning reemerged and after? So when I started my work in NLP, it was in 97. This was very interesting time. It was exactly the time that I came to ACL. And at the time I could barely understand English, but it was exactly like the transition point because half of the papers were really rule based approaches where people took more kind of heavy linguistic approaches for small domains and try to build up from there. And then there were the first generation of papers which were corpus based papers. And they were very simple in our terms when you collect some statistics and do prediction based on them. And I found it really fascinating that one community can think so very differently about the problem. And I remember my first paper that I wrote, it didn't have a single formula. It didn't have evaluation. It just had examples of outputs. And this was a standard of the field at the time. 
In some ways, I mean, people maybe just started emphasizing the empirical evaluation, but for many applications like summarization, you just show some examples of outputs. And then increasingly you can see that how the statistical approaches dominated the field and we've seen increased performance across many basic tasks. The sad part of the story maybe that if you look again through this journey, we see that the role of linguistics in some ways greatly diminishes. And I think that you really need to look through the whole proceeding to find one or two papers which make some interesting linguistic references. It's really big. Today, yeah. Today, today. This was definitely one of the. Things like syntactic trees, just even basically against our conversation about human understanding of language, which I guess what linguistics would be structured, hierarchical representing language in a way that's human explainable, understandable is missing today. I don't know if it is, what is explainable and understandable. In the end, we perform functions and it's okay to have machine which performs a function. Like when you're thinking about your calculator, correct? Your calculator can do calculation very different from you would do the calculation, but it's very effective in it. And this is fine if we can achieve certain tasks with high accuracy, doesn't necessarily mean that it has to understand it the same way as we understand. In some ways, it's even naive to request because you have so many other sources of information that are absent when you are training your system. So it's okay. Is it delivered? And I would tell you one application that is really fascinating. In 97, when it came to ACL, there were some papers on machine translation. They were like primitive. Like people were trying really, really simple. And the feeling, my feeling was that, you know, to make real machine translation system, it's like to fly at the moon and build a house there and the garden and live happily ever after. I mean, it's like impossible. I never could imagine that within, you know, 10 years, we would already see the system working. And now, you know, nobody is even surprised to utilize the system on daily basis. So this was like a huge, huge progress, saying that people for very long time tried to solve using other mechanisms. And they were unable to solve it. That's why coming back to your question about biology, that, you know, in linguistics, people try to go this way and try to write the syntactic trees and try to abstract it and to find the right representation. And, you know, they couldn't get very far with this understanding while these models using, you know, other sources actually capable to make a lot of progress. Now, I'm not naive to think that we are in this paradise space in NLP. And sure as you know, that when we slightly change the domain and when we decrease the amount of training, it can do like really bizarre and funny thing. But I think it's just a matter of improving generalization capacity, which is just a technical question. Wow, so that's the question. How much of language understanding can be solved with deep neural networks? In your intuition, I mean, it's unknown, I suppose. But as we start to creep towards romantic notions of the spirit of the Turing test and conversation and dialogue and something that maybe to me or to us, so the humans feels like it needs real understanding. How much can that be achieved with these neural networks or statistical methods? 
So I guess I am very much driven by the outcomes. Can we achieve the performance which would be satisfactory for us for different tasks? Now, if you again look at machine translation systems, which are trained on large amounts of data, they really can do a remarkable job relative to where they were a few years ago. And if you project into the future, if it continues at the same speed of improvement, you know, this is great. Now, does it bother me that it's not doing the same translation as we are doing? Now, if you go to cognitive science, we still don't really understand what we are doing. I mean, there are a lot of theories and there's obviously a lot of progress in studying it, but our understanding of what exactly goes on in our brains when we process language is still not so crystal clear and precise that we can translate it into machines. What does bother me is that, you know, again, machines can be extremely brittle when you go out of your comfort zone, when there is a distributional shift between training and testing. And it has been years and years: every year when I teach the NLP class, I show them some examples of translation. From some newspaper in Hebrew or whatever, it is perfect. And then I have a recipe that Tommi Jaakkola sent me a while ago, and it was written in Finnish, for Karelian pies. And it's just a terrible translation. You cannot understand anything of what it says. It's not like some syntactic mistakes, it's just terrible. And year after year, I try it in Google Translate, and year after year, it does this terrible work, because I guess, you know, the recipes are not a big part of their training repertoire. So, but in terms of outcomes, that's a really clean, good way to look at it. I guess the question I was asking is, do you think, imagine a future, do you think the current approaches can pass the Turing test in the way, in the best possible formulation of the Turing test? Which is, would you wanna have a conversation with a neural network for an hour? Oh God, no, no, there are not that many people that I would want to talk to for an hour, but. There are some people in this world, alive or not, that you would like to talk to for an hour. Could a neural network achieve that outcome? So I think it would be really hard to create a successful training set which would enable it to have a conversation, a contextual conversation for an hour. Do you think it's a problem of data, perhaps? I think in some ways it's not a problem of data, it's a problem both of data and the problem of the way we're training our systems, their ability to truly, to generalize, to be very compositional. In some ways it's limited in its current capacity. At least we can translate well, we can find information well, we can extract information. So there are many capacities in which it's doing very well. And you can ask me, would you trust the machine to translate for you and use it as a source? I would say absolutely, especially if we're talking about newspaper data or other data which is in the realm of its own training set, I would say yes. But having conversations with the machine, it's not something that I would choose to do. But I would tell you something, talking about Turing tests and about all these kinds of ELIZA conversations, I remember visiting Tencent in China and they have this chatbot, and they claim there is a really humongous amount of the local population which talks to the chatbot for hours.
To me it was, I cannot believe it, but apparently it's documented that there are some people who enjoy this conversation. And it brought to me another MIT story, about ELIZA and Weizenbaum. I don't know if you're familiar with the story. So Weizenbaum was a professor at MIT, and he developed this ELIZA, which was just doing string matching, very trivial, like restating of what you said with very few rules, no syntax. Apparently there were secretaries at MIT that would sit for hours and converse with this trivial thing, and at the time there were no beautiful interfaces, so you actually needed to go through the pain of communicating. And Weizenbaum himself was so horrified by this phenomenon, that people can believe the machine enough, that you just need to give them the hint that the machine understands you and they will complete the rest, that he kind of stopped this research and went into kind of trying to understand what this artificial intelligence can do to our brains. So my point is, you know, how much, it's not how good the technology is, it's how ready we are to believe that it delivers the goods that we are trying to get. That's a really beautiful way to put it. I, by the way, I'm not horrified by that possibility, but inspired by it, because, I mean, human connection, whether it's through language or through love, it seems like it's very amenable to machine learning and the rest is just challenges of psychology. Like you said, the secretaries who enjoy spending hours. I would say I would describe most of our lives as enjoying spending hours with those we love for very silly reasons. All we're doing is keyword matching as well. So I'm not sure how much intelligence we exhibit to each other with the people we love that we're close with. So it's a very interesting point of what it means to pass the Turing test with language. I think you're right. In terms of conversation, I think machine translation has very clear performance and improvement, right? What it means to have a fulfilling conversation is very person dependent and context dependent and so on. That's, yeah, it's very well put. But in your view, what's a benchmark in natural language, a test that's just out of reach right now, but we might be able to, that's exciting. Is it in perfecting machine translation, or is there other, is it summarization? What's out there just out of reach? I think it goes across specific applications. It's more about the ability to learn from few examples for real, what we call few-shot learning, in all these cases, because the way we publish these papers today, we say, if we do it naively, we get 55, but now we add a few examples and we can move to 65. None of these methods actually are realistically doing anything useful. You cannot use them today. And the ability to be able to generalize and to move, or to be autonomous in finding the data that you need to learn, to be able to perform new tasks or a new language, this is an area where I think we really need to move forward, and we are not yet there. Are you at all excited, curious by the possibility of creating human level intelligence? Because you've been very practical in your discussion. So if we look at oncology, you're trying to use machine learning to help the world in terms of alleviating suffering. If you look at natural language processing, you're focused on the outcomes of improving practical things like machine translation. But human level intelligence is a thing that our civilization has dreamed about creating, super human level intelligence.
Do you think about this? Do you think it's at all within our reach? So as you said yourself, Elie, talking about how do you perceive our communications with each other, that we're matching keywords and certain behaviors and so on. So at the end, whenever one assesses, let's say relations with another person, you have separate kind of measurements and outcomes inside your head that determine what is the status of the relation. So one way, this is this classical level, what is the intelligence? Is it the fact that now we are gonna do the same way as human is doing, when we don't even understand what the human is doing? Or we now have an ability to deliver these outcomes, but not in one area, not in NLP, not just to translate or just to answer questions, but across many, many areas that we can achieve the functionalities that humans can achieve with their ability to learn and do other things. I think this is, and this we can actually measure how far we are. And that's what makes me excited that we, in my lifetime, at least so far what we've seen, it's like tremendous progress across these different functionalities. And I think it will be really exciting to see where we will be. And again, one way to think about it, there are machines which are improving their functionality. Another one is to think about us with our brains, which are imperfect, how they can be accelerated by this technology as it becomes stronger and stronger. Coming back to another book that I love, Flowers for Algernon. Have you read this book? Yes. So there is this point that the patient gets this miracle cure, which changes his brain. And all of a sudden they see life in a different way and can do certain things better, but certain things much worse. So you can imagine this kind of computer augmented cognition where it can bring you that now in the same way as the cars enable us to get to places where we've never been before, can we think differently? Can we think faster? And we already see a lot of it happening in how it impacts us, but I think we have a long way to go there. So that's sort of artificial intelligence and technology affecting our, augmenting our intelligence as humans. Yesterday, a company called Neuralink announced, they did this whole demonstration. I don't know if you saw it. It's, they demonstrated brain computer, brain machine interface, where there's like a sewing machine for the brain. Do you, you know, a lot of that is quite out there in terms of things that some people would say are impossible, but they're dreamers and want to engineer systems like that. Do you see, based on what you just said, a hope for that more direct interaction with the brain? I think there are different ways. One is a direct interaction with the brain. And again, there are lots of companies that work in this space and I think there will be a lot of developments. But I'm just thinking that many times we are not aware of our feelings, of motivation, what drives us. Like, let me give you a trivial example, our attention. There are a lot of studies that demonstrate that it takes a while to a person to understand that they are not attentive anymore. And we know that there are people who really have strong capacity to hold attention. There are other end of the spectrum people with ADD and other issues that they have problem to regulate their attention. Imagine to yourself that you have like a cognitive aid that just alerts you based on your gaze, that your attention is now not on what you are doing. 
And instead of writing a paper, you're now dreaming of what you're gonna do in the evening. So even these kinds of simple measurement things, how they can change us. And I see it even in simple ways with myself. I have my Zone app that I got at the MIT gym. It kind of records, you know, how much you ran, and you have some points and you can get some status, whatever. Like, I said, what is this ridiculous thing? Who would ever care about some status in some app? Guess what? So to maintain the status, you have to get a set number of points every month. And not only have I done it every single month for the last 18 months, it went to the point that I was injured. And when I could run again, in two days, I did like some humongous amount of running just to complete the points. It was like really not safe. It was like, I'm not gonna lose my status because I want to get there. So you can already see this direct measurement and the feedback. You know, we look at video games and see, you know, the addiction aspect of it, but you can imagine that the same idea can be expanded to many other areas of our life, when we really can get feedback. And imagine, in your case of relations, when we are doing keyword matching, imagine that the person who is generating the keywords gets direct feedback before the whole thing explodes: maybe at this point, we are going in the wrong direction. Maybe it will be really a behavior modifying moment. So yeah, it's relationship management too. So yeah, that's a fascinating whole area of psychology actually as well, of seeing how our behavior has changed now that basically all human relations have other nonhuman entities helping us out. So you teach a large, a huge machine learning course here at MIT. I can ask you a million questions, but you've seen a lot of students. What ideas do students struggle with the most as they first enter this world of machine learning? Actually, this year was the first time I started teaching a small machine learning class. And it came as a result of what I saw in my big machine learning class that Tommi Jaakkola and I built maybe six years ago. What we've seen is that as this area became more and more popular, more and more people at MIT want to take this class. And while we designed it for computer science majors, there were a lot of people who really were interested in learning it, but unfortunately their background was not enabling them to do well in the class. And many of them associated machine learning with the words struggle and failure, primarily the non majors. And that's why we actually started a new class, which we call machine learning from algorithms to modeling, which emphasizes more the modeling aspects of it, and it has majors and non majors. So we kind of try to extract the relevant parts and make it more accessible, because the fact that we're teaching 20 classifiers in a standard machine learning class, it's really a big question whether you really need it. But it was interesting to see, from this first generation of students, when they came back from their internships and from their jobs, what different and exciting things they can do. I would never have thought that you can even apply machine learning to, some of them are doing like matching, relations and other things, a whole variety. Everything is amenable to machine learning. That actually brings up an interesting point of computer science in general.
It almost seems, maybe I'm crazy, but it almost seems like everybody needs to learn how to program these days. If you're 20 years old, or if you're starting school, even if you're an English major, it seems like programming unlocks so much possibility in this world. So when you interacted with those non majors, are there skills that they were simply lacking at the time that you wish they had and had learned in high school and so on? Like how should education change in this computerized world that we live in? I think because they knew that there is a Python component in the class, their Python skills were okay, and the class isn't really heavy on programming. They primarily kind of add parts to the programs. I think it was more the mathematical barriers. The class, again, which was designed for the majors, was using notation like big O for complexity and others, and people who come from different backgrounds just don't have it in their lexicon. It's not necessarily a very challenging notion, but they were just not aware of it. So I think that kind of linear algebra and probability, the basics, calculus, multivariate calculus, are things that can help. What advice would you give to students interested in machine learning, interested, you've talked about detecting, curing cancer, drug design, if they want to get into that field, what should they do? Get into it and succeed as researchers and entrepreneurs. The first good piece of news is that right now there are lots of resources that are created at different levels, and you can find, online or in your school, classes which are more mathematical, more applied and so on. So you can find kind of a preacher who preaches in your own language, where you can enter the field, and you can make many different types of contributions depending on what your strengths are. And the second point, I think it's really important to find some area which you really care about and it can motivate your learning, and it can be for somebody curing cancer or doing self driving cars or whatever, but to find an area where there is data, where you believe there are strong patterns and we should be doing it and we're still not doing it, or you can do it better, and just start there and see where it can bring you. So you've been very successful in many directions in life, but you also mentioned Flowers for Algernon. And I think I've read or listened to you mention somewhere that researchers often get lost in the details of their work, this is per our original discussion with cancer and so on, and don't look at the bigger picture, bigger questions of meaning and so on. So let me ask you the impossible question of what's the meaning of this thing, of life, of your life, of research. Why do you think we, descendants of great apes, are here on this spinning ball? You know, I don't think that I have really a global answer. You know, maybe that's why I didn't go into humanities and I didn't take humanities classes in my undergrad. But the way I'm thinking about it, each one of us, inside of us, has their own set of, you know, things that we believe are important. And it just happens that we are busy with achieving various goals, busy listening to others and kind of trying to conform and to be part of the crowd, that we don't listen to that part. And, you know, we all should find some time to understand what our own individual missions are.
And we may have very different missions, and to make sure that while we are running 10,000 things, we are not, you know, missing out, and we're putting all the resources in to satisfy our own mission. And if I look over my time, when I was younger, most of these missions, you know, I was primarily driven by external stimulus, you know, to achieve this or to be that. And now a lot of what I do is driven by really thinking what is important for me to achieve, independently of the external recognition. And, you know, I don't mind being viewed in certain ways. The most important thing for me is to be true to myself, to what I think is right. How long did it take? How hard was it to find the you that you have to be true to? So it takes time. And even now, sometimes, you know, the vanity and the triviality can take over, you know. At MIT. Yeah, it can happen everywhere, you know, it's just that the vanity at MIT is different from the vanity in different places, but we all have our piece of vanity. But I think actually for me, many times the place to get back to it is, you know, when I'm alone and also when I read. And I think by selecting the right books, you can get the right questions and learn from what you read. So, but again, it's not perfect. Like vanity sometimes dominates. Well, that's a beautiful way to end. Thank you so much for talking today. Thank you. That was fun. That was fun.
Regina Barzilay: Deep Learning for Cancer Diagnosis and Treatment | Lex Fridman Podcast #40
The following is a conversation with Leonard Susskind. He's a professor of theoretical physics at Stanford University and founding director of the Stanford Institute for Theoretical Physics. He is widely regarded as one of the fathers of string theory and in general, as one of the greatest physicists of our time, both as a researcher and an educator. This is the Artificial Intelligence Podcast. Perhaps you noticed that the people I've been speaking with are not just computer scientists, but philosophers, mathematicians, writers, psychologists, physicists, and soon other disciplines. To me, AI is much bigger than deep learning, bigger than computing. It is our civilization's journey into understanding the human mind and creating echoes of it in the machine. If you enjoy the podcast, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. And now, here's my conversation with Leonard Susskind. You worked and were friends with Richard Feynman. How has he influenced you, changed you as a physicist and thinker? What I saw, I think what I saw was somebody who could do physics in this deeply intuitive way. His style was almost to close his eyes and visualize the phenomena that he was thinking about. And through visualization, outflank the mathematical, the highly mathematical and very, very sophisticated technical arguments that people would use. I think that was also natural to me, but I saw somebody who was actually successful at it, who could do physics in a way that I regarded as simpler, more direct, more intuitive. And while I don't think he changed my way of thinking, I do think he validated it. He made me look at it and say, yeah, that's something you can do and get away with. Practically didn't get away with it. So do you find yourself, whether you're thinking about quantum mechanics or black holes or string theory, using intuition as a first step or a step throughout, using visualization? Yeah, very much so, very much so. I tend not to think about the equations. I tend not to think about the symbols. I tend to try to visualize the phenomena themselves. And then when I get an insight that I think is valid, I might try to convert it to mathematics, but I'm not a natural mathematician. I'm good enough at it. I'm good enough at it, but I'm not a great mathematician. So for me, the way of thinking about physics is first intuitive, first visualization, scribble a few equations maybe, but then try to convert it to mathematics. My experience is that other people are better at converting it to mathematics than I am. And yet you've worked with very counterintuitive ideas. No, that's true. That's true. You can visualize something counterintuitive? How do you do that? By rewiring your brain in new ways. Yeah, quantum mechanics is not intuitive. Very little of modern physics is intuitive. Intuitive, what does intuitive mean? It means the ability to think about it with basic classical physics, the physics that we evolved with, throwing stones or splashing water, whatever it happens to be. Quantum physics, general relativity, quantum field theory are deeply unintuitive in that way. But after time and getting familiar with these things, you develop new intuitions. I always say you rewire. And it's to the point where me and many of my friends, I and many of my friends, can think more easily quantum mechanically than we can classically. We've gotten so used to it.
I mean, yes, our neural wiring in our brain is such that we understand rocks and stones and water and so on. We sort of evolved for that. Evolved for it. Do you think it's possible to create a wiring of neuron like state devices that more naturally understand quantum mechanics, understand the wave function, understand these weird things? Well, I'm not sure. I think many of us have evolved the ability to think quantum mechanically to some extent. But that doesn't mean you can think like an electron. That doesn't mean, another example. Forget for a minute quantum mechanics. Just visualizing four dimensional space or five dimensional space or six dimensional space, I think we're fundamentally wired to visualize three dimensions. I can't even visualize two dimensions or one dimension without thinking about it as embedded in three dimensions. If I wanna visualize a line, I think of the line as being a line in three dimensions. Or I think of the line as being a line on a piece of paper with a piece of paper being in three dimensions. I never seem to be able to, in some abstract and pure way, visualize in my head the one dimension, the two dimensions, the four dimensions, the five dimensions. And I don't think that's ever gonna happen. The reason is I think our neural wiring is just set up for that. On the other hand, we do learn ways to think about five, six, seven dimensions. We learn ways, we learn mathematical ways, and we learn ways to visualize them, but they're different. And so yeah, I think we do rewire ourselves. Whether we can ever completely rewire ourselves to be completely comfortable with these concepts, I doubt. So that it's completely natural. To where it's completely natural. So I'm sure there are, somewhat, you could argue, creatures that live in a two dimensional space. Yeah, maybe there are. And while it's romanticizing the notion, of course, we're all living, as far as we know, in three dimensional space. But how do those creatures imagine 3D space? Well, probably the way we imagine 4D, by using some mathematics and some equations and some tricks. Okay, so jumping back to Feynman just for a second. He had a little bit of an ego. Yes. Why, do you think ego is powerful or dangerous in science? I think both, both, both. I think you have to have both arrogance and humility. You have to have the arrogance to say, I can do this. Nature is difficult, nature is very, very hard. I'm smart enough, I can do it. I can win the battle with nature. On the other hand, I think you also have to have the humility to know that you're very likely to be wrong on any given occasion. Everything you're thinking could suddenly change. Young people can come along and say things you won't understand and you'll be lost and flabbergasted. So I think it's a combination of both. You better recognize that you're very limited, and you better be able to say to yourself, I'm not so limited that I can't win this battle with nature. It takes a special kind of person who can manage both of those, I would say. And I would say there's echoes of that in your own work, a little bit of ego, a little bit of outside of the box, humble thinking. I hope so. So was there a time where you felt, you looked at yourself and asked, am I completely wrong about this? Oh yeah, about the whole thing or about specific things? The whole thing. What do you mean? Wait, which whole thing? Me, me and my ability to do this thing. Oh, those kinds of doubts. First of all, did you have those kinds of doubts? No, I had different kinds of doubts.
I came from a very working class background and I was uncomfortable in academia for, oh, for a long time. But they weren't doubts about my ability; they were just the discomfort of being in an environment that my family hadn't participated in, that I knew nothing about as a young person. I didn't learn that there was such a thing called physics until I was almost 20 years old. Yeah, so I did have certain kinds of doubts, but not about my ability. I don't think I was too worried about whether I would succeed or not. I never felt this insecurity, am I ever gonna get a job? It had never occurred to me that I wouldn't. Maybe you could speak a little bit to this sense of what is academia. Because I too feel a bit uncomfortable in it. There's something I can't quite put into words, what you have that's, if we call it music, you play a different kind of music than a lot of academia. How have you joined this orchestra? How do you think about it? I don't know that I thought about it as much as I just felt it. Thinking is one thing, feeling is another thing. I felt like an outsider until a certain age when I suddenly found myself the ultimate insider in academic physics. And that was a sharp transition, and I wasn't a young man. I was probably 50 years old. So you were never quite, it was a phase transition, you were never quite in the middle. Yeah, that's right, I wasn't. I always felt a little bit of an outsider. In the beginning, a lot of an outsider. My way of thinking was different, my approach to mathematics was different, but also my social background that I came from was different. Now these days, half the young people I meet, their parents are professors. That was not my case. But then all of a sudden, at some point, I found myself at very much the center of, maybe not the only one at the center, but certainly one of the people in the center of a certain kind of physics. And all that went away, it went away in a flash. So maybe a little bit with Feynman, but in general, how do you develop ideas? Do you work through ideas alone? Do you brainstorm with others? Oh, both, both, very definitely both. When I was younger, I spent more time by myself. Now, because I'm at Stanford, because I have a lot of ex students and people who are interested in the same thing I am, I spend a good deal of time, almost on a daily basis, interacting, brainstorming, as you said. It's a very important part. I spend less time, probably, completely self focused with a piece of paper, just sitting there staring at it. What are your hopes for quantum computers? So machines that are based on, that have some elements of, that leverage quantum mechanical ideas. Yeah, it's not just leveraging quantum mechanical ideas. You can simulate quantum systems on a classical computer. Simulating them means solving the Schrodinger equation for them, solving the equations of quantum mechanics on a computer, on a classical computer. But the classical computer is not doing, is not a quantum mechanical system itself. Of course it is. Everything's made of quantum mechanics, but it's not functioning. It's not functioning as a quantum system. It's just solving equations.
Physically, it's not doing the things that the quantum system would do. The quantum computer is really a quantum mechanical system which is actually carrying out the quantum operations. You can measure it at the end. It intrinsically satisfies the uncertainty principle. It is limited in the same way that quantum systems are limited by uncertainty and so forth. And it really is a quantum system. That means that what you're doing when you program something for a quantum system is you're actually building a real version of the system. The limits of a classical computer, classical computers are enormously limited when it comes to the quantum systems. They're enormously limited because you've probably heard this before, but in order to store the amount of information that's in a quantum state of 400 spins, that's not very many, 400 I can put in my pocket, I can put 400 pennies in my pocket. To be able to simulate the quantum state of 400 elementary quantum systems, qubits we call them, to do that would take more information than can possibly be stored in the entire universe if it were packed so tightly that you couldn't pack any more in. 400 qubits. On the other hand, if your quantum computer is composed of 400 qubits, it can do everything 400 qubits can do. What kind of space, if you just intuitively think about the space of algorithms that that unlocks for us, so there's a whole complexity theory around classical computers, measuring the running time of things, and P, so on, what kind of algorithms just intuitively do you think it unlocks for us? Okay, so we know that there are a handful of algorithms that can seriously beat classical computers and which can have exponentially more power. This is a mathematical statement. Nobody's exhibited this in the laboratory. It's a mathematical statement. We know that's true, but it also seems more and more that the number of such things is very limited. Only very, very special problems exhibit that much advantage for a quantum computer, of standard problems. To my mind, as far as I can tell, the great power of quantum computers will actually be to simulate quantum systems. If you're interested in a certain quantum system and it's too hard to simulate classically, you simply build a version of the same system. You build a version of it. You build a model of it that's actually functioning as the system. You run it, and then you do the same thing you would do to the quantum system. You make measurements on it, quantum measurements on it. The advantage is you can run it much slower. You could say, why bother? Why not just use the real system? Why not just do experiments on the real system? Well, real systems are kind of limited. You can't change them. You can't manipulate them. You can't slow them down so that you can poke into them. You can't modify them in arbitrary kinds of ways to see what would happen if I change the system a little bit. I think that quantum computers will be extremely valuable in understanding quantum systems. At the lowest level of the fundamental laws. They're actually satisfying the same laws as the systems that they're simulating. Okay, so on the one hand, you have things like factoring. Factoring is the great thing of quantum computers. Factoring large numbers, that doesn't seem that much to do with quantum mechanics. It seems to be almost a fluke that a quantum computer can solve the factoring problem in a short time. 
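(A brief editorial aside, not part of the conversation: the 400-qubit claim above can be checked with a few lines of arithmetic. A pure state of n qubits needs 2^n complex amplitudes to write down classically; the figure of roughly 10^80 atoms in the observable universe is a standard rough estimate, assumed here rather than taken from the transcript.)

```python
# Minimal sketch: why a 400-qubit state cannot be stored classically.
# A pure state of n qubits is described by 2**n complex amplitudes.
n = 400
amplitudes = 2 ** n                       # about 2.6e120 numbers
atoms_in_observable_universe = 10 ** 80   # common rough estimate (assumption)

print(f"amplitudes needed: {amplitudes:.3e}")
print(f"amplitudes per atom in the observable universe: "
      f"{amplitudes / atoms_in_observable_universe:.3e}")
```

Even at one amplitude per atom, the observable universe falls short by about forty orders of magnitude, which is the point being made here.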
And those problems seem to be extremely special, rare, and it's not clear to me that there's gonna be a lot of them. On the other hand, there are a lot of quantum systems. Chemistry, there's solid state physics, there's material science, there's quantum gravity, there's all kinds of quantum field theory. And some of these are actually turning out to be applied sciences, as well as very fundamental sciences. So we probably will run out of the ability to solve equations for these things. Solve equations by the standard methods of pencil and paper. Solve the equations by the method of classical computers. And so what we'll do is we'll build versions of these systems, run them, and run them under controlled circumstances where we can change them, manipulate them, make measurements on them, and find out all the things we wanna know. So in finding out the things we wanna know about very small systems, is there something that we can also find out about the macro level, about something about the function, forgive me, of our brain, biological systems, the stuff that's about one meter in size versus much, much smaller? Well, what all the excitement is about among the people that I interact with is understanding black holes. Black holes. Black holes are big things. They are many, many degrees of freedom. There is another kind of quantum system that is big. It's a large quantum computer. And one of the things we've learned is that the physics of large quantum computers is in some ways similar to the physics of large quantum black holes. And we're using that relationship. Now you asked, you didn't ask about quantum computers or systems, you didn't ask about black holes, you asked about brains. Yeah, about stuff that's in the middle of the two. It's different. So black holes are, there's something fundamental about black holes that feels to be very different than a brain. Yes. And they also function in a very quantum mechanical way. Right. Okay. It is, first of all, unclear to me, but of course it's unclear to me. I'm not a neuroscientist. I have, I don't even have very many friends who are neuroscientists. I would like to have more friends who are neuroscientists. I just don't run into them very often. Among the few neuroscientists I've ever talked about about this, they are pretty convinced that the brain functions classically, that it is not intrinsically a quantum mechanical system or it doesn't make use of the special features, entanglement, coherence, superposition. Are they right? I don't know. I sort of hope they're wrong just because I like the romantic idea that the brain is a quantum system. But I think probably not. The other thing, big systems can be composed of lots of little systems. Materials, the materials that we work with and so forth are, can be large systems, a large piece of material, but they're made out of quantum systems. Now, one of the things that's been happening over the last good number of years is we're discovering materials and quantum systems, which function much more quantum mechanically than we imagined. Topological insulators, this kind of thing, that kind of thing. Those are macroscopic systems, but they're just superconductors. Superconductors have a lot of quantum mechanics in them. You can have a large chunk of superconductor. So it's a big piece of material. On the other hand, it's functioning and its properties depend very, very strongly on quantum mechanics. And to analyze them, you need the tools of quantum mechanics. 
If we can go on to black holes and looking at the universe as a information processing system, as a computer, as a giant computer. It's a giant computer. What's the power of thinking of the universe as an information processing system? Or what is perhaps its use besides the mathematical use of discussing black holes and your famous debates and ideas around that to human beings, or life in general as information processing systems? Well, all systems are information processing systems. You poke them, they change a little bit, they evolve. All systems are information processing systems. So there's no extra magic to us humans? It certainly feels, consciousness intelligence feels like magic. It sure does. Where does it emerge from? If we look at information processing, what are the emergent phenomena that come from viewing the world as an information processing system? Here is what I think. My thoughts are not worth much in this. If you ask me about physics, my thoughts may be worth something. If you ask me about this, I'm not sure my thoughts are worth anything. But as I said earlier, I think when we do introspection, when we imagine doing introspection and try to figure out what it is when we do when we're thinking, I think we get it wrong. I'm pretty sure we get it wrong. Everything I've heard about the way the brain functions is so counterintuitive. For example, you have neurons which detect vertical lines. You have different neurons which detect lines at 45 degrees. You have different neurons. I never imagined that there were whole circuits which were devoted to vertical lines in my brain. Doesn't seem to be the way my brain works. My brain seems to work if I put my finger up vertically or if I put it horizontally or if I put it this way or that way. It seems to me it's the same circuits. It's not the way it works. The way the brain is compartmentalized seems to be very, very different than what I would have imagined if I were just doing psychological introspection about how things work. My conclusion is that we won't get it right that way, that how will we get it right? I think maybe computer scientists will get it right eventually. I don't think there are any ways near it. I don't even think they're thinking about it, but eventually we will build machines perhaps which are complicated enough and partly engineered, partly evolved, maybe evolved by machine learning and so forth. This machine learning is very interesting. By machine learning, we will evolve systems and we may start to discover mechanisms that have implications for how we think and for what this consciousness thing is all about and we'll be able to do experiments on them and perhaps answer questions that we can't possibly answer by introspection. So that's a really interesting point. In many cases, if you look at even a string theory, when you first think about a system, it seems really complicated, like the human brain, and through some basic reasoning and trying to discover fundamental low level behavior of the system, you find out that it's actually much simpler. Do you, one, have you, is that generally the process and two, do you have that also hope for biological systems as well, for all the kinds of stuff we're studying at the human level? Of course, physics always begins by trying to find the simplest version of something and analyze it. Yeah, I mean, there are lots of examples where physics has taken very complicated systems, analyzed them and found simplicity in them for sure. 
I said superconductors before, it's an obvious one. A superconductor seems like a monstrously complicated thing with all sorts of crazy electrical properties, magnetic properties and so forth. And when it finally is boiled down to its simplest elements, it's a very simple quantum mechanical phenomenon called spontaneous symmetry breaking, and which we, in other contexts, we learned about and we're very familiar with. So yeah, I mean, yes, we do take complicated things, make them simple, but what we don't want to do is take things which are intrinsically complicated and fool ourselves into thinking that we can make them simple. We don't want to make, I don't know who said this, but we don't want to make them simpler than they really are, okay? Is the brain a thing which ultimately functions by some simple rules or is it just complicated? In terms of artificial intelligence, nobody really knows what are the limits of our current approaches, you mentioned machine learning. How do we create human level intelligence? It seems that there's a lot of very smart physicists who perhaps oversimplify the nature of intelligence and think of it as information processing, and therefore there doesn't seem to be any theoretical reason why we can't artificially create human level or superhuman level intelligence. In fact, the reasoning goes, if you create human level intelligence, the same approach you just used to create human level intelligence should allow you to create superhuman level intelligence very easily, exponentially. So what do you think that way of thinking that comes from physicists is all about? I wish I knew, but there's a particular reason why I wish I knew. I have a second job. I consult for Google, not for Google, for Google X. I am the senior academic advisor to a group of machine learning physicists. Now that sounds crazy because I know nothing about the subject. I know very little about the subject. On the other hand, I'm good at giving advice, so I give them advice on things. Anyway, I see these young physicists who are approaching the machine learning problem. There is a real machine learning problem. Namely, why does it work as well as it does? Nobody really seems to understand why it is capable of doing the kind of generalizations that it does and so forth. And there are three groups of people who have thought about this. There are the engineers. The engineers are incredibly smart, but they tend not to think as hard about why the thing is working as much as they do how to use it. Obviously, they provided a lot of data, and it is they who demonstrated that machine learning can work much better than you have any right to expect. The machine learning systems are systems. The system's not too different than the kind of systems that physicists study. There's not all that much difference between quantum, in the structure of mathematics, physically, yes, but in the structure of mathematics, between a tensor network designed to describe a quantum system on the one hand and the kind of networks that are used in machine learning. So there are more and more, I think, young physicists are being drawn to this field of machine learning, some very, very good ones. I work with a number of very good ones, not on machine learning, but on having lunch. On having lunch? Right. Yeah. And I can tell you they are super smart. They don't seem to be so arrogant about their physics backgrounds that they think they can do things that nobody else can do. 
But the physics way of thinking, I think, will add great value to, or will bring value to the machine learning. I believe it will. And I think it already has. At what time scale do you think predicting the future becomes useless in your long experience and being surprised at new discoveries? Well, sometimes a day, sometimes 20 years. There are things which I thought we were very far from understanding, which practically in a snap of the fingers or a blink of the eye suddenly became understood, completely surprising to me. There are other things which I looked at and I said, we're not gonna understand these things for 500 years, in particular quantum gravity. The scale for that was 20 years, 25 years. And we understand a lot and we don't understand it completely now by any means, but I thought it was 500 years to make any progress. It turned out to be very, very far from that. It turned out to be more like 20 or 25 years from the time when I thought it was 500 years. So if we may, can we jump around quantum gravity, some basic ideas in physics? What is the dream of string theory mathematically? What is the hope? Where does it come from? What problem is it trying to solve? I don't think the dream of string theory is any different than the dream of fundamental theoretical physics altogether. Understanding a unified theory of everything. I don't like thinking of string theory as a subject unto itself with people called string theorists who are the practitioners of this thing called string theory. I much prefer to think of them as theoretical physicists trying to answer deep fundamental questions about nature, in particular gravity, in particular gravity and its connection with quantum mechanics, and who at the present time find string theory a useful tool rather than saying there's a subject called string theorists. I don't like being referred to as a string theorist. Yes, but as a tool, is it useful to think about our nature in multiple dimensions, the strings vibrating? I believe it is useful. I'll tell you what the main use of it has been up till now. Well, it has had a number of main uses. Originally, string theory was invented, and I know that I was there. I was right at the spot where it was being invented literally, and it was being invented to understand hadrons. Hadrons are subnuclear particles, protons, neutrons, mesons, and at that time, the late 60s, early 70s, it was clear from experiment that these particles called hadrons could vibrate, could rotate, could do all the things that a little closed string can do, and it was and is a valid and correct theory of these hadrons. It's been experimentally tested, and that is a done deal. It had a second life as a theory of gravity, the same basic mathematics, except on a very, very much smaller distance scale. The objects of gravitation are 19 orders of magnitude or orders of magnitude smaller than a proton, but the same mathematics turned up. The same mathematics turned up. What has been its value? Its value is that it's mathematically rigorous in many ways and enabled us to find mathematical structures which have both quantum mechanics and gravity. With rigor, we can test out ideas. We can test out ideas. We can't test them in the laboratory. They're 19 orders of magnitude too small are things that we're interested in, but we can test them out mathematically and analyze their internal consistency. By now, 40 years ago, 35 years ago, and so forth, people very, very much questioned the consistency between gravity and quantum mechanics. 
Stephen Hawking was very famous for it, rightly so. Now, nobody questions that consistency anymore. They don't because we have mathematically precise string theories which contain both gravity and quantum mechanics in a consistent way. So it's provided that certainty that quantum mechanics and gravity can coexist. That's not a small thing. It's a very big thing. It's a huge thing. Einstein would be proud. Einstein, he might be appalled. I don't know. He didn't like it. He didn't like it. He might not be appalled, I don't know. He didn't like quantum mechanics very much, but he would certainly be struck by it. I think that may be, at this time, its biggest contribution to physics, illustrating almost definitively that quantum mechanics and gravity are very closely related and not inconsistent with each other. Is there a possibility of something deeper, more profound that still is consistent with string theory but is deeper, that is to be found? Well, you could ask the same thing about quantum mechanics. Is there something? Exactly. Yeah, yeah. I think string theory is just an example of a quantum mechanical system that contains both gravitation and quantum mechanics. So is there something underlying quantum mechanics? Perhaps something deterministic. Perhaps something deterministic. My friend, Gerard 't Hooft, whose name you may know, he's a very famous physicist. Dutch, not as famous as he should be, but... Hard to spell his name. It's hard to say his name. No, it's easy to spell his name: apostrophe, he's the only person I know whose name begins with an apostrophe. And he's one of my heroes in physics. He's a little younger than me, but he's nevertheless one of my heroes. 't Hooft believes that there is some substructure to the world which is classical in character, deterministic in character, which somehow by some mechanism that he has a hard time spelling out emerges as quantum mechanics. I don't. The wave function is somehow emergent. The wave function, not just the wave function, but the whole thing that goes with quantum mechanics, uncertainty, entanglement, all these things, are emergent. So you think quantum mechanics is the bottom of the well? Is the... Here I think is where you have to be humble. Here's where humility comes. I don't think anybody should say anything is the bottom of the well at this time. I think we can reasonably say, I can reasonably say when I look into the well, I can't see past quantum mechanics. I can't see past quantum mechanics. I don't see any reason for there to be anything beyond quantum mechanics. I think 't Hooft has asked very interesting and deep questions. I don't like his answers. Well, again, let me ask, if we look at the deepest nature of reality, whether it's deterministic or, when observed, probabilistic, what does that mean for our human level ideas of free will? Is there any connection whatsoever between this perception, perhaps illusion, of free will that we have and the fundamental nature of reality? The only thing I can say is I am puzzled by that as much as you are. The illusion of it. The illusion of consciousness, the illusion of free will, the illusion of self. Does that connect to? How can a physical system do that? And I am as puzzled as anybody. There's echoes of it in the observer effect. So do you understand what it means to be an observer? I understand it at a technical level. An observer is a system with enough degrees of freedom that it can record information and which can become entangled with the thing that it's measuring.
Entanglement is the key. When a system which we call an apparatus or an observer, same thing, interacts with the system that it's observing, it doesn't just look at it. It becomes physically entangled with it. And it's that entanglement which we call an observation or a measurement. Now, does that satisfy me personally as an observer? Yes and no. I find it very satisfying that we have a mathematical representation of what it means to observe a system. You are observing stuff right now, the conscious level. Do you think there's echoes of that kind of entanglement in our macro scale? Yes, absolutely, for sure. We're entangled with, quantum mechanically entangled with everything in this room. If we weren't, then it would just, well, we wouldn't be observing it. But on the other hand, you can ask, do I really, am I really comfortable with it? And I'm uncomfortable with it in the same way that I can never get comfortable with five dimensions. My brain isn't wired for it. Are you comfortable with four dimensions? A little bit more, because I can always imagine the fourth dimension is time. So the arrow of time, are you comfortable with that arrow? Do you think time is an emergent phenomena or is it fundamental to nature? That is a big question in physics right now. All the physics that we do, or at least that the people that I am comfortable with talking to, my friends, my friends. No, we all ask the same question that you just asked. Space, we have a pretty good idea is emergent and it emerges out of entanglement and other things. Time always seems to be built into our equations as just what Newton pretty much would have thought. Newton, modified a little bit by Einstein, would have called time. And mostly in our equations, it is not emergent. Time in physics is completely symmetric, forward and backward. Right, it's symmetric. So you don't really need to think about the arrow of time for most physical phenomena. For most microscopic phenomena, no. It's only when the phenomena involve systems which are big enough for thermodynamics to become important, for entropy to become important. For a small system, entropy is not a good concept. Entropy is something which emerges out of large numbers. It's a probabilistic idea or it's a statistical idea and it's a thermodynamic idea. Thermodynamics requires lots and lots and lots of little substructures, okay? So it's not until you emerge at the thermodynamic level that there's an arrow of time. Do we understand it? Yeah, I think we understand better than most people think they have. Most people say they think we understand it. Yeah, I think we understand it. It's a statistical idea. You mean like second law of thermodynamics, entropy and so on? Yeah, take a pack of cards and you fling it in the air and you look what happens to it, it gets random. We understand it. It doesn't go from random to simple. It goes from simple to random. But do you think it ever breaks down? What I think you can do is in a laboratory setting, you can take a system which is somewhere intermediate between being small and being large and make it go backward. A thing which looks like it only wants to go forward because of statistical mechanical reasons, because of the second law, you can very, very carefully manipulate it to make it run backward. I don't think you can take an egg, a Humpty Dumpty who fell on the floor and reverse that. 
But you can, in a very controlled situation, you can take systems which appear to be evolving statistically toward randomness, stop them, reverse them, and make them go back. What's the intuition behind that? How do we do that? How do we reverse it? You're saying a closed system. Yeah, pretty much closed system, yes. Did you just say that time travel is possible? No, I didn't say time travel is possible. I said you can make a system go backward. In time. You can make it go back. You can make it reverse its steps. You can make it reverse its trajectory. Yeah. How do we do it? What's the intuition there? Does it have, is it just a fluke thing that we can do at a small scale in the lab that doesn't have? Well, what I'm saying is you can do it a little bit better than a small scale. You can certainly do it with a simple, small system. Small systems don't have any sense of the arrow of time. Atoms, atoms are no sense of an arrow of time. They're completely reversible. It's only when you have, you know, the second law of thermodynamics is the law of large numbers. So you can break the law because it's not a deterministic law. You can break it, you can break it, but it's hard. It requires great care. The bigger the system is, the more care, the more, the harder it is. You have to overcome what's called chaos. And that's hard. And it requires more and more precision. For 10 particles, you might be able to do it with some effort. For a hundred particles, it's really hard. For a thousand or a million particles, forget it, but not for any fundamental reason, just because it's technologically too hard to make the system go backward. So, no time travel for engineering reasons. Oh, no, no, no, no. What is time travel? Time travel to the future? That's easy. You just close your eyes, go to sleep, and you wake up in the future. Yeah, yeah, a good nap gets you there, yeah. A good nap gets you there, right. But reversing the second law of thermodynamics, going backward in time for anything that's human scale is a very difficult engineering effort. I wouldn't call that time travel because it gets too mixed up with what science fiction calls time travel. This is just the ability to reverse a system. You take the system and you reverse the direction of motion of every molecule in it. That, you can do it with one molecule. If you find a particle moving in a certain direction, let's not say a particle, a baseball, you stop it dead and then you simply reverse its motion. In principle, that's not too hard. And it'll go back along its trajectory in the backward direction. Just running the program backwards. Running the program backward. Yeah. Okay. If you have two baseballs colliding, well, you can do it, but you have to be very, very careful to get it just right. If you have 10 baseballs, really, really, better yet, 10 billiard balls on an idealized, frictionless billiard table. Okay, so you start the balls all on a triangle, right? And you whack them. Depending on the game you're playing, you either whack them or you're really careful, but you whack them. And they go flying off in all possible directions. Okay, try to reverse that. Try to reverse that. Imagine trying to take every billiard ball, stopping it dead at some point, and reversing its motion so that it was going in the opposite direction. If you did that with tremendous care, it would reassemble itself back into the triangle. Okay, that is a fact. And you can probably do it with two billiard balls, maybe with three billiard balls if you're really lucky. 
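(Another editorial aside, not from the conversation itself: a toy version of the reversal described above is easy to simulate. A handful of non-interacting "billiard balls" bounce elastically in a unit box; run them forward, flip every velocity, run the same number of steps, and the starting configuration reassembles up to floating-point error. All names and parameters below are illustrative assumptions.)

```python
import numpy as np

# Toy reversal sketch: non-interacting particles bouncing elastically
# inside the unit square. Flipping every velocity and replaying the same
# number of steps retraces the trajectory back to the start.
rng = np.random.default_rng(42)
n, steps, dt = 5, 2000, 1e-3
pos = rng.uniform(0.2, 0.8, size=(n, 2))
vel = rng.normal(size=(n, 2))
start = pos.copy()

def advance(pos, vel, steps):
    for _ in range(steps):
        pos = pos + vel * dt
        crossed = (pos < 0.0) | (pos > 1.0)
        vel = np.where(crossed, -vel, vel)      # reflect velocity at the walls
        pos = np.where(pos < 0.0, -pos, pos)    # fold position back into the box
        pos = np.where(pos > 1.0, 2.0 - pos, pos)
    return pos, vel

pos, vel = advance(pos, vel, steps)             # run forward
pos, vel = advance(pos, -vel, steps)            # reverse every velocity, replay
print("max deviation from start:", np.abs(pos - start).max())
```

With a handful of non-interacting particles this works essentially perfectly; the point of the passage is that for interacting, chaotic systems the precision needed to pull off the same trick grows out of control.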
But what happens is as the system gets more and more complicated, you have to be more and more precise not to make the tiniest error, because the tiniest errors will get magnified and you'll simply not be able to do the reversal. So yeah, but I wouldn't call that time travel. Yeah, that's something else. But if you think of it, it just made me think, if you think of the unrolling of state that's happening as a program, if we look at the world, silly idea of looking at the world as a simulation, as a computer. But it's not a computer, it's just a single program. A question arises that might be useful. How hard is it to have a computer that runs the universe? Okay, so there are mathematical universes that we know about. One of them is called anti de Sitter space, where we, and it's quantum mechanics, I think we could simulate it in a computer, in a quantum computer. Classical computer, all you can do is solve its equations. You can't make it work like the real system. If we could build a quantum computer, a big enough one, a robust enough one, we could probably simulate a universe, a small version of an anti de Sitter universe. Anti de Sitter is a kind of cosmology. So I think we know how to do that. The trouble is the universe that we live in is not the anti de Sitter geometry, it's the de Sitter geometry. And we don't really understand its quantum mechanics at all. So at the present time, I would say we wouldn't have the vaguest idea how to simulate a universe similar to our own. Now, we can ask, could we build in the laboratory a small version, a quantum mechanical version, a collection of quantum computers entangled and coupled together, which would reproduce the phenomena that go on in the universe, even on a small scale. Yes, if it were anti de Sitter space; no, if it's de Sitter space. Can you slightly describe de Sitter space and anti de Sitter space? Yeah. What are the geometric properties of each? They differ by the sign of a single constant called the cosmological constant. One of them is negatively curved, the other is positively curved. Anti de Sitter space, which is the negatively curved one, you can think of as an isolated system in a box with reflecting walls. You could think of it as a quantum mechanical system isolated in an isolated environment. De Sitter space is the one we really live in. And that's the one that's exponentially expanding, exponential expansion, dark energy, whatever we wanna call it. And we don't understand that mathematically. Do we understand? Not everybody would agree with me, but I don't understand. They would agree with me, they definitely would agree with me that I don't understand it. What about, is there an understanding of the birth, the origin, the big bang? So there's one problem with the other. No, no, there's theories. There are theories. My favorite is the one called eternal inflation. The infinity can be on both sides, on one of the sides, or on none of the sides. So what's eternal inflation? Okay. Infinity on both sides. Oh boy. Yeah, yeah, that's. Why is that your favorite? Because it's the most just mind blowing? No. Because we want a beginning. No, why do we want a beginning? In practice there was a beginning, of course. In practice there was a beginning. But could it have been a random fluctuation in an otherwise infinite time? Maybe. In any case, the eternal inflation theory, I think if correctly understood, would be infinite in both directions. How do you think about infinity? Oh God.
So, okay, of course you can think about it mathematically. I just finished this discussion with my friend Sergei Brin. How do you think about infinity? I say, well, Sergei Brin is infinitely rich. How do you test that hypothesis? Okay. Such a good line. Right. Yeah, so there's really no way to visualize some of these things. Yeah, no, this is a very good question. Does physics have any, does infinity have any place in physics? Right. Right, and all I can say is very good question. So what do you think of the recent first image of a black hole visualized from the Event Horizon Telescope? It's an incredible triumph of science. In itself, the fact that there are black holes which collide is not a surprise. And they seem to work exactly the way they're supposed to work. Will we learn a great deal from it? I don't know, we might. But the kind of things we'll learn won't really be about black holes. Why there are black holes in nature of that particular mass scale and why they're so common may tell us something about the structure, the evolution of structure in the universe. But I don't think it's gonna tell us anything new about black holes. But it's a triumph in the sense that you go back 100 years and it was a continuous development, general relativity, the discovery of black holes, LIGO, the incredible technology that went into LIGO. It is something that I never would have believed was gonna happen 30, 40 years ago. And I think it's a magnificent structure, magnificent thing, this evolution of general relativity, LIGO, high precision, the ability to measure things on a scale of 10 to the minus 21. So, astonishing. So you're just in awe that this path took us to this picture. Is it different? You've thought a lot about black holes. How did you visualize them in your mind? And is the picture different than you've visualized it? No, it's simply confirmed. It's a magnificent triumph. To have confirmed by direct observation that Einstein's theory of gravity at the level of black hole collisions actually works is awesome, it is really awesome. I know some of the people who are involved in that. They're just ordinary people. And the idea that they could carry this out, I just, I'm shocked. Yeah, just these little homo sapiens? Yeah, just these little monkeys. Yeah, got together and took a picture of... Slightly advanced lemurs, I think. What kind of questions can science not currently answer but you hope might be able to soon? Well, you've already addressed them. What is consciousness, for example? You think that's within the reach of science? I think it's somewhat within the reach of science, but I think that now I think it's in the hands of the computer scientists and the neuroscientists. Not a physicist? With their help. Perhaps at some point, but I think physicists will try to simplify it down to something where they can use their methods, and maybe they're not appropriate. Maybe we simply need to do more machine learning on bigger scales, evolve machines. Machines that not only learn but evolve their own architecture. As a process of learning, evolve their architecture. Not under our control, only partially under our control, but under the control of machine learning. I'll tell you another thing that I find awesome. You know this Google thing that they taught the computers how to play chess? Yeah, yeah. Okay, they taught the computers how to play chess, not by teaching them how to play chess, but just having them play against each other. Against each other, self play.
Against each other, this is a form of evolution. These machines evolved, they evolved in intelligence. They evolved in intelligence without anybody telling them how to do it. They were not engineered, they just played against each other and got better and better and better. That makes me think that machines can evolve intelligence. What exact kind of intelligence, I don't know. But in understanding that better and better, maybe we'll get better clues as to what goes on in our own intelligence. What life in intelligence is. Last question, what kind of questions can science not currently answer and may never be able to answer? Yeah. Yeah. Is there an intelligence out there that's underlies the whole thing? You can call them with a G word if you want. I can say, are we a computer simulation with a purpose? Is there an agent, an intelligent agent that underlies or is responsible for the whole thing? Does that intelligent agent satisfy the laws of physics? Does it satisfy the laws of quantum mechanics? Is it made of atoms and molecules? Yeah, there's a lot of questions. And I don't see, it seems to me a real question. It's an answerable question. Well, I don't know if it's answerable. The questions have to be answerable to be real. Some philosophers would say that a question is not a question unless it's answerable. This question doesn't seem to me answerable by any known method, but it seems to me real. There's no better place to end. Leonard, thank you so much for talking today. Okay, good.
Leonard Susskind: Quantum Mechanics, String Theory and Black Holes | Lex Fridman Podcast #41
The following is a conversation with Peter Norvig. He's the Director of Research at Google and the coauthor with Stuart Russell of the book Artificial Intelligence, A Modern Approach, that educated and inspired a whole generation of researchers, including myself, to get into the field of artificial intelligence. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give five stars on iTunes, support on Patreon, or simply connect with me on Twitter. I'm Lex Fridman, spelled F R I D M A N. And now, here's my conversation with Peter Norvig. Most researchers in the AI community, including myself, own all three editions, red, green, and blue, of the Artificial Intelligence, A Modern Approach. It's a field defining textbook, as many people are aware, that you wrote with Stuart Russell. How has the book changed and how have you changed in relation to it from the first edition to the second to the third and now fourth edition as you work on it? Yeah, so it's been a lot of years, a lot of changes. One of the things changing from the first to maybe the second or third was just the rise of computing power, right? So I think in the first edition, we said, here's propositional logic, but that only goes so far because pretty soon you have millions of short little propositional expressions and they can't possibly fit in memory. So we're gonna use first order logic that's more concise. And then we quickly realized, oh, propositional logic is pretty nice because there are really fast SAT solvers and other things. And look, there's only millions of expressions and that fits easily into memory, or maybe even billions fit into memory now. So that was a change of the type of technology we needed just because the hardware expanded. Even by the second edition, resource constraints were loosened significantly. And that was early 2000s second edition. Right, so 95 was the first and then 2000, 2001 or so. And then moving on from there, I think we're starting to see that again with the GPUs and then more specific types of machinery like the TPUs and you're seeing custom ASICs and so on for deep learning. So we're seeing another advance in terms of the hardware. Then I think another thing that we especially noticed this time around is in all three of the first editions, we kind of said, well, we're gonna define AI as maximizing expected utility and you tell me your utility function. And now we've got 27 chapters worth of cool techniques for how to optimize that. I think in this edition, we're saying more, you know what, maybe that optimization part is the easy part and the hard part is deciding what is my utility function? What do I want? And if I'm a collection of agents or a society, what do we want as a whole? So you touched that topic in this edition. You get a little bit more into utility. Yeah. That's really interesting. On a technical level, we're almost pushing the philosophical. I guess it is philosophical, right? So we've always had a philosophy chapter, which I was glad that we were supporting. And now it's less kind of the Chinese room type argument and more of these ethical and societal type issues. So we get into the issues of fairness and bias and just the issue of aggregating utilities. So how do you encode human values into a utility function? Is this something that you can do purely through data in a learned way or is there some systematic, obviously there's no good answers yet. There's just beginnings to this, to even opening the doors to these questions.
So there is no one answer. Yes, there are techniques to try to learn that. So we talk about inverse reinforcement learning, right? So reinforcement learning, you take some actions, you get some rewards and you figure out what actions you should take. And inverse reinforcement learning, you observe somebody taking actions and you figure out, well, this must be what they were trying to do. If they did this action, it must be because they want it. Of course, there's restrictions to that, right? So lots of people take actions that are self destructive or they're suboptimal in certain ways. So you don't wanna learn that. You wanna somehow learn the perfect actions rather than the ones they actually take. So that's a challenge for that field. Then another big part of it is just kind of theoretical of saying, what can we accomplish? And so you look at like this work on the programs to predict recidivism and decide who should get parole or who should get bail or whatever. And how are you gonna evaluate that? And one of the big issues is fairness across protected classes. Protected classes being things like sex and race and so on. And so two things you want is you wanna say, well, if I get a score of say six out of 10, then I want that to mean the same whether no matter what race I'm on, right? Yes, right, so I wanna have a 60% chance of reoccurring regardless. And one of the makers of a commercial program to do that says that's what we're trying to optimize and look, we achieved that. We've reached that kind of balance. And then on the other side, you also wanna say, well, if it makes mistakes, I want that to affect both sides of the protected class equally. And it turns out they don't do that, right? So they're twice as likely to make a mistake that would harm a black person over a white person. So that seems unfair. So you'd like to say, well, I wanna achieve both those goals. And then it turns out you do the analysis and it's theoretically impossible to achieve both those goals. So you have to trade them off one against the other. So that analysis is really helpful to know what you can aim for and how much you can get. You can't have everything. But the analysis certainly can't tell you where should we make that trade off point. But nevertheless, then we can as humans deliberate where that trade off should be. Yeah, so at least we now we're arguing in an informed way. We're not asking for something impossible. We're saying, here's where we are and here's what we aim for. And this strategy is better than that strategy. So that's, I would argue is a really powerful and really important first step, but it's a doable one sort of removing undesirable degrees of bias in systems in terms of protected classes. And then there's something I listened to your commencement speech, or there's some fuzzier things like, you mentioned angry birds. Do you wanna create systems that feed the dopamine enjoyment that feed, that optimize for you returning to the system, enjoying the moment of playing the game of getting likes or whatever, this kind of thing, or some kind of longterm improvement? Right. Are you even thinking about that? That's really going to the philosophical area. No, I think that's a really important issue too. Certainly thinking about that. I don't think about that as an AI issue as much. But as you say, the point is we've built this society and this infrastructure where we say we have a marketplace for attention and we've decided as a society that we like things that are free. 
And so we want all the apps on our phone to be free. And that means they're all competing for your attention. And then eventually they make some money some way through ads or in game sales or whatever. But they can only win by defeating all the other apps, by capturing your attention. And we build a marketplace where it seems like they're working against you rather than working with you. And I'd like to find a way where we can change the playing field so you feel more like, well, these things are on my side. Yes, they're letting me have some fun in the short term, but they're also helping me in the long term rather than competing against me. And those aren't necessarily conflicting objectives. They're just the incentives, the current incentives, as we try to figure out this whole new world, seem to be on the easier part of that, which is feeding the dopamine, the rush. Right. But so maybe taking a quick step back to the beginning of the writing of the Artificial Intelligence: A Modern Approach book. So here you are in the 90s. When you first sat down with Stuart to write the book, to cover an entire field, which is one of the only books that's successfully done that for AI and actually in a lot of other computer science fields, it's a huge undertaking. So it must've been quite daunting. What was that process like? Did you envision that you would be trying to cover the entire field? Was there a systematic approach to it that was more step by step? How was, how did it feel? So I guess it came about, we'd go to lunch with the other AI faculty at Berkeley and we'd say, the field is changing. It seems like the current books are a little bit behind. Nobody's come out with a new book recently. We should do that. And everybody said, yeah, yeah, that's a great thing to do. And we never did anything. Right. And then I ended up heading off to industry. I went to Sun Labs. So I thought, well, that's the end of my possible academic publishing career. But I met Stuart again at a conference like a year later and said, you know that book we were always talking about, you guys must be half done with it by now, right? And he said, well, we keep talking, we never do anything. So I said, well, you know, we should do it. And I think the reason is that we all felt it was a time where the field was changing. And that was in two ways. So, you know, the good old fashioned AI was based primarily on Boolean logic. And you had a few tricks to deal with uncertainty. And it was based primarily on knowledge engineering. That the way you got something done is you went out, you interviewed an expert and you wrote down by hand everything they knew. And we saw in 95 that the field was changing in two ways. One, we're moving more towards probability rather than Boolean logic. And we're moving more towards machine learning rather than knowledge engineering. And the other books hadn't caught that wave; they were still more in the old school. Although, certainly they had part of that on the way. But we said, if we start now completely taking that point of view, we can have a different kind of book. And we were able to put that together. And what was literally the process, if you remember, did you start writing a chapter? Did you outline? Yeah, I guess we did an outline and then we sort of assigned chapters to each person. At the time I had moved to Boston and Stuart was in Berkeley. So basically we did it over the internet. And, you know, that wasn't the same as doing it today.
It meant, you know, dial up lines and telnetting in. And, you know, you telnetted into one shell and you typed cat filename and you hoped it was captured at the other end. And certainly you're not sending images and figures back and forth. Right, right, that didn't work. But, you know, did you anticipate where the field would go from that day, from the 90s? Did you see the growth into learning based methods and data driven methods that followed in the future decades? We certainly thought that learning was important. I guess we missed it as being as important as it is today. We missed this idea of big data. We missed that; the idea of deep learning hadn't been invented yet. We could have taken the book from a complete machine learning point of view right from the start. We chose to do it more from a point of view of we're gonna first develop different types of representations. And we're gonna talk about different types of environments. Is it fully observable or partially observable? And is it deterministic or stochastic and so on? And we made it more complex along those axes rather than focusing on the machine learning axis first. Do you think, you know, there's some sense in which the deep learning craze is extremely successful for a particular set of problems. And, you know, eventually it's going to, in the general case, hit challenges. So in terms of the difference between perception systems and robots that have to act in the world, do you think we're gonna return to AI: A Modern Approach type breadth in edition five and six? In future decades, do you think deep learning will take its place as a chapter in this bigger view of AI? Yeah, I think we don't know yet how it's all gonna play out. So in the new edition, we have a chapter on deep learning. We got Ian Goodfellow to be the guest author for that chapter. So he said he could condense his whole deep learning book into one chapter. I think he did a great job. We were also encouraged that, you know, we gave him the old neural net chapter and said, modernize that. And he said, you know, half of that was okay. That certainly there's lots of new things that have been developed, but some of the core was still the same. So I think we'll gain a better understanding of what you can do there. I think we'll need to incorporate all the things we can do with the other technologies, right? So deep learning started out with convolutional networks and very close to perception. And it's since moved to be able to do more with actions and some degree of longer term planning. But we need to do a better job with representation and reasoning and one shot learning and so on. And I think we don't know yet how that's gonna play out. So do you think, looking at the success, but certainly the eventual demise, a partial demise, of expert systems and symbolic systems in the 80s, do you think there are kernels of wisdom in the work that was done there with logic and reasoning and so on that will rise again in your view? So certainly I think the idea of representation and reasoning is crucial, that sometimes you just don't have enough data about the world to learn de novo. So you've got to have some idea of representation, whether that was programmed in or told or whatever, and then be able to take steps of reasoning. I think the problem with the good old fashioned AI was one, we tried to base everything on these symbols that were atomic. And that's great if you're like trying to define the properties of a triangle, right?
Because they have necessary and sufficient conditions. But things in the real world don't. The real world is messy and doesn't have sharp edges, and atomic symbols do. So that was a poor match. And then the other aspect was that the reasoning was universal and applied anywhere, which in some sense is good, but it also means there's no guidance as to where to apply. And so you started getting these paradoxes like, well, if I have a mountain and I remove one grain of sand, then it's still a mountain. But if I do that repeatedly, at some point it's not, right? And with logic, there's nothing to stop you from applying things repeatedly. But maybe with something like deep learning, and I don't really know what the right name for it is, we could separate out those ideas. So one, we could say a mountain isn't just an atomic notion. It's some sort of something like a word embedding that has a more complex representation. And secondly, we could somehow learn, yeah, there's this rule that you can remove one grain of sand and you can do that a bunch of times, but you can't do it a near infinite number of times. But on the other hand, when you're doing induction on the integers, sure, then it's fine to do it an infinite number of times. And if we could, somehow we have to learn when these strategies are applicable rather than having the strategies be completely neutral and available everywhere. Anytime you use neural networks, anytime you learn from data, form representations from data in an automated way, it's not very explainable as to, or it's not introspective to us humans in terms of how this neural network sees the world, where, why does it succeed so brilliantly in so many cases and fail so miserably in surprising and small ways. So what do you think is the future there? Can simply more data, better data, more organized data solve that problem? Or are there elements of symbolic systems that need to be brought in which are a little bit more explainable? Yeah, so I prefer to talk about trust and validation and verification rather than just about explainability. And then I think explanations are one tool that you use towards those goals. And I think it is an important issue that we don't wanna use these systems unless we trust them and we wanna understand where they work and where they don't work. And an explanation can be part of that, right? So I apply for a loan and I get denied, I want some explanation of why. And you have, in Europe, we have the GDPR that says you're required to be able to get that. But on the other hand, the explanation alone is not enough, right? So we are used to dealing with people and with organizations and corporations and so on, and they can give you an explanation and you have no guarantee that that explanation relates to reality, right? So the bank can tell me, well, you didn't get the loan because you didn't have enough collateral. And that may be true, or it may be true that they just didn't like my religion or something else. I can't tell from the explanation, and that's true whether the decision was made by a computer or by a person. So I want more. I do wanna have the explanations and I wanna be able to have a conversation to go back and forth and say, well, you gave this explanation, but what about this? And what would have happened if this had happened? And what would I need to change that? So I think a conversation is a better way to think about it than just an explanation as a single output. And I think we need testing of various kinds, right?
So in order to know, was the decision really based on my collateral or was it based on my religion or skin color or whatever? I can't tell if I'm only looking at my case, but if I look across all the cases, then I can detect the pattern, right? So you wanna have that kind of capability. You wanna have this adversarial testing, right? So we thought we were doing pretty good at object recognition in images. We said, look, we're at sort of pretty close to human level performance on ImageNet and so on. And then you start seeing these adversarial images and you say, wait a minute, that part is nothing like human performance. You can mess with it really easily. You can mess with it really easily, right? And yeah, you can do that to humans too, right? So we. In a different way perhaps. Right, humans don't know what color the dress was. Right. And so they're vulnerable to certain attacks that are different than the attacks on the machines, but the attacks on the machines are so striking. They really change the way you think about what we've done, right? And the way I think about it is, I think part of the problem is we're seduced by our low dimensional metaphors, right? Yeah. I like that phrase. You look in a textbook and you say, okay, now we've mapped out the space and a cat is here and dog is here and maybe there's a tiny little spot in the middle where you can't tell the difference, but mostly we've got it all covered. And if you believe that metaphor, then you say, well, we're nearly there. And there's only gonna be a couple adversarial images. But I think that's the wrong metaphor and what you should really say is, it's not a 2D flat space that we've got mostly covered. It's a million dimension space and a cat is this string that goes out in this crazy path. And if you step a little bit off the path in any direction, you're in nowhere's land and you don't know what's gonna happen. And so I think that's where we are and now we've got to deal with that. So it wasn't so much an explanation, but it was an understanding of what the models are and what they're doing and now we can start exploring, how do you fix that? Yeah, validating the robustness of the system and so on, but take it back to this word trust. Do you think we're a little too hard on our robots in terms of the standards we apply? So, you know, there's a dance in nonverbal and verbal communication between humans. If we apply the same kind of standard in terms of humans, we trust each other pretty quickly. You know, you and I haven't met before and there's some degree of trust, right? That nothing's gonna go crazy wrong and yet with AI, when we look at AI systems, we seem to approach with skepticism always, always. And it's like they have to prove through a lot of hard work that they're even worthy of an inkling of our trust. What do you think about that? How do we break that barrier, close that gap? I think that's right. I think that's a big issue. Just listening, my friend Mark Moffett is a naturalist and he says, the most amazing thing about humans is that you can walk into a coffee shop or a busy street in a city and there's lots of people around you that you've never met before and you don't kill each other. Yeah. He says, chimpanzees cannot do that. Yeah, right. Right? If a chimpanzee's in a situation where here's some that aren't from my tribe, bad things happen. Especially in a coffee shop, there's delicious food around, you know. Yeah, yeah. But we humans have figured that out, right? And you know. For the most part.
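The "step a little bit off the path" point about adversarial images can be shown with a minimal sketch. This is a toy linear classifier on random data, invented purely for illustration (real adversarial attacks target deep networks, and the weights and input here are placeholders); it only shows how, in a high dimensional input space, a per-pixel change that is tiny in each coordinate can still flip the model's decision.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 10_000                                   # dimensionality of the toy "image"
    w = rng.normal(size=d)                       # pretend these weights were learned
    b = 0.1
    x = rng.normal(size=d)                       # an input the model currently classifies

    def score(v):
        """Logit of a toy linear classifier; positive means one class, negative the other."""
        return w @ v + b

    logit = score(x)
    # Smallest uniform per-coordinate step (each pixel moves by epsilon in the
    # direction of the corresponding weight) that flips the sign of the logit,
    # with a 1.5x margin. Spread over 10,000 coordinates, epsilon comes out tiny.
    epsilon = 1.5 * abs(logit) / np.sum(np.abs(w))
    x_adv = x - epsilon * np.sign(w) * np.sign(logit)

    print("original logit:", score(x))
    print("perturbed logit:", score(x_adv))      # opposite sign: the label flips
    print("max per-pixel change:", epsilon)      # a small fraction of the typical pixel scale

Each coordinate moves only by epsilon, but the effect on the decision adds up across dimensions, which is one way to read the low dimensional metaphor warning above.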
For the most part. We still go to war, we still do terrible things but for the most part, we've learned to trust each other and live together. So that's gonna be important for our AI systems as well. And also I think a lot of the emphasis is on AI but in many cases, AI is part of the technology but isn't really the main thing. So a lot of what we've seen is more due to communications technology than AI technology. Yeah, you wanna make these good decisions but the reason we're able to have any kind of system at all is we've got the communication so that we're collecting the data and so that we can reach lots of people around the world. I think that's a bigger change that we're dealing with. Speaking of reaching a lot of people around the world, on the side of education, one of the many things in terms of education you've done, you've taught the Intro to Artificial Intelligence course that signed up 160,000 students. It was one of the first successful examples of a MOOC, a Massive Open Online Course. What did you learn from that experience? What do you think is the future of MOOCs, of education online? Yeah, it was great fun doing it, particularly being right at the start just because it was exciting and new but it also meant that we had less competition, right? So one of the things you hear about, well, the problem with MOOCs is the completion rates are so low so it must be a failure and I gotta admit, I'm a prime contributor, right? I probably started 50 different courses that I haven't finished but I got exactly what I wanted out of them because I had never intended to finish them. I just wanted to dabble in a little bit either to see the topic matter or just to see the pedagogy of how are they doing this class. So I guess the main thing I learned is when I came in, I thought the challenge was information, saying if I just take the stuff I want you to know and I'm very clear and explain it well, then my job is done and good things are gonna happen. And then in doing the course, I learned, well, yeah, you gotta have the information but really the motivation is the most important thing, that if students don't stick with it, it doesn't matter how good the content is. And I think being one of the first classes, we were helped by sort of exterior motivation. So we tried to do a good job of making it enticing and setting up ways for the community to work with each other to make it more motivating but really a lot of it was, hey, this is a new thing and I'm really excited to be part of a new thing. And so the students brought their own motivation. And so I think this is great because there's lots of people around the world who have never had this before, would never have the opportunity to go to Stanford and take a class or go to MIT or go to one of the other schools but now we can bring that to them and if they bring their own motivation, they can be successful in a way they couldn't before. But that's really just the top tier of people that are ready to do that. The rest of the people just don't see or don't have the motivation and don't see how if they push through and were able to do it, what advantage that would get them. So I think we've got a long way to go before we're able to do that. And I think some of it is based on technology but more of it's based on the idea of community. You gotta actually get people together. Some of the getting together can be done online. I think some of it really has to be done in person in order to build that type of community and trust.
You know, there's an attentional mechanism, we've developed a short attention span, especially younger people, because of sort of shorter and shorter videos online, and whatever the way the brain is developing now, with people that have grown up with the internet, they have quite a short attention span. So, and I would say I had the same when I was growing up too, probably for different reasons. So I probably wouldn't have learned as much as I have if I wasn't forced to sit in a physical classroom, sort of bored, sometimes falling asleep, but sort of forcing myself through that process, sometimes through extremely difficult computer science courses. What's the difference in your view between the in person education experience, which you yourself had and you yourself taught, and online education, and how do we close that gap if it's even possible? Yeah, so I think there's two issues. One is whether it's in person or online. So it's sort of the physical location and then the other is kind of the affiliation, right? So you stuck with it in part because you were in the classroom and you saw everybody else was suffering the same way you were, but also because you were enrolled, you had paid tuition, sort of everybody was expecting you to stick with it. Society, parents, peers. And so those are two separate things. I mean, you could certainly imagine I pay a huge amount of tuition and everybody signed up and says, yes, you're doing this, but then I'm in my room and my classmates are in different rooms, right? We could have things set up that way. So it's not just the online versus offline. I think what's more important is the commitment that you've made. And certainly it is important to have that kind of informal, you know, I meet people outside of class, we talk together because we're all in it together. I think that's really important, both in keeping your motivation and also that's where some of the most important learning goes on. So you wanna have that. Maybe, you know, especially now we start getting into higher bandwidths and augmented reality and virtual reality, you might be able to get that without being in the same physical place. Do you think it's possible we'll see a course at Stanford, for example, that for students, enrolled students is only online in the near future or literally sort of it's part of the curriculum and there is no... Yeah, so you're starting to see that. I know Georgia Tech has a master's that's done that way. Oftentimes it's sort of, they're creeping in in terms of a master's program or sort of further education, considering the constraints of students and so on. But I mean, literally, is it possible that we, you know, Stanford, MIT, Berkeley, all these places go online only in the next few decades? Yeah, probably not, because, you know, they've got a big commitment to a physical campus. Sure, so there's a momentum that's both financial and cultural. Right, and then there are certain things that are just hard to do virtually, right? So, you know, we're in a field where, if you have your own computer and your own paper, and so on, you can do the work anywhere. But if you're in a biology lab or something, you know, you don't have all the right stuff at home. Right, so our field, programming, you've also done a lot of programming yourself. In 2001, you wrote a great article about programming called Teach Yourself Programming in Ten Years, sort of a response to all the books that say teach yourself programming in 21 days.
So if you were giving advice to someone getting into programming today, it's been a few years since you've written that article, what's the best way to undertake that journey? I think there's lots of different ways, and I think programming means more things now. And I guess, you know, when I wrote that article, I was thinking more about becoming a professional software engineer, and I thought that's a, you know, sort of a career long field of study. But I think there's lots of things now that people can do where programming is a part of solving what they wanna solve without achieving that professional level status, right? So I'm not gonna be going and writing a million lines of code, but, you know, I'm a biologist or a physicist or something, or even a historian, and I've got some data, and I wanna ask a question of that data. And I think for that, you don't need 10 years, right? So there are many shortcuts to being able to answer those kinds of questions. And, you know, you see today a lot of emphasis on learning to code, teaching kids how to code. I think that's great, but I wish they would change the message a little bit, right, so I think code isn't the main thing. I don't really care if you know the syntax of JavaScript or if you can connect these blocks together in this visual language. But what I do care about is that you can analyze a problem, you can think of a solution, you can carry out, you know, make a model, run that model, test the model, see the results, verify that they're reasonable, ask questions and answer them, right? So it's more modeling and problem solving, and you use coding in order to do that, but it's not just learning coding for its own sake. That's really interesting. So it's actually almost, in many cases, it's learning to work with data, to extract something useful out of data. So when you say problem solving, you really mean taking some kind of, maybe collecting some kind of data set, cleaning it up, and saying something interesting about it, which is useful in all kinds of domains. And, you know, I see myself being stuck sometimes in kind of the old ways, right? So, you know, I'll be working on a project, maybe with a younger employee, and we say, oh, well, here's this new package that could help solve this problem. And I'll go and I'll start reading the manuals, and, you know, I'll be two hours into reading the manuals, and then my colleague comes back and says, I'm done. You know, I downloaded the package, I installed it, I tried calling some things, the first one didn't work, the second one worked, now I'm done. And I say, but I have a hundred questions about how does this work and how does that work? And they say, who cares, right? I don't need to understand the whole thing. I answered my question, it's a big, complicated package, I don't understand the rest of it, but I got the right answer. And I'm just, it's hard for me to get into that mindset. I want to understand the whole thing. And, you know, if they wrote a manual, I should probably read it. But that's not necessarily the right way. I think I have to get used to dealing with more, being more comfortable with uncertainty and not knowing everything. Yeah, so I struggle with the same thing, sort of at the Don Knuth end of the spectrum. Yeah. It's kind of the very, you know, before he can say anything about a problem, he really has to get down to the machine code assembly. Yeah.
And that forces exactly what you said. There are several students in my group that are, you know, 20 years old, and they can solve almost any problem within a few hours that would take me probably weeks, because I would try to, as you said, read the manual. So do you think the nature of mastery, you're mentioning biology, sort of outside disciplines, applying programming, but computer scientists. So over time, there's higher and higher levels of abstraction available now. So this week, there's the TensorFlow Summit, right? So if you're not particularly into deep learning, but you're still a computer scientist, you can accomplish an incredible amount with TensorFlow without really knowing any fundamental internals of machine learning. Do you think the nature of mastery is changing, even for computer scientists, like what it means to be an expert programmer? Yeah, I think that's true. You know, we never really should have focused on programmer, right, because it's just the skill, and what we really want to focus on is the result. So we built this ecosystem where the way you can get stuff done is by programming it yourself. At least when I started, you know, library functions meant you had square root, and that was about it, right? Everything else you built from scratch. And then we built up an ecosystem where a lot of times, well, you can download a lot of stuff that does a big part of what you need. And so now it's more a question of assembly rather than manufacturing. And that's a different way of looking at problems. From another perspective in terms of mastery and looking at programmers or people that reason about problems in a computational way. So Google, you know, from the hiring perspective, from the perspective of hiring or building a team of programmers, how do you determine if someone's a good programmer? Or if somebody, again, so I want to deviate from, I want to move away from the word programmer, but somebody who could solve problems of large scale data and so on. What's, how do you build a team like that through the interviewing process? Yeah, and I think as a company grows, you get more expansive in the types of people you're looking for, right? So I think, you know, in the early days, we'd interview people and the question we were trying to ask is how close are they to Jeff Dean? And most people were pretty far away, but we'd take the ones that were not that far away. And so we got kind of a homogeneous group of people who were really great programmers. Then as a company grows, you say, well, we don't want everybody to be the same, to have the same skill set. And so now we're hiring biologists in our health areas and we're hiring physicists, we're hiring mechanical engineers, we're hiring, you know, social scientists and ethnographers and people with different backgrounds who bring different skills. So you have mentioned that you still may partake in code reviews, given that you have a wealth of experience, as you've also mentioned. What errors do you often see and tend to highlight in the code of junior developers, of people coming up now, given your background from Lisp to a couple of decades of programming? Yeah, that's a great question. You know, sometimes I try to look at the flexibility of the design of, yes, you know, this API solves this problem, but where is it gonna go in the future? Who else is gonna wanna call this? And, you know, are you making it easier for them to do that?
Is that a matter of design, is it documentation, is it sort of an amorphous thing you can't really put into words, is it just how it feels? If you put yourself in the shoes of a developer, would you use this kind of thing? I think it is how you feel, right? And so yeah, documentation is good, but it's more a design question, right? If you get the design right, then people will figure it out, whether the documentation is good or not. And if the design's wrong, then it'd be harder to use. How have you yourself changed as a programmer over the years? In a way, you already started to say sort of, you want to read the manual, you want to understand the core of the syntax to how the language is supposed to be used and so on. But what's the evolution been like from the 80s, 90s to today? I guess one thing is you don't have to worry about the small details of efficiency as much as you used to, right? So like I remember I did my Lisp book in the 90s, and one of the things I wanted to do was say, here's how you do an object system. And basically, we're going to make it so each object is a hash table, and you look up the methods, and here's how it works. And then I said, of course, the real Common Lisp object system is much more complicated. It's got all these efficiency type issues, and this is just a toy, and nobody would do this in real life. And it turns out Python pretty much did exactly what I said and said objects are just dictionaries. And yeah, they have a few little tricks as well. But mostly, the thing that would have been 100 times too slow in the 80s is now plenty fast for most everything. So you had to, as a programmer, let go of perhaps an obsession that I remember coming up with of trying to write efficient code. Yeah, to say what really matters is the total time it takes to get the project done. And most of that's gonna be the programmer time. So if you're a little bit less efficient, but it makes it easier to understand and modify, then that's the right trade off. So you've written quite a bit about Lisp. Your book on programming is in Lisp. You have a lot of code out there that's in Lisp. So myself and people who don't know what Lisp is should look it up. It's my favorite language. For many AI researchers, it is a favorite language, the favorite language they never use these days. So what part of Lisp do you find most beautiful and powerful? So I think the beautiful part is the simplicity that in half a page, you can define the whole language. And other languages don't have that. So you feel like you can hold everything in your head. And then a lot of people say, well, then that's too simple. Here's all these things I wanna do. And my Java or Python or whatever has 100 or 200 or 300 different syntax rules and don't I need all those? And Lisp's answer was, no, we're only gonna give you eight or so syntax rules, but we're gonna allow you to define your own. And so that was a very powerful idea. And I think this idea of saying, I can start with my problem and with my data, and then I can build the language I want for that problem and for that data. And then I can make Lisp define that language. So you're sort of mixing levels and saying, I'm simultaneously a programmer in a language and a language designer. And that allows a better match between your problem and your eventual code. And I think Lisp has done that better than other languages. Yeah, it's a very elegant implementation of functional programming.
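The "each object is a hash table and you look up the methods" idea described above can be shown in a few lines. This is a minimal sketch of my own, in Python rather than Lisp, and it is not the code from the book being discussed; the names make_account and send are made up. Each object is a plain dict, and sending a message is a dictionary lookup followed by a call with the object passed in as the first argument.

    def make_account(balance):
        """Create an 'object': a dict of data fields plus method slots."""
        account = {"balance": balance}
        account["deposit"] = lambda self, amount: self.update(balance=self["balance"] + amount)
        account["withdraw"] = lambda self, amount: self.update(balance=self["balance"] - amount)
        return account

    def send(obj, method, *args):
        """Method dispatch: look the method up in the object's table and call it."""
        return obj[method](obj, *args)

    acct = make_account(100)
    send(acct, "deposit", 50)
    send(acct, "withdraw", 30)
    print(acct["balance"])        # 120

A real object system layers inheritance, method caching, and nicer syntax on top, which is where the efficiency concerns mentioned above come in, but the core dispatch really is just a lookup in a table.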
But why do you think Lisp has not had the mass adoption and success of languages like Python? Is it the parentheses? Is it all the parentheses? Yeah, so I think a couple things. So one was, I think it was designed for a single programmer or a small team and a skilled programmer who had the good taste to say, well, I am doing language design and I have to make good choices. And if you make good choices, that's great. If you make bad choices, you can hurt yourself and it can be hard for other people on the team to understand it. So I think there was a limit to the scale of the size of a project in terms of number of people that Lisp was good for. And as an industry, we kind of grew beyond that. I think it is in part the parentheses. You know, one of the jokes is the acronym for Lisp is lots of irritating, silly parentheses. My acronym was Lisp is syntactically pure, saying all you need is parentheses and atoms. But I remember, you know, as we had the AI textbook and because we did it in the nineties, we had pseudocode in the book, but then we said, well, we'll have Lisp online because that's the language of AI at the time. And I remember some of the students complaining because they hadn't had Lisp before and they didn't quite understand what was going on. And I remember one student complained, I don't understand how this pseudocode corresponds to this Lisp. And there was a one to one correspondence between the symbols in the code and the pseudocode. And the only thing difference was the parentheses. So I said, it must be that for some people, a certain number of left parentheses shuts off their brain. Yeah, it's very possible in that sense and Python just goes the other way. So that was the point at which I said, okay, can't have only Lisp as a language. Cause I don't wanna, you know, you only got 10 or 12 or 15 weeks or whatever it is to teach AI and I don't want to waste two weeks of that teaching Lisp. So I say, I gotta have another language. Java was the most popular language at the time. I started doing that. And then I said, it's really hard to have a one to one correspondence between the pseudocode and the Java because Java is so verbose. So then I said, I'm gonna do a survey and find the language that's most like my pseudocode. And it turned out Python basically was my pseudocode. Somehow I had channeled Guido, designed a pseudocode that was the same as Python, although I hadn't heard of Python at that point. And from then on, that's what I've been using cause it's been a good match. So what's the story in Python behind PyTudes? Your GitHub repository with puzzles and exercises in Python is pretty fun. Yeah, just it, it seems like fun, you know, I like doing puzzles and I like being an educator. I did a class with Udacity, Udacity 212, I think it was. It was basically problem solving using Python and looking at different problems. Does PyTudes feed that class in terms of the exercises? I was wondering what the... Yeah, so the class came first. Some of the stuff that's in PyTudes was write ups of what was in the class and then some of it was just continuing to work on new problems. So what's the organizing madness of PyTudes? Is it just a collection of cool exercises? Just whatever I thought was fun. Okay, awesome. So you were the director of search quality at Google from 2001 to 2005 in the early days when there's just a few employees and when the company was growing like crazy, right? So, I mean, Google revolutionized the way we discover, share and aggregate knowledge. 
So just, this is one of the fundamental aspects of civilization, right, is information being shared and there's different mechanisms throughout history but Google has just 10x improved that, right? And you're a part of that, right? People discovering that information. So what were some of the challenges on a philosophical or the technical level in those early days? It definitely was an exciting time and as you say, we were doubling in size every year and the challenges were we wanted to get the right answers, right? And we had to figure out what that meant. We had to implement that and we had to make it all efficient and we had to keep on testing and seeing if we were delivering good answers. And now when you say good answers, it means whatever people are typing in in terms of keywords, in terms of that kind of thing that the results they get are ordered by the desirability for them of those results. Like they're like, the first thing they click on will likely be the thing that they were actually looking for. Right, one of the metrics we had was focused on the first thing. Some of it was focused on the whole page. Some of it was focused on top three or so. So we looked at a lot of different metrics for how well we were doing and we broke it down into subclasses of, maybe here's a type of query that we're not doing well on and we try to fix that. Early on we started to realize that we were in an adversarial position, right, so we started thinking, well, we're kind of like the card catalog in the library, right, so the books are here and we're off to the side and we're just reflecting what's there. And then we realized every time we make a change, the webmasters make a change and it's game theoretic. And so we had to think not only of is this the right move for us to make now, but also if we make this move, what's the counter move gonna be? Is that gonna get us into a worse place, in which case we won't make that move, we'll make a different move. And did you find, I mean, I assume with the popularity and the growth of the internet that people were creating new content, so you're almost helping guide the creation of new content. Yeah, so that's certainly true, right, so we definitely changed the structure of the network. So if you think back in the very early days, Larry and Sergey had the PageRank paper and John Kleinberg had this hubs and authorities model, which says the web is made out of these hubs, which will be my page of cool links about dogs or whatever, and people would just list links. And then there'd be authorities, which were the page about dogs that most people linked to. That doesn't happen anymore. People don't bother to say my page of cool links, because we took over that function, right, so we changed the way that worked. Did you imagine back then that the internet would be as massively vibrant as it is today? I mean, it was already growing quickly, but it's just another, I don't know if you've ever, today, if you sit back and just look at the internet with wonder the amount of content that's just constantly being created, constantly being shared and deployed. Yeah, it's always been surprising to me. I guess I'm not very good at predicting the future. And I remember being a graduate student in 1980 or so, and we had the ARPANET, and then there was this proposal to commercialize it, and have this internet, and this crazy Senator Gore thought that might be a good idea. And I remember thinking, oh, come on, you can't expect a commercial company to understand this technology. 
They'll never be able to do it. Yeah, okay, we can have this.com domain, but it won't go anywhere. So I was wrong, Al Gore was right. At the same time, the nature of what it means to be a commercial company has changed, too. So Google, in many ways, at its founding is different than what companies were before, I think. Right, so there's all these business models that are so different than what was possible back then. So in terms of predicting the future, what do you think it takes to build a system that approaches human level intelligence? You've talked about, of course, that we shouldn't be so obsessed about creating human level intelligence. We just create systems that are very useful for humans. But what do you think it takes to approach that level? Right, so certainly I don't think human level intelligence is one thing, right? So I think there's lots of different tasks, lots of different capabilities. I also don't think that should be the goal, right? So I wouldn't wanna create a calculator that could do multiplication at human level, right? That would be a step backwards. And so for many things, we should be aiming far beyond human level for other things. Maybe human level is a good level to aim at. And for others, we'd say, well, let's not bother doing this because we already have humans can take on those tasks. So as you say, I like to focus on what's a useful tool. And in some cases, being at human level is an important part of crossing that threshold to make the tool useful. So we see in things like these personal assistants now that you get either on your phone or on a speaker that sits on the table, you wanna be able to have a conversation with those. And I think as an industry, we haven't quite figured out what the right model is for what these things can do. And we're aiming towards, well, you just have a conversation with them the way you can with a person. But we haven't delivered on that model yet, right? So you can ask it, what's the weather? You can ask it, play some nice songs. And five or six other things, and then you run out of stuff that it can do. In terms of a deep, meaningful connection. So you've mentioned the movie Her as one of your favorite AI movies. Do you think it's possible for a human being to fall in love with an AI assistant, as you mentioned? So taking this big leap from what's the weather to having a deep connection. Yeah, I think as people, that's what we love to do. And I was at a showing of Her where we had a panel discussion and somebody asked me, what other movie do you think Her is similar to? And my answer was Life of Brian, which is not a science fiction movie, but both movies are about wanting to believe in something that's not necessarily real. Yeah, by the way, for people that don't know, it's Monty Python. Yeah, it's been brilliantly put. Right, so I think that's just the way we are. We want to trust, we want to believe, we want to fall in love, and it doesn't necessarily take that much, right? So my kids fell in love with their teddy bear, and the teddy bear was not very interactive. So that's all us pushing our feelings onto our devices and our things, and I think that that's what we like to do, so we'll continue to do that. So yeah, as human beings, we long for that connection, and just AI has to do a little bit of work to catch us in the other end. Yeah, and certainly, if you can get to dog level, a lot of people have invested a lot of love in their pets. In their pets. 
Some people, as I've been told, in working with autonomous vehicles, have invested a lot of love into their inanimate cars, so it really doesn't take much. So, to linger on a topic that may be silly or a little bit philosophical, what is a good test of intelligence in your view? Is natural conversation like in the Turing test a good test? Put another way, what would impress you if you saw a computer do it these days? Yeah, I mean, I get impressed all the time. Go playing, StarCraft playing, those are all pretty cool. And I think, sure, conversation is important. I think we sometimes have these tests where it's easy to fool the system, where you can have a chat bot that can have a conversation, but it never gets into a situation where it has to be deep enough that it really reveals itself as being intelligent or not. I think Turing suggested that, but I think if he were alive, he'd say, you know, I didn't really mean that seriously. And I think, this is just my opinion, but I think Turing's point was not that this test of conversation is a good test. I think his point was having a test is the right thing. So rather than having the philosophers say, oh, no, AI is impossible, you should say, well, we'll just have a test, and then the result of that will tell us the answer. And it doesn't necessarily have to be a conversation test. That's right. And coming up with a new, better test as the technology evolves is probably the right way. Do you worry, as a lot of the general public does, not a lot, but some vocal part of the general public, about the existential threat of artificial intelligence? So looking farther into the future, as you said, most of us are not able to predict much. So when shrouded in such mystery, there's a concern of, well, you start thinking about worst case. Is that something that occupies your mind space much? So I certainly think about threats. I think about dangers. And I think any new technology has positives and negatives. And if it's a powerful technology, it can be used for bad as well as for good. So I'm certainly not worried about the robot apocalypse and the Terminator type scenarios. I am worried about change in employment. And are we going to be able to react fast enough to deal with that? I think we're already seeing it today, where a lot of people are disgruntled about the way income inequality is working. And automation could help accelerate those kinds of problems. I see powerful technologies can always be used as weapons, whether they're robots or drones or whatever. Some of that we're seeing due to AI. A lot of it, you don't need AI. And I don't know what's a worse threat, if it's an autonomous drone or it's CRISPR technology becoming available. Or we have lots of threats to face. And some of them involve AI, and some of them don't. So the threats that technology presents, are you, for the most part, optimistic about technology also alleviating those threats or creating new opportunities or protecting us from the more detrimental effects of these new technologies? I don't know. Again, it's hard to predict the future. And as a society so far, we've survived nuclear bombs and other things. Of course, only societies that have survived are having this conversation. So maybe that's survivorship bias there. What problem stands out to you as exciting, challenging, impactful to work on in the near future for yourself, for the community, and broadly? So we talked about these assistants and conversation. I think that's a great area.
I think combining common sense reasoning with the power of data is a great area. In which application? In conversation, or just broadly speaking? Just in general, yeah. As a programmer, I'm interested in programming tools, both in terms of the current systems we have today with TensorFlow and so on. Can we make them much easier to use for a broader class of people? And also, can we apply machine learning to the more traditional type of programming? So when you go to Google and you type in a query and you spell something wrong, it says, did you mean? And the reason we're able to do that is because lots of other people made a similar error, and then they corrected it. We should be able to go into our code bases and our bug fix bases. And when I type a line of code, it should be able to say, did you mean such and such? If you type this today, you're probably going to type in this bug fix tomorrow. Yeah, that's a really exciting application of almost an assistant for the coding, programming experience at every level. So I think I could safely speak for the entire AI community, first of all, in thanking you for the amazing work you've done, certainly for the amazing work you've done with the AI: A Modern Approach book. I think we're all looking forward very much to the fourth edition, and then the fifth edition, and so on. So Peter, thank you so much for talking today. Yeah, thank you. My pleasure.
Peter Norvig: Artificial Intelligence: A Modern Approach | Lex Fridman Podcast #42
The following is a conversation with Gary Marcus. He's a professor emeritus at NYU, founder of Robust AI and Geometric Intelligence. The latter is a machine learning company that was acquired by Uber in 2016. He's the author of several books on natural and artificial intelligence, including his new book, Rebooting AI: Building Machines We Can Trust. Gary has been a critical voice, highlighting the limits of deep learning and AI in general and discussing the challenges before our AI community that must be solved in order to achieve artificial general intelligence. As I'm having these conversations, I try to find paths toward insight, towards new ideas. I try to have no ego in the process. It gets in the way. I'll often continuously try on several hats, several roles. One, for example, is the role of a three year old who understands very little about anything and asks big what and why questions. The other might be a role of a devil's advocate who presents counter ideas with the goal of arriving at greater understanding through debate. Hopefully, both are useful, interesting, and even entertaining at times. I ask for your patience as I learn to have better conversations. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. And now, here's my conversation with Gary Marcus. Do you think human civilization will one day have to face an AI driven technological singularity that will, in a societal way, modify our place in the food chain of intelligent living beings on this planet? I think our place in the food chain has already changed. So there are lots of things people used to do by hand that they do with machines. If you think of a singularity as like one single moment, which is, I guess, what it suggests, I don't know if it'll be like that, but I think that there's a lot of gradual change and AI is getting better and better. I mean, I'm here to tell you why I think it's not nearly as good as people think, but the overall trend is clear. Maybe Ray Kurzweil thinks it's an exponential and I think it's linear. In some cases, it's close to zero right now, but it's all gonna happen. I mean, we are gonna get to human level intelligence or whatever you want, artificial general intelligence at some point, and that's certainly gonna change our place in the food chain, because a lot of the tedious things that we do now, we're gonna have machines do, and a lot of the dangerous things that we do now, we're gonna have machines do. I think our whole lives are gonna change from people finding their meaning through their work to people finding their meaning through creative expression. So the singularity will be a very gradual, in fact, removing the meaning of the word singularity. It'll be a very gradual transformation in your view. I think that it'll be somewhere in between, and I guess it depends what you mean by gradual and sudden. I don't think it's gonna be one day. I think it's important to realize that intelligence is a multidimensional variable. So people sort of write this stuff as if IQ was one number, and the day that you hit 262 or whatever, you displace the human beings. And really, there's lots of facets to intelligence. So there's verbal intelligence, and there's motor intelligence, and there's mathematical intelligence and so forth. Machines, in their mathematical intelligence, far exceed most people already.
In their ability to play games, they far exceed most people already. In their ability to understand language, they lag behind my five year old, far behind my five year old. So there are some facets of intelligence that machines have grasped, and some that they haven't, and we have a lot of work left to do to get them to, say, understand natural language, or to understand how to flexibly approach some kind of novel MacGyver problem solving kind of situation. And I don't know that all of these things will come at once. I think there are certain vital prerequisites that we're missing now. So for example, machines don't really have common sense now. So they don't understand that bottles contain water, and that people drink water to quench their thirst, and that they don't wanna dehydrate. They don't know these basic facts about human beings, and I think that that's a rate limiting step for many things. It's a rate limiting step for reading, for example, because stories depend on things like, oh my God, that person's running out of water. That's why they did this thing. Or if they only had water, they could put out the fire. So you watch a movie, and your knowledge about how things work matters. And so a computer can't understand that movie if it doesn't have that background knowledge. Same thing if you read a book. And so there are lots of places where, if we had a good machine interpretable set of common sense, many things would accelerate relatively quickly, but I don't think even that is a single point. There's many different aspects of knowledge. And we might, for example, find that we make a lot of progress on physical reasoning, getting machines to understand, for example, how keys fit into locks, or that kind of stuff, or how this gadget here works, and so forth and so on. And so machines might do that long before they do really good psychological reasoning, because it's easier to get kind of labeled data or to do direct experimentation on a microphone stand than it is to do direct experimentation on human beings to understand the levers that guide them. That's a really interesting point, actually, whether it's easier to gain common sense knowledge or psychological knowledge. I would say the common sense knowledge includes both physical knowledge and psychological knowledge. And the argument I was making. Well, you said physical versus psychological. Yeah, physical versus psychological. And the argument I was making is physical knowledge might be more accessible, because you could have a robot, for example, lift a bottle, try putting a bottle cap on it, see that it falls off if it does this, and see that it could turn it upside down, and so the robot could do some experimentation. We do some of our psychological reasoning by looking at our own minds. So I can sort of guess how you might react to something based on how I think I would react to it. And robots don't have that intuition, and they also can't do experiments on people in the same way or we'll probably shut them down. So if we wanted to have robots figure out how I respond to pain by pinching me in different ways, like that's probably, it's not gonna make it past the human subjects board and companies are gonna get sued or whatever. So there's certain kinds of practical experience that are limited or off limits to robots. That's a really interesting point. What is more difficult to gain a grounding in? Because to play devil's advocate, I would say that human behavior is more easily expressed in data and digital form.
And so when you look at Facebook algorithms, they get to observe human behavior. So you get to study and even manipulate human behavior in a way that you perhaps cannot study or manipulate the physical world. So it's true what you said about pain, like physical pain, but that's again the physical world. Emotional pain might be much easier to experiment with, perhaps unethical, but nevertheless, some would argue it's already going on. I think that you're right, for example, that Facebook does a lot of experimentation in psychological reasoning. In fact, Zuckerberg talked about AI at a talk that he gave at NIPS. I wasn't there, but the conference has been renamed NeurIPS; it was still called NIPS when he gave the talk. And he talked about Facebook basically having a gigantic theory of mind. So I think it is certainly possible. I mean, Facebook does some of that. I think they have a really good idea of how to addict people to things. They understand what draws people back to things. I think they exploit it in ways that I'm not very comfortable with. But even so, I think that there are only some slices of human experience that they can access through the kind of interface they have. And of course, they're doing all kinds of VR stuff, and maybe that'll change and they'll expand their data. And I'm sure that that's part of their goal. So it is an interesting question. I think love, fear, insecurity, all of the things that, I would say some of the deepest things about human nature and the human mind, could be explored through digital form. It's that you're actually the first person just now that brought up, I wonder, what is more difficult. Because I think, and we'll talk a lot about deep learning, but the folks who are thinking beyond deep learning are thinking about the physical world. You're starting to think about robotics, in the home robotics. How do we make robots manipulate objects, which requires an understanding of the physical world and then requires common sense reasoning. And that has felt to be like the next step for common sense reasoning, but you've now brought up the idea that there's also the emotional part. And it's interesting whether that's hard or easy. I think some parts of it are and some aren't. So my company that I recently founded with Rod Brooks, who was at MIT for many years and so forth, we're interested in both. We're interested in physical reasoning and psychological reasoning, among many other things. And there are pieces of each of these that are accessible. So if you want a robot to figure out whether it can fit under a table, that's a relatively accessible piece of physical reasoning. If you know the height of the table and you know the height of the robot, it's not that hard. If you wanted to do physical reasoning about Jenga, it gets a little bit more complicated and you have to have higher resolution data in order to do it. With psychological reasoning, it's not that hard to know, for example, that people have goals and they like to act on those goals, but it's really hard to know exactly what those goals are. But ideas of frustration. I mean, you could argue it's extremely difficult to understand the sources of human frustration as they're playing Jenga with you, or not. You could argue that it's very accessible. There's some things that are gonna be obvious and some not.
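The "fit under a table" example above is about as simple as physical reasoning gets, and a tiny sketch makes the contrast with the harder cases plain. The function name and dimensions here are made-up placeholders of my own: the accessible part really is just a couple of comparisons on known measurements, whereas something like Jenga would need detailed geometry, contact, and friction, and far higher resolution data.

    def fits_under(robot_height_m, robot_width_m, table_clearance_m, table_gap_m):
        """Accessible physical reasoning: two comparisons on known dimensions."""
        return robot_height_m < table_clearance_m and robot_width_m < table_gap_m

    # Toy numbers, purely illustrative.
    print(fits_under(robot_height_m=0.40, robot_width_m=0.35,
                     table_clearance_m=0.70, table_gap_m=0.60))   # True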
So I don't think anybody really can do this well yet, but I think it's not inconceivable to imagine machines in the not so distant future being able to understand that if people lose in a game, that they don't like that. That's not such a hard thing to program and it's pretty consistent across people. Most people don't enjoy losing and so that makes it relatively easy to code. On the other hand, if you wanted to capture everything about frustration, well, people can get frustrated for a lot of different reasons. They might get sexually frustrated, they might get frustrated that they can't get their promotion at work, all kinds of different things. And the more you expand the scope, the harder it is for anything like the existing techniques to really do that. So I'm talking to Garry Kasparov next week and he seemed pretty frustrated with his game against Deep Blue, so. Yeah, well, I'm frustrated with my game against him last year, because I played him, I had two excuses, I'll give you my excuses up front, but it won't mitigate the outcome. I was jet lagged and I hadn't played in 25 or 30 years, but the outcome is he completely destroyed me and it wasn't even close. Have you ever been beaten in any board game by a machine? I have, I actually played the predecessor to Deep Blue. Deep Thought, I believe it was called, and that too crushed me. And that was, and after that you realize it's over for us. Well, there's no point in my playing Deep Blue. I mean, it's a waste of Deep Blue's computation. I mean, I played Kasparov because we both gave lectures at this same event and he was playing 30 people. I forgot to mention that. Not only did he crush me, but he crushed 29 other people at the same time. I mean, but the actual philosophical and emotional experience of being beaten by a machine, I imagine is a, I mean, to you who thinks about these things may be a profound experience. Or no, it was a simple mathematical experience. Yeah, I think a game like chess particularly where you have perfect information, it's two player closed end and there's more computation for the computer, it's no surprise the machine wins. I mean, I'm not sad when a computer, I'm not sad when a computer calculates a cube root faster than me. Like, I know I can't win that game. I'm not gonna try. Well, with a system like AlphaGo or AlphaZero, do you see a little bit more magic in a system like that even though it's simply playing a board game? But because there's a strong learning component? You know, it's funny you should mention that in the context of this conversation because Kasparov and I are working on an article that's gonna be called AI is not magic. And, you know, neither one of us thinks that it's magic. And part of the point of this article is that AI is actually a grab bag of different techniques and some of them have, or they each have their own unique strengths and weaknesses. So, you know, you read media accounts and it's like, ooh, AI, it must be magical or it can solve any problem. Well, no, some problems are really accessible like chess and go and other problems like reading are completely outside the current technology. And it's not like you can take the technology that drives AlphaGo and apply it to reading and get anywhere. You know, DeepMind has tried that a bit. They have all kinds of resources. You know, they built AlphaGo and they have, you know, I wrote a piece recently that they lost, and you can argue about the word lost, but they spent $530 million more than they made last year.
So, you know, they're making huge investments. They have a large budget and they have applied the same kinds of techniques to reading or to language. It's just much less productive there because it's a fundamentally different kind of problem. Chess and go and so forth are closed end problems. The rules haven't changed in 2,500 years. There's only so many moves you can make. You can talk about the exponential as you look at the combinations of moves, but fundamentally, you know, the go board has 361 squares. That's it. That's the only, you know, those intersections are the only places that you can place your stone. Whereas when you're reading, the next sentence could be anything. You know, it's completely up to the writer what they're gonna do next. That's fascinating that you think this way. You're clearly a brilliant mind who points out the emperor has no clothes, but so I'll play the role of a person who says. You're gonna put clothes on the emperor? Good luck with it. It romanticizes the notion of the emperor, period, suggesting that clothes don't even matter. Okay, so that's really interesting that you're talking about language. So there's the physical world of being able to move about the world, making an omelet and coffee and so on. There's language where you first understand what's being written and then maybe even more complicated than that, having a natural dialogue. And then there's the game of go and chess. I would argue that language is much closer to go than it is to the physical world. Like it is still very constrained. When you say the possibility of the number of sentences that could come, it is huge, but it nevertheless is much more constrained. It feels maybe I'm wrong than the possibilities that the physical world brings us. There's something to what you say in some ways in which I disagree. So one interesting thing about language is that it abstracts away. This bottle, I don't know if it would be in the field of view is on this table and I use the word on here and I can use the word on here, maybe not here, but that one word encompasses in analog space sort of infinite number of possibilities. So there is a way in which language filters down the variation of the world and there's other ways. So we have a grammar and more or less you have to follow the rules of that grammar. You can break them a little bit, but by and large we follow the rules of grammar and so that's a constraint on language. So there are ways in which language is a constrained system. On the other hand, there are many arguments that say there's an infinite number of possible sentences and you can establish that by just stacking them up. So I think there's water on the table, you think that I think there's water on the table, your mother thinks that you think that I think that water's on the table, your brother thinks that maybe your mom is wrong to think that you think that I think, right? So we can make sentences of infinite length or we can stack up adjectives. This is a very silly example, a very, very silly example, a very, very, very, very, very, very silly example and so forth. So there are good arguments that there's an infinite range of sentences. In any case, it's vast by any reasonable measure and for example, almost anything in the physical world we can talk about in the language world and interestingly, many of the sentences that we understand, we can only understand if we have a very rich model of the physical world. 
So I don't ultimately want to adjudicate the debate that I think you just set up, but I find it interesting. Maybe the physical world is even more complicated than language, I think that's fair, but. Language is really, really complicated. It's really, really hard. Well, it's really, really hard for machines, for linguists, people trying to understand it. It's not that hard for children and that's part of what's driven my whole career. I was a student of Steven Pinker's and we were trying to figure out why kids could learn language when machines couldn't. I think we're gonna get into language, we're gonna get into communication intelligence and neural networks and so on, but let me return to the high level, the futuristic for a brief moment. So you've written in your book, in your new book, it would be arrogant to suppose that we could forecast where AI will be or the impact it will have in a thousand years or even 500 years. So let me ask you to be arrogant. What do AI systems with or without physical bodies look like 100 years from now? If you would just, you can't predict, but if you were to philosophize and imagine, do. Can I first justify the arrogance before you try to push me beyond it? Sure. I mean, there are examples like, people figured out how electricity worked, they had no idea that that was gonna lead to cell phones. I mean, things can move awfully fast once new technologies are perfected. Even when they made transistors, they weren't really thinking that cell phones would lead to social networking. There are nevertheless predictions of the future, which are statistically unlikely to come to be, but nevertheless is the best. You're asking me to be wrong. Asking you to be statistically. In which way would I like to be wrong? Pick the least unlikely to be wrong thing, even though it's most very likely to be wrong. I mean, here's some things that we can safely predict, I suppose. We can predict that AI will be faster than it is now. It will be cheaper than it is now. It will be better in the sense of being more general and applicable in more places. It will be pervasive. I mean, these are easy predictions. I'm sort of modeling them in my head on Jeff Bezos's famous predictions. He says, I can't predict the future, not in every way, I'm paraphrasing. But I can predict that people will never wanna pay more money for their stuff. They're never gonna want it to take longer to get there. So you can't predict everything, but you can predict something. Sure, of course it's gonna be faster and better. But what we can't really predict is the full scope of where AI will be in a certain period. I mean, I think it's safe to say that, although I'm very skeptical about current AI, that it's possible to do much better. You know, there's no in principled argument that says AI is an insolvable problem, that there's magic inside our brains that will never be captured. I mean, I've heard people make those kind of arguments. I don't think they're very good. So AI's gonna come, and probably 500 years is plenty to get there. And then once it's here, it really will change everything. So when you say AI's gonna come, are you talking about human level intelligence? So maybe I... I like the term general intelligence. So I don't think that the ultimate AI, if there is such a thing, is gonna look just like humans. I think it's gonna do some things that humans do better than current machines, like reason flexibly. And understand language and so forth. But it doesn't mean they have to be identical to humans. 
So for example, humans have terrible memory, and they suffer from what some people call motivated reasoning. So they like arguments that seem to support them, and they dismiss arguments that they don't like. There's no reason that a machine should ever do that. So you see those limitations of memory as a bug, not a feature. Absolutely. I'll say two things about that. One is I was on a panel with Danny Kahneman, the Nobel Prize winner, last night, and we were talking about this stuff. And I think what we converged on is that humans are a low bar to exceed. It may be outside of our skill right now as AI programmers, but eventually AI will exceed it. So we're not talking about human level AI. We're talking about general intelligence that can do all kinds of different things and do it without some of the flaws that human beings have. The other thing I'll say is I wrote a whole book, actually, about the flaws of humans. It's actually a nice bookend to, or counterpoint to, the current book. So I wrote a book called Kluge, which was about the limits of the human mind. The current book is kind of about those few things that humans do a lot better than machines. Do you think it's possible that the flaws of the human mind, the limits of memory, our mortality, our bias, is a strength, not a weakness, that that is the thing that enables, from which motivation springs and meaning springs, or not? I've heard a lot of arguments like this. I've never found them that convincing. I think that there's a lot of making lemonade out of lemons. So we, for example, do a lot of free association where one idea just leads to the next and they're not really that well connected. And we enjoy that and we make poetry out of it and we make kind of movies with free associations and it's fun and whatever. I don't think that's really a virtue of the system. I think that the limitations in human reasoning actually get us in a lot of trouble. Like, for example, politically we can't see eye to eye because we have the motivated reasoning I was talking about and something related called confirmation bias. So we have all of these problems that actually make for a rougher society because we can't get along because we can't interpret the data in shared ways. And then we do some nice stuff with that. So my free associations are different from yours and you're kind of amused by them and that's great. And hence poetry. So there are lots of ways in which we take a lousy situation and make it good. Another example would be our memories are terrible. So we play games like Concentration where you flip over two cards, try to find a pair. Can you imagine a computer playing that? Computer's like, this is the dullest game in the world. I know where all the cards are, I see it once, I know where it is, what are you even talking about? So we make a fun game out of having this terrible memory. So we are imperfect in discovering and optimizing some kind of utility function. But you think in general, there is a utility function. There's an objective function that's better than others. I didn't say that. But see, the presumption, when you say... I think you could design a better memory system. You could argue about utility functions and how you wanna think about that. But objectively, it would be really nice to do some of the following things. To get rid of memories that are no longer useful. Objectively, that would just be good. And we're not that good at it.
So when you park in the same lot every day, you confuse where you parked today with where you parked yesterday with where you parked the day before and so forth. So you blur together a series of memories. There's just no way that that's optimal. I mean, I've heard all kinds of wacky arguments of people trying to defend that. But in the end of the day, I don't think any of them hold water. It's just above. Or memories of traumatic events would be possibly a very nice feature to have to get rid of those. It'd be great if you could just be like, I'm gonna wipe this sector. I'm done with that. I didn't have fun last night. I don't wanna think about it anymore. Whoop, bye bye. I'm gone. But we can't. Do you think it's possible to build a system... So you said human level intelligence is a weird concept, but... Well, I'm saying I prefer general intelligence. General intelligence. I mean, human level intelligence is a real thing. And you could try to make a machine that matches people or something like that. I'm saying that per se shouldn't be the objective, but rather that we should learn from humans the things they do well and incorporate that into our AI, just as we incorporate the things that machines do well that people do terribly. So, I mean, it's great that AI systems can do all this brute force computation that people can't. And one of the reasons I work on this stuff is because I would like to see machines solve problems that people can't, that combine the strength, or that in order to be solved would combine the strengths of machines to do all this computation with the ability, let's say, of people to read. So I'd like machines that can read the entire medical literature in a day. 7,000 new papers or whatever the numbers, comes out every day. There's no way for any doctor or whatever to read them all. A machine that could read would be a brilliant thing. And that would be strengths of brute force computation combined with kind of subtlety and understanding medicine that a good doctor or scientist has. So if we can linger a little bit on the idea of general intelligence. So Yann LeCun believes that human intelligence isn't general at all, it's very narrow. How do you think? I don't think that makes sense. We have lots of narrow intelligences for specific problems. But the fact is, like, anybody can walk into, let's say, a Hollywood movie, and reason about the content of almost anything that goes on there. So you can reason about what happens in a bank robbery, or what happens when someone is infertile and wants to go to IVF to try to have a child, or you can, the list is essentially endless. And not everybody understands every scene in the movie, but there's a huge range of things that pretty much any ordinary adult can understand. His argument is, is that actually, the set of things seems large for us humans because we're very limited in considering the kind of possibilities of experiences that are possible. But in fact, the amount of experience that are possible is infinitely larger. Well, I mean, if you wanna make an argument that humans are constrained in what they can understand, I have no issue with that. I think that's right. But it's still not the same thing at all as saying, here's a system that can play Go. It's been trained on five million games. And then I say, can it play on a rectangular board rather than a square board? And you say, well, if I retrain it from scratch on another five million games, it can. That's really, really narrow, and that's where we are. 
We don't have even a system that could play Go and then without further retraining, play on a rectangular board, which any human could do with very little problem. So that's what I mean by narrow. And so it's just wordplay to say. That is semantics, yeah. Then it's just words. Then yeah, you mean general in a sense that you can do all kinds of Go board shapes flexibly. Well, that would be like a first step in the right direction, but obviously that's not what it really means. You're kidding. What I mean by general is that you could transfer the knowledge you learn in one domain to another. So if you learn about bank robberies in movies and there's chase scenes, then you can understand that amazing scene in Breaking Bad when Walter White has a car chase scene with only one person. He's the only one in it. And you can reflect on how that car chase scene is like all the other car chase scenes you've ever seen and totally different and why that's cool. And the fact that the number of domains you can do that with is finite doesn't make it less general. So the idea of general is that you can transfer it across a lot of domains. Yeah, I mean, I'm not saying humans are infinitely general or that humans are perfect. I just said a minute ago, it's a low bar, but it's just, it's a low bar. But right now, like the bar is here and we're there and eventually we'll get way past it. So speaking of low bars, you've highlighted this in your new book as well, but a couple of years ago you wrote a paper titled Deep Learning: A Critical Appraisal that lists 10 challenges faced by current deep learning systems. So let me summarize them as data efficiency, transfer learning, hierarchical knowledge, open ended inference, explainability, integrating prior knowledge, causal reasoning, modeling of a stable world, robustness, adversarial examples and so on. And then my favorite probably is reliability in the engineering of real world systems. So, whatever, people can read the paper, they should definitely read the paper and definitely read your book. But which of these challenges, if solved, in your view has the biggest impact on the AI community? It's a very good question. And I'm gonna be evasive because I think that they go together a lot. So some of them might be solved independently of others, but I think a good solution to AI starts by having real, what I would call cognitive models of what's going on. So right now we have an approach that's dominant where you take statistical approximations of things, but you don't really understand them. So you know that bottles are correlated in your data with bottle caps, but you don't understand that there's a thread on the bottle cap that fits with the thread on the bottle and then that's what tightens it. If I tighten it enough, there's a seal and the water won't come out. Like there's no machine that understands that. And having a good cognitive model of that kind of everyday phenomena is what we call common sense. And if you had that, then a lot of these other things start to fall into at least a little bit better place. Right now you're like learning correlations between pixels when you play a video game or something like that. And it doesn't work very well. It works when the video game is just the way that you studied it, and then you alter the video game in small ways, like you move the paddle in Breakout a few pixels, and the system falls apart.
Because it doesn't understand, it doesn't have a representation of a paddle, a ball, a wall, a set of bricks and so forth. And so it's reasoning at the wrong level. So the idea of common sense, it's full of mystery, you've worked on it, but it's nevertheless full of mystery, full of promise. What does common sense mean? What does knowledge mean? So the way you've been discussing it now is very intuitive. It makes a lot of sense that that is something we should have and that's something deep learning systems don't have. But the argument could be that we're oversimplifying it because we're oversimplifying the notion of common sense because that's how it feels like we as humans at the cognitive level approach problems. So maybe. A lot of people aren't actually gonna read my book. But if they did read the book, one of the things that might come as a surprise to them is that we actually say common sense is really hard and really complicated. So they would probably, my critics know that I like common sense, but that chapter actually starts by us beating up not on deep learning, but kind of on our own home team, as it were. So Ernie and I are first and foremost people that believe in at least some of what good old fashioned AI tried to do. So we believe in symbols and logic and programming. Things like that are important. And we go through why even those tools that we hold fairly dear aren't really enough. So we talk about why common sense is actually many things. And some of them fit really well with those classical sets of tools. So things like taxonomy. So I know that a bottle is an object, or it's a vessel, let's say. And I know a vessel is an object and objects are material things in the physical world. So I can make some inferences. If I know that vessels need to not have holes in them in order to carry their contents, then I can infer that a bottle shouldn't have a hole in it in order to carry its contents. So you can do hierarchical inference and so forth. And we say that's great, but it's only a tiny piece of what you need for common sense. We give lots of examples that don't fit into that. So another one that we talk about is a cheese grater. You've got holes in a cheese grater. You've got a handle on top. You can build a model in the game engine sense of a model so that you could have a little cartoon character flying around through the holes of the grater. But we don't have a system yet, and taxonomy doesn't help us that much here, that really understands why the handle is on top and what you do with the handle, or why all of those circles are sharp, or how you'd hold the cheese with respect to the grater in order to make it actually work.
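As a minimal sketch of the taxonomy-style inference described above (the hierarchy and the properties here are a made-up toy, not the book's formalism), inherited facts fall out of walking up an is-a chain:

IS_A = {"bottle": "vessel", "vessel": "object", "object": None}
PROPERTIES = {
    "object": {"is a material thing"},
    "vessel": {"carries contents", "should not have holes"},
}

def inherited_properties(concept):
    # walk up the is-a chain, collecting properties from every ancestor
    props = set()
    while concept is not None:
        props |= PROPERTIES.get(concept, set())
        concept = IS_A.get(concept)
    return props

print(inherited_properties("bottle"))
# the bottle inherits "should not have holes" without anyone stating that fact about bottles directly

The cheese grater is exactly the kind of case this misses: nothing in an is-a chain explains why the handle is on top or how you hold the cheese against it.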
Do you think these ideas are just abstractions that could emerge in a system like a very large deep neural network? I'm a skeptic that that kind of emergence per se can work. So I think that deep learning might play a role in the systems that do what I want systems to do, but it won't do it by itself. I've never seen a deep learning system really extract an abstract concept. There are principled reasons for that, stemming from how back propagation works and how the architectures are set up. One example is deep learning people actually all build in something called convolution, which Yann LeCun is famous for, which is an abstraction. They don't have their systems learn this. So the abstraction is an object that looks the same if it appears in different places. And what LeCun figured out, and essentially why he was a co-winner of the Turing Award, was that if you programmed this in innately, then your system would be a whole lot more efficient. In principle, this should be learnable, but people don't have systems that kind of reify things and make them more abstract. And so what you'd really wind up with if you don't program that in advance is a system that kind of realizes that this is the same thing as this, but then I take your little clock there and I move it over and it doesn't realize that the same thing applies to the clock. So the really nice thing, you're right that convolution is just one of the things, it's an innate feature that's programmed in by the human expert. We need more of those, not less. Yes, but the nice feature is it feels like that requires coming up with that brilliant idea, which can get you a Turing Award, but it requires less effort than encoding, and something we'll talk about, the expert system. So encoding a lot of knowledge by hand. So it feels like there's a huge amount of limitations which you clearly outline with deep learning, but the nice feature of deep learning, whatever it is able to accomplish, it does a lot of stuff automatically without human intervention. Well, and that's part of why people love it, right? But I always think of this quote from Bertrand Russell, which is it has all the advantages of theft over honest toil. It's really hard to program into a machine a notion of causality or even how a bottle works or what containers are. Ernie Davis and I wrote a, I don't know, 45 page academic paper trying just to understand what a container is, which I don't think anybody ever read, but it's a very detailed analysis of all the things, well, not even all of it, some of the things you need to do in order to understand a container. I'm a coauthor on the paper, I made it a little bit better, but Ernie did the hard work for that particular paper. And it took him like three months to get the logical statements correct. And maybe that's not the right way to do it, it's a way to do it. But on that way of doing it, it's really hard work to do something as simple as understanding containers. And nobody wants to do that hard work, even Ernie didn't want to do that hard work. Everybody would rather just feed their system in with a bunch of videos with a bunch of containers and have the systems infer how containers work. It would be like so much less effort, let the machine do the work. And so I understand the impulse, I understand why people want to do that. I just don't think that it works. I've never seen anybody build a system that in a robust way can actually watch videos and predict exactly which containers would leak and which ones wouldn't, or something like that, and I know someone's gonna go out and do that since I said it, and I look forward to seeing it. But getting these things to work robustly is really, really hard. So Yann LeCun, who was my colleague at NYU for many years, thinks that the hard work should go into defining an unsupervised learning algorithm that will watch videos, use the next frame basically in order to tell it what's going on. And he thinks that's the royal road and he's willing to put in the work in devising that algorithm. Then he wants the machine to do the rest. And again, I understand the impulse.
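To make the convolution point above concrete, here is a minimal sketch (a toy one-dimensional example with an invented filter, not LeCun's actual architecture): because one small set of weights is slid across every position, a pattern gets the same response wherever it appears, so the network never has to relearn "same object, different place."

import numpy as np

def conv1d(signal, kernel):
    # the same shared weights are applied at every position (weight sharing)
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel for i in range(len(signal) - k + 1)])

edge_filter = np.array([-1.0, 1.0])                  # one tiny shared filter
left  = np.array([0, 0, 1, 1, 0, 0, 0, 0], float)    # "object" near the left
right = np.array([0, 0, 0, 0, 0, 1, 1, 0], float)    # same "object", shifted right

print(conv1d(left, edge_filter))   # [ 0.  1.  0. -1.  0.  0.  0.]
print(conv1d(right, edge_filter))  # same response pattern, just shifted right

A network without that built-in prior would have to learn the left and right cases separately, which is the clock example: move the object and the learned detector no longer applies.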
My intuition, based on years of watching this stuff and making predictions 20 years ago that still hold even though there's a lot more computation and so forth, is that we actually have to do a different kind of hard work, which is more like building a design specification for what we want the system to do, doing hard engineering work to figure out how we do things like what Yann did for convolution in order to figure out how to encode complex knowledge into the systems. The current systems don't have that much knowledge other than convolution, which is again, this objects being in different places and having the same perception, I guess I'll say. Same appearance. People don't want to do that work. They don't see how to naturally fit one with the other. I think that's, yes, absolutely. But also on the expert system side, there's a temptation to go too far the other way. So we're just having an expert sort of sit down and encode the description, the framework for what a container is, and then having the system reason the rest. From my view, one really exciting possibility is of active learning where it's continuous interaction between a human and machine. As the machine, there's kind of deep learning type extraction of information from data patterns and so on, but humans also guiding the learning procedures, guiding both the process and the framework of how the machine learns, whatever the task is. I was with you with almost everything you said except the phrase deep learning. What I think you really want there is a new form of machine learning. So let's remember, deep learning is a particular way of doing machine learning. Most often it's done with supervised data for perceptual categories. There are other things you can do with deep learning, some of them quite technical, but the standard use of deep learning is I have a lot of examples and I have labels for them. So here are pictures. This one's the Eiffel Tower. This one's the Sears Tower. This one's the Empire State Building. This one's a cat. This one's a pig and so forth. You just get millions of examples, millions of labels, and deep learning is extremely good at that. It's better than any other solution that anybody has devised, but it is not good at representing abstract knowledge. It's not good at representing things like bottles contain liquid and have tops to them and so forth. It's not very good at learning or representing that kind of knowledge. It is an example of having a machine learn something, but it's a machine that learns a particular kind of thing, which is object classification. It's not a particularly good algorithm for learning about the abstractions that govern our world. There may be such a thing. Part of what we counsel in the book is maybe people should be working on devising such things. So one possibility, just I wonder what you think about it, is that deep neural networks do form abstractions, but they're not accessible to us humans in terms of we can't. There's some truth in that. So is it possible that either current or future neural networks form very high level abstractions, which are as powerful as our human abstractions of common sense. We just can't get a hold of them. And so the problem is essentially we need to make them explainable. This is an astute question, but I think the answer is at least partly no. One of the kinds of classical neural network architecture is what we call an auto associator. It just tries to take an input, goes through a set of hidden layers, and comes out with an output. 
And it's supposed to learn essentially the identity function, that your input is the same as your output. So you think of it as binary numbers. You've got the one, the two, the four, the eight, the 16, and so forth. And so if you want to input 24, you turn on the 16, you turn on the eight. It's like binary one, one, and a bunch of zeros. So I did some experiments in 1998 with the precursors of contemporary deep learning. And what I showed was you could train these networks on all the even numbers, and they would never generalize to the odd number. A lot of people thought that I was, I don't know, an idiot or faking the experiment, or it wasn't true or whatever. But it is true that with this class of networks that we had in that day, that they would never ever make this generalization. And it's not that the networks were stupid, it's that they see the world in a different way than we do. They were basically concerned, what is the probability that the rightmost output node is going to be one? And as far as they were concerned, in everything they'd ever been trained on, it was a zero. That node had never been turned on, and so they figured, why turn it on now? Whereas a person would look at the same problem and say, well, it's obvious, we're just doing the thing that corresponds. The Latin for it is mutatis mutandis, we'll change what needs to be changed. And we do this, this is what algebra is. So I can do f of x equals y plus two, and I can do it for a couple of values, I can tell you if y is three, then x is five, and if y is four, x is six. And now I can do it with some totally different number, like a million, then you can say, well, obviously it's a million and two, because you have an algebraic operation that you're applying to a variable. And deep learning systems kind of emulate that, but they don't actually do it. The particular example, you could fudge a solution to that particular problem. The general form of that problem remains, that what they learn is really correlations between different input and output nodes. And they're complex correlations with multiple nodes involved and so forth. Ultimately, they're correlative, they're not structured over these operations over variables. Now, someday, people may do a new form of deep learning that incorporates that stuff, and I think it will help a lot. And there's some tentative work on things like differentiable programming right now that fall into that category. But the sort of classic stuff like people use for ImageNet doesn't have it. And you have people like Hinton going around saying, symbol manipulation, like what Marcus, what I advocate is like the gasoline engine. It's obsolete. We should just use this cool electric power that we've got with the deep learning. And that's really destructive, because we really do need to have the gasoline engine stuff that represents, I mean, I don't think it's a good analogy, but we really do need to have the stuff that represents symbols. Yeah, and Hinton as well would say that we do need to throw out everything and start over. Hinton said that to Axios, and I had a friend who interviewed him and tried to pin him down on what exactly we need to throw out, and he was very evasive. Well, of course, because we can't, if he knew. Then he'd throw it out himself. But I mean, you can't have it both ways. You can't be like, I don't know what to throw out, but I am gonna throw out the symbols. I mean, and not just the symbols, but the variables and the operations over variables. 
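A minimal sketch of the kind of experiment described above, not the original 1998 setup: the bit width, the layer size, and the use of scikit-learn's MLPRegressor are all assumptions made for illustration.

import numpy as np
from sklearn.neural_network import MLPRegressor

def to_bits(n, width=8):
    # least significant bit first, so column 0 is the "ones" bit
    return [(n >> i) & 1 for i in range(width)]

evens = np.array([to_bits(n) for n in range(0, 256, 2)], dtype=float)
odds  = np.array([to_bits(n) for n in range(1, 256, 2)], dtype=float)

# train the network to reproduce its input (the identity function), on evens only
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(evens, evens)

print(net.predict(evens)[:3].round(2))  # reconstructs training items reasonably well
print(net.predict(odds)[:3].round(2))   # the ones bit (column 0) stays near 0: the network
                                        # never turns on an output it never saw turned on

A person shown the same data would just apply the identity rule to the new case, the algebraic, mutatis mutandis move described above.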
Don't forget, the operations over variables, the stuff that I'm endorsing and which John McCarthy did when he founded AI, that stuff is the stuff that we build most computers out of. There are people now who say, we don't need computer programmers anymore. Not quite looking at the statistics of how much computer programmers actually get paid right now. We need lots of computer programmers, and most of them, they do a little bit of machine learning, but they still do a lot of code, right? Code where it's like, if the value of X is greater than the value of Y, then do this kind of thing, like conditionals and comparing operations over variables. Like, there's this fantasy you can machine learn anything. There's some things you would never wanna machine learn. I would not use a phone operating system that was machine learned. Like, you made a bunch of phone calls and you recorded which packets were transmitted and you just machine learned it, it'd be insane. Or to build a web browser by taking logs of keystrokes and images, screenshots, and then trying to learn the relation between them. Nobody would ever, no rational person would ever try to build a browser that way. They would use symbol manipulation, the stuff that I think AI needs to avail itself of in addition to deep learning. Can you describe your view of symbol manipulation in its early days? Can you describe expert systems and where do you think they hit a wall or a set of challenges? Sure, so I mean, first I just wanna clarify, I'm not endorsing expert systems per se. You've been kind of contrasting them. There is a contrast, but that's not the thing that I'm endorsing. So expert systems tried to capture things like medical knowledge with a large set of rules. So if the patient has this symptom and this other symptom, then it is likely that they have this disease. So there are logical rules and they were symbol manipulating rules of just the sort that I'm talking about. And the problem. They encode a set of knowledge that the experts then put in. And very explicitly so. So you'd have somebody interview an expert and then try to turn that stuff into rules. And at some level I'm arguing for rules. But the difference is, what those guys did in the 80s was almost entirely rules, almost entirely handwritten with no machine learning. What a lot of people are doing now is almost entirely one species of machine learning with no rules. And what I'm counseling is actually a hybrid. I'm saying that both of these things have their advantage. So if you're talking about perceptual classification, how do I recognize a bottle? Deep learning is the best tool we've got right now. If you're talking about making inferences about what a bottle does, something closer to the expert systems is probably still the best available alternative. And probably we want something that is better able to handle quantitative and statistical information than those classical systems typically were. So we need new technologies that are gonna draw some of the strengths of both the expert systems and the deep learning, but are gonna find new ways to synthesize them. How hard do you think it is to add knowledge at the low level? So to mine human intellects, to add extra information to symbol manipulating systems? In some domains it's not that hard, but it's often really hard. Partly because a lot of the things that are important, people wouldn't bother to tell you.
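As a minimal sketch of the rule-based style described above (the rules and facts here are invented toy examples, not any real expert system's knowledge base), hand-written if-then rules over symbolic facts can be applied by simple forward chaining:

RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"possible flu", "shortness of breath"}, "recommend chest x-ray"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                        # keep firing rules until nothing new is concluded
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "shortness of breath"}, RULES))

The hybrid being argued for would let a learned perceptual system supply facts like these, while something rule-like, but better with probabilities and quantities, does the inference.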
So if you pay someone on Amazon Mechanical Turk to tell you stuff about bottles, they probably won't even bother to tell you some of the basic level stuff that's just so obvious to a human being and yet so hard to capture in machines. They're gonna tell you more exotic things, and that's all well and good, but they're not getting to the root of the problem. So untutored humans aren't very good at knowing, and why should they be, what kind of knowledge the computer system developers actually need? I don't think that that's an irremediable problem. I think it's historically been a problem. People have had crowdsourcing efforts, and they don't work that well. There's one at MIT, we're recording this at MIT, called Virtual Home, where, and we talk about this in the book, you can find the exact example there, people were asked to do things like describe an exercise routine. And the things that the people describe are at a very low level and don't really capture what's going on. So they're like, go to the room with the television and the weights, turn on the television, press the remote to turn on the television, lift weight, put weight down, whatever. It's like very micro level, and it's not telling you what an exercise routine is really about, which is like, I wanna fit a certain number of exercises in a certain time period, I wanna emphasize these muscles. You want some kind of abstract description. The fact that you happen to press the remote control in this room when you watch this television isn't really the essence of the exercise routine. But if you just ask people like, what did they do? Then they give you this fine grained description. And so it takes a level of expertise about how the AI works in order to craft the right kind of knowledge. So there's this ocean of knowledge that we all operate on. Some of it may not even be conscious, or at least we're not able to communicate it effectively. Yeah, most of it we would recognize if somebody said it, if it was true or not, but we wouldn't think to say that it's true or not. That's a really interesting mathematical property. This ocean has the property that every piece of knowledge in it, we will recognize it as true if we're told, but we're unlikely to retrieve it in the reverse. So that interesting property, I would say there's a huge ocean of that knowledge. What's your intuition? Is it accessible to AI systems somehow? Can we? So you said this. I mean, most of it is not, well, I'll give you an asterisk on this in a second, but most of it has not ever been encoded in machine interpretable form. And so, I mean, if you say accessible, there's two meanings of that. One is like, could you build it into a machine? Yes. The other is like, is there some database that we could go download and stick into our machine? But the first thing, could we? What's your intuition? I think we could. I think it hasn't been done right. You know, the closest, and this is the asterisk, is the Cyc system, which tried to do this. A lot of logicians worked for Doug Lenat for 30 years on this project. I think they stuck too closely to logic, didn't represent enough about probabilities, tried to hand code it. There are various issues, and it hasn't been that successful. That is the closest existing system to trying to encode this. Why do you think there's not more excitement slash money behind this idea currently? There was. People view that project as a failure.
I think that they confuse the failure of a specific instance that was conceived 30 years ago for the failure of an approach, which they don't do for deep learning. So in 2010, people had the same attitude towards deep learning. They're like, this stuff doesn't really work. And all these other algorithms work better and so forth. And then certain key technical advances were made, but mostly it was the advent of graphics processing units that changed that. It wasn't even anything foundational in the techniques. And there were some new tricks, but mostly it was just more compute and more data, things like ImageNet that didn't exist before, that allowed deep learning to work. And it could be that Cyc just needs a few more things, or something like Cyc, but the widespread view is that that just doesn't work. And people are reasoning from a single example. They don't do that with deep learning. They don't say that nothing that existed in 2010, and there were many, many efforts in deep learning, was really worth anything. I mean, really, there's no model from 2010 in deep learning or the predecessors of deep learning that has any commercial value whatsoever at this point. They're all failures. But that doesn't mean that there wasn't anything there. I have a friend, I was getting to know him, and I was talking about how I had a new company. He said, I had a company too, and it failed. And I said, well, what did you do? And he said, deep learning. And the problem was he did it in 1986, or 1990, or something like that. We didn't have the tools then, not the algorithms. His algorithms weren't that different from modern algorithms, but he didn't have the GPUs to run it fast enough. He didn't have the data. And so it failed. It could be that symbol manipulation per se, with modern amounts of data and compute, and maybe some advances in compute suited to that kind of computation, might be great. My perspective on it is not that we want to resuscitate that stuff per se, but we want to borrow lessons from it, bring them together with other things that we've learned. And it might have an ImageNet moment where it would spark the world's imagination and there'll be an explosion of symbol manipulation efforts. Yeah, I think that people at AI2, Paul Allen's AI Institute, are trying to build data sets. Well, they're not doing it for quite the reason that you say, but they're trying to build data sets that at least spark interest in common sense reasoning. To create benchmarks. Benchmarks for common sense. That's a large part of what AI2.org is working on right now. So speaking of compute, Rich Sutton wrote a blog post titled The Bitter Lesson. I don't know if you've read it, but he said that the biggest lesson that can be read from so many years of AI research is that general methods that leverage computation are ultimately the most effective. Do you think that? The most effective at what? Right, so they have been most effective for perceptual classification problems and for some reinforcement learning problems. And he works on reinforcement learning. Well, no, let me push back on that. You're actually absolutely right, but I would also say they have been most effective generally, because everything we've done up to... Would you argue against this: to me, deep learning is the first thing that has been successful at anything in AI. And you're pointing out that this success is very limited, but has there been something truly successful before deep learning?
Sure, I mean, I want to make a larger point, but on the narrower point, classical AI is used, for example, in doing navigation instructions. It's very successful. Everybody on the planet uses it now, like multiple times a day. That's a measure of success, right? So I don't think classical AI was wildly successful, but there are cases like that. They're just used all the time. Nobody even notices them because they're so pervasive. So there are some successes for classical AI. I think deep learning has been more successful, but my usual line about this, and I didn't invent it, but I like it a lot, is just because you can build a better ladder doesn't mean you can build a ladder to the moon. So the bitter lesson is if you have a perceptual classification problem, throwing a lot of data at it is better than anything else. But that has not given us any material progress in natural language understanding, common sense reasoning, like a robot would need to navigate a home. Problems like that, there's no actual progress there. So flip side of that, if we remove data from the picture, another bitter lesson is that you just have a very simple algorithm, and you wait for compute to scale. It doesn't have to be learning. It doesn't have to be deep learning. It doesn't have to be data driven, but just wait for the compute. So my question for you, do you think compute can unlock some of the things with either deep learning or symbol manipulation that? Sure, but I'll put a proviso on that. I think more compute's always better. Nobody's gonna argue with more compute. It's like having more money. I mean, there's, there's diminishing returns on more money. Exactly, there's diminishing returns on more money, but nobody's gonna argue if you wanna give them more money, right? Except maybe the people who signed the giving pledge, and some of them have a problem. They've promised to give away more money than they're able to. But the rest of us, if you wanna give me more money, fine. I'm saying more money, more problems, but okay. That's true too. What I would say to you is your brain uses like 20 watts, and it does a lot of things that deep learning doesn't do, or that symbol manipulation doesn't do, that AI just hasn't figured out how to do. So it's an existence proof that you don't need server resources that are Google scale in order to have an intelligence. I built, with a lot of help from my wife, two intelligences that are 20 watts each, and far exceed anything that anybody else has built out of silicon. Speaking of those two robots, what have you learned about AI from having? Well, they're not robots, but. Sorry, intelligent agents. Those two intelligent agents. I've learned a lot by watching my two intelligent agents. I think that what's fundamentally interesting, well, one of the many things that's fundamentally interesting about them is the way that they set their own problems to solve. So my two kids are a year and a half apart. They're five and six and a half. They play together all the time, and they're constantly creating new challenges. That's what they do, is they make up games, and they're like, well, what if this, or what if that, or what if I had this superpower, or what if you could walk through this wall? So they're doing these what if scenarios all the time, and that's how they learn something about the world and grow their minds, and machines don't really do that.
So that's interesting, and you've talked about this, you've written about it, you've thought about it, nature versus nurture. So what innate knowledge do you think we're born with, and what do we learn along the way in those early months and years? Can I just say how much I like that question? You phrased it just right, and almost nobody ever does, which is what is the innate knowledge and what's learned along the way? So many people dichotomize it, and they think it's nature versus nurture, when it is obviously has to be nature and nurture. They have to work together. You can't learn this stuff along the way unless you have some innate stuff, but just because you have the innate stuff doesn't mean you don't learn anything. And so many people get that wrong, including in the field. People think if I work in machine learning, the learning side, I must not be allowed to work on the innate side, or that will be cheating. Exactly, people have said that to me, and it's just absurd, so thank you. But you could break that apart more. I've talked to folks who studied the development of the brain, and the growth of the brain in the first few days in the first few months in the womb, all of that, is that innate? So that process of development from a stem cell to the growth of the central nervous system and so on, to the information that's encoded through the long arc of evolution. So all of that comes into play, and it's unclear. It's not just whether it's a dichotomy or not. It's where most, or where the knowledge is encoded. So what's your intuition about the innate knowledge, the power of it, what's contained in it, what can we learn from it? One of my earlier books was actually trying to understand the biology of this. The book was called The Birth of the Mind. Like how is it the genes even build innate knowledge? And from the perspective of the conversation we're having today, there's actually two questions. One is what innate knowledge or mechanisms, or what have you, people or other animals might be endowed with. I always like showing this video of a baby ibex climbing down a mountain. That baby ibex, a few hours after its birth, knows how to climb down a mountain. That means that it knows, not consciously, something about its own body and physics and 3D geometry and all of this kind of stuff. So there's one question about what does biology give its creatures and what has evolved in our brains? How is that represented in our brains? The question I thought about in the book The Birth of the Mind. And then there's a question of what AI should have. And they don't have to be the same. But I would say that it's a pretty interesting set of things that we are equipped with that allows us to do a lot of interesting things. So I would argue or guess, based on my reading of the developmental psychology literature, which I've also participated in, that children are born with a notion of space, time, other agents, places, and also this kind of mental algebra that I was describing before. No certain causation if I didn't just say that. So at least those kinds of things. They're like frameworks for learning the other things. Are they disjoint in your view or is it just somehow all connected? You've talked a lot about language. Is it all kind of connected in some mesh that's language like? If understanding concepts all together or? I don't think we know for people how they're represented and machines just don't really do this yet. 
So I think it's an interesting open question both for science and for engineering. Some of it has to be at least interrelated in the way that the interfaces of a software package have to be able to talk to one another. So the systems that represent space and time can't be totally disjoint because a lot of the things that we reason about are the relations between space and time and cause. So I put this on and I have expectations about what's gonna happen with the bottle cap on top of the bottle and those span space and time. If the cap is over here, I get a different outcome. If the timing is different, if I put this here, after I move that, then I get a different outcome. That relates to causality. So obviously these mechanisms, whatever they are, can certainly communicate with each other. So I think evolution had a significant role to play in the development of this whole kluge, right? How efficient do you think evolution is? Oh, it's terribly inefficient except that. Okay, well, can we do better? Well, I'll come to that in a sec. It's inefficient except that. Once it gets a good idea, it runs with it. So it took, I guess, roughly a billion years to evolve a vertebrate brain plan. And once that vertebrate brain plan evolved, it spread everywhere. So fish have it and dogs have it and we have it. We have adaptations of it and specializations of it, but, and the same thing with a primate brain plan. So monkeys have it and apes have it and we have it. So there are additional innovations like color vision and those spread really rapidly. So it takes evolution a long time to get a good idea, but, and I'm being anthropomorphic and not literal here, but once it has that idea, so to speak, which cashes out into one set of genes or in the genome, those genes spread very rapidly and they're like subroutines or libraries, I guess the word people might use nowadays or be more familiar with. They're libraries that get used over and over again. So once you have the library for building something with multiple digits, you can use it for a hand, but you can also use it for a foot. You just kind of reuse the library with slightly different parameters. Evolution does a lot of that, which means that the speed over time picks up. So evolution can happen faster because you have bigger and bigger libraries. And what I think has happened in attempts at evolutionary computation is that people start with libraries that are very, very minimal, like almost nothing, and then progress is slow and it's hard for someone to get a good PhD thesis out of it and they give up. If we had richer libraries to begin with, if you were evolving from systems that had a rich innate structure to begin with, then things might speed up. Or more PhD students, if the evolutionary process indeed, in a meta way, runs away with good ideas, you need to have a lot of ideas, a pool of ideas, in order for it to discover one that you can run away with. And PhD students representing individual ideas as well. Yeah, I mean, you could throw a billion PhD students at it. Yeah, the monkeys at typewriters with Shakespeare, yep. Well, I mean, those aren't cumulative, right? That's just random. And part of the point that I'm making is that evolution is cumulative. So if you have a billion monkeys independently, you don't really get anywhere. But if you have a billion monkeys, and I think Dawkins made this point originally, or probably other people, Dawkins made it very nicely in either The Selfish Gene or The Blind Watchmaker.
If there is some sort of fitness function that can drive you towards something, I guess that's Dawkins point. And my point, which is a variation on that, is that if the evolution is cumulative, I mean, the related points, then you can start going faster. Do you think something like the process of evolution is required to build intelligent systems? So if we... Not logically. So all the stuff that evolution did, a good engineer might be able to do. So for example, evolution made quadrupeds, which distribute the load across a horizontal surface. A good engineer could come up with that idea. I mean, sometimes good engineers come up with ideas by looking at biology. There's lots of ways to get your ideas. Part of what I'm suggesting is we should look at biology a lot more. We should look at the biology of thought and understanding and the biology by which creatures intuitively reason about physics or other agents, or like how do dogs reason about people? Like they're actually pretty good at it. If we could understand, at my college we joked dognition, if we could understand dognition well, and how it was implemented, that might help us with our AI. So do you think it's possible that the kind of timescale that evolution took is the kind of timescale that will be needed to build intelligent systems? Or can we significantly accelerate that process inside a computer? I mean, I think the way that we accelerate that process is we borrow from biology, not slavishly, but I think we look at how biology has solved problems and we say, does that inspire any engineering solutions here? Try to mimic biological systems and then therefore have a shortcut. Yeah, I mean, there's a field called biomimicry and people do that for like material science all the time. We should be doing the analog of that for AI and the analog for that for AI is to look at cognitive science or the cognitive sciences, which is psychology, maybe neuroscience, linguistics, and so forth, look to those for insight. What do you think is a good test of intelligence in your view? So I don't think there's one good test. In fact, I tried to organize a movement towards something called a Turing Olympics and my hope is that Francois is actually gonna take, Francois Chollet is gonna take over this. I think he's interested and I don't, I just don't have place in my busy life at this moment, but the notion is that there'd be many tests and not just one because intelligence is multifaceted. There can't really be a single measure of it because it isn't a single thing. Like just the crudest level, the SAT has a verbal component and a math component because they're not identical. And Howard Gardner has talked about multiple intelligences like kinesthetic intelligence and verbal intelligence and so forth. There are a lot of things that go into intelligence and people can get good at one or the other. I mean, in some sense, like every expert has developed a very specific kind of intelligence and then there are people that are generalists and I think of myself as a generalist with respect to cognitive science, which doesn't mean I know anything about quantum mechanics, but I know a lot about the different facets of the mind. And there's a kind of intelligence to thinking about intelligence. I like to think that I have some of that, but social intelligence, I'm just okay. There are people that are much better at that than I am. Sure, but what would be really impressive to you? 
I think the idea of a Turing Olympics is really interesting, especially if somebody like Francois is running it, but to you in general, not as a benchmark, but if you saw an AI system being able to accomplish something that would impress the heck out of you, what would that thing be? Would it be natural language conversation? For me personally, I would like to see a kind of comprehension that relates to what you just said. So I wrote a piece in the New Yorker in I think 2015, right after Eugene Goostman, which was a software package, won a version of the Turing test. And the way that it did this, well, the way you win the Turing test, so to speak, the Turing test being that you fool a person into thinking that a machine is a person, is that you're evasive, you pretend to have limitations so you don't have to answer certain questions and so forth. So this particular system pretended to be a 13 year old boy from Odessa who didn't understand English and was kind of sarcastic and wouldn't answer your questions and so forth. And so judges got fooled into thinking briefly, with very little exposure, that it was a 13 year old boy, and it ducked all the questions Turing was actually interested in, which is like how do you make the machine actually intelligent? So that test itself is not that good. And so in the New Yorker, I proposed an alternative, I guess, and the one that I proposed there was a comprehension test. And I must like Breaking Bad because I've already given you one Breaking Bad example, and in that article I have one as well, which was something like, you should be able to watch an episode of Breaking Bad, or maybe you have to watch the whole series, to be able to answer a question like, if Walter White took a hit out on Jesse, why did he do that? So if you could answer kind of arbitrary questions about characters' motivations, I would be really impressed with that, if somebody built software to do that. It could watch a film, or there are different versions. And so ultimately, I wrote this up with Praveen Paritosh in a special issue of AI Magazine that basically was about the Turing Olympics. There were like 14 tests proposed. The one that I was pushing was a comprehension challenge, and Praveen, who's at Google, was trying to figure out like how we would actually run it, and so we wrote a paper together. And you could have a text version too, or you could have an auditory podcast version, you could have a written version. But the point is that you win at this test if you can do, let's say, human level or better than humans at answering kind of arbitrary questions. Why did this person pick up the stone? What were they thinking when they picked up the stone? Were they trying to knock down glass? And I mean, ideally these wouldn't be multiple choice either, because multiple choice is pretty easily gamed. So if you could have relatively open ended questions and you can answer why people are doing this stuff, I would be very impressed. And of course, humans can do this, right? If you watch a well constructed movie and somebody picks up a rock, everybody watching the movie knows why they picked up the rock, right? They all know, oh my gosh, he's gonna hit this character or whatever. We have an example in the book about when a whole bunch of people say, I am Spartacus, you know, this famous scene. The viewers understand, first of all, that everybody or everybody minus one has to be lying. They can't all be Spartacus.
We have enough common sense knowledge to know they couldn't all have the same name. We know that they're lying and we can infer why they're lying, right? They're lying to protect someone and to protect things they believe in. You get a machine that can do that. They can say, this is why these guys all got up and said, I am Spartacus. I will sit down and say, AI has really achieved a lot. Thank you. Without cheating any part of the system. Yeah, I mean, if you do it, there are lots of ways you could cheat. You could build a Spartacus machine that works on that film. That's not what I'm talking about. I'm talking about, you can do this with essentially arbitrary films or from a large set. Even beyond films, because it's possible such a system would discover that the number of narrative arcs in film is limited. Well, there's a famous thing about the classic seven plots or whatever. I don't care. If you wanna build into the system, boy meets girl, boy loses girl, boy finds girl. That's fine, I don't mind having some stock stories built into it. And they acknowledge it. Okay, good. I mean, you could build it in innately or you could have your system watch a lot of films again. If you can do this at all, but with a wide range of films, not just one film in one genre. But even if you could do it for all Westerns, I'd be reasonably impressed. Yeah. So in terms of being impressed, just for the fun of it, because you've put so many interesting ideas out there in your book, challenging the community for further steps. Is it possible on the deep learning front that you're wrong about its limitations? That deep learning will unlock, Yann LeCun next year will publish a paper that achieves this comprehension. So do you think that way often as a scientist? Do you consider that, against your intuition, deep learning could actually run away with it? I'm more worried about rebranding as a kind of political thing. So, I mean, what's gonna happen, I think, is that deep learning is gonna start to encompass symbol manipulation. So I think Hinton's just wrong. Hinton says we don't want hybrids. I think people will work towards hybrids and they will relabel their hybrids as deep learning. We've already seen some of that. So AlphaGo is often described as a deep learning system, but it's more correctly described as a system that has deep learning, but also Monte Carlo tree search, which is a classical AI technique. And people will start to blur the lines in the way that IBM blurred Watson. First, Watson meant this particular system, and then it was just anything that IBM built in their cognitive division. But purely, let me ask, for sure, that's a branding question and that's like a giant mess. I mean, purely, a single neural network being able to accomplish reasonable comprehension. I don't stay up at night worrying that that's gonna happen. And I'll just give you two examples. One is a guy at DeepMind thought he had finally outfoxed me. At Zergilord, I think is his Twitter handle. And he said, he specifically made an example. Marcus said that such and such. He fed it into GPT-2, which is the AI system that is so smart that OpenAI couldn't release it because it would destroy the world, right? You remember that a few months ago. So he feeds it into GPT-2, and my example was something like a rose is a rose, a tulip is a tulip, a lily is a blank. And he got it to actually do that, which was a little bit impressive. And I wrote back and I said, that's impressive, but can I ask you a few questions? I said, was that just one example?
Can it do it generally? And can it do it with novel words, which was part of what I was talking about in 1998 when I first raised the example. So a dax is a dax, right? And he sheepishly wrote back about 20 minutes later. And the answer was, well, it had some problems with those. So I made some predictions 21 years ago that still hold. In the world of computer science, that's amazing, right? Because there's a thousand or a million times more memory, and computers do a million times more operations per second, spread across a cluster. And there's been advances in replacing sigmoids with other functions and so forth. There's all kinds of advances, but the fundamental architecture hasn't changed and the fundamental limit hasn't changed. And what I said then is kind of still true. Then here's a second example. I recently had a piece in Wired that's adapted from the book. And the book went to press before GPT-2 came out, but we described this children's story and all the inferences that you make in this story about a boy finding a lost wallet. And for fun, in the Wired piece, we ran it through GPT-2, via something called talktotransformer.com, and your viewers can try this experiment themselves. Go to the Wired piece that has the link and it has the story. And the system made perfectly fluent text that was totally inconsistent with the conceptual underpinnings of the story, right? This is what, again, I predicted in 1998. And for that matter, Chomsky and Miller made the same prediction in 1963. I was just updating their claim for a slightly new text. So those particular architectures that don't have any built in knowledge, they're basically just a bunch of layers doing correlational stuff. They're not gonna solve these problems. So 20 years ago, you said the emperor has no clothes. Today, the emperor still has no clothes. The lighting's better though. The lighting is better. And I think you yourself are also, I mean. And we found out some things to do with naked emperors. I mean, it's not like stuff is worthless. I mean, they're not really naked. It's more like they're in their briefs, though everybody thinks they're fully clothed. And so like, I mean, they are great at speech recognition, but the problems that I said were hard. I didn't literally say the emperor has no clothes. I said, this is a set of problems that humans are really good at. And it wasn't couched as AI. It was couched as cognitive science. But I said, if you wanna build a neural model of how humans do a certain class of things, you're gonna have to change the architecture. And I stand by those claims. So, and I think people should understand you're quite entertaining in your cynicism, but you're also very optimistic and a dreamer about the future of AI too. So you're both, it's just. There's a famous saying about people overselling technology in the short run and underselling it in the long run. And so I actually end the book, Ernie Davis and I end our book with an optimistic chapter, which kind of killed Ernie because he's even more pessimistic than I am. He describes me as a contrarian and himself as a pessimist. But I persuaded him that we should end the book with a look at what would happen if AI really did incorporate, for example, the common sense reasoning and the nativism and so forth, the things that we counseled for. And we wrote it and it's an optimistic chapter that AI suitably reconstructed so that we could trust it, which we can't now, could really be world changing.
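As a rough illustration of the kind of probe Marcus describes, here is a minimal sketch of feeding those completion patterns to a publicly released GPT-2 model through the Hugging Face transformers library. This is an assumption of mine, not the setup used in the exchange above (which went through talktotransformer.com); the model name, sampling settings, and the extra novel words ("blicket", "wug") are illustrative choices.

    # A minimal sketch (not Marcus's or DeepMind's setup) of probing a GPT-2-class
    # model with the completion patterns discussed above. Model choice and
    # sampling settings are assumptions made for illustration.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompts = [
        "A rose is a rose, a tulip is a tulip, a lily is a",   # familiar words
        "A dax is a dax, a blicket is a blicket, a wug is a",  # novel words
    ]

    for prompt in prompts:
        # Generate short continuations; whether the identity pattern carries
        # over to the made-up words is the whole point of the probe.
        outputs = generator(prompt, max_new_tokens=5,
                            num_return_sequences=3, do_sample=True)
        for candidate in outputs:
            print(repr(candidate["generated_text"]))

Run a few times, a probe like this tends to show exactly the contrast being described: fluent continuations for the familiar pattern, and much less reliable behavior on words the model has never seen.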
So on that point, if you look at the future trajectories of AI, people have worries about negative effects of AI, whether it's at the large existential scale or smaller short term scale of negative impact on society. So you write about trustworthy AI, how can we build AI systems that align with our values, that make for a better world, that we can interact with, that we can trust? The first thing we have to do is to replace deep learning with deep understanding. So you can't have alignment with a system that traffics only in correlations and doesn't understand concepts like bottles or harm. So Asimov talked about these famous laws and the first one was first do no harm. And you can quibble about the details of Asimov's laws, but we have to, if we're gonna build real robots in the real world, have something like that. That means we have to program in a notion that's at least something like harm. That means we have to have these more abstract ideas that deep learning is not particularly good at. They have to be in the mix somewhere. And you could do statistical analysis about probabilities of given harms or whatever, but you have to know what a harm is in the same way that you have to understand that a bottle isn't just a collection of pixels. And also be able to, you're implying that you need to also be able to communicate that to humans, so the AI systems would be able to prove to humans that they understand, that they know what harm means. I might run it in the reverse direction, but roughly speaking, I agree with you. So we probably need to have committees of wise people, ethicists and so forth, think about what these rules ought to be, and we shouldn't just leave it to software engineers. It shouldn't just be software engineers and it shouldn't just be people who own large mega corporations that are good at technology, ethicists and so forth should be involved. But there should be some assembly of wise people, as I was putting it, that tries to figure out what the rules ought to be. And those have to get translated into code. You can argue whether it's code or neural networks or something. They have to be translated into something that machines can work with. And that means there has to be a way of doing the translation. And right now we don't. We don't have a way. So let's say you and I were the committee and we decide that Asimov's first law is actually right. And let's say it's not just two white guys, which would be kind of unfortunate, but that we have a broad, representative sample of the world, or however we wanna do this. And the committee decides eventually, okay, Asimov's first law is actually pretty good. There are these exceptions to it. We wanna program in these exceptions. But let's start with just the first one and then we'll get to the exceptions. First one is first do no harm. Well, somebody has to now actually turn that into a computer program or a neural network or something. And one way of taking the whole book, the whole argument that I'm making, is that we just don't know how to do that yet. And we're fooling ourselves if we think that we can build trustworthy AI if we can't even specify it in any kind of way, we can't do it in Python and we can't do it in TensorFlow. We're fooling ourselves in thinking that we can make trustworthy AI if we can't translate harm into something that we can execute. And if we can't, then we should be thinking really hard about how we could ever do such a thing.
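Purely as a toy illustration of the gap Marcus is pointing at, and not a proposal, here is a hedged sketch of what a naive attempt to "translate harm into something we can execute" might look like. Every function and data structure below is hypothetical; the point is that each placeholder simply relocates the unsolved problem (what counts as a person, how to predict outcomes, what counts as harm) rather than solving it.

    # A toy sketch, not a proposal: a naive rendering of "first, do no harm".
    # Each placeholder below is obviously inadequate; that inadequacy is the point.

    def is_person(entity):
        # Placeholder: there is no principled way to decide this from correlations alone.
        return entity.get("kind") == "person"

    def predict_outcome(action, entity):
        # Placeholder: a real version would need a causal model of the world.
        return {"injured": action.get("force", 0) > 10}

    def is_harm(outcome):
        # Placeholder: collapses "harm" to one physical proxy, ignoring
        # psychological, economic, and every other kind of harm.
        return outcome["injured"]

    def violates_first_law(action, world_state):
        return any(is_person(e) and is_harm(predict_outcome(action, e))
                   for e in world_state["entities"])

    print(violates_first_law({"force": 20}, {"entities": [{"kind": "person"}]}))  # True

The discussion continues below with why this matters in practice.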
Because if we're gonna use AI in the ways that we wanna use it, to do job interviews or to do surveillance, not that I personally wanna do that or whatever. I mean, if we're gonna use AI in ways that have practical impact on people's lives or medicine, it's gotta be able to understand stuff like that. So one of the things your book highlights is that a lot of people in the deep learning community, but also the general public, politicians, just people in all general groups and walks of life have different levels of misunderstanding of AI. So when you talk about committees, what's your advice to our society? How do we grow, how do we learn about AI such that such committees could emerge where large groups of people could have a productive discourse about how to build successful AI systems? Part of the reason we wrote the book was to try to inform those committees. So part of the reason we wrote the book was to inspire a future generation of students to solve what we think are the important problems. So a lot of the book is trying to pinpoint what we think are the hard problems where we think effort would most be rewarded. And part of it is to try to train people who talk about AI, but aren't experts in the field, to understand what's realistic and what's not. One of my favorite parts in the book is the six questions you should ask anytime you read a media account. So like number one is if somebody talks about something, look for the demo. If there's no demo, don't believe it. Like the demo that you can try. If you can't try it at home, maybe it doesn't really work that well yet. So if, we don't have this example in the book, but if Sundar Pichai says we have this thing that allows it to sound like a human being in conversation, you should ask, can I try it? And you should ask how general it is. And it turns out at that time, I'm alluding to Google Duplex when it was announced, it only worked on calling hairdressers, restaurants and finding opening hours. That's not very general, that's narrow AI. And I'm not gonna ask your thoughts about Sophia, but yeah, I understand that's a really good question to ask of any kind of hyped-up idea. Sophia has very good material written for her, but she doesn't understand the things that she's saying. So a while ago you wrote a book on the science of learning, which I think is fascinating, using the case study of learning to play guitar. That's called Guitar Zero. I love guitar myself, I've been playing my whole life. So let me ask a very important question. What is your favorite song, rock song, to listen to or try to play? Well, those would be different, but I'll say that my favorite rock song to listen to is probably All Along the Watchtower, the Jimi Hendrix version. The Jimi Hendrix version. It feels magic to me. I've actually recently learned it, I love that song. I've been trying to put it on YouTube, myself singing. Singing is the scary part. If you could party with a rock star for a weekend, living or dead, who would you choose? And pick their brain, it's not necessarily about the partying. Thanks for the clarification. I guess John Lennon's such an intriguing person, and I think a troubled person, but an intriguing one. Beautiful. Well, Imagine is one of my favorite songs. Also one of my favorite songs. That's a beautiful way to end it. Gary, thank you so much for talking to me. Thanks so much for having me.
Gary Marcus: Toward a Hybrid of Deep Learning and Symbolic AI | Lex Fridman Podcast #43
The following is a conversation with David Ferrucci. He led the team that built Watson, the IBM question answering system that beat the top humans in the world at the game of Jeopardy. From spending a couple of hours with David, I saw a genuine passion, not only for abstract understanding of intelligence, but for engineering it to solve real world problems under real world deadlines and resource constraints. Where science meets engineering is where brilliant, simple ingenuity emerges. People who work at that intersection tend to have a lot of wisdom earned through failures and eventual success. David is also the founder, CEO, and chief scientist of Elemental Cognition, a company working to engineer AI systems that understand the world the way people do. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. And now, here's my conversation with David Ferrucci. Your undergrad was in biology with an eye toward medical school before you went on for the PhD in computer science. So let me ask you an easy question. What is the difference between biological systems and computer systems? In your, when you sit back, look at the stars, and think philosophically. I often wonder whether or not there is a substantive difference. I mean, I think the thing that got me into computer science and into artificial intelligence was exactly this presupposition that if we can get machines to think, or I should say this question, this philosophical question, if we can get machines to think, to understand, to process information the way we do, so if we can describe a procedure, describe a process, even if that process were the intelligence process itself, then what would be the difference? So from a philosophical standpoint, I'm not sure I'm convinced that there is. I mean, you can go in the direction of spirituality, you can go in the direction of the soul, but in terms of what we can experience from an intellectual and physical perspective, I'm not sure there is. Clearly, there are different implementations, but if you were to say, is a biological information processing system fundamentally more capable than one we might be able to build out of silicon or some other substrate, I don't know that there is. How distant do you think is the biological implementation? So fundamentally, they may have the same capabilities, but is it really a far mystery where a huge number of breakthroughs are needed to be able to understand it, or is it something that, for the most part, in the important aspects, echoes the same kind of characteristics? Yeah, that's interesting. I mean, so your question presupposes that there's this goal to recreate what we perceive as biological intelligence. I'm not sure that's the, I'm not sure that's how I would state the goal. I mean, I think that studying. What is the goal? Good, so I think there are a few goals. I think that understanding the human brain and how it works is important for us to be able to diagnose and treat issues, for us to understand our own strengths and weaknesses, both intellectual, psychological, and physical. So neuroscience and understanding the brain, from that perspective, there's a clear goal there. From the perspective of saying, I wanna mimic human intelligence, that one's a little bit more interesting. Human intelligence certainly has a lot of things we envy. It's also got a lot of problems, too.
So I think we're capable of sort of stepping back and saying, what do we want out of an intelligence? How do we wanna communicate with that intelligence? How do we want it to behave? How do we want it to perform? Now, of course, it's somewhat of an interesting argument because I'm sitting here as a human with a biological brain, and I'm critiquing the strengths and weaknesses of human intelligence and saying that we have the capacity to step back and say, gee, what is intelligence and what do we really want out of it? And that, in and of itself, suggests that human intelligence is something quite enviable, that it can introspect that way. And the flaws, you mentioned the flaws. Humans have flaws. Yeah, but I think the flaws that human intelligence has are that it's extremely prejudicial and biased in the way it draws many inferences. Do you think those are, sorry to interrupt, do you think those are features or are those bugs? Do you think the prejudice, the forgetfulness, the fear, what are the flaws? List them all. What, love? Maybe that's a flaw. Do you think those are all things that can get in the way of intelligence, or the essential components of intelligence? Well, again, if you go back and you define intelligence as being able to sort of accurately, precisely, rigorously, reason, develop answers, and justify those answers in an objective way, yeah, then human intelligence has these flaws in that it tends to be more influenced by some of the things you said. And it's largely an inductive process, meaning it takes past data, uses that to predict the future. Very advantageous in some cases, but fundamentally biased and prejudicial in other cases because it's gonna be strongly influenced by its priors, whether they're right or wrong from some objective reasoning perspective, you're gonna favor them because those are the decisions or those are the paths that succeeded in the past. And I think that mode of intelligence makes a lot of sense for when your primary goal is to act quickly and survive and make fast decisions. And I think those create problems when you wanna think more deeply and make more objective and reasoned decisions. Of course, humans are capable of doing both. They do sort of one more naturally than they do the other, but they're capable of doing both. You're saying they do the one that responds quickly more naturally. Right. Because that's the thing we kind of need to not be eaten by the predators in the world. For example, but then we've learned to reason through logic, we've developed science, we train people to do that. I think that's harder for the individual to do. I think it requires training and teaching. I think we are, the human mind certainly is capable of it, but we find it more difficult. And then there are other weaknesses, if you will, as you mentioned earlier, just memory capacity and how many chains of inference can you actually go through without like losing your way? So just focus and... So the way you think about intelligence, and we're really sort of floating in this philosophical space, but I think you're like the perfect person to talk about this, because we'll get to Jeopardy and beyond. That's like one of the most incredible accomplishments in AI, in the history of AI, but hence the philosophical discussion. So let me ask, you've kind of alluded to it, but let me ask again, what is intelligence? Underlying the discussions we'll have with Jeopardy and beyond, how do you think about intelligence?
Is it a sufficiently complicated problem being able to reason your way through solving that problem? Is that kind of how you think about what it means to be intelligent? So I think of intelligence primarily two ways. One is the ability to predict. So in other words, if I have a problem, can I predict what's gonna happen next? Whether it's to predict the answer of a question or to say, look, I'm looking at all the market dynamics and I'm gonna tell you what's gonna happen next, or you're in a room and somebody walks in and you're gonna predict what they're gonna do next or what they're gonna say next. You're in a highly dynamic environment full of uncertainty, be able to predict. The more variables, the more complex. The more possibilities, the more complex. But can I take a small amount of prior data and learn the pattern and then predict what's gonna happen next accurately and consistently? That's certainly a form of intelligence. What do you need for that, by the way? You need to have an understanding of the way the world works in order to be able to unroll it into the future, right? What do you think is needed to predict? Depends what you mean by understanding. I need to be able to find that function. This is very much what deep learning does, machine learning does, is if you give me enough prior data and you tell me what the output variable is that matters, I'm gonna sit there and be able to predict it. And if I can predict it accurately so that I can get it right more often than not, I'm smart, if I can do that with less data and less training time, I'm even smarter. If I can figure out what's even worth predicting, I'm smarter, meaning I'm figuring out what path is gonna get me toward a goal. What about picking a goal? Sorry, you left again. Well, that's interesting about picking a goal, sort of an interesting thing. I think that's where you bring in what are you preprogrammed to do? We talk about humans, and well, humans are preprogrammed to survive. So it's sort of their primary driving goal. What do they have to do to do that? And that can be very complex, right? So it's not just figuring out that you need to run away from the ferocious tiger, but we survive in a social context as an example. So understanding the subtleties of social dynamics becomes something that's important for surviving, finding a mate, reproducing, right? So we're continually challenged with complex sets of variables, complex constraints, rules, if you will, or patterns. And we learn how to find the functions and predict the things. In other words, represent those patterns efficiently and be able to predict what's gonna happen. And that's a form of intelligence. That doesn't really require anything specific other than the ability to find that function and predict that right answer. That's certainly a form of intelligence. But then when we say, well, do we understand each other? In other words, would you perceive me as intelligent beyond that ability to predict? So now I can predict, but I can't really articulate how I'm going through that process, what my underlying theory is for predicting, and I can't get you to understand what I'm doing so that you can figure out how to do this yourself if you did not have, for example, the right pattern matching machinery that I did. And now we potentially have this breakdown where, in effect, I'm intelligent, but I'm sort of an alien intelligence relative to you. You're intelligent, but nobody knows about it, or I can't. Well, I can see the output. 
So you're saying, let's sort of separate the two things. One is you explaining why you were able to predict the future, and the second is me being able to, impressing me that you're intelligent, me being able to know that you successfully predicted the future. Do you think that's? Well, it's not impressing you that I'm intelligent. In other words, you may be convinced that I'm intelligent in some form. So how, what would convince? Because of my ability to predict. So I would look at the metrics. When you can't, I'd say, wow. You're right more times than I am. You're doing something interesting. That's a form of intelligence. But then what happens is, if I say, how are you doing that? And you can't communicate with me, and you can't describe that to me, now I may label you a savant. I may say, well, you're doing something weird, and it's just not very interesting to me, because you and I can't really communicate. And so now, so this is interesting, right? Because now this is, you're in this weird place where for you to be recognized as intelligent the way I'm intelligent, then you and I sort of have to be able to communicate. And then my, we start to understand each other, and then my respect and my appreciation, my ability to relate to you starts to change. So now you're not an alien intelligence anymore. You're a human intelligence now, because you and I can communicate. And so I think when we look at animals, for example, animals can do things we can't quite comprehend, we don't quite know how they do them, but they can't really communicate with us. They can't put what they're going through in our terms. And so we think of them as sort of, well, they're these alien intelligences, and they're not really worth necessarily what we're worth. We don't treat them the same way as a result of that. But it's hard because who knows what's going on. So just a quick elaboration on that, the explaining that you're intelligent, the explaining the reasoning that went into the prediction is not some kind of mathematical proof. If we look at humans, look at political debates and discourse on Twitter, it's mostly just telling stories. So your task is, sorry, your task is not to tell an accurate depiction of how you reason, but to tell a story, real or not, that convinces me that there was a mechanism by which you. Ultimately, that's what a proof is. I mean, even a mathematical proof is that. Because ultimately, the other mathematicians have to be convinced by your proof. Otherwise, in fact, there have been. That's the metric for success, yeah. There have been several proofs out there where mathematicians would study for a long time before they were convinced that it actually proved anything, right? You never know if it proved anything until the community of mathematicians decided that it did. So I mean, but it's a real thing, right? And that's sort of the point, right? Is that ultimately, this notion of understanding us, understanding something is ultimately a social concept. In other words, I have to convince enough people that I did this in a reasonable way. I did this in a way that other people can understand and replicate and that it makes sense to them. So human intelligence is bound together in that way. We're bound up in that sense. We sort of never really get away with it until we can sort of convince others that our thinking process makes sense. Did you think the general question of intelligence is then also a social construct? 
So if we ask questions of an artificial intelligence system, is this system intelligent? The answer will ultimately be socially constructed. I think, so I think I'm making two statements. I'm saying we can try to define intelligence in this super objective way that says, here's this data. I wanna predict this type of thing, learn this function. And then if you get it right often enough, we consider you intelligent. But that's more like a savant. I think it is. It doesn't mean it's not useful. It could be incredibly useful. It could be solving a problem we can't otherwise solve and can solve it more reliably than we can. But then there's this notion of, can humans take responsibility for the decision that you're making? Can we make those decisions ourselves? Can we relate to the process that you're going through? And now you as an agent, whether you're a machine or another human, frankly, are now obliged to make me understand how it is that you're arriving at that answer and allow me, me or obviously a community or a judge of people, to decide whether or not that makes sense. And by the way, that happens with the humans as well. You're sitting down with your staff, for example, and you ask for suggestions about what to do next. And someone says, oh, I think you should buy. And I actually think you should buy this much or whatever or sell or whatever it is. Or I think you should launch the product today or tomorrow or launch this product versus that product, whatever the decision may be. And you ask why. And the person says, I just have a good feeling about it. And you're not very satisfied. Now, that person could be, you might say, well, you've been right before, but I'm gonna put the company on the line. Can you explain to me why I should believe this? Right. And that explanation may have nothing to do with the truth. You just, the ultimate. It's gotta convince the other person. It could still be wrong, still be wrong. It's gotta be convincing. But it's ultimately gotta be convincing. And that's why I'm saying it's, we're bound together, right? Our intelligences are bound together in that sense. We have to understand each other. And if, for example, you're giving me an explanation, I mean, this is a very important point, right? You're giving me an explanation, and I'm not good, and then I'm not good at reasoning well, and being objective, and following logical paths and consistent paths, and I'm not good at measuring and sort of computing probabilities across those paths. What happens is collectively, we're not gonna do well. How hard is that problem? The second one. So I think we'll talk quite a bit about the first one, on a specific objective metric, a benchmark, performing well. But being able to explain the steps, the reasoning, how hard is that problem? I think that's very hard. I mean, I think that that's, well, it's hard for humans. The thing that's hard for humans, as you know, may not necessarily be hard for computers and vice versa. So, sorry, so how hard is that problem for computers? I think it's hard for computers, and the reason why I relate it to, or say that it's also hard for humans, is because I think when we step back and we say we wanna design computers to do that, one of the things we have to recognize is we're not sure how to do it well. I'm not sure we have a recipe for that. And even if you wanted to learn it, it's not clear exactly what data we use and what judgments we use to learn that well.
And so what I mean by that is if you look at the entire enterprise of science, science is supposed to be about objective reasoning, right? So we think about, gee, who's the most intelligent person or group of people in the world? Do we think about the savants who can close their eyes and give you a number? We think about the think tanks, or the scientists or the philosophers who kind of work through the details and write the papers and come up with the thoughtful, logical proofs and use the scientific method. I think it's the latter. And my point is that how do you train someone to do that? And that's what I mean by it's hard. How do you, what's the process of training people to do that well? That's a hard process. We work, as a society, we work pretty hard to get other people to understand our thinking and to convince them of things. Now we could persuade them, obviously you talked about this, like human flaws or weaknesses, we can persuade them through emotional means. But to get them to understand and connect to and follow a logical argument is difficult. We try it, we do it, we do it as scientists, we try to do it as journalists, we try to do it as even artists in many forms, as writers, as teachers. We go through a fairly significant training process to do that. And then we could ask, well, why is that so hard? But it's hard. And for humans, it takes a lot of work. And when we step back and say, well, how do we get a machine to do that? It's a vexing question. How would you begin to try to solve that? And maybe just a quick pause, because there's an optimistic notion in the things you're describing, which is being able to explain something through reason. But if you look at algorithms that recommend things we'll look at next, whether it's Facebook, Google, advertisement based companies, their goal is to convince you to buy things based on anything. So that could be reason, because the best advertisement is showing you things that you really do need and explaining why you need it. But it could also be through emotional manipulation. The algorithm that describes why a certain decision was made, how hard is it to do it through emotional manipulation? And why is that a good or a bad thing? So you've kind of focused on reason, logic, really showing in a clear way why something is good. One, is that even a thing that us humans do? And two, how do you think of the difference between the reasoning aspect and the emotional manipulation? So you call it emotional manipulation, but more objectively it's essentially saying, there are certain features of things that seem to attract your attention. I mean, it kind of gives you more of that stuff. Manipulation is a bad word. Yeah, I mean, I'm not saying it's right or wrong. It works to get your attention and it works to get you to buy stuff. And when you think about algorithms that look at the patterns of features that you seem to be spending your money on and say, I'm gonna give you something with a similar pattern. So I'm gonna learn that function because the objective is to get you to click on it or get you to buy it or whatever it is. I don't know, I mean, it is what it is. I mean, that's what the algorithm does. You can argue whether it's good or bad. It depends what your goal is. I guess this seems to be very useful for convincing, for telling a story. For convincing humans, it's good because again, this goes back to what is the human behavior like, how does the human brain respond to things?
I think there's a more optimistic view of that too, which is that if you're searching for certain kinds of things, you've already reasoned that you need them. And these algorithms are saying, look, that's up to you to reason whether you need something or not. That's your job. You may have an unhealthy addiction to this stuff or you may have a reasoned and thoughtful explanation for why it's important to you. And the algorithms are saying, hey, that's like, whatever. Like, that's your problem. All I know is you're buying stuff like that. You're interested in stuff like that. Could be a bad reason, could be a good reason. That's up to you. I'm gonna show you more of that stuff. And I think that it's not good or bad. It's not reasoned or not reasoned. The algorithm is doing what it does, which is saying, you seem to be interested in this. I'm gonna show you more of that stuff. And I think we're seeing this not just in buying stuff, but even in social media. You're reading this kind of stuff. I'm not judging whether it's good or bad. I'm not reasoning at all. I'm just saying, I'm gonna show you other stuff with similar features. And like, and that's it. And I wash my hands of it and I say, that's all that's going on. You know, there is, people are so harsh on AI systems. So one, the bar of performance is extremely high. And yet we also ask them to, in the case of social media, to help find the better angels of our nature and help make a better society. What do you think about the role of AI there? So that, I agree with you. That's the interesting dichotomy, right? Because on one hand, we're sitting there and we're sort of doing the easy part, which is finding the patterns. We're not building, the system's not building a theory that is consumable and understandable to other humans that can be explained and justified. And so on one hand to say, oh, you know, AI is doing this. Why isn't it doing this other thing? Well, this other thing's a lot harder. And it's interesting to think about why it's harder. And because you're interpreting the data in the context of prior models. In other words, understandings of what's important in the world, what's not important. What are all the other abstract features that drive our decision making? What's sensible, what's not sensible, what's good, what's bad, what's moral, what's valuable, what isn't? Where is that stuff? No one's applying the interpretation. So when I see you clicking on a bunch of stuff and I look at these simple features, the raw features, the features that are there in the data, like what words are being used or how long the material is or other very superficial features, what colors are being used in the material. Like, I don't know why you're clicking on the stuff you're clicking on. Or if it's products, what the price is or what the categories are and stuff like that. And I just feed you more of the same stuff. That's very different than kind of getting in there and saying, what does this mean? The stuff you're reading, like why are you reading it? What assumptions are you bringing to the table? Are those assumptions sensible? Does the material make any sense? Does it lead you to thoughtful, good conclusions? Again, there's interpretation and judgment involved in that process that isn't really happening in the AI today. That's harder because you have to start getting at the meaning of the stuff, of the content. You have to get at how humans interpret the content relative to their value system and deeper thought processes.
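As a minimal sketch of the "more of the same stuff" logic Ferrucci describes, here is one way such a recommender could look: rank items by the similarity of their superficial feature vectors to what the user has already clicked on, with no notion of meaning at all. The item names, feature values, and the cosine-similarity choice are assumptions made up for illustration, not a description of any particular platform's system.

    # A minimal sketch of feature-similarity recommendation: no interpretation,
    # no judgment, just "you clicked on things that look like this, here is more".
    # Items and feature values are invented for illustration.
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    # Columns are shallow features (say, length, keyword weight, color tag, price band).
    item_features = {
        "article_A": np.array([0.2, 0.9, 0.1, 0.7]),
        "article_B": np.array([0.3, 0.8, 0.2, 0.6]),
        "article_C": np.array([0.9, 0.1, 0.8, 0.2]),
    }

    def recommend(clicked_ids, k=2):
        # Average the features of what was clicked, then rank everything else
        # by similarity to that profile.
        profile = np.mean([item_features[i] for i in clicked_ids], axis=0)
        candidates = [i for i in item_features if i not in clicked_ids]
        return sorted(candidates,
                      key=lambda i: cosine(profile, item_features[i]),
                      reverse=True)[:k]

    print(recommend(["article_A"]))  # article_B ranks above article_C

Nothing in this sketch asks what the content means or whether it leads anywhere good, which is exactly the contrast being drawn in the conversation.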
So what meaning means is not just some kind of deep, timeless, semantic thing that the statement represents, but also how a large number of people are likely to interpret it. So that's again, even meaning is a social construct. So you have to try to predict how most people would understand this kind of statement. Yeah, meaning is often relative, but meaning implies that the connections go beneath the surface of the artifacts. If I show you a painting, it's a bunch of colors on a canvas, what does it mean to you? And it may mean different things to different people because of their different experiences. It may mean something even different to the artist who painted it. As we try to get more rigorous with our communication, we try to really nail down that meaning. So we go from abstract art to precise mathematics, precise engineering drawings and things like that. We're really trying to say, I wanna narrow that space of possible interpretations because the precision of the communication ends up becoming more and more important. And so that means that I have to specify, and I think that's why this becomes really hard, because if I'm just showing you an artifact and you're looking at it superficially, whether it's a bunch of words on a page, or whether it's brushstrokes on a canvas or pixels on a photograph, you can sit there and you can interpret it lots of different ways at many, many different levels. But when I wanna align our understanding of that, I have to specify a lot more stuff that's actually not directly in the artifact. Now I have to say, well, how are you interpreting this image and that image? And what about the colors and what do they mean to you? What perspective are you bringing to the table? What are your prior experiences with those artifacts? What are your fundamental assumptions and values? What is your ability to kind of reason, to chain together logical implication as you're sitting there and saying, well, if this is the case, then I would conclude this. And if that's the case, then I would conclude that. So your reasoning processes and how they work, your prior models and what they are, your values and your assumptions, all those things now come together into the interpretation. Getting in sync on that is hard. And yet humans are able to intuit some of that without any pre. Because they have the shared experience. And we're not talking about shared, two people having shared experience. I mean, as a society. That's correct. We have the shared experience and we have similar brains. So we tend to, in other words, part of our shared experiences are shared local experience. Like we may live in the same culture, we may live in the same society and therefore we have similar educations. We have some of what we like to call prior models about the world, prior experiences. And we use that as a, think of it as a wide collection of interrelated variables and they're all bound to similar things. And so we take that as our background and we start interpreting things similarly. But as humans, we have a lot of shared experience. We do have similar brains, similar goals, similar emotions under similar circumstances. Because we're both humans. So now one of the early questions you asked, how are biological and computer information systems fundamentally different? Well, one is humans come with a lot of preprogrammed stuff. A ton of preprogrammed stuff. And they're able to communicate because they share that stuff.
Do you think that shared knowledge, if we can maybe escape the hardware question, how much is encoded in the hardware? Just the shared knowledge in the software, the history, the many centuries of wars and so on that came to today, that shared knowledge. How hard is it to encode? Do you have a hope? Can you speak to how hard is it to encode that knowledge systematically in a way that could be used by a computer? So I think it is possible to learn, to program a machine to acquire that knowledge with a similar foundation. In other words, a similar interpretive foundation for processing that knowledge. What do you mean by that? So in other words, we view the world in a particular way. So in other words, we have a, if you will, as humans, we have a framework for interpreting the world around us. So we have multiple frameworks for interpreting the world around us. But if you're interpreting, for example, socio political interactions, you're thinking about where there's people, there's collections and groups of people, they have goals, goals largely built around survival and quality of life. There are fundamental economics around scarcity of resources. And when humans come and start interpreting a situation like that, because you brought up like historical events, they start interpreting situations like that. They apply a lot of this fundamental framework for interpreting that. Well, who are the people? What were their goals? What resources did they have? How much power or influence did they have over the other? Like this fundamental substrate, if you will, for interpreting and reasoning about that. So I think it is possible to imbue a computer with that stuff that humans like take for granted when they go and sit down and try to interpret things. And then with that foundation, they acquire, they start acquiring the details, the specifics in a given situation, and are then able to interpret it with regard to that framework. And then given that interpretation, they can do what? They can predict. But not only can they predict, they can predict now with an explanation that can be given in those terms, in the terms of that underlying framework that most humans share. Now you could find humans that come and interpret events very differently than other humans because they're like using a different framework. The movie Matrix comes to mind where they decided humans were really just batteries, and that's how they interpreted the value of humans as a source of electrical energy. So, but I think that for the most part, we have a way of interpreting the events or the social events around us because we have this shared framework. It comes from, again, the fact that we're similar beings that have similar goals, similar emotions, and we can make sense out of these. These frameworks make sense to us. So how much knowledge is there, do you think? So you said it's possible. Well, there's a tremendous amount of detailed knowledge in the world. You could imagine an effectively infinite number of unique situations and unique configurations of these things. But the knowledge that you need, what I refer to as like the frameworks that you need for interpreting them, I don't think that's unbounded. I think those are finite. You think the frameworks are more important than the bulk of the knowledge? So it's like framing. Yeah, because what the frameworks do is they give you now the ability to interpret and reason over the specifics in ways that other humans would understand.
What about the specifics? You know, you acquire the specifics by reading and by talking to other people. So I'm mostly actually just even, if we can focus on even the beginning, the common sense stuff, the stuff that doesn't even require reading, or it almost requires playing around with the world or something, just being able to sort of manipulate objects, drink water and so on, all of that. Every time we try to do that kind of thing in robotics or AI, it seems to be like an onion. You seem to realize how much knowledge is really required to perform even some of these basic tasks. Do you have that sense as well? And if so, how do we get all those details? Are they written down somewhere? Do they have to be learned through experience? So I think when, like, if you're talking about sort of the physics, the basic physics around us, for example, acquiring information about, acquiring how that works. Yeah, I mean, I think there's a combination of things going, I think there's a combination of things going on. I think there is like fundamental pattern matching, like what we were talking about before, where you see enough examples, enough data about something and you start assuming that. And with similar input, I'm gonna predict similar outputs. You can't necessarily explain it at all. You may learn very quickly that when you let something go, it falls to the ground. But you can't necessarily explain that. But that's such a deep idea, that if you let something go, like the idea of gravity. I mean, people are letting things go and counting on them falling well before they understood gravity. But that seems to be, that's exactly what I mean, is before you take a physics class or study anything about Newton, just the idea that stuff falls to the ground and then you'd be able to generalize that all kinds of stuff falls to the ground. It just seems like a non, without encoding it, like hard coding it in, it seems like a difficult thing to pick up. It seems like you have to have a lot of different knowledge to be able to integrate that into the framework, sort of into everything else. So both know that stuff falls to the ground and start to reason about sociopolitical discourse. So both, like the very basic and the high level reasoning decision making. I guess my question is, how hard is this problem? And sorry to linger on it because again, and we'll get to it for sure, as what Watson with Jeopardy did is take on a problem that's much more constrained but has the same hugeness of scale, at least from the outsider's perspective. So I'm asking the general life question of to be able to be an intelligent being and reason in the world about both gravity and politics, how hard is that problem? So I think it's solvable. Okay, now beautiful. So what about time travel? Okay, I'm just saying the same answer. Not as convinced. Not as convinced yet, okay. No, I think it is solvable. I mean, I think that it's a learn, first of all, it's about getting machines to learn. Learning is fundamental. And I think we're already in a place that we understand, for example, how machines can learn in various ways. Right now, our learning stuff is sort of primitive in that we haven't sort of taught machines to learn the frameworks. We don't communicate our frameworks because of how shared they are, in some cases we do, but we don't annotate, if you will, all the data in the world with the frameworks that are inherent or underlying our understanding. Instead, we just operate with the data. 
So if we wanna be able to reason over the data in similar terms in the common frameworks, we need to be able to teach the computer, or at least we need to program the computer to acquire, to have access to and acquire, learn the frameworks as well and connect the frameworks to the data. I think this can be done. I think we can start, I think machine learning, for example, with enough examples, can start to learn these basic dynamics. Will they relate them necessarily to gravity? Not unless they can also acquire those theories as well and take the experiential knowledge and connect it back to the theoretical knowledge. I think if we think in terms of this class of architectures that are designed to both learn the specifics, find the patterns, but also acquire the frameworks and connect the data to the frameworks. If we think in terms of robust architectures like this, I think there is a path toward getting there. In terms of encoding architectures like that, do you think systems that are able to do this will look like neural networks, or, if you look back to the 80s and 90s with the expert systems, more like graphs, systems that are based in logic, able to contain a large amount of knowledge, where the challenge was the automated acquisition of that knowledge. I guess the question is when you collect both the frameworks and the knowledge from the data, what do you think that thing will look like? Yeah, so I mean, I think asking the question of whether they'll look like neural networks is a bit of a red herring. I mean, I think that they will certainly do inductive or pattern match based reasoning. And I've already experimented with architectures that combine both, that use machine learning and neural networks to learn certain classes of knowledge, in other words, to find repeated patterns in order for it to make good inductive guesses, but then ultimately to try to take those learnings and marry them, in other words, connect them to frameworks so that it can then reason over that in terms other humans understand. So for example, at Elemental Cognition, we do both. We have architectures that do both, both those things, but also have a learning method for acquiring the frameworks themselves and saying, look, ultimately, I need to take this data. I need to interpret it in the form of these frameworks so they can reason over it. So there is a fundamental knowledge representation, like what you're saying, like these graphs of logic, if you will. There are also neural networks that acquire a certain class of information. They then align them with these frameworks, but there's also a mechanism to acquire the frameworks themselves. Yeah, so it seems like the idea of frameworks requires some kind of collaboration with humans. Absolutely. So do you think of that collaboration as direct? Well, and let's be clear. Only for the express purpose that you're designing, you're designing an intelligence that can ultimately communicate with humans in the terms of frameworks that help them understand things. So to be really clear, you can independently create a machine learning system, an intelligence that I might call an alien intelligence that does a better job than you with some things, but can't explain the framework to you. That doesn't mean it isn't better than you at the thing. It might be that you cannot comprehend the framework that it may have created for itself, that is inexplicable to you. That's a reality. But you're more interested in a case where you can. I am, yeah.
My sort of approach to AI is because I've set the goal for myself. I want machines to be able to ultimately communicate, understanding with humans. I want them to be able to acquire and communicate, acquire knowledge from humans and communicate knowledge to humans. They should be using what inductive machine learning techniques are good at, which is to observe patterns of data, whether it be in language or whether it be in images or videos or whatever, to acquire these patterns, to induce the generalizations from those patterns, but then ultimately to work with humans to connect them to frameworks, interpretations, if you will, that ultimately make sense to humans. Of course, the machine is gonna have the strength that it has, the richer, longer memory, but it has the more rigorous reasoning abilities, the deeper reasoning abilities, so it'll be an interesting complementary relationship between the human and the machine. Do you think that ultimately needs explainability like a machine? So if we look, we study, for example, Tesla autopilot a lot, where humans, I don't know if you've driven the vehicle, are aware of what it is. So you're basically the human and machine are working together there, and the human is responsible for their own life to monitor the system, and the system fails every few miles, and so there's hundreds, there's millions of those failures a day, and so that's like a moment of interaction. Do you see? Yeah, that's exactly right. That's a moment of interaction where the machine has learned some stuff, it has a failure, somehow the failure's communicated, the human is now filling in the mistake, if you will, or maybe correcting or doing something that is more successful in that case, the computer takes that learning. So I believe that the collaboration between human and machine, I mean, that's sort of a primitive example and sort of a more, another example is where the machine's literally talking to you and saying, look, I'm reading this thing. I know that the next word might be this or that, but I don't really understand why. I have my guess. Can you help me understand the framework that supports this and then can kind of acquire that, take that and reason about it and reuse it the next time it's reading to try to understand something, not unlike a human student might do. I mean, I remember when my daughter was in first grade and she had a reading assignment about electricity and somewhere in the text it says, and electricity is produced by water flowing over turbines or something like that. And then there's a question that says, well, how is electricity created? And so my daughter comes to me and says, I mean, I could, you know, created and produced are kind of synonyms in this case. So I can go back to the text and I can copy by water flowing over turbines, but I have no idea what that means. Like I don't know how to interpret water flowing over turbines and what electricity even is. I mean, I can get the answer right by matching the text, but I don't have any framework for understanding what this means at all. And framework really is, I mean, it's a set of, not to be mathematical, but axioms of ideas that you bring to the table and interpreting stuff and then you build those up somehow. You build them up with the expectation that there's a shared understanding of what they are. Sure, yeah, it's the social, that us humans, do you have a sense that humans on earth in general share a set of, like how many frameworks are there? 
I mean, it depends on how you bound them, right? So in other words, how big or small, like their individual scope, but there's lots and there are new ones. I think the way I think about it is kind of in a layer. I think that the architectures are being layered in that. There's a small set of primitives. They allow you the foundation to build frameworks. And then there may be many frameworks, but you have the ability to acquire them. And then you have the ability to reuse them. I mean, one of the most compelling ways of thinking about this is a reasoning by analogy, where I can say, oh, wow, I've learned something very similar. I never heard of this game soccer, but if it's like basketball in the sense that the goal's like the hoop and I have to get the ball in the hoop and I have guards and I have this and I have that, like where are the similarities and where are the differences? And I have a foundation now for interpreting this new information. And then the different groups, like the millennials will have a framework. And then, you know, the Democrats and Republicans. Millennials, nobody wants that framework. Well, I mean, I think, right, I mean, you're talking about political and social ways of interpreting the world around them. And I think these frameworks are still largely, largely similar. I think they differ in maybe what some fundamental assumptions and values are. Now, from a reasoning perspective, like the ability to process the framework, it might not be that different. The implications of different fundamental values or fundamental assumptions in those frameworks may reach very different conclusions. So from a social perspective, the conclusions may be very different. From an intelligence perspective, I just followed where my assumptions took me. Yeah, the process itself will look similar. But that's a fascinating idea that frameworks really help carve how a statement will be interpreted. I mean, having a Democrat and a Republican framework and then read the exact same statement and the conclusions that you derive will be totally different from an AI perspective is fascinating. What we would want out of the AI is to be able to tell you that this perspective, one perspective, one set of assumptions is gonna lead you here, another set of assumptions is gonna lead you there. And in fact, to help people reason and say, oh, I see where our differences lie. I have this fundamental belief about that. I have this fundamental belief about that. Yeah, that's quite brilliant. From my perspective, NLP, there's this idea that there's one way to really understand a statement, but that probably isn't. There's probably an infinite number of ways to understand a statement, depending on the question. There's lots of different interpretations, and the broader the content, the richer it is. And so you and I can have very different experiences with the same text, obviously. And if we're committed to understanding each other, we start, and that's the other important point, if we're committed to understanding each other, we start decomposing and breaking down our interpretation to its more and more primitive components until we get to that point where we say, oh, I see why we disagree. And we try to understand how fundamental that disagreement really is. But that requires a commitment to breaking down that interpretation in terms of that framework in a logical way. 
Otherwise, and this is why I think of AI as really complementing and helping human intelligence to overcome some of its biases and its predisposition to be persuaded by more shallow reasoning, in the sense that we get over this idea, well, I'm right because I'm Republican, or I'm right because I'm Democratic, and someone labeled this as a Democratic point of view, or it has the following keywords in it. And if the machine can help us break that argument down and say, wait a second, what do you really think about this, right? So essentially holding us accountable to doing more critical thinking. We're gonna have to sit and think about this. I love that. I think that's a really empowering use of AI, because the public discourse is completely disintegrating currently as we learn how to do it on social media. That's right. So one of the greatest accomplishments in the history of AI is Watson competing in the game of Jeopardy against humans. And you were a lead in that, a critical part of that. Let's start at the very basics. What is the game of Jeopardy? The game for us humans, human versus human. Right, so it's to take a question and answer it. The game of Jeopardy. It's just the opposite. Actually, well, no, but it's not, right? It's really not. It's really to get a question and answer, but it's what we call a factoid question. So this notion of like, it really relates to some fact, and you could argue about whether the facts are true or not, but in fact, most people wouldn't. Jeopardy kind of counts on the idea that these statements have factual answers. And the idea is to, first of all, determine whether or not you know the answer, which is sort of an interesting twist. So first of all, understand the question. You have to understand the question. What is it asking? And that's a good point because the questions are not asked directly, right? They're all like, the way the questions are asked is nonlinear. It's like, it's a little bit witty. It's a little bit playful sometimes. It's a little bit tricky. Yeah, they're asked in numerous witty, tricky ways. Exactly what they're asking is not obvious. It takes inexperienced humans a while to go, what is it even asking? And it's sort of an interesting realization that you have when somebody says, oh, Jeopardy is a question answering show, and you're like, oh, I know a lot. And then you read it and you're still trying to process the question and the champions have answered and moved on. They're three questions ahead by the time you figured out what the question even meant. So there's definitely an ability there to just parse out what the question even is. So that was certainly challenging. It's interesting historically though, if you look back at the Jeopardy games much earlier, you know, early games. Like 60s, 70s, that kind of thing. The questions were much more direct. They weren't quite like that. The way they asked them sort of got more and more interesting and subtle and nuanced and humorous and witty over time, which really required the human to kind of make the right connections in figuring out what the question was even asking. So yeah, you have to figure out what the question's even asking. Then you have to determine whether or not you think you know the answer. And because you have to buzz in really quickly, you sort of have to make that determination as quickly as you possibly can. Otherwise you lose the opportunity to buzz in. You mean...
Even before you really know if you know the answer. I think a lot of humans will assume, they'll process it very superficially. In other words, what's the topic? What are some keywords? And just say, do I know this area or not before they actually know the answer? Then they'll buzz in and think about it. So it's interesting what humans do. Now, some people who know all things, like Ken Jennings or something, or the more recent big Jeopardy player, I mean, they'll just buzz in. They'll just assume they know all of Jeopardy and they'll just buzz in. Watson, interestingly, didn't even come close to knowing all of Jeopardy, right? Watson really... Even at the peak, even at its best. Yeah, so for example, I mean, we had this thing called recall, which is like, of all the Jeopardy questions, how many could we even find the right answer for anywhere? We had a big body of knowledge, something in the order of several terabytes. I mean, from a web scale, it was actually very small, but from like a book scale, we're talking about millions of books, right? So the equivalent of millions of books, encyclopedias, dictionaries, books, it's still a ton of information. And I think for only 85% of the questions was the answer anywhere to be found. So you're already down at that level just to get started, right? So, and so it was important to get a very quick sense of, do you think you know the right answer to this question? So we had to compute that confidence as quickly as we possibly could. So in effect, we had to answer it, and at least spend some time essentially answering it, and then judging the confidence that our answer was right, and then deciding whether or not we were confident enough to buzz in. And that would depend on what else was going on in the game. Because there was a risk. So like if you're really in a situation where I have to take a guess, I have very little to lose, then you'll buzz in with less confidence. So that accounted for the financial standings of the different competitors. Correct. How much of the game was left? How much time was left? Where you were in the standings, things like that. How many hundreds of milliseconds are we talking about here? Do you have a sense of what it is? We targeted, yeah, we targeted. So, I mean, we targeted answering in under three seconds and... Buzzing in? So the decision to buzz in and then the actual answering, are those two different stages? Yeah, they were two different things. In fact, we had multiple stages, whereas like we would say, let's estimate our confidence, which was sort of a shallow answering process. And then ultimately decide to buzz in, and then we may take another second or something to kind of go in there and do that. But by and large, we were saying like, we can't play the game. We can't even compete if we can't, on average, answer these questions in around three seconds or less. So you stepped in. So there's these three humans playing a game and you stepped in with the idea that IBM Watson would be one of, replace one of the humans and compete against two. Can you tell the story of Watson taking on this game? Sure. It seems exceptionally difficult. Yeah, so the story was that it was coming up, I think, to the 10 year anniversary of Big Blue, not Big Blue, Deep Blue. IBM wanted to do sort of another kind of really fun challenge, public challenge, that can bring attention to IBM research and the kind of the cool stuff that we were doing. I had been working in AI at IBM for some time.
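As a rough illustration of the buzz-in decision just described, where a quickly computed confidence is weighed against the game situation, here is a minimal sketch in Python. Everything in it is invented for illustration (the function names, the formula, the numbers); it is not Watson's actual strategy, only the general shape of a risk-adjusted threshold.

```python
# Illustrative only: a risk-adjusted buzz-in decision, not Watson's actual policy.
def buzz_threshold(my_score: int, leader_score: int, clues_remaining: int,
                   base_threshold: float = 0.65) -> float:
    """Minimum confidence required to buzz in; the numbers are made up."""
    deficit = max(leader_score - my_score, 0)
    # With few clues left and a large deficit, a guess costs little,
    # so the bar drops; comfortably ahead, it stays high.
    desperation = min(deficit / max(clues_remaining * 1000, 1), 1.0)
    return base_threshold * (1.0 - 0.4 * desperation)

def decide_to_buzz(confidence: float, my_score: int, leader_score: int,
                   clues_remaining: int) -> bool:
    return confidence >= buzz_threshold(my_score, leader_score, clues_remaining)

# Trailing badly late in the game, a shakier answer becomes worth risking.
print(decide_to_buzz(0.55, my_score=4000, leader_score=12000, clues_remaining=5))
```

The only point of the sketch is the one made above: the confidence estimate and the game state are separate inputs, and the same confidence can justify buzzing in one situation and staying silent in another.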
I had a team doing what's called open domain factoid question answering, which is, we're not gonna tell you what the questions are. We're not even gonna tell you what they're about. Can you go off and get accurate answers to these questions? And it was an area of AI research that I was involved in. And so it was a very specific passion of mine. Language understanding had always been a passion of mine. One sort of narrow slice on whether or not you could do anything with language was this notion of open domain and meaning I could ask anything about anything. Factoid meaning it essentially had an answer and being able to do that accurately and quickly. So that was a research area that my team had already been in. And so completely independently, several IBM executives, like what are we gonna do? What's the next cool thing to do? And Ken Jennings was on his winning streak. This was like, whatever it was, 2004, I think, was on his winning streak. And someone thought, hey, that would be really cool if the computer can play Jeopardy. And so this was like in 2004, they were shopping this thing around and everyone was telling the research execs, no way. Like, this is crazy. And we had some pretty senior people in the field and they're saying, no, this is crazy. And it would come across my desk and I was like, but that's kind of what I'm really interested in doing. But there was such this prevailing sense of this is nuts. We're not gonna risk IBM's reputation on this. We're just not doing it. And this happened in 2004, it happened in 2005. At the end of 2006, it was coming around again. And I was coming off of a, I was doing the open domain question answering stuff, but I was coming off a couple other projects. I had a lot more time to put into this. And I argued that it could be done. And I argue it would be crazy not to do this. Can I, you can be honest at this point. So even though you argued for it, what's the confidence that you had yourself privately that this could be done? Was, we just told the story, how you tell stories to convince others. How confident were you? What was your estimation of the problem at that time? So I thought it was possible. And a lot of people thought it was impossible. I thought it was possible. The reason why I thought it was possible was because I did some brief experimentation. I knew a lot about how we were approaching open domain factoid question answering. I've been doing it for some years. I looked at the Jeopardy stuff. I said, this is gonna be hard for a lot of the points that we mentioned earlier. Hard to interpret the question. Hard to do it quickly enough. Hard to compute an accurate confidence. None of this stuff had been done well enough before. But a lot of the technologies we're building were the kinds of technologies that should work. But more to the point, what was driving me was, I was in IBM research. I was a senior leader in IBM research. And this is the kind of stuff we were supposed to do. In other words, we were basically supposed to. This is the moonshot. This is the. We were supposed to take things and say, this is an active research area. It's our obligation to kind of, if we have the opportunity, to push it to the limits. And if it doesn't work, to understand more deeply why we can't do it. And so I was very committed to that notion saying, folks, this is what we do. It's crazy not to do this. This is an active research area. We've been in this for years. Why wouldn't we take this grand challenge and push it as hard as we can? 
At the very least, we'd be able to come out and say, here's why this problem is way hard. Here's what we tried and here's how we failed. So I was very driven as a scientist from that perspective. And then I also argued, based on the feasibility study we did, why I thought it was hard but possible. And I showed examples of where it succeeded, where it failed, why it failed, and sort of a high level architecture approach for why we should do it. But for the most part, at that point, the execs really were just looking for someone crazy enough to say yes, because for several years at that point, everyone had said, no, I'm not willing to risk my reputation and my career on this thing. Clearly you did not have such fears. Okay. I did not. So you dived right in. And yet, from what I understand, it was performing very poorly in the beginning. So what were the initial approaches and why did they fail? Well, there were lots of hard aspects to it. I mean, one of the reasons why prior approaches that we had worked on in the past failed was because the questions were difficult to interpret. Like, what are you even asking for, right? Very often, like if the question was very direct, like what city or what person - even then it could be tricky - but when it would name it very clearly, you would know that. And if there were just a small set of them - in other words, we're gonna ask about these five types, like, it's gonna be an answer, and the answer will be a city in this state or a city in this country, the answer will be a person of this type, right? Like an actor or whatever it is. But it turns out that in Jeopardy, there were like tens of thousands of these things. And it was a very, very long tail, meaning that it just went on and on. And so even if you focused on trying to encode the types at the very top, like there's five that were the most, let's say five of the most frequent, you still cover a very small percentage of the data. So you couldn't take that approach of saying, I'm just going to try to collect facts about these five or 10 types or 20 types or 50 types or whatever. So that was like one of the first things - what do you do about that? And so we came up with an approach toward that. And the approach looked promising, and we continued to improve our ability to handle that problem throughout the project. The other issue was that right from the outset, I said, we're not going to - well, I committed to doing this in three to five years. So we did it in four, so I got lucky. But one of the things about putting that stake in the ground was, I knew how hard the language understanding problem was. I said, we're not going to actually understand language to solve this problem. We are not going to interpret the question and the domain of knowledge that the question refers to and reason over that to answer these questions. Obviously we're not going to be doing that. At the same time, simple search wasn't good enough to confidently answer with a single correct answer. First of all, that's like brilliant. That's such a great mix of innovation and practical engineering, in those three, four years. So you're not trying to solve the general NLU problem. You're saying, let's solve this in any way possible. Oh, yeah. No, I was committed to saying, look, we're solving the open domain question answering problem. We're using Jeopardy as a driver for that. That's a big benchmark. A good enough, big benchmark, exactly. And now, how do we do it?
We could just like, whatever, like just figure out what works, because I want to be able to go back to the academic science community and say, here's what we tried. Here's what worked. Here's what didn't work. Great. I don't want to go in and say, oh, I only have one technology. I have a hammer. I'm only going to use this. I'm going to do whatever it takes. I'm like, I'm going to think out of the box and do whatever it takes. And also, there was another thing I believed. I believed that the fundamental NLP technologies and machine learning technologies would be adequate. And this was an issue of how do we enhance them? How do we integrate them? How do we advance them? So I had one researcher who came to me, who had been working on question answering with me for a very long time, who had said, we're going to need Maxwell's equations for question answering. And I said, if we need some fundamental formula that breaks new ground in how we understand language, we're screwed. We're not going to get there from here. Like, I am not counting - my assumption is I'm not counting on some brand new invention. What I'm counting on is the ability to take everything that's been done before, figure out an architecture for how to integrate it well, and then see where it breaks and make the necessary advances we need to make until this thing works. Push it hard to see where it breaks and then patch it up. I mean, that's how people change the world. I mean, that's the Elon Musk approach with the rockets, SpaceX, that's the Henry Ford approach and so on. I love it. And I happen to be, in this case, I happen to be right, but like we didn't know. But you kind of have to put a stake in the ground for how you're going to run the project. So yeah, and backtracking to search. So if you were to do, what's the brute force solution? What would you search over? So you have a question, how would you search the possible space of answers? Look, web search has come a long way even since then. But at the time, first of all, I mean, there were a couple of other constraints around the problem, which is interesting. So you couldn't go out to the web. You couldn't search the internet. In other words, the AI experiment was, we want a self contained device. If the device is as big as a room, fine, it's as big as a room, but we want a self contained device. You're not going out to the internet. You don't have a lifeline to anything. So it had to kind of fit in a shoe box, if you will, or at least the size of a few refrigerators, whatever it might be. But also, you couldn't just get out there - you couldn't go off network, right. So there was that limitation. But then we did try it - the basic thing was, go do a web search. Problem was, even when we went and did a web search, I don't remember exactly the numbers, but somewhere in the order of 65% of the time, the answer would be somewhere, you know, in the top 10 or 20 documents. So first of all, that's not even good enough to play Jeopardy. In other words, even if you could perfectly pull the answer out of the top 20 documents, top 10 documents, whatever it was - which we didn't know how to do - but even if you could do that, you'd have to know it was right, you'd have to have enough confidence in it, right? So you'd have to pull out the right answer, you'd have to have confidence it was the right answer, and then you'd have to do that fast enough to now go buzz in, and you'd still only get 65% of them right, which doesn't even put you in the winner's circle.
Winner's circle, you have to be up over 70%, and you have to do it really quickly. But now the problem is, well, even if I had the answer somewhere in the top 10 documents, how do I figure out where in the top 10 documents that answer is, and how do I compute a confidence for all the possible candidates? So it's not like I go in knowing the right answer and I have to pick it. I don't know the right answer. I have a bunch of documents, somewhere in there is the right answer. How do I, as a machine, go out and figure out which one's right? And then how do I score it? So, and now how do I deal with the fact that I can't actually go out to the web? First of all, if you pause on that, just think about it. If you could go to the web, do you think that problem is solvable? Just thinking even beyond Jeopardy, do you think the problem of reading text to find where the answer is? Well, we solved that, in some definition of solved, given the Jeopardy challenge. How did you do it for Jeopardy? So how do you take a body of work in a particular topic and extract the key pieces of information? So now forgetting about the huge volumes that are on the web, right? So now we have to figure out - we did a lot of source research. In other words, what body of knowledge is gonna be small enough, but broad enough, to answer Jeopardy? And we ultimately did find the body of knowledge that did that. I mean, it included Wikipedia and a bunch of other stuff. So like encyclopedia type of stuff. I don't know if you can speak to it. Encyclopedias, dictionaries, different types of semantic resources, like WordNet and other types of semantic resources like that, as well as like some web crawls. In other words, we went out and took that content and then expanded it based on statistically producing seeds, using those seeds for other searches, and then expanding that. So using these expansion techniques, we went out and found enough content and we're like, okay, this is good. And even up until the end, we had a thread of research that was always trying to figure out what content could we efficiently include. I mean, there's a lot of popular culture, like, what is the Church Lady? I think that was one of the clues. Where do you get that? I guess that's probably in an encyclopedia, so. So that was an encyclopedia, but then we would take that stuff and we would go out and we would expand. In other words, we'd go find other content that wasn't in the core resources and expand it. The amount of content, we grew it by an order of magnitude, but still, again, from a web scale perspective, this is a very small amount of content. It's very select. We then took all that content, we preanalyzed the crap out of it, meaning we parsed it, broke it down into all those individual words, and then we did syntactic and semantic parses on it, had computer algorithms that annotated it, and we indexed that in a very rich and very fast index. So we have a relatively huge amount of, let's say the equivalent of, for the sake of argument, two to five million books. We've now analyzed all that, blowing up its size even more because now we have all this metadata, and then we richly indexed all of that - and by the way, in a giant in memory cache. So Watson did not go to disk. So the infrastructure component there, if you could just speak to it - how tough it was. I mean, this is maybe 2008, 2009, that's kind of a long time ago. How hard is it to use multiple machines?
How hard is the infrastructure component, the hardware component? So we used IBM hardware. We had something like, I forgot exactly, but close to 3,000 cores completely connected. So you had a switch where every CPU was connected to every other CPU. And they were sharing memory in some kind of way. Large shared memory, right? And all this data was preanalyzed and put into a very fast indexing structure that was all in memory. And then we took that question, we would analyze the question. So all the content was now preanalyzed. So if I went and tried to find a piece of content, it would come back with all the metadata that we had precomputed. How do you shove that question in? How do you connect the big knowledge base, with the metadata that's indexed, to the simple, little, witty, confusing question? Right. So therein lies the Watson architecture, right? So we would take the question, we would analyze the question, which means that we would parse it and interpret it a bunch of different ways. We'd try to figure out, what is it asking about? So we had multiple strategies to kind of determine what it was asking for. That might be represented as a simple string, a character string, or something we would connect back to different semantic types that were from existing resources. So anyway, the bottom line is we would do a bunch of analysis on the question. And question analysis had to finish, and had to finish fast. So we do the question analysis because then, from the question analysis, we would now produce searches. So we had built, using open source search engines that we modified, a number of different search engines we would use that had different characteristics. We went in there and engineered and modified those search engines, ultimately to now take our question analysis, produce multiple queries based on different interpretations of the question, and fire out a whole bunch of searches in parallel. And they would come back with passages. So these are passage search algorithms. They would come back with passages. And so now let's say you had a thousand passages. Now for each passage, you parallelize again. So you went out and you parallelized the search. Each search would now come back with a whole bunch of passages. Maybe you had a total of a thousand or 5,000, whatever, passages. For each passage now, you'd go and figure out whether or not there was a candidate - we'd call it a candidate answer - in there. So you had a whole bunch of other algorithms that would find candidate answers, possible answers to the question. And so you had candidate answer generators, a whole bunch of those. So for every one of these components, the team was constantly doing research, coming up with better ways to generate search queries from the questions, better ways to analyze the question, better ways to generate candidates. And speed. So better means accuracy and speed? Correct - well, speed and accuracy, for the most part, were separated. We handled those in separate ways. Like, I focused purely on accuracy, end to end accuracy. Are we ultimately getting more questions right and producing more accurate confidences? And then a whole other team was constantly analyzing the workflow to find the bottlenecks, and then figuring out how to both parallelize and drive up the algorithm speed. But anyway, so now think of it like, you have this big fan out now, right? Because you had multiple queries, now you have thousands of candidate answers.
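As a minimal sketch of the fan-out just described (question analysis producing several query interpretations, parallel passage searches against a pre-analyzed corpus, and candidate answer generation over the returned passages), here is a toy Python version. All the names, the toy index, and the crude heuristics are invented stand-ins, not the DeepQA implementation.

```python
# Toy sketch of the fan-out described above (invented names, not DeepQA).
from concurrent.futures import ThreadPoolExecutor
from typing import Dict, List

def analyze_question(question: str) -> List[str]:
    # Stand-in for question analysis: pretend we produce a few query interpretations.
    return [question, question.lower(), " ".join(question.split()[:6])]

def passage_search(query: str, index: Dict[str, List[str]]) -> List[str]:
    # Stand-in for the modified search engines over the pre-analyzed, in-memory
    # corpus: return any passage sharing a term with the query.
    terms = set(query.lower().split())
    return [p for p in index.get("passages", []) if terms & set(p.lower().split())]

def generate_candidates(passage: str) -> List[str]:
    # Stand-in for candidate answer generators: here, just capitalized tokens.
    return [tok for tok in passage.split() if tok[:1].isupper()]

def answer_pipeline(question: str, index: Dict[str, List[str]]) -> List[str]:
    queries = analyze_question(question)                            # question analysis
    with ThreadPoolExecutor() as pool:                              # parallel searches
        passage_lists = list(pool.map(lambda q: passage_search(q, index), queries))
    passages = {p for plist in passage_lists for p in plist}
    with ThreadPoolExecutor() as pool:                              # parallel candidate generation
        candidate_lists = list(pool.map(generate_candidates, passages))
    return sorted({c for clist in candidate_lists for c in clist})

toy_index = {"passages": ["Emily Dickinson wrote poems in Amherst",
                          "Walt Whitman wrote Leaves of Grass"]}
print(answer_pipeline("This poet wrote in Amherst", toy_index))
```

The fan-out structure, not the toy heuristics, is the point: each stage multiplies the number of items in flight, which is what made the parallel hardware and the scoring stage that follows necessary.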
For each candidate answer, you're gonna score it. So you're gonna use all the data that was built up. You're gonna use the question analysis, you're gonna use how the query was generated, you're gonna use the passage itself, and you're gonna use the candidate answer that was generated, and you're gonna score that. So now we have a group of researchers coming up with scorers. There are hundreds of different scorers. So now you're getting a fan out again, from however many candidate answers you have to all the different scores. So if you have 200 different scores and you have a thousand candidates, now you have 200,000 scores. And so now you gotta figure out, how do I now rank these answers based on the scores that came back? And I wanna rank them based on the likelihood that they're a correct answer to the question. So every scorer was its own research project. What do you mean by scorer? So is that the annotation process of basically a human being saying that this answer has a quality of...? Think of it, if you wanna think about what a human would be doing: a human would be looking at a possible answer, they'd be reading the, you know, Emily Dickinson, they'd be reading the passage in which that occurred, they'd be looking at the question, and they'd be making a decision of how likely it is that Emily Dickinson, given this evidence in this passage, is the right answer to that question. Got it. So that's the annotation task, the annotation process. That's the scoring task. But scoring implies zero to one, kind of continuous? That's right. You give it a zero to one score. So it's not binary? No, you give it a score - yeah, exactly, a zero to one score. But humans give different scores, so you have to somehow normalize and all that kind of stuff, to deal with all that complexity. It depends on what your strategy is. We did both. It could be relative, too. It could be... We actually looked at the raw scores as well as standardized scores. But humans are not involved in this. Humans are not involved? Sorry, so I'm misunderstanding the process here. This is passages - where is the ground truth coming from? Ground truth is only the answers to the questions. So it's end to end. It's end to end. So I was always driving end to end performance. It's a very interesting, a very interesting engineering approach, and ultimately scientific research approach, always driving end to end. Now, that's not to say we wouldn't make hypotheses that individual component performance was related in some way to end to end performance. Of course we would, because people would have to build individual components. But ultimately, to get your component integrated into the system, you have to show impact on end to end performance, question answering performance. So there are many very smart people working on this, and they're basically trying to sell their ideas as a component that should be part of the system. That's right. And they would do research on their component, and they would say things like, I'm gonna improve this as a candidate generator, or I'm gonna improve this as a question scorer, or as a passage scorer, I'm gonna improve this, or as a parser, and I can improve it by 2% on its component metric, like a better parse, or a better candidate, or a better type estimation, whatever it is. And then I would say, I need to understand how the improvement on that component metric is gonna affect the end to end performance.
If you can't estimate that, and can't do experiments to demonstrate that, it doesn't get in. That's like the best run AI project I've ever heard of. That's awesome. Okay, what breakthrough would you say - like, I'm sure there's a lot of day to day breakthroughs, but was there a breakthrough that really helped improve performance? Like where people began to believe? Or is it just a gradual process? Well, I think it was a gradual process, but one of the things that I think gave people confidence that we could get there was that we followed this procedure: different ideas, build different components, plug them into the architecture, run the system, see how we do, do the error analysis, start off new research projects to improve things. And the very important idea was that the individual component work did not have to deeply understand everything that was going on with every other component. And this is where we leveraged machine learning in a very important way. So while individual components could be statistically driven machine learning components - some of them were heuristic, some of them were machine learning components - the system as a whole combined all the scores using machine learning. This was critical because that way you can divide and conquer. So you can say, okay, you work on your candidate generator, or you work on this approach to answer scoring, you work on this approach to type scoring, you work on this approach to passage search or to passage selection, and so forth. But then we could just plug it in, and we had enough training data to say, now we can train and figure out how to weigh all the scores relative to each other based on predicting the outcome, which is right or wrong on Jeopardy. And we had enough training data to do that. So this enabled people to work independently and to let the machine learning do the integration. Beautiful, so yeah, the machine learning is doing the fusion, and then it's a human orchestrated ensemble of different approaches. That's great. Still impressive that you were able to get it done in a few years. It's not obvious to me that it's doable, if I just put myself in that mindset. But when you look back at the Jeopardy challenge, again, when you're looking up at the stars, what are you most proud of, looking back at those days? I'm most proud of my, my commitment and my team's commitment to be true to the science, to not be afraid to fail. That's beautiful because there's so much pressure, because it is a public event, it is a public show, that you were dedicated to the idea. That's right. Do you think it was a success? In the eyes of the world, it was a success. By your, I'm sure, exceptionally high standards, is there something you regret you would do differently? It was a success. It was a success for our goal. Our goal was to build the most advanced open domain question answering system. We went back to the old problems that we used to try to solve, and we did dramatically better on all of them, as well as beating Jeopardy. So we won at Jeopardy. So it was a success. I worried that the community or the world would not understand it as a success because it came down to only one game. And I knew, statistically speaking, this could be a huge technical success, and we could still lose that one game. And that's a whole other theme of the journey. But it was a success. It was not a success in natural language understanding, but that was not the goal.
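To make the score-combination step described a little earlier concrete (hundreds of scorers each rating every candidate answer, with a trained model weighing those scores into a single confidence used to rank answers), here is a minimal sketch assuming a logistic-regression-style combiner trained on right-or-wrong outcomes. The names, the tiny trainer, and the toy data are all invented; the actual DeepQA feature set and models were far richer.

```python
# Minimal sketch of fusing many scorer outputs into one confidence (invented names).
import math
from typing import List

def combine_scores(scores: List[float], weights: List[float], bias: float) -> float:
    """Weighted sum of scorer outputs squashed to a 0-1 confidence."""
    z = bias + sum(w * s for w, s in zip(weights, scores))
    return 1.0 / (1.0 + math.exp(-z))

def train_weights(examples: List[List[float]], labels: List[int], n_scores: int,
                  lr: float = 0.1, epochs: int = 200):
    """Tiny gradient-descent trainer over (scores, right/wrong) pairs, standing in
    for training against end-to-end outcomes on past Jeopardy questions."""
    weights, bias = [0.0] * n_scores, 0.0
    for _ in range(epochs):
        for scores, label in zip(examples, labels):
            p = combine_scores(scores, weights, bias)
            err = p - label                      # label: 1 = correct answer, 0 = wrong
            weights = [w - lr * err * s for w, s in zip(weights, scores)]
            bias -= lr * err
    return weights, bias

# Toy data: each row is the output of three hypothetical scorers for one candidate.
candidates = [[0.9, 0.8, 0.7], [0.2, 0.1, 0.4], [0.6, 0.9, 0.2], [0.1, 0.3, 0.2]]
labels = [1, 0, 1, 0]
w, b = train_weights(candidates, labels, n_scores=3)

# Rank candidates by learned confidence, as the final answer-ranking stage does.
ranked = sorted(range(len(candidates)),
                key=lambda i: combine_scores(candidates[i], w, b), reverse=True)
print([(i, round(combine_scores(candidates[i], w, b), 2)) for i in ranked])
```

The design point is the one made in the exchange above: each scorer can be developed independently, and the learned weights, rather than the component authors, decide how much each score contributes to the final ranking.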
Yeah, that was, but I would argue, I understand what you're saying in terms of the science, but I would argue that the inspiration of it, right? The, not a success in terms of solving natural language understanding. There was a success of being an inspiration to future challenges. Absolutely. That drive future efforts. What's the difference between how human being compete in Jeopardy and how Watson does it? That's important in terms of intelligence. Yeah, so that actually came up very early on in the project also. In fact, I had people who wanted to be on the project who were early on, who sort of approached me once I committed to do it, had wanted to think about how humans do it. And they were, from a cognition perspective, like human cognition and how that should play. And I would not take them on the project because another assumption or another stake I put in the ground was, I don't really care how humans do this. At least in the context of this project. I need to build in the context of this project. In NLU and in building an AI that understands how it needs to ultimately communicate with humans, I very much care. So it wasn't that I didn't care in general. In fact, as an AI scientist, I care a lot about that, but I'm also a practical engineer and I committed to getting this thing done and I wasn't gonna get distracted. I had to kind of say, like, if I'm gonna get this done, I'm gonna chart this path. And this path says, we're gonna engineer a machine that's gonna get this thing done. And we know what search and NLP can do. We have to build on that foundation. If I come in and take a different approach and start wondering about how the human mind might or might not do this, I'm not gonna get there from here in the timeframe. I think that's a great way to lead the team. But now that it's done and there's one, when you look back, analyze what's the difference actually. So I was a little bit surprised actually to discover over time, as this would come up from time to time and we'd reflect on it, and talking to Ken Jennings a little bit and hearing Ken Jennings talk about how he answered questions, that it might've been closer to the way humans answer questions than I might've imagined previously. Because humans are probably in the game of Jeopardy! at the level of Ken Jennings, are probably also cheating their way to winning, right? Not cheating, but shallow. Well, they're doing shallow analysis. They're doing the fastest possible. They're doing shallow analysis. So they are very quickly analyzing the question and coming up with some key vectors or cues, if you will. And they're taking those cues and they're very quickly going through like their library of stuff, not deeply reasoning about what's going on. And then sort of like a lots of different, like what we would call these scores, would kind of score that in a very shallow way and then say, oh, boom, you know, that's what it is. And so it's interesting as we reflected on that. So we may be doing something that's not too far off from the way humans do it, but we certainly didn't approach it by saying, how would a human do this? Now in elemental cognition, like the project I'm leading now, we ask those questions all the time because ultimately we're trying to do something that is to make the intelligence of the machine and the intelligence of the human very compatible. Well, compatible in the sense they can communicate with one another and they can reason with this shared understanding. 
So how they think about things and how they build answers, how they build explanations, becomes a very important question to consider. So what's the difference between this open domain but, you know, constructed question answering of Jeopardy, and something more that requires understanding for shared communication between humans and machines? Yeah, well, this goes back to the interpretation of what we were talking about before. In Jeopardy, the system's not trying to interpret the question, and it's not interpreting the content it's using with regard to any particular framework. I mean, it is parsing it and parsing the content and using grammatical cues and stuff like that. So if you think of grammar as a human framework, in some sense it has that, but when you get into the richer semantic frameworks - what do people, how do they think, what motivates them, what are the events that are occurring and why are they occurring and what causes what else to happen, and where are things in time and space? When you start thinking about how humans formulate and structure the knowledge that they acquire in their head - Watson wasn't doing any of that. What do you think are the essential challenges of like free flowing communication, free flowing dialogue versus question answering, even with the framework of interpretation? Yep. Do you see free flowing dialogue as fundamentally more difficult than question answering, even with shared interpretation? So dialogue is important in a number of different ways. I mean, it's a challenge. So first of all, when I think about the machine that, when I think about a machine that understands language and ultimately can reason in an objective way, that can take the information that it perceives through language or other means and connect it back to these frameworks, reason and explain itself - that system ultimately needs to be able to talk to humans, or it needs to be able to interact with humans. So in some sense it needs to dialogue. That doesn't mean that - sometimes people talk about dialogue and they think, you know, how do humans talk to each other in a casual conversation, and you can mimic casual conversations. We're not trying to mimic casual conversations. We're really trying to produce a machine whose goal is to help you think and help you reason about your answers and explain why. So instead of having a small talk conversation with your friend down the street, this is more like you would be communicating with the computer on Star Trek: what do you wanna think about? What do you wanna reason about? I'm gonna tell you the information I have. I'm gonna have to summarize it. I'm gonna ask you questions. You're gonna answer those questions. I'm gonna go back and forth with you. I'm gonna figure out what your mental model is. I'm gonna now relate that to the information I have and present it to you in a way that you can understand it, and then we could ask followup questions. So it's that type of dialogue that you wanna construct. It's more structured, it's more goal oriented, but it needs to be fluid. In other words, it has to be engaging and fluid. It has to be productive and not distracting. So there has to be a model of - in other words, the machine has to have a model of how humans think through things and discuss them. So basically a productive, rich conversation, unlike this podcast. I'd like to think it's more similar to this podcast. I wasn't joking.
I'll ask you about humor as well, actually. But what's the hardest part of that? Because it seems we're quite far away as a community from that still to be able to, so one is having a shared understanding. That's, I think, a lot of the stuff you said with frameworks is quite brilliant. But just creating a smooth discourse. It feels clunky right now. Which aspects of this whole problem that you just specified of having a productive conversation is the hardest? And that we're, or maybe any aspect of it you can comment on because it's so shrouded in mystery. So I think to do this you kind of have to be creative in the following sense. If I were to do this as purely a machine learning approach and someone said learn how to have a good, fluent, structured knowledge acquisition conversation, I'd go out and say, okay, I have to collect a bunch of data of people doing that. People reasoning well, having a good, structured conversation that both acquires knowledge efficiently as well as produces answers and explanations as part of the process. And you struggle. I don't know. To collect the data. To collect the data because I don't know how much data is like that. Okay, there's one, there's a humorous commentary on the lack of rational discourse. But also even if it's out there, say it was out there, how do you actually annotate, like how do you collect an accessible example? Right, so I think any problem like this where you don't have enough data to represent the phenomenon you want to learn, in other words you want, if you have enough data you could potentially learn the pattern. In an example like this it's hard to do. This is sort of a human sort of thing to do. What recently came out at IBM was the debater projects and it's interesting, right, because now you do have these structured dialogues, these debate things where they did use machine learning techniques to generate these debates. Dialogues are a little bit tougher in my opinion than generating a structured argument where you have lots of other structured arguments like this, you could potentially annotate that data and you could say this is a good response, this is a bad response in a particular domain. Here I have to be responsive and I have to be opportunistic with regard to what is the human saying. So I'm goal oriented in saying I want to solve the problem, I want to acquire the knowledge necessary, but I also have to be opportunistic and responsive to what the human is saying. So I think that it's not clear that we could just train on the body of data to do this, but we could bootstrap it. In other words, we can be creative and we could say, what do we think the structure of a good dialogue is that does this well? And we can start to create that. If we can create that more programmatically, at least to get this process started and I can create a tool that now engages humans effectively, I could start generating data, I could start the human learning process and I can update my machine, but I could also start the automatic learning process as well, but I have to understand what features to even learn over. So I have to bootstrap the process a little bit first. And that's a creative design task that I could then use as input into a more automatic learning task. So some creativity in bootstrapping. What elements of a conversation do you think you would like to see? So one of the benchmarks for me is humor, right? That seems to be one of the hardest. And to me, the biggest contrast is sort of Watson. 
So one of the greatest sketches, comedy sketches of all time, right, is the SNL celebrity Jeopardy with Alex Trebek and Sean Connery and Burt Reynolds and so on, with Sean Connery commentating on Alex Trebek the whole time. And I think all of them are in the negative, points-wise. So they're clearly all losing in terms of the game of Jeopardy, but they're winning in terms of comedy. So what do you think about humor in this whole interaction, in the dialogue that's productive? Or even just - what humor represents to me is the same idea that you're saying about frameworks, because humor only exists within a particular human framework. So what do you think about humor? What do you think about things like humor that connect to the kind of creativity you mentioned that's needed? I think there's a couple of things going on there. So I sort of feel like, and I might be too optimistic this way, but I think that there are - we did a little bit of this with puns in Jeopardy. We literally sat down and said, how do puns work? And it's like wordplay, and you could formalize these things. So I think there are a lot of aspects of humor that you could formalize. You could also learn humor. You could just say, what do people laugh at? And if you have enough, again, if you have enough data to represent the phenomenon, you might be able to weigh the features and figure out what humans find funny and what they don't find funny. The machine might not be able to explain why the human finds it funny, unless we sit back and think about that more formally. I think, again, I think you do a combination of both. And I'm always a big proponent of that. I think robust architectures and approaches are always a little bit of a combination of us reflecting and being creative about how things are structured, how to formalize them, and then taking advantage of large data and doing learning, and figuring out how to combine these two approaches. I think there's another aspect to humor though, which goes to the idea that I feel like I can relate to the person telling the story. And I think that's an interesting theme in the whole AI theme, which is, do I feel differently when I know it's a robot? When I imagine that the robot is not conscious the way I'm conscious, when I imagine the robot does not actually have the experiences that I experience, do I find it funny? Or, because it's not as related, do I not - I don't imagine that the person's relating to it the way I relate to it. I think you also see this in the arts and in entertainment, where sometimes you have savants who are remarkable at a thing, whether it's sculpture or it's music or whatever, but the people who get the most attention are the people who can evoke a similar emotional response, who can get you to emote, right, about the way they are. In other words, who can basically make the connection from the artifact - from the music or the painting or the sculpture - to the emotion, and get you to share that emotion with them. And that's when it becomes compelling. So they're communicating at a whole different level. They're not just communicating the artifact; they're communicating their emotional response to the artifact. And then you feel like, oh wow, I can relate to that person, I can connect to that person. So the idea that you can connect to that person - person being the critical thing - but we're also able to anthropomorphize objects, robots and AI systems, pretty well.
So we're almost looking to make them human. So maybe from your experience with Watson, maybe you can comment on - did you consider that as part of it? Well, obviously the problem of Jeopardy doesn't require anthropomorphization, but nevertheless. Well, there was some interest in doing that. And that's another thing I didn't want to do, because I didn't want to distract from the actual scientific task. But you're absolutely right. I mean, humans do anthropomorphize, and without necessarily a lot of work. I mean, you just put some eyes and a couple of eyebrow movements and you're getting humans to react emotionally. And I think you can do that. So I didn't mean to suggest that that connection cannot be mimicked. I think that connection can be mimicked and can produce that emotional response. I just wonder, though, if you're told what's really going on, if you know that the machine is not conscious, not having the same richness of emotional reactions and understanding, that it doesn't really share the understanding, but it's essentially just moving its eyebrow or drooping its eyes or making them bigger, whatever it's doing, just getting the emotional response - will you still feel it? Interesting. I think you probably would for a while. And then when it becomes more important that there's a deeper shared understanding, it may fall flat, but I don't know. I'm pretty confident that the majority of the world, even if you tell them how it works, well, it will not matter, especially if the machine herself says that she is conscious. That's very possible. So you, the scientist who made the machine, are saying that this is how the algorithm works. Everybody will just assume you're lying and that there's a conscious being there. So you're deep into the science fiction genre now, but yeah. I don't think it is - it's actually psychology. I think it's not science fiction. I think it's reality. I think it's a really powerful one that we'll have to be exploring in the next few decades. I agree. It's a very interesting element of intelligence. So what do you think - we've talked about social constructs of intelligence and frameworks and the way humans kind of interpret information. What do you think is a good test of intelligence, in your view? So there's Alan Turing with the Turing test. Watson accomplished something very impressive with Jeopardy. What do you think is a test that would impress the heck out of you, that if you saw a computer could do it, you would say, this is crossing a kind of threshold that gives me pause, in a good way? My expectations for AI are generally high. What does high look like, by the way? So not the threshold - a test is a threshold - what do you think is the destination? What do you think is the ceiling? I think machines will, in many measures, be better than us, will become more effective - in other words, better predictors about a lot of things than ultimately we can be. I think where they're gonna struggle is what we talked about before, which is relating to, communicating with, and understanding humans in deeper ways. And so I think that's a key point - like, we can create the super parrot. What I mean by the super parrot is, given enough data, a machine can mimic your emotional response, can even generate language that will sound smart, like what someone else might say under similar circumstances. Like, I would just pause on that - that's the super parrot, right?
So given similar circumstances, it moves its face in similar ways, changes its tone of voice in similar ways, produces strings of language similar to what a human might say - not necessarily being able to produce a logical interpretation or understanding that would ultimately satisfy a critical interrogation or a critical understanding. I think you just described me in a nutshell. So I think, philosophically speaking, you could argue that that's all we're doing as human beings too - we're super parrots. So I was gonna say, it's very possible, you know, humans do behave that way too. And so upon deeper probing and deeper interrogation, you may find out that there isn't a shared understanding - because I think humans do both. Like, humans are statistical language model machines and they are capable reasoners. You know, they're both. And you don't know which is going on, right? So, and I think it's an interesting problem. We talked earlier about like where we are in our social and political landscape. Can you distinguish someone who can string words together and sound like they know what they're talking about from someone who actually does? Can you do that without dialogue, without interrogative or probing dialogue? So it's interesting, because humans are really good at, in their own mind, justifying or explaining what they hear, because they project their understanding onto yours. So you could say, you could put together a string of words and someone will sit there and interpret it in a way that's extremely biased to the way they wanna interpret it. They wanna assume that you're an idiot, and they'll interpret it one way. They will assume you're a genius, and they'll interpret it another way that suits their needs. So this is tricky business. So I think, to answer your question, as AI gets to be a better and better mimic, as you create these super parrots, we're challenged just as we are challenged with humans. Do you really know what you're talking about? Do you have a meaningful interpretation, a powerful framework that you could reason over, and can you justify your answers, justify your predictions and your beliefs, why you think they make sense? Can you convince me what the implications are? Can you reason intelligently and make me believe the implications of your prediction, and so forth? So what happens is it becomes reflective. My standard for judging your intelligence depends a lot on mine. But you're saying there should be a large group of people with a certain standard of intelligence that would be convinced by this particular AI system. Then they'll pass. There should be, but I think, depending on the content, one of the problems we have there is that if that large community of people are not judging it with regard to a rigorous standard of objective logic and reason, you still have a problem. Like, masses of people can be persuaded. The millennials, yeah. To turn their brains off. Right, okay. Sorry. By the way, I have nothing against the millennials. No, I don't, I'm just, just... So you're a part of one of the great benchmarks, challenges of AI history. What do you think about the AlphaZero, OpenAI Five, AlphaStar accomplishments on video games recently - which, at least in the case of Go, with AlphaGo and AlphaZero playing Go, I think was a monumental accomplishment as well. What are your thoughts about that challenge? I think it was a giant landmark for AI. I think it was phenomenal.
I mean, it was another one of those things - nobody thought solving Go was gonna be easy, particularly because it's hard for, particularly hard for humans. Hard for humans to learn, hard for humans to excel at. And so it was another measure, a measure of intelligence. It's very cool. I mean, it's very interesting what they did. And I loved how they solved the data problem, which again, they bootstrapped it and got the machine to play itself, to generate enough data to learn from. I think that was brilliant. I think that was great. And of course, the result speaks for itself. I think it makes us think about, again, okay, what's intelligence? What aspects of intelligence are important? Can the Go machine help make me a better Go player? Is it an alien intelligence? Am I even capable of - like, again, if we put it in very simple terms, it found the function, it found the Go function. Can I even comprehend the Go function? Can I talk about the Go function? Can I conceptualize the Go function, whatever it might be? So one of the interesting ideas of that system is that it plays against itself, right? But there's no human in the loop there. So like you're saying, it could have by itself created an alien intelligence. Right. So imagine you're a judge and you're sentencing people, or you're setting policy, or you're making medical decisions, and you can't explain - you can't get anybody to understand what you're doing or why. So it's an interesting dilemma for the applications of AI. Do we hold AI to this accountability that says humans have to be willing to take responsibility for the decision? In other words, can you explain why you would do the thing? Will you get up and speak to other humans and convince them that this was a smart decision? Is the AI enabling you to do that? Can you get behind the logic that was made there? Do you think - sorry to land on this point, because it's a fascinating one, it's a great goal for AI - do you think it's achievable in many cases? Or, okay, there's two possible worlds that we have in the future. One is where AI systems do, like, medical diagnosis or things like that, or drive a car, without ever explaining to you why it fails when it does. That's one possible world, and we're okay with it. Or the other, where we are not okay with it, and we really hold back the technology from getting too good before it's able to explain. Which of those worlds is more likely, do you think, and which is concerning to you or not? I think the reality is it's gonna be a mix. I'm not sure I have a problem with that. I mean, I think there are tasks where it's perfectly fine for machines to show a certain level of performance, and that level of performance is already better than humans. So for example, I don't know - take driverless cars. If driverless cars learn how to be more effective drivers than humans but can't explain what they're doing, but bottom line, statistically speaking, they're 10 times safer than humans, I don't know that I care. I think it's when we have these edge cases, when something bad happens and we wanna decide who's liable for that thing and who made that mistake and what we do about that - and I think those edge cases are interesting cases. And now, do we go to the designers of the AI, and the AI designer says, I don't know, that's what it learned to do - and we say, well, you didn't train it properly, you were negligent in the training data that you gave that machine? Like, how do we drive down the liability?
So I think those are interesting questions. So the optimization problem there, sorry, is to create an AI system that's able to explain the lawyers away. There you go. I think it's gonna be interesting. I mean, I think this is where technology and social discourse are gonna get deeply intertwined with how we start thinking about problems, decisions, and problems like that. I think in other cases it becomes more obvious, where it's like, why did you decide to give that person a longer sentence or deny them parole? Again, policy decisions, or why did you pick that treatment? Like, that treatment ended up killing that guy - why was that a reasonable choice to make? And people are gonna demand explanations. Now there's a reality here, though. And the reality is that I'm not sure humans are making reasonable choices when they do these things. They are using statistical hunches, biases, or even systematically using statistical averages to make calls. This is what happened to my dad - you'd know if you saw the talk I gave about that. They decided that my father was brain dead. He had gone into cardiac arrest, and it took a long time for the ambulance to get there, and he was not resuscitated right away and so forth. And they came and they told me he was brain dead. And why was he brain dead? Because essentially they gave me a purely statistical argument: under these conditions, with these four features, there's a 98% chance he's brain dead. I said, but can you just tell me, not inductively but deductively - go in there and tell me his brain's not functioning - is there a way for you to do that? And the protocol, in response, was: no, this is how we make this decision. I said, this is inadequate for me. I understand the statistics, but, I don't know, there's a 2% chance he's still alive. I just don't know the specifics. I need the specifics of this case, and I want the deductive, logical argument about why you actually know he's brain dead. So I wouldn't sign the do not resuscitate. And I don't know, it was like they went through lots of procedures - it was a big, long story, a fascinating story by the way, about how I reasoned and how the doctors reasoned through this whole process. But I don't know, somewhere around 24 hours later or something, he was sitting up in bed with zero brain damage. I mean, what lessons do you draw from that story, that experience? That the data that's being used to make statistical inferences doesn't adequately reflect the phenomenon. So in other words, you're getting shit wrong - I'm sorry, but you're getting stuff wrong - because your model is not robust enough, and you might be better off not using statistical inference and statistical averages in certain cases, when you know the model's insufficient and you should be reasoning about the specific case more logically and more deductively, and holding yourself responsible, holding yourself accountable, to doing that. And perhaps AI has a role to say the exact thing you just said, which is, perhaps this is a case where you should think for yourself, you should reason deductively. Well, so it's hard, because it's hard to know that. You'd have to go back and you'd have to have enough data to essentially say - and this goes back to how do we decide whether the AI is good enough to do a particular task, regardless of whether or not it produces an explanation. And what standard do we hold for that?
So if you look more broadly, for example, at my father as a medical case, the medical system ultimately helped him a lot throughout his life. Without it, he probably would have died much sooner. So overall, it sort of worked for him in a net-net kind of way. Actually, I don't know that that's fair. Maybe not in that particular case, but overall, the medical system overall does more good than bad. Yeah, the medical system overall was doing more good than bad. Now, there's another argument that suggests that wasn't the case, but for the sake of argument, let's say that's a net positive. And I think you have to sit there and take that into consideration. Now you look at a particular use case, like, for example, making this decision. Have you done enough studies to know how good that prediction really is? And have you done enough studies to compare it, to say, well, what if we dug in in a more direct way, let's get the evidence, let's do the deductive thing and not use statistics here, how often would that have done better? So you have to do the studies to know how good the AI actually is. And it's complicated, because it depends how fast you have to make the decision. So if you have to make the decision super fast, you have no choice. If you have more time, right? But if you're ready to pull the plug, and this is a lot of the argument that I had with the doctor, I said, what's gonna happen to him in that room if you do it my way? You know, well, he's gonna die anyway. So let's do it my way then. I mean, it raises questions for our society to struggle with, as in the case with your father, but also when things like race and gender start coming into play, when judgments are made based on things that are complicated in our society, at least in the discourse. And it starts, you know, I think I'm safe to say that most violent crimes are committed by males, so if you discriminate based on that, male versus female, you'd say that if it's a male, they're more likely to commit the crime. This is one of my very positive and optimistic views of why the study of artificial intelligence, the process of thinking and reasoning logically and statistically, and how to combine them, is so important for the discourse today, because regardless of what state AI systems are actually in, it's causing this dialogue to happen. This is one of the most important dialogues that, in my view, the human species can have right now, which is how to think well, how to reason well, how to understand our own cognitive biases and what to do about them. That has got to be one of the most important things we as a species can be doing, honestly. We've created an incredibly complex society. We've created amazing abilities to amplify noise faster than we can amplify signal. We are challenged. We are deeply, deeply challenged. We have, you know, big segments of the population getting hit with enormous amounts of information. Do they know how to do critical thinking? Do they know how to objectively reason? Do they understand what they are doing, never mind what their AI is doing? This is such an important dialogue to be having. And, you know, our thinking can be, and easily becomes, fundamentally biased. And there are statistics, and we shouldn't blind ourselves to them, we shouldn't discard statistical inference, but we should understand the nature of statistical inference.
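To make that statistical-versus-specific point concrete, here is a minimal Bayesian sketch. The numbers are purely illustrative and are not from the conversation: treat the doctors' 98% figure as a prior based on the four population-level features, and suppose, hypothetically, that a direct case-specific measurement of brain function were available, one that shows activity in 80% of patients who are in fact alive and in only 1% of patients who are in fact brain dead. If that measurement showed activity, Bayes' rule would give

\[
P(\text{brain dead}\mid \text{activity}) = \frac{P(\text{activity}\mid \text{dead})\,P(\text{dead})}{P(\text{activity}\mid \text{dead})\,P(\text{dead}) + P(\text{activity}\mid \text{alive})\,P(\text{alive})} = \frac{0.01 \times 0.98}{0.01 \times 0.98 + 0.80 \times 0.02} \approx 0.38
\]

Under these assumed error rates, a single piece of evidence about the specific case pulls the estimate from 98% down to roughly 38%, which is the sense in which population-level statistics and case-level, deductive reasoning can point in very different directions.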
As a society, you know, we decide to reject statistical inference in favor of understanding and deciding on the individual. Yes. We consciously make that choice. So even if the statistics said males are more likely to, you know, be violent criminals, we still take each person as an individual and we treat them based on the logic and the knowledge of that situation. We purposefully and intentionally reject the statistical inference. We do that out of respect for the individual. For the individual. Yeah, and that requires reasoning and thinking. Correct. Looking forward, what grand challenges would you like to see in the future? Because the Jeopardy challenge, you know, captivated the world. AlphaGo, AlphaZero captivated the world. Deep Blue certainly, beating Kasparov, Garry's bitterness aside, captivated the world. Do you have ideas for the next grand challenges, for future challenges like that? You know, look, I mean, I think there are lots of really great ideas for grand challenges. I'm particularly focused on one right now, which is, you know, can you demonstrate that machines understand, that they can read and understand, that they can acquire these frameworks, and, you know, reason and communicate with humans. So it is kind of like the Turing test, but it's a little bit more demanding than the Turing test. It's not enough to convince me that you might be human because, you know, you can parrot a conversation. I think the standard is a little bit higher. And I think one of the challenges of devising this grand challenge is that we're not sure what intelligence is, we're not sure how to determine whether or not two people actually understand each other, and to what depth they understand each other. So the challenge becomes something along the lines of, can you satisfy me that we have a shared understanding? So if I were to probe and probe and you probe me, can machines really act like thought partners, where they can satisfy me that our understanding is shared enough that we can collaborate and produce answers together, and that, you know, they can help me explain and justify those answers? So maybe here's an idea. We'll have an AI system run for president and convince... That's too easy. I'm sorry, go ahead. Well, no, you have to convince the voters that they should vote for it. So like, I guess, what does winning look like? Again, that's why I think this is such a challenge, because we go back to the emotional persuasion. We go back to, you know, now we're checking off an aspect of human cognition that is in many ways weak or flawed, right? We're so easily manipulated. Our minds are drawn in, often for the wrong reasons, right? Not the reasons that ultimately matter to us, but the reasons that can easily persuade us. I think we can be persuaded to believe one thing or another for reasons that ultimately don't serve us well in the long term. And a good benchmark should not play with those elements of emotional manipulation. I don't think so. And I think that's where we have to set a higher standard for ourselves of what, you know, what does it mean? This goes back to rationality and it goes back to objective thinking. Can you acquire information and produce reasoned arguments, and do those reasoned arguments pass a certain amount of muster? And can you acquire new knowledge?
You know, for example, can you reason, having acquired new knowledge, can you identify where it's consistent or contradictory with other things you've learned? And can you explain that to me and get me to understand that? So I think another way to think about it, perhaps, is can a machine teach you, can it help you understand something that you didn't really understand before? So again, it's almost like, can it teach you, can it help you learn, in an arbitrary space, can it open up those domain spaces? So can you tell the machine, and again, this borrows from some science fiction, go off and learn about this topic that I'd like to understand better, and then work with me to help me understand it? That's quite brilliant. The machine that passes that kind of test, do you think it would need to have self-awareness or even consciousness? What do you think about consciousness and the importance of it, maybe in relation to having a body, having a presence, an entity? Do you think that's important? You know, people used to ask me if Watson was conscious, and I used to say, he's conscious of what exactly? I mean, I think, you know, maybe it depends what it is that you're conscious of. I mean, it would be trivial to program it to answer questions about whether or not it was playing Jeopardy. I mean, it could certainly answer questions that would imply that it was aware of things. Exactly, what does it mean to be aware, and what does it mean to be conscious of something? It's sort of interesting. I mean, I think that we differ from one another based on what we're conscious of. But wait, wait a minute, yes, for sure. There are degrees of consciousness in there, so... Well, and there are just areas. Like, it's not just degrees, what are you aware of? Like, what are you not aware of? But nevertheless, there's a very subjective element to our experience. Let me even not talk about consciousness. Let me talk about another, to me, really interesting topic, mortality, fear of mortality. Watson, as far as I could tell, did not have a fear of death. Certainly not. Most humans do. It wasn't conscious of death. It wasn't, yeah. So there's an element of finiteness to our existence that I think, like you mentioned, survival, that adds to the whole thing. I mean, consciousness is tied up with that, that we are a thing, it's a subjective thing that ends. And that seems to add a color and flavor to our motivations in a way that seems to be fundamentally important for intelligence, or at least the kind of human intelligence. Well, I think it matters for generating goals. Again, I think you could have an intelligence capability and a capability to learn, a capability to predict. But, I mean, you get fear essentially from the goal to survive. So you think you can just encode that without having to really...? I think you could encode it. I mean, you could create a robot now, and you could say, you know, plug it in, and say, protect your power source, you know, and give it some capabilities, and it'll sit there and operate to try to protect its power source and survive. I mean, so I don't know that that's philosophically a hard thing to demonstrate. It sounds like a fairly easy thing to demonstrate that you can give it that goal. Will it come up with that goal by itself? I think you have to program that goal in.
But there's something, because I think, as we touched on, intelligence is kind of like a social construct. The fact that a robot will be protecting its power source would add depth and grounding to its intelligence, in terms of us being able to respect it. I mean, ultimately, it boils down to us acknowledging that it's intelligent. And the fact that it can die, I think, is an important part of that. The interesting thing to reflect on is how trivial that would be. And I don't think, if you knew how trivial that was, you would associate that with intelligence. I mean, I could literally put in a statement of code that says, you have the following actions you can take. You give it a bunch of actions, like maybe you mount a laser gun on it, or you give it the ability to scream or screech or whatever. And you say, if you see your power source threatened, then you're gonna take these actions to protect it. You could program that in, you could train it on a bunch of things. So now you're gonna look at that and you say, well, you know, that's intelligent, it's protecting its power source? Maybe, but that's, again, this human bias that says, I identify my intelligence and my consciousness so fundamentally with the desire, or at least the behaviors associated with the desire, to survive, that if I see another thing doing that, I'm going to assume it's intelligent. What timeline, what year, will society have something that you would be comfortable calling an artificial general intelligence system? Well, what's your intuition? Nobody can predict the future, certainly not the next few months or 20 years away, but what's your intuition? How far away are we? I don't know. It's hard to make these predictions. I mean, I would be guessing, and there's so many different variables, including just how much we want to invest in it and how important we think it is, what kind of investment we're willing to make in it, what kind of talent we end up bringing to the table, the incentive structure, all these things. So I think it is possible to do this sort of thing. I think, trying to sort of ignore many of the variables and things like that, is it a 10 year thing, is it a 23 year thing? Probably closer to a 20 year thing, I guess. But not several hundred years. No, I don't think it's several hundred years. But again, so much depends on how committed we are to investing in and incentivizing this type of work. And it's sort of interesting. Like, I don't think it's obvious how incentivized we are. I think from a task perspective, if we see business opportunities to take this technique or that technique to solve that problem, I think that's the main driver for many of these things. For a general intelligence, it's kind of an interesting question. Are we really motivated to do that? And like, we just struggled ourselves right now to even define what it is. So it's hard to incentivize when we don't even know what it is we're incentivized to create. And if you said, mimic a human intelligence, I just think there are so many challenges with the significance and meaning of that, that there's not a clear directive. There's no clear directive to do precisely that thing. So assistance in a larger and larger number of tasks. So a system that's able to operate my microwave and make a grilled cheese sandwich. I don't even know how to make one of those. And then the same system will be doing the vacuum cleaning.
And then the same system would be teaching my kids, that I don't have, math. I think that when you get into a general intelligence for learning physical tasks, and again, I wanna come back to your body question, because I think your body question was interesting, but going back to learning the ability to do physical tasks, I imagine in that timeframe we will get better and better at learning these kinds of tasks, whether it's mowing your lawn or driving a car or whatever it is. I think we will get better and better at that, where it's learning how to make predictions over large bodies of data. I think we're gonna continue to get better and better at that. And machines will outpace humans in a variety of those things. The underlying mechanisms for doing that may be the same, meaning that maybe these are deep nets, there's infrastructure to train them, reusable components to get them to do different classes of tasks, and we get better and better at building these kinds of machines. You could argue that the general learning infrastructure in there is a form of a general type of intelligence. I think what starts getting harder is this notion of, can we effectively communicate and understand and build that shared understanding? Because of the layers of interpretation that are required to do that, and the need for the machine to be engaged with humans at that level on a continuous basis. So how do you get the machine in the game? How do you get the machine in the intellectual game? Yeah, and to solve AGI, you probably have to solve that problem. You have to get the machine, so it's a little bit of a bootstrapping thing. Can we get the machine engaged in the intellectual game, in the intellectual dialogue with humans? Are the humans sufficiently in intellectual dialogue with each other to generate enough data in this context? And how do you bootstrap that? Because every one of those conversations, those intelligent interactions, requires so much prior knowledge that it's a challenge to bootstrap it. So the question is, how committed are we? I think that's possible, but when I go back to it, are we incentivized to do that? I know we're incentivized to do the former. Are we incentivized to do the latter significantly enough? Do people understand what the latter really is well enough? Part of the Elemental Cognition mission is to try to articulate that better and better, through demonstrations and through trying to craft these grand challenges, and get people to say, look, this is a class of intelligence, this is a class of AI. Do we want this? What is the potential of this? What's the business potential? What's the societal potential of that? And to build up that incentive system around that. Yeah, I think if people don't understand yet, I think they will. I think there's a huge business potential here. So it's exciting that you're working on it. We kind of skipped over it, but I'm a huge fan of the physical presence of things. Did Watson have a body? Do you think having a body adds to the interactive element between the AI system and a human, or just in general to intelligence? So I think, going back to that shared understanding bit, humans are very connected to their bodies. I mean, one of the challenges in getting an AI to kind of be a compatible human intelligence is that our physical bodies are generating a lot of features that make up the input.
So in other words, our bodies are the tool we use to affect output, but they also generate a lot of input for our brains. So we generate emotion, we generate all these feelings, we generate all these signals that machines don't have. So machines don't have this as the input data, and they don't have the feedback that says, I've gotten this emotion or I've gotten this idea, I now want to process it, and then it affects me as a physical being, and I can play that out. In other words, I can realize the implications of that, implications, again, on my mind-body complex. I then process that, internal features are generated, I learn from them, and they have an effect on my mind-body complex. So it's interesting when we think, do we want a human intelligence? Well, if we want a human compatible intelligence, probably the best thing to do is to embed it in a human body. Just to clarify, and both concepts are beautiful: do you mean humanoid robots, so robots that look like humans, or did you mean actually sort of what Elon Musk is working on with Neuralink, really embedding intelligence systems to ride along with human bodies? No, I mean riding along is different. I meant, like, if you want to create an intelligence that is human compatible, meaning that it can learn and develop a shared understanding of the world around it, you have to give it a lot of the same substrate. Part of that substrate is the idea that it generates these kinds of internal features, like sort of emotional stuff, it has similar senses, it has to do a lot of the same things with those same senses, right? So I think if you want that, again, I don't know that you want that. That's not my specific goal. I think that's a fascinating scientific goal, I think it has all kinds of other implications, but that's sort of not the goal. I want to create, I think of it as creating, intellectual thought partners for humans, so that kind of intelligence. I know there are other companies that are creating physical thought partners, physical partners for humans, but that's kind of not where I'm at. But the important point is that a big part of what we process is that physical experience of the world around us. On the point of thought partners, what role does an emotional connection, or, forgive me, love, have to play in that thought partnership? Is that something you're interested in? Put another way, sort of having a deep connection, beyond intellectual? With the AI? Yeah, with the AI, between human and AI. Is that something that gets in the way of the rational discourse? Is that something that's useful? I worry about biases, obviously. So in other words, if you develop an emotional relationship with a machine, all of a sudden you're more likely to believe what it's saying, even if it doesn't make any sense. So I worry about that. But at the same time, I think the opportunity to use machines to provide human companionship is actually not crazy. And intellectual and social companionship is not a crazy idea. Do you have concerns, as a few people do, Elon Musk, Sam Harris, about long term existential threats of AI, and perhaps short term threats of AI? We talked about bias, we talked about different misuses, but do you have concerns about thought partners, systems that are able to help us make decisions together as humans, somehow having a significant negative impact on society in the long term? I think there are things to worry about. I think giving machines too much leverage is a problem.
And what I mean by leverage is too much control over things that can hurt us, whether it's socially, psychologically, intellectually, or physically. And if you give the machines too much control, I think that's a concern. Forget about the AI, just once you give them too much control, human bad actors can hack them and produce havoc. So that's a problem. And you could imagine hackers taking over the driverless car network and creating all kinds of havoc. But you could also imagine, given the ease with which humans can be persuaded one way or the other, and now we have algorithms that can easily take control over that and amplify noise and move people one direction or another. I mean, humans do that to other humans all the time. And we have marketing campaigns, we have political campaigns that take advantage of our emotions or our fears. And this is done all the time. But with machines, machines are like giant megaphones. We can amplify this by orders of magnitude and fine-tune its control, so we can tailor the message. We can now very rapidly and efficiently tailor the message to the audience, taking advantage of their biases and amplifying them and using them to persuade them in one direction or another, in ways that are not fair, not logical, not objective, not meaningful. And machines empower that. So that's what I mean by leverage. Like, it's not new, but wow, it's powerful, because machines can do it more effectively, more quickly, and we see that already going on in social media and other places. That's scary. And that's why I go back to saying one of the most important public dialogues we could be having is about the nature of intelligence and the nature of inference and logic and reason and rationality, and us understanding our own biases, us understanding our own cognitive biases and how they work, and then how machines work, and how do we use them to complement us, basically, so that in the end we have a stronger overall system. That's just incredibly important. I don't think most people understand that. So, like, telling your kids or telling your students, this goes back to cognition. Here's how your brain works. Here's how easy it is to trick your brain, right? There are fundamental cognitive biases; you should appreciate the different types of thinking and how they work, and what you're prone to, and what you prefer. And under what conditions does this make sense versus does that make sense? And then say, here's what AI can do. Here's how it can make this worse and here's how it can make this better. And that's where the AI has a role, to reveal that trade-off. So if you imagine a system that is able to go beyond any definition of the Turing test, beyond the benchmark, really an AGI system, a thought partner that you one day will create, what question, what topic of discussion, if you get to pick one, would you have with that system? What would you ask, where you get to find out the truth together? So you threw me a little bit with finding the truth at the end, because the truth is a whole other topic. But I think the beauty of it, I think what excites me, the beauty of it, is if I really have that system, I don't have to pick. So in other words, I can go to it and say, this is what I care about today. And that's what we mean by this general capability: go out, read this stuff in the next three milliseconds. And I wanna talk to you about it.
I wanna draw analogies, I wanna understand how this affects this decision or that decision. What if this were true? What if that were true? What knowledge should I be aware of that could impact my decision? Here's what I'm thinking is the main implication. Can you prove that out? Can you give me the evidence that supports that? Can you give me evidence that supports this other thing? Boy, would that be incredible. Would that be just incredible. Just a long discourse. Just to be part of, whether it's a medical diagnosis or whether it's the various treatment options or whether it's a legal case or whether it's a social problem that people are discussing, like, be part of the dialogue, one that holds itself and us accountable to reasoned and objective dialogue. I get goosebumps talking about it, right? It's like, this is what I want. So when you create it, please come back on the podcast and we can have a discussion together and make it even longer. This is a record for the longest conversation in the world. It was an honor, it was a pleasure, David. Thank you so much for talking to me. Thanks so much, a lot of fun.
David Ferrucci: IBM Watson, Jeopardy & Deep Conversations with AI | Lex Fridman Podcast #44
The following is a conversation with Michio Kaku. He's a theoretical physicist, futurist, and professor at the City College of New York. He's the author of many fascinating books that explore the nature of our reality and the future of our civilization. They include Einstein's Cosmos, Physics of the Impossible, The Future of the Mind, Parallel Worlds, and his latest, The Future of Humanity: Terraforming Mars, Interstellar Travel, Immortality, and Our Destiny Beyond Earth. I think it's beautiful and important when a scientific mind can fearlessly explore through conversation subjects just outside of our understanding. That, to me, is where artificial intelligence is today, just outside of our understanding, a place we have to reach for if we're to uncover the mysteries of the human mind and build human level and superhuman level AI systems that transform our world for the better. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. And now, here's my conversation with Michio Kaku. You've mentioned that we just might make contact with aliens or at least hear from them within this century. Can you elaborate on your intuition behind that optimism? Well, this is pure speculation, of course. Of course. Given the fact that we've already identified 4,000 exoplanets orbiting other stars, and we have a census of the Milky Way galaxy for the first time, we know that on average, every single star, on average, has a planet going around it, and about one fifth or so of them have Earth sized planets going around them. So just do the math. Out of 100 billion stars in the Milky Way galaxy, we're talking about billions of potential Earth sized planets. And to believe that we're the only one is, I think, rather ridiculous, given the odds. And how many galaxies are there? Within sight of the Hubble Space Telescope, there are about 100 billion galaxies. So do the math. How many stars are there in the visible universe? 100 billion galaxies, times 100 billion stars per galaxy. We're talking about a number beyond human imagination. And to believe that we're the only ones, I think, is rather ridiculous. So you've talked about the different types, type zero, one, two, three, four, and even five, on the Kardashev scale of civilizations. What do you think it takes, if it is indeed a ridiculous notion that we're alone in the universe, what do you think it takes to reach out? First, to reach out through communication and connect. Well, first of all, we have to understand the level of sophistication of an alien life form if we make contact with them. I think in this century, we'll probably pick up signals, signals from an extraterrestrial civilization. We'll pick up their I Love Lucy and their Leave It to Beaver, just ordinary day to day transmissions that they emit. And the first thing we wanna do is to, A, decipher their language, of course, but B, figure out at what level they are advanced on the Kardashev scale. I'm a physicist. We rank things by two parameters, energy and information. That's how we rank black holes. That's how we rank stars. That's how we rank civilizations in outer space. So a type one civilization is capable of harnessing planetary power. They control the weather, for example, earthquakes, volcanoes. They can modify the course of geological events, sort of like Flash Gordon or Buck Rogers.
Type two would be stellar. They play with stars, entire stars. They use the entire energy output of a star, sort of like Star Trek. The Federation of Planets have colonized the nearby stars. So a type two would be somewhat similar to Star Trek. Type three would be galactic. They roam the galactic space lanes. And type three would be like Star Wars, a galactic civilization. Now, one day I was giving this talk in London at the planetarium there, and the little boy comes up to me and he says, professor, you're wrong. You're wrong, there's type four. And I told him, look, kid, there are planets, stars, and galaxies. That's it, folks. And he kept persisting and saying, no, there's type four, the power of the continuum. And I thought about it for a moment. And I said to myself, is there an extra galactic source of energy, the continuum of Star Trek? And the answer is yes, there could be a type four. And that's dark energy. We now know that 73% of the energy of the universe is dark energy. Dark matter represents maybe 23% or so, and we only represent 4%. We're the oddballs. And so you begin to realize that, yeah, there could be type four, maybe even type five. So type four, you're saying being able to harness sort of like dark energy, something that permeates the entire universe. So be able to plug into the entire universe as a source of energy. That's right. And dark energy is the energy of the Big Bang. It's why the galaxies are being pushed apart. It's the energy of nothing. The more nothing you have, the more dark energy that's repulsive. And so the acceleration of the universe is accelerating because the more you have, the more you can have. And that, of course, is by definition an exponential curve. It's called a de Sitter expansion, and that's the current state of the universe. And then type five, would that be able to seek energy sources somehow outside of our universe? And how crazy is that idea? Yeah, type five will be the multiverse. Multiverse, okay. I'm a quantum physicist, and we quantum physicists don't believe that the Big Bang happened once. That would violate the Heisenberg uncertainty principle. And that means that there could be multiple bangs happening all the time. Even as we speak today, universes are being created, and that fits the data. The inflationary universe is a quantum theory. So there's a certain finite probability that universes are being created all the time. And for me, this is actually rather aesthetically pleasing because I was raised as a Presbyterian, but my parents were Buddhists. And there's two diametrically opposed ideas about the universe. In Buddhism, there's only nirvana. There's no beginning, there's no end, there's only timelessness. But in Christianity, there is the instant when God said, let there be light. In other words, an instant of creation. So I've had these two mutually exclusive ideas in my head, and I now realize that it's possible to meld them into a single theory. Either the universe had a beginning or it didn't, right? Wrong. You see, our universe had a beginning. Our universe had an instant where somebody might have said, let there be light. But there are other bubble universes out there in a bubble bath of universes. And that means that these universes are expanding into a dimension beyond our three dimensional comprehension. In other words, hyperspace. In other words, 11 dimensional hyperspace. So nirvana would be this timeless 11 dimensional hyperspace where big bangs are happening all the time. 
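A quantitative version of this ranking, not stated in the conversation but consistent with the type boundaries Kaku describes, is Carl Sagan's interpolation formula, which assigns a continuous Kardashev number based on a civilization's power consumption P in watts:

\[
K = \frac{\log_{10} P - 6}{10}
\]

On this scale, Type I corresponds to roughly 10^16 watts (planetary power), Type II to roughly 10^26 watts (stellar), and Type III to roughly 10^36 watts (galactic); humanity, at around 10^13 watts, comes out near K of about 0.7, consistent with the "type zero" label in the question above.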
So we can now combine two mutually exclusive theories of creation. And Stephen Hawking, for example, in his last book, even said that this is an argument against the existence of God. He said there is no God, because there was not enough time for God to create the universe, because the big bang happened in an instant of time. Therefore, there was no time available for him to create the universe. But you see, the multiverse idea means that there was a time before time. And there are multiple times, each bubble has its own time. And so it means that there could actually be a universe before the beginning of our universe. So if you think of a bubble bath, when two bubbles collide, or when two bubbles fission to create a baby bubble, that's called the big bang. So the big bang is nothing but the collision of universes or the budding of universes. That's such a beautiful picture of our incredibly mysterious existence. So is that humbling to you? Exciting, the idea of multiverses? I don't even know how to begin to wrap my mind around it. It's exciting for me, because what I do for a living is string theory. That's my day job. I get paid by the city of New York to work on string theory. And you see, string theory is a multiverse theory. So people say, first of all, what is string theory? String theory simply says that all the particles we see in nature, the electron, the proton, the quarks, what have you, are nothing but vibrations on a musical string, on a tiny, tiny little string. You know, J. Robert Oppenheimer, the creator of the atomic bomb, was so frustrated in the 1950s with all these subatomic particles being created in our atom smashers that he announced one day that the Nobel Prize in physics should go to the physicist who does not discover a new particle that year. Well, today we think they're nothing but musical notes on these tiny little vibrating strings. So what is physics? Physics is the harmonies you can write on vibrating strings. What is chemistry? Chemistry is the melodies you can play on these strings. What is the universe? The universe is a symphony of strings. And then what is the mind of God that Albert Einstein so eloquently wrote about for the last 30 years of his life? The mind of God would be cosmic music, resonating through 11 dimensional hyperspace. So beautifully put. What do you think is the mind of Einstein's God? Do you think there's a why that we could untangle from this universe of strings? Why are we here? What is the meaning of it all? Well, Steven Weinberg, winner of the Nobel Prize, once said that the more we learn about the universe, the more we learn that it's pointless. Well, I don't know. I don't profess to understand the great secrets of the universe. However, let me say two things about what the giants of physics have said about this question. Einstein believed in two types of God. One was the God of the Bible, the personal God, the God that answers prayers, walks on water, performs miracles, smites the Philistines. That's the personal God that he didn't believe in. He believed in the God of Spinoza, the God of order, simplicity, harmony, beauty. The universe could have been ugly. The universe could have been messy, random, but it's gorgeous. You realize that on a single sheet of paper, we can write down all the known laws of the universe.
It's amazing, on one sheet of paper. Einstein's equation is one inch long, string theory is a lot longer, and so is the Standard Model, but you could put all these equations on one sheet of paper. It didn't have to be that way. It could have been messy. And so, Einstein thought of himself as a young boy entering this huge library for the first time, being overwhelmed by the simplicity, elegance, and beauty of this library, but all he could do was read the first page of the first volume. Well, that library is the universe, with all sorts of mysterious, magical things that we have yet to find. And then Galileo was asked about this. Galileo said that the purpose of science is to determine how the heavens go. The purpose of religion is to determine how to go to heaven. So in other words, science is about natural law, and religion is about ethics, how to be a good person, how to go to heaven. As long as we keep these two things apart, we're in great shape. The problem occurs when people from the natural sciences begin to pontificate about ethics, and people from religion begin to pontificate about natural law. That's where we get into big trouble. You think they're fundamentally distinct, morality and ethics and our idea of what is right and what is wrong. That's something that's outside the reach of string theory and physics. That's right. If you talk to a squirrel about what is right and what is wrong, there's no reference frame for a squirrel. And realize that aliens from outer space, if they ever come visit us, they'll try to talk to us like we talk to squirrels in the forest, but eventually we get bored talking to the squirrels because they don't talk back to us. Same thing with aliens from outer space. They come down to earth, they'll be curious about us to a degree, but after a while they just get bored, because we have nothing to offer them. So our sense of right and wrong, what does that mean compared to a squirrel's sense of right and wrong? Now, we of course do have an ethics that keeps civilizations in line, enriches our life, and makes civilization possible. And I think that's a good thing, but it's not mandated by a law of physics. So if aliens, if an alien species were to make contact, forgive me for staying on aliens for a bit longer, do you think they're more likely to be friendly, to befriend us, or to destroy us? Well, I think for the most part, they'll pretty much ignore us. If you're a deer in the forest, who do you fear the most? Do you fear the hunter with his gigantic 16 gauge shotgun? Or do you fear the guy with a briefcase and glasses? Well, the guy with the briefcase could be a developer about to basically flatten the entire forest, destroying your livelihood. So instinctively you may be afraid of the hunter, but actually the problem for deer in the forest is that they should fear developers, because developers look at deer as simply getting in the way. I mean, in War of the Worlds by H.G. Wells, the aliens did not hate us. If you read the book, the aliens did not have evil intentions toward Homo sapiens. No, we were in the way. So I think we have to realize that alien civilizations may view us quite differently than in science fiction novels. However, I personally believe, and I cannot prove any of this, I personally believe that they're probably gonna be peaceful, because there's nothing that they want from our world. I mean, what are they gonna take from us? What are they gonna take us for, gold? No, gold is a useless metal for the most part.
It's silver, I mean, it's gold in color, but that only affects homo sapiens. Squirrels don't care about gold. And so gold is a rather useless element. Rare earths maybe, platinum based elements, rare earths for the electronics, yeah, maybe. But other than that, we have nothing to offer them. I mean, think about it for a moment. People love Shakespeare and they love the arts and poetry, but outside of the earth, they mean nothing, absolutely nothing. I mean, when I write down an equation in string theory, I would hope that on the other side of the galaxy, there's an alien writing down that very same equation in different notation, but that alien on the other side of the galaxy, Shakespeare, poetry, Hemingway, it would mean nothing to him or her or it. When you think about entities that's out there, extraterrestrial, do you think they would naturally look something that even is recognizable to us as life? Or would they be radically different? Well, how did we become intelligent? Basically three things made us intelligent. One is our eyesight, stereo eyesight. We have the eyes of a hunter, stereo vision so we lock in on targets. And who is smarter, predator or prey? Predators are smarter than prey. They have their eyes at the front of their face, like lions, tigers, while rabbits have eyes to the side of their face. Why is that? Hunters have to zero in on the target. They have to know how to ambush. They have to know how to hide, camouflage, sneak up, stealth, deceit. That takes a lot of intelligence. Rabbits, all they have to do is run. So that's the first criterion, stereo eyesight of some sort. Second is the thumb. The opposable thumb of some sort could be a claw or a tentacle. So hand eye coordination. Hand eye coordination is the way we manipulate the environment. And then three, language. Because mama bear never tells baby bear to avoid the human hunter. Bears just learn by themselves. They never hand out information from one generation to the next. So these are the three basic ingredients of intelligence. Eyesight of some sort, an opposable thumb or tentacle or claw of some sort, and language. Now ask yourself a simple question. How many animals have all three? Just us. It's just us. I mean, the primates, they have a language, yeah, they may get up to maybe 20 words, but a baby learns a word a day, several words a day a baby learns. And a typical adult knows about almost 5,000 words. While the maximum number of words that you can teach a gorilla in any language, including their own language, is about 20 or so. And so we see the difference in intelligence. So when we meet aliens from outer space, chances are they will have been descended from predators of some sort. They'll have some way to manipulate the environment and communicate their knowledge to the next generation. That's it, folks. So functionally, that would be similar. That would, we would be able to recognize them. Well, not necessarily, because I think even with Homo sapiens, we are eventually going to perhaps become part cybernetic and genetically enhanced. Already, robots are getting smarter and smarter. Right now, robots have the intelligence of a cockroach. But in the coming years, our robots will be as smart as a mouse, then maybe as smart as a rabbit. If we're lucky, maybe as smart as a cat or a dog. And by the end of the century, who knows for sure, our robots will be probably as smart as a monkey. Now, at that point, of course, they could be dangerous. You see, monkeys are self aware. They know they are monkeys. 
They may have a different agenda than us. While dogs, dogs are confused. You see, dogs think that we are a dog, that we're the top dog. They're the underdog. That's why they whimper and follow us and lick us all the time. We're the top dog. Monkeys have no illusion at all. They know we are not monkeys. And so I think that in the future, we'll have to put a chip in their brain to shut them off once our robots have murderous thoughts. But that's in a hundred years. In 200 years, the robots will be smart enough to remove that fail safe chip in their brain and then watch out. At that point, I think rather than compete with our robots, we should merge with them. We should become part cybernetic. So I think when we meet alien life from outer space, they may be genetically and cybernetically enhanced. Genetically and cybernetically enhanced. Wow, so let's talk about that full range. In the near term and 200 years from now, how promising in the near term in your view is brain machine interfaces? So starting to allow computers to talk directly to the brains, Elon Musk is working on that with Neuralink and there's other companies working on this idea. Do you see promise there? Do you see hope for near term impact? Well, every technology has pluses and minuses. Already we can record memories. I have a book, The Future of the Mind, where I detail some of these breakthroughs. We can now record simple memories of mice and send these memories on the internet. Eventually, we're gonna do this with primates at Wake Forest University and also in Los Angeles. And then after that, we'll have a memory chip for Alzheimer's patients. We'll test it out in Alzheimer's patients because of course, when Alzheimer's patients lose their memory, they wander. They create all sorts of havoc, wandering around, oblivious to their surroundings and they'll have a chip. They'll push the button and memories, memories will come flooding into their hippocampus and the chip telling them where they live and who they are. And so a memory chip is definitely in the cards. And I think this will eventually affect human civilization. What is the future of the internet? The future of the internet is brain net. Brain net is when we send emotions, feelings, sensations on the internet. And we will telepathically communicate with other humans this way. This is gonna affect everything. Look at entertainment. Remember the silent movies? Charlie Chaplin was very famous during the era of silent movies. But when the talkies came in, nobody wanted to see Charlie Chaplin anymore because he never talked in the movies. And so a whole generation of actors lost their job and a new series of actors came in. Next, we're gonna have the movies replaced by brain net because in the future, people will say, who wants to see a screen with images? That's it. Sound and image, that's called the movies. In our entertainment industry, this multi billion dollar industry is based on screens with moving images and sound. But what happens when emotions, feelings, sensations, memories can be conveyed on the internet? It's gonna change everything. Human relations will change because you'll be able to empathize and feel the suffering of other people. We'll be able to communicate telepathically. And this is coming. You described brain net and future of the mind. This is an interesting concept. Do you think, so you mentioned entertainment, but what kind of effect would it have on our personal relationships? Hopefully it will deepen it. 
You realize that for most of human history, for over 90% of human history, we only knew maybe 20, 100 people. That's it, folks. That was your tribe. That was everybody you knew in the universe was only maybe 50 or 100. With the coming of towns, of course it expanded to a few thousand. With the coming of the telephone, all of a sudden you could reach thousands of people with a telephone. And now with the internet, you can reach the entire population of the planet Earth. And so I think this is a normal progression. And you think that kind of sort of connection to the rest of the world, and then adding sensations like being able to share telepathically emotions and so on that would just further deepen our connection to our fellow humans. That's right. In fact, I disagree with many scientists on this question. Most scientists would say that technology is neutral. A double edged sword, one side of the sword can cut against people. The other side of the sword can cut against ignorance and disease. I disagree. I think technology does have a moral direction. Look at the internet. The internet spreads knowledge, awareness, and that creates empowerment. People act on knowledge. When they begin to realize that they don't have to live that way, they don't have to suffer under a dictatorship, that there are other ways of living under freedom, then they begin to take things, take power. And that spreads democracy. And democracies do not war with other democracies. I'm a scientist. I believe in data. So let's take a sheet of paper and write down every single war you had to learn since you were in elementary school. Every single war, hundreds of them. Kings, queens, emperors, dictators. All these wars were between kings, queens, emperors, and dictators. Never between two major democracies. And so I think with the spread of this technology and which would accelerate with the coming of brain net, it means that, well, we will still have wars. Wars, of course, is politics by other means, but they'll be less intense and less frequent. Do you have worries of longer term existential risk from technology, from AI? So I think that's a wonderful vision of a future where war is a distant memory, but now there's another agent. There's somebody else that's able to create conflict, that's able to create harm, AI systems. So do you have worry about such AI systems? Well, yes, that is an existential risk, but again, I think an existential risk, not for this century. I think our grandkids are gonna have to confront this question as robots gradually approach the intelligence of a dog, a cat, and finally that of a monkey. However, I think we will digitize ourselves as well. Not only are we gonna merge with our technology, we'll also digitize our personality, our memories, our feelings. You realize during the Middle Ages, there was something called dualism. Dualism meant that the soul was separate from the body. When the body died, the soul went to heaven. That's dualism. Then in the 20th century, neuroscience came in and said, bah, humbug. Every time we look at the brain, it's just neurons. That's it, folks, period, end of story. Bunch of neurons firing. Now we're going back to dualism. Now we realize that we can digitize human memories, feelings, sensations, and create a digital copy of ourselves, and that's called the Connectome Project. 
Billions of dollars are now being spent to do not just the genome project of sequencing the genes of our body, but the Connectome Project, which is to map the entire connections of the human brain. And even before then, already in Silicon Valley, today, at this very moment, you can contact Silicon Valley companies that are willing to digitize your relatives because some people want to talk to their parents. There are unresolved issues with their parents, and one day, yes, firms will digitize people, and you'll be able to talk to them a reasonable facsimile. We leave a digital trail. Our ancestors did not. Our ancestors were lucky if they had one line, just one line in a church book, saying the date they were baptized and the date they died. That's it. That was their entire digital memory. I mean, their entire digital existence summarized in just a few letters of the alphabet, a whole life. Now we digitize everything. Every time you sneeze, you digitize it. You put it on the internet. And so I think that we are gonna digitize ourselves and give us digital immortality. We'll not only have biologic genetic immortality of some sort, but also digital immortality. And what are we gonna do with it? I think we should send it into outer space. If you digitize the human brain and put it on a laser beam and shoot it to the moon, you're on the moon in one second. Shoot it to Mars, you're on Mars in 20 minutes. Shoot it to Pluto, you're on Pluto in eight hours. Think about it for a moment. You can have breakfast in New York and for a morning snack, vacation on the moon, then zap your way to Mars by noontime, journey through the asteroid belt of the afternoon, and then come back for dinner in New York at night. All in a day's work at the speed of light. Now, this means that you don't need booster rockets. You don't need weightlessness problems. You don't need to worry about meteorites. And what's on the moon? On the moon, there is a mainframe that downloads your laser beam's information. And where does it download the information into? An avatar. Now, what does that avatar look like? Anything you want. Think about it for a moment. You could be Superman, Superwoman, on the moon, on Mars, traveling throughout the universe at the speed of light, downloading your personality into any vehicle you want. Now, let me stick my neck out. So far, everything I've been saying is well within the laws of physics. Well within the laws of physics. Now, let me go outside the laws of physics again. Here we go. I think this already exists. I think outside the Earth, there could be a super highway a laser highway of laser porting with billions of souls of aliens zapping their way across the galaxy. Now, let me ask you a question. Are we smart enough to determine whether such a thing exists or not? No, this could exist right outside the orbit of the planet Earth. And we're too stupid in our technology to even prove it or disprove it. We would need the aliens on this laser super highway to help us out, to send us a human interpretable signal. I mean, it ultimately boils down to the language of communication, but that's an exciting possibility that actually the sky is filled with aliens. The aliens could already be here. And we're just so oblivious that we're too stupid to know it. See, they don't have to be in alien form with little green men. They can be in any form they want in an avatar of their creation. Well, in fact, they could very well be. They can even look like us. Exactly. We'd never know. 
One of us could be an alien. You know, in the zoo, did you know that we sometimes have zookeepers that imitate animals? We create a fake animal and we put it in so that the animal is not afraid of this fake animal. And of course, these animals brains, their brain is about as big as a walnut. They accept these dummies as if they were real. So an alien civilization in outer space would say, oh yeah, human brains are so tiny. We could put a dummy on their world, an avatar, and they'd never know it. That would be an entertaining thing to watch from the alien perspective. So you kind of implied that with a digital form of our being, but also biologically, do you think one day technology will allow individual human beings to become immortal besides just through the ability to digitize our essence? Yeah, I think that artificial intelligence will give us the key to genetic immortality. You see, in the coming decades, everyone's gonna have their gene sequence. We'll have billions of genomes of old people, billions of genomes of young people. And what are we gonna do with it? We're gonna run it through an AI machine, which has pattern recognition, to look for the age genes. In other words, the fountain of youth that emperors, kings, and queens lusted over. The fountain of youth will be found by artificial intelligence. Artificial intelligence will identify where these age genes are located. First of all, what is aging? We now know what aging is. Aging is the buildup of errors. That's all aging is, the buildup of genetic errors. This means that cells eventually become slower, sluggish, they go into senescence, and they die. In fact, that's why we die. We die because of the buildup of mistakes in our genome, in our cellular activity. But you see, in the future, we'll be able to fix those genes with CRISPR type technologies, and perhaps even live forever. So let me ask you a question. Where does aging take place in a car? Given a car, where does aging take place? Well, it's obvious, the engine, right? A, that's where you have a lot of moving parts. B, that's where you have combustion. Well, where in the cell do we have combustion? The mitochondria. We now know where aging takes place. And if we cure many of the mistakes that build up in the mitochondria of the cell, we could become immortal. Let me ask you, if you yourself could become immortal, would you? Damn straight. No, I think about it for a while, because of course, it depends on how you become immortal. You know, there's a famous myth of Tithonus. It turns out that years ago, in the Greek mythology, there was the saga of Tithonus and Aurora. Aurora was the goddess of the dawn, and she fell in love with a mortal, a human called Tithonus. And so Aurora begged Zeus to grant her the gift of immortality to give to her lover. So Zeus took pity on Aurora and made Tithonus immortal. But you see, Aurora made a mistake, a huge mistake. She asked for immortality, but she forgot to ask for eternal youth. So poor Tithonus got older and older and older every year, decrepit, a bag of bones, but he could never die. Never die. Quality of life is important. So I think immortality is a great idea, as long as you also have immortal youth as well. Now, I personally believe, and I cannot prove this, but I personally believe that our grandkids may have the option of reaching the age of 30 and then stopping. They may like being age 30, because you have wisdom, you have all the benefits of age and maturity, and you still live forever with a healthy body. 
Our descendants may like being 30 for several centuries. Is there an aspect of human existence that is meaningful only because we're mortal? Well, every waking moment, we don't think about it this way, but every waking moment, actually, we are aware of our death and our mortality. Think about it for a moment. When you go to college, you realize that you are in a period of time where soon you will reach middle age and have a career. And after that, you'll retire and then you'll die. And so even as a youth, even as a child, without even thinking about it, you are aware of your own death, because it sets limits to your lifespan. I gotta graduate from high school. I gotta graduate from college. Why? Because you're gonna die. Because unless you graduate from high school, unless you graduate from college, you're not gonna enter old age with enough money to retire and then die. And so, yeah, people think about it unconsciously, because it affects every aspect of your being. The fact that you go to high school, college, get married, have kids, there's a clock, a clock ticking even without your permission. It gives a sense of urgency. Do you yourself, I mean, there's so much excitement and passion in the way you talk about physics and the way you talk about technology in the future. Do you yourself meditate on your own mortality? Do you think about this clock that's ticking? Well, I try not to, because it then begins to affect your behavior. You begin to alter your behavior to match your expectation of when you're gonna die. So let's talk about youth, and then let's talk about death, okay? When I interview scientists on radio, I often ask them, what made the difference? How old were you? What changed your life? And they always say more or less the same thing. Now, these are Nobel Prize winners, directors of major laboratories, very distinguished scientists. They always say, when I was 10, when I was 10, something happened. It was a visit to the planetarium. It was a telescope. For Steven Weinberg, winner of the Nobel Prize, it was the chemistry kit. For Heinz Pagels, it was a visit to the planetarium. For Isidor Rabi, it was a book about the planets. For Albert Einstein, it was a compass. Something happened, which gives them this existential shock. Because you see, before the age of 10, everything is mommy and daddy, mommy and daddy. That's your universe, mommy and daddy. Around the age of 10, you begin to wonder, what's beyond mommy and daddy? And that's when you have this epiphany, when you realize, oh my God, there's a universe out there, a universe of discovery. And that sensation stays with you for the rest of your life. You still remember that shock that you felt gazing at the universe. And then you hit the greatest destroyer of scientists known to science. The greatest destroyer of scientists known to science is junior high school. When you hit junior high school, folks, it's all over. It's all over. Because in junior high school, people say, hey, stupid. I mean, you like that nerdy stuff. And your friends shun you. All of a sudden, people think you're a weirdo. And science is made boring. Richard Feynman, the Nobel Prize winner, when he was a child, his father would take him into the forest. And the father would teach him everything about birds, why they're shaped the way they are, their wings, the coloration, the shape of their beak, everything about birds. So one day, a bully comes up to the future Nobel Prize winner and says, hey, Dick, what's the name of that bird over there?
Well, he didn't know. He knew everything about that bird except its name. So he said, I don't know. And then the bully said, what's the matter, Dick? You stupid or something? And then in that instant, he got it. He got it. He realized that for most people, science is giving names to birds. That's what science is. You know lots of names of obscure things. Hey, people say, you're smart. You're smart. You know all the names of the dinosaurs. You know all the names of the plants. No, that's not science at all. Science is about principles, concepts, physical pictures. That's what science is all about. My favorite quote from Einstein is that, unless you can explain the theory to a child, the theory is probably worthless. Meaning that all great theories are not big words. All great theories are simple concepts, principles, basic physical pictures. Relativity is all about clocks, meter sticks, rocket ships and locomotives. Newton's laws of gravity are all about balls and spinning wheels and things like that. That's what physics and science is all about, not memorizing things. And that stays with you for the rest of your life. So even in old age, I've noticed that these scientists, when they sit back, they still remember. They still remember that flush, that flush of excitement they felt with that first telescope, that first moment when they encountered the universe. That keeps them going. That keeps them going. By the way, I should point out that when I was eight, something happened to me as well. When I was eight years old, it was in all the papers that a great scientist had just died. And they put a picture of his desk on the front page. That's it, just a simple picture on the front page of the newspapers of his desk. That desk had a book on it, which was opened. And the caption said more or less, this is the unfinished manuscript from the greatest scientist of our time. So I said to myself, well, why couldn't he finish it? What's so hard that you can't finish it if you're a great scientist? It's a homework problem, right? You go home, you solve it, or you ask your mom, why couldn't he solve it? So to me, this was a murder mystery. This was greater than any adventure story. I had to know why the greatest scientist of our time couldn't finish something. And then over the years, I found out the guy had a name, Albert Einstein, and that book was The Theory of Everything. It was unfinished. Well, today I can read that book. I can see all the dead ends and false starts that he made. And I began to realize that he lost his way because he didn't have a physical picture to guide him on the third try. On the first try, he talked about clocks and lightning bolts and meter sticks, and that gave us special relativity, which gave us the atomic bomb. The second great picture was gravity with balls rolling on curved surfaces. And that gave us the Big Bang, creation of the universe, black holes. On the third try, he missed it. He had no picture at all to guide him. In fact, there's a quote I have where he said, I'm still looking. I'm still looking for that picture. He never found it. Well, today we think that picture is string theory. String theory can unify gravity and this mysterious thing that Einstein didn't like, which is quantum mechanics, or couldn't quite pin down and make sense of. That's right. Mother nature has two hands, a left hand and a right hand. The left hand is a theory of the small. The right hand is a theory of the big.
The theory of the small is the quantum theory, the theory of atoms and quarks. The theory of the big is relativity, the theory of black holes, big bangs. The problem is the left hand does not talk to the right hand. They hate each other. The left hand is based on discrete particles. The right hand is based on smooth surfaces. How do you put these two things together into a single theory? They hate each other. The greatest minds of our time, the greatest minds of our time worked on this problem and failed. Today, the only theory that has survived every challenge so far is string theory. That doesn't mean string theory is correct. It could very well be wrong, but right now it's the only game in town. Some people come up to me and say, ''Professor, I don't believe in string theory. Give me an alternative.'' And I tell them there is none. Get used to it. It's the best theory we got. It's the only theory we have. It's the only theory we have. Do you see, you know, the strings kind of inspire a view, as did atoms and particles and quarks, but especially strings inspire a view of a universe as a kind of information processing system, as a computer of sorts. Do you see the universe in this way? No. Some people think, in fact, the whole universe is a computer of some sort. And they believe that perhaps everything, therefore, is a simulation. Yes. I don't think so. I don't think that there is a super video game where we are nothing but puppets dancing on the screen and somebody hit the play button and here we are talking about simulations. No. Even Newtonian mechanics says that the weather, the simple weather is so complicated with trillions upon trillions of atoms that it cannot be simulated in a finite amount of time. In other words, the smallest object which can describe the weather and simulate the weather is the weather itself. The smallest object that can simulate a human is the human itself. And if you had quantum mechanics, it becomes almost impossible to simulate it with a conventional computer. This quantum mechanics deals with all possible universes, parallel universes, a multiverse of universes. And so the calculation just spirals out of control. Now, so far, there's only one way where you might be able to argue that the universe is a simulation. And this is still being debated by quantum physicists. It turns out that if you throw the encyclopedia into a black hole, the information is not lost. Eventually it winds up on the surface of the black hole. Now, the surface of the black hole is finite. In fact, you can calculate the maximum amount of information you can store in a black hole. It's a finite number. It's a calculable number, believe it or not. Now, if the universe were made out of black holes, which is the maximum universe you can conceive of, each universe, each black hole has a finite amount of information. Therefore, ergo, da da! Ergo, the total amount of information in a universe is finite. This is mind boggling. This, I consider mind boggling, that all possible universes are countable and all possible universes can be summarized in a number, a number you can write on a sheet of paper, all possible universes, and it's a finite number. Now, it's huge. It's a number beyond human imagination. It's a number based on what is called a Planck length, but it's a number. And so if a computer could ever simulate that number, then the universe would be a simulation. So theoretically, because the amount of information is finite, well, there necessarily must be able to exist a computer. 
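The finite number Kaku is describing above comes from the Bekenstein-Hawking result that a black hole's maximum information content scales with its horizon area measured in Planck areas. Here is a minimal Python sketch of that counting, using a one-solar-mass black hole as my own illustrative example; the chosen mass and the rounding are assumptions, not figures from the conversation.

import math

# Physical constants in SI units.
G = 6.674e-11       # gravitational constant
c = 2.998e8         # speed of light
hbar = 1.055e-34    # reduced Planck constant

planck_length = math.sqrt(hbar * G / c**3)   # about 1.6e-35 m

mass = 1.989e30                              # assumed example: one solar mass, in kg
r_s = 2 * G * mass / c**2                    # Schwarzschild radius, about 3 km
area = 4 * math.pi * r_s**2                  # horizon area in m^2

# Bekenstein-Hawking bound: entropy ~ area / (4 * Planck area), converted to bits.
bits = area / (4 * planck_length**2 * math.log(2))
print(f"Horizon radius: {r_s / 1000:.1f} km")
print(f"Maximum information: about 10^{int(math.log10(bits))} bits")

The same area scaling is what lets one attach a finite, if astronomically large, number to the information content of any horizon-bounded region, which is the sense in which all possible universes can be summarized in a single number.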
It's just, from an engineering perspective, maybe impossible to build. Yes, no computer we can build is capable of simulating the entire universe, except the universe itself. So that's your intuition, that our universe is very efficient, and so there's no shortcuts. Right, two reasons why I believe the universe is not a simulation. First, the calculational numbers are just incredible. No finite Turing machine can simulate the universe. And second, why would any super intelligent being simulate humans? If you think about it, most humans are kind of stupid. I mean, we do all sorts of crazy, stupid things, right? And we call it art, we call it humor. We call it human civilization. So why should an advanced civilization go through all that effort just to simulate Saturday Night Live? Well, that's a funny idea, but it's also, do you think it's possible that the act of creation cannot anticipate humans? You simply set the initial conditions and set a bunch of physical laws, and just for the fun of it, see what happens. You launch the thing, so you're not necessarily simulating everything. You're not simulating every little bit in the sense that you could predict what's going to happen, but you set the initial conditions, set the laws, and see what kind of fun stuff happens. Well, in some sense, that's how life got started. In the 1950s, Stanley Miller did what is called the Miller experiment. He put a bunch of hydrogen gas, methane, toxic gases with liquid and a spark in a small glass beaker. And then he just walked away for a few weeks, came back a few weeks later, and bingo. Out of nothing and chaos came amino acids. If he had left it there for a few years, he might have gotten protein, protein molecules for free. That's probably how life got started, as an accident. And if he had left it there for perhaps a few million years, DNA might have formed in that beaker. And so we think that, yeah, DNA, life, all that could have been an accident if you wait long enough. And remember, our universe is roughly 13.8 billion years old. That's plenty of time for lots of random things to happen, including life itself. Yeah, we could be just a beautiful little random moment. And there could be an infinite number of those throughout the history of the universe, many creatures like us. We perhaps are not the epitome of what the universe is created for. Thank God. Let's hope not. Just look around. Yeah. Look to your left, look to your right. When do you think the first human will step foot on Mars? I think it's a good chance in the 2030s that we will be on Mars. In fact, there's no physics reason why we can't do it. It's an engineering problem. It's a very difficult and dangerous engineering problem, but it is an engineering problem. And in my book, The Future of Humanity, I even speculate beyond that, that by the end of this century, we'll probably have the first starships. The first starships will not look like the Enterprise at all. They'll probably be small computer chips that are fired by laser beams with parachutes. And like what Stephen Hawking advocated, the Breakthrough Starshot program could send ships to the nearby stars, traveling at 20% the speed of light, reaching Alpha Centauri in about 20 years time. Beyond that, we should have fusion power. Fusion power is, in some sense, one of the ultimate sources of energy, but it's unstable. And we don't have fusion power today. Now, why is that? First of all, stars form almost for free. You get a bunch of gas large enough, it becomes a star.
I mean, you don't even have to do anything to it, and it becomes a star. Why is fusion so difficult to put on the Earth? Because in outer space, stars are monopoles. They are single poles that are spherically symmetric. And it's very easy to get spherically symmetric configurations of gas to compress into a star. It just happens naturally all by itself. The problem is magnetism is bipolar. You have a North Pole and a South Pole. And it's like trying to squeeze a long balloon. Take a long balloon and try to squeeze it. You squeeze one side, it bulges out the other side. Well, that's the problem with fusion machines. We use magnetism with a North Pole and a South Pole to squeeze gas, and all sorts of anomalies and horrible configurations can take place because we're not squeezing something uniformly like in a star. Stars, in some sense, are for free. Fusion on the Earth is very difficult. But I think it's inevitable. And it'll eventually give us unlimited power from seawater. So seawater will be the ultimate source of energy for the planet Earth. Why? What's the intuition there? Because we'll extract hydrogen from seawater, burn hydrogen in a fusion reactor to give us unlimited energy without the meltdown, without the nuclear waste. Why do we have meltdowns? We have meltdowns because in fission reactors, every time you split the uranium atom, you get nuclear waste. Tons of it. 30 tons of nuclear waste per reactor per year. And it's hot. It's hot for thousands, millions of years. That's why we have meltdowns. But you see, the waste product of a fusion reactor is helium gas. Helium gas is actually commercially valuable. You can make money selling helium gas. And so the waste product of a fusion reactor is helium, not nuclear waste that we find in a commercial fission plant. And that, mastering and controlling fusion, converts us into a type one, I guess, civilization, right? Yeah, probably the backbone of a type one civilization will be fusion power. We, by the way, are type zero. We don't even rate on this scale. We get our energy from dead plants, for God's sake, oil and coal. But we are about 100 years from being type one. Get a calculator. In fact, Carl Sagan calculated that we are about 0.7, fairly close to a 1.0. For example, what is the internet? The internet is the beginning of the first type one technology to enter into our century. The first planetary technology is the internet. What is the language of type one? On the internet already, English and Mandarin Chinese are the most dominant languages on the internet. And what about the culture? We're seeing type one sports, soccer, the Olympics, a type one music, youth culture, rock and roll, rap music, type one fashion, Gucci, Chanel, a type one economy, the European Union, NAFTA, what have you. So we're beginning to see the beginnings of a type one culture in a type one civilization. And inevitably, it will spread beyond this planet. So you talked about sending a chip at 20% the speed of light to Alpha Centauri. But in a slightly nearer term, what do you think about the idea, when we still have to send our biological bodies, of the colonization of planets, colonization of Mars? Do you see us becoming a two planet species ever or anytime soon? Well, just remember the dinosaurs did not have a space program. And that's why they're not here today. How come there are no dinosaurs in this room today? Because they didn't have a space program.
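An aside on the "about 0.7" figure attributed to Sagan above: it comes from his interpolation of the Kardashev scale, K = (log10 P - 6) / 10, where P is a civilization's power use in watts and Type 1 sits at roughly 10^16 W. A minimal sketch, with humanity's present consumption taken as roughly 2e13 W purely as my own illustrative assumption:

import math

def kardashev_type(power_watts):
    # Sagan's continuous interpolation of the Kardashev scale:
    # Type 1 corresponds to about 1e16 W, Type 2 to 1e26 W, Type 3 to 1e36 W.
    return (math.log10(power_watts) - 6) / 10

humanity_power = 2e13  # watts, assumed rough figure for present-day civilization
print(f"Humanity today: Type {kardashev_type(humanity_power):.2f}")
print(f"Power needed for Type 1: about {10**16:.0e} W")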
We do have a space program, which means that we have an insurance policy. Now, I don't think we should bankrupt the Earth or deplete the Earth to go to Mars. That's too expensive and not practical. But we need a settlement, a settlement on Mars in case something bad happens to the planet Earth. And that means we have to terraform Mars. Now, to terraform Mars, if we could raise the temperature of Mars by six degrees, six degrees, then the polar ice caps begin to melt, releasing water vapor. Water vapor is a greenhouse gas. It causes even more melting of the ice caps. So it becomes a self fulfilling prophecy. It feeds on itself. It becomes autocatalytic. And so once you hit six degrees, rising of the temperature on Mars by six degrees, it takes off. And we melt the polar ice caps. And liquid water once again flows in the rivers, the canals, the channels, and the oceans of Mars. Mars once had an ocean, we think, about the size of the United States. And so that is a possibility. Now, how do we get there? How do we raise the temperature of Mars by six degrees? Elon Musk would like to detonate hydrogen warheads on the polar ice caps. Well, I'm not sure about that. Because we don't know that much about the effects of detonating hydrogen warheads to melt the polar ice caps. And who wants to glow in the dark at night reading the newspaper? So I think there are other ways to do it with solar satellites. You can have satellites orbiting Mars that beam sunlight onto the polar ice caps, melting the polar ice caps. Mars has plenty of water. It's just frozen. I think you paint an inspiring and a wonderful picture of the future. I think you've inspired and educated thousands, if not millions. Michio, it's been an honor. Thank you so much for talking today. My pleasure.
Michio Kaku: Future of Humans, Aliens, Space Travel & Physics | Lex Fridman Podcast #45
The following is a conversation with Gary Kasparov. He's considered by many to be the greatest chess player of all time. From 1986 until his retirement in 2005, he dominated the chess world, ranking world number one for most of those 19 years. While he has many historical matches against human chess players, in the long arc of history he may be remembered for his match against the machine, IBM's Deep Blue. His initial victories and eventual loss to Deep Blue captivated the imagination of the world, of what role artificial intelligence systems may play in our civilization's future. That excitement inspired an entire generation of AI researchers, including myself, to get into the field. Gary is also a pro democracy political thinker and leader, a fearless human rights activist, and author of several books, including How Life Imitates Chess, which is a book on strategy and decision making, Winter is Coming, which is a book articulating his opposition to the Putin regime, and Deep Thinking, which is a book on the role of both artificial intelligence and human intelligence in defining our future. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. And now, here's my conversation with Gary Kasparov. As perhaps the greatest chess player of all time, when you look introspectively at your psychology throughout your career, what was the bigger motivator, the love of winning or the hatred of losing? Tough question. Have to confess I never heard it before, which is again, congratulations. It's quite an accomplishment. Losing was always painful. For me, it was almost like a physical pain because I knew that if I lost the game, it's just because I made a mistake. So I always believed that the result of the game had to be decided by the quality of my play. Okay, you may say it sounds arrogant, but it helped me to move forward because I always knew that there was room for improvement. So it's the... Was there the fear of the mistake? Actually, fear of mistake guarantees mistakes. And the difference between top players at the very top is that it's the ability to make a decision without predictable consequences. You don't know what's happening. It's just intuitively. I can go this way or that way. And there are always hesitations. People are like, you are just at the crossroad. You can go right, you can go left, you can go straight. You can turn and go back. And the consequences are just very uncertain. Yes, you have certain ideas what happens on the right or on the left or on just if you go straight, but it's not enough to make well calculated choice. And when you play chess at the very top, it's about your inner strength. So I can make this decision. I will stand firm and I'm not going to waste my time because I have full confidence that I will go through. Going back to your original question is, I would say neither. It's just, it's love for winning, hate for losing. There were important elements, psychological elements, but the key element, I would say the driving force was always my passion for making a difference. It's just, I can move forward and I can always, I can always enjoy not just playing, but creating something new. Creating something new. How do you think about that? It's just finding new ideas in the openings, some original plan in the middle game. 
It's actually, that helped me to make the transition from the game of chess where I was on the very top to another life where I knew I would not be number one. I would not be necessarily on the top, but I could still be very active and productive by my ability to make a difference, by influencing people, say joining the democratic movement in Russia or talking to people about human machine relations. There's so many things where I knew my influence may not be as decisive as in chess, but still strong enough to help people to make their choices. So you can still create something new that makes a difference in the world outside of chess. But wait, you've kind of painted a beautiful picture of your motivations in chess to create something new, to look for those moments of some brilliant new ideas. But were you haunted by something? See, you make it seem like to be at the level you're at, you can get away without having demons, without having fears, without being driven by some of the darker forces. I mean, you sound almost religious. The darker forces, spiritual demons. I mean, do you have a call for a priest? That's what I'm dressing as. Now, just let's go back to these crucial chess moments where I had to make big decisions. As I said, it was all about my belief from very early days that I can make all the difference by playing well or by making mistakes. So yes, I always had an opponent across the chess board, opposite me. But no matter how strong the opponent was, whether it just was ordinary player or another world champion like Anatoly Karpov, having all respect for my opponent, I still believe that it's up to me to make the difference. And I knew I was not invincible. I made mistakes. I made some blunders. And with age, I made more blunders. So I knew it. But it's still, it's very much for me to be decisive factor in the game. I mean, even now, look, I just, my latest chess experience was horrible. I mean, I played Caruana, Fabi Caruana, this number two, number two, number three player in the world these days. We played this 960 with the Fischer, so called Fischer random chess, reshuffling pieces. Yeah, I lost very badly, but it's because I made mistakes. I mean, I had so many winning positions. I mean, 15 years ago, I would have crushed him. So, and it's, you know, while I lost, I was not so much upset. I mean, I know, as I said in the interview, I can fight any opponent, but not my biological clock. So it's fighting time is always a losing proposition. But even today at age 56, you know, I knew that, you know, I could play great game. I couldn't finish it because I didn't have enough energy or just, you know, I couldn't have the same level of concentration. But, you know, in number of games where I completely outplayed one of the top players in the world, I mean, gave me a certain amount of pleasure. That is, even today, I haven't lost my touch. Not the same, you know. Okay, the jaws are not as strong and the teeth are not as sharp, but I could get to him just, you know, almost, you know, on the ropes. Still got it. Still got it. And it's, you know, and it's, I think it's, my wife said it well. I mean, she said, look, Gary, it's somehow, it's not just fighting your biological clock. It's just, you know, maybe it's a signal because, you know, the goddess of chess, since you spoke great about demons. 
The goddess of chess, Caissa, maybe she didn't want you to win because, you know, if you could beat the number two, number three player in the world, I mean, that's one of the top players who just recently played a World Championship match. If you could beat him, that would be really bad for the game of chess. Because what people will say is, oh, look, the game of chess, you know, it's not making any progress. The game is just, you know, it's totally devalued because, look, the guy coming out of retirement, you know, just, you know, winning games, maybe that was good for chess, not good for you. But it's, look, I've been following your logic. We should always look for, you know, demons, you know, superior forces and other things that could, you know, if not dominate our lives, but somehow, you know, play a significant role in the outcome. Yeah, so the goddess of chess had to send a message. Yeah, that's okay. So Garry, you should do something else. Time. Now for a question that you have heard before, but give me a chance. You've dominated the chess world for 20 years, even still got it. Is there a moment, you said, you always look to create something new. Are there games or moments you're especially proud of in terms of your brilliance, of a new creative move? You've talked about Mikhail Tal as somebody who was an aggressive and creative chess player. In your own game... Look, you mentioned Mikhail Tal. He was a very aggressive, very sharp player, famous for his combinations and sacrifices, even called the magician from Riga, for his very unique style. But any world champion, you know, was a creator. Some of them were so flamboyant and flashy, like Tal. Some of them were, you know, less discernible at the chess board, like Tigran Petrosian, but every world champion, every top player brought something into the game of chess. And each contribution was priceless because it's not just about sacrifices. Of course, amateurs, they enjoy, you know, the brilliant games where pieces are being sacrificed. It's all just, you know, pieces hanging. And it's all of a sudden, you know, being material down, a rook down, or just, you know, a queen down. The weaker side delivers the final blow, just, you know, mating the opponent's king. But there are other kinds of beauty. I mean, it's a slow positional maneuvering, you know, looking for weaknesses and just, and gradually, you know, strangling your opponent and eventually delivering sort of a positional masterpiece. So I think I made more difference in the game of chess than I could have imagined when I started playing. And the reason I thought it was time for me to leave was just, I mean, I knew that I was no longer in the position to bring the same kind of contribution, the same kind of new knowledge into the game. So, and going back, I could immediately look at my games against Anatoly Karpov. It's not just that I won the match in 1985 and became a world champion at age 22, but there were at least two games in that match. Of course, the last one, game 24, that was the decisive game of the match, I won and became world champion. But also the way I won, it was a very sharp game and I found a unique maneuver that was absolutely new, and it has become some sort of typical maneuver now, though when the move was made on the board and put on display, a lot of people thought it was ugly. And another game, game 16 in the match where I just also managed to outplay Karpov completely with black pieces, just paralyzing his entire army in its own camp.
Technically or psychologically, or was that a mix of both in game 16? Yeah, I think it was a big blow to Karpov. I think it was a big psychological victory for a number of reasons. One, the score was equal at the time and the world champion by the rules could retain his title in case of a tie. So we still had, before game 16, nine games to go. And also it was some sort of a bluff because neither I nor Karpov saw the refutation of this opening idea. And I think for Karpov, it was a double blow, because it was not just that he lost the game, I should say a triple blow. He lost the game, it was a brilliant game and I played impeccably after just this opening bluff. And then they discovered that it was a bluff. So again, I didn't know, so I was not bluffing. So that's why it happens very often. Some ideas could be refuted. And it's just, what I found out, and this is again, going back to your spiritual theme, is that you could spend a lot of time working. And when I say you could, it's in the 80s, in the 90s. It doesn't happen these days because everybody has a computer. You could immediately see if it works or it doesn't work. Machine shows your refutation in a split second. But many of the analyses in the 80s or in the 90s, they were not perfect simply because we're humans and just you analyze the game, you look for some fresh ideas. And then just it happens that there was something that you missed because the level of the concentration at the chess board is different from when you analyze the game, just moving the pieces around. But somehow, if you spend a lot of time preparing, in your studies with your coaches, hours and hours and hours, even if nothing of what you found materialized on the chess board, somehow these hours, I don't know why, always helped you. It's as if the amount of work you did could be transformed into some sort of spiritual energy that helped you to come up with other great ideas at the board. Again, even if there was no direct connection between your preparation and your victory in the game, there was always some sort of invisible connection between the amount of work you did, your dedication to actually, and your passion to discover new ideas, and your ability during the game at the chess board, when the clock was ticking, we still had a ticking clock, not a digital clock, at the time. So to come up with some brilliance. And I also can mention many games from the 90s. So obviously all amateurs would pick up my game against Veselin Topalov in 1999 in Wijk aan Zee. Again, because it was a brilliant game, the Black King traveled from its own camp into White's camp across the entire board. It doesn't happen often, trust me, as you know, in the games with professional players, top professional players. So that's why visually it was one of the most impressive victories. But I could bring to your attention many other games that were not so impressive for amateurs, not so beautiful, just because, you know, sacrifice is always beautiful, you sacrifice pieces. And then eventually you have very few resources left and you use them just to crush your opponent basically. You have to mate the king because you have almost nothing left at your disposal. But up to the very end, again, less and less, but still up to the very end, I always had games with some sort of interesting ideas and games that gave me great satisfaction.
But I think what happened from 2005 up to these days was also a very big accomplishment, since I had to sort of relocate myself. Yeah, rechannel the creative energies. Exactly, and to find something where I feel comfortable, even confident that my participation still makes the difference. Beautifully put. So let me ask perhaps a silly question, but sticking on chess for just a little longer. Where do you put Magnus Carlsen, the current world champion, in the list of all time greats? In terms of style, moments of brilliance, consistency. It's a tricky question. The moment you start ranking world champions. Yeah, you lose something? I think it's not fair because any new generation knows much more about the game than the previous one. So when people say, oh, Garry was the greatest, Fischer was the greatest, Magnus was the greatest, it disregards the fact that the great players of the past, whether it was Lasker, Capablanca, Alekhine, I mean, they knew so little about chess by today's standards. I mean, today, just any kid that spent a few years with his or her chess computer knows much more about the game simply just because you have access to this information. And it has been discovered generation after generation. We added more and more knowledge to the game of chess. It's about the gap between the world champion and the rest of the field. So now, if you look at the gap, then probably Fischer could be on top, but for a very short period of time. Then you should also add a time factor. I was on top, not by as big a gap as Fischer, but much longer. So, and also, unlike Fischer, I succeeded in beating the next generation. Here's the question. Let's see if you still got the fire, speaking of the next generation, because you did succeed in beating the next generation. It's close. Okay, Short, Anand, Shirov. Kramnik is already 12 years younger. So that's the next generation. But still yet, I competed with them and I just, I beat most of them. And I was still dominant when I left at the age of 41. So back to Magnus. Magnus, I mean, consistency is phenomenal. The reason Magnus is on top, and he seems unbeatable today, is that Magnus is a lethal combination of Fischer and Karpov, which is very, it's very unusual because Fischer's style was very dynamic, just fighting to the last point, just using every resource available. Karpov was very different. It's just an unparalleled ability to use every piece with a maximum effect. Just minimal resources always producing maximum effect. So now imagine that you merge these two styles. So it's like, you know, it's squeezing every stone for a drop of water, but doing it, you know, just, you know, for 50, 60, 70, 80 moves. I mean, Magnus could go on as long as Fischer with all his passion and energy. And at the same time being as meticulous and deadly as Karpov by just, you know, using every little advantage. And he has good, you know, very good health. It's important. I mean, physical condition is, by the way, very important. A lot of people don't recognize it. The latest studies show that chess players burn thousands of calories during the game. So that puts him on the top of this field of the world champions. But again, it's the discussion that is, I saw recently on the internet, whether Garry Kasparov at his peak, let's say late eighties, could beat Magnus Carlsen today.
I mean, it's certainly irrelevant because Garry Kasparov in 1989, okay, has played great chess, but still I knew very little about chess compared to Magnus Carlsen in 2019, who by the way, learned from me as well. So that's why, yeah. I'm extremely cautious in making any judgment that involves, you know, time gaps. You ask, you know, soccer fans. So who is your favorite? Pele, Maradona, or Messi? Yeah. Yeah, who's your favorite? Messi. Messi. Yeah, why? Because? Maybe Maradona, maybe. Not because you're younger, but that's simple. Your instinctive answer is correct because you saw, you didn't see Maradona in action. I saw all of them in action. So that's why, but since, you know, when I was, you know, just following it, you know, just Pele and Maradona, they were just, you know, they were big stars and it's, Messi's already just, I was gradually losing interest in just other things. So I remember Pele in 1970, the final match Brazil Italy. So that's the first World Cup soccer I watched. So that's the, and actually my answer when I just, when I just, you know, because I was asked this question as well. So I say that it's just, while it's impossible to make a choice, I would still probably go with Maradona for a simple reason. The Brazilian team in 1970 could have won without Pele. It was absolutely great. Still could have won, maybe, but the Argentinian team in 1986 without Maradona would not be in the final. So this is, and Messi, he still hasn't won a title. You could argue for that for an hour, but you could say, if you ask Maradona, if you look in his eyes, especially, let's say Garry Kasparov in 1989, he would have said, I sure as hell would beat Magnus Carlsen. Just simply because. The confidence, the fire. Simply because, again, they saw me in action. So this, again, it's the age factor that's important. Definitely with the passion and energy and being equipped with all modern ideas. But again, then you make, you know, a very important assumption that you could empower Garry Kasparov in 1989 with all ideas that have been accumulated over 30 years. That would not be Garry Kasparov. That would be someone else. Because again, I belong to 1989. I was way ahead of the field. And I beat Karpov several times in the World Championship matches. And I crossed 2800, which, by the way, if you look at the ratings, even today, this is the rating at which I retired. It's still, you know, top two, three. So that's Caruana and Ding. It's about the same rating now. And I crossed 2800 in 1990. Well, just look at the inflation. When I crossed 2800 in 1990, there was only one player in the 2700 category, and that was Karpov. Now we have more than 50. So when you see this, if you add inflation, I think my 2851 could probably be more valuable than Magnus's 2882, which was his highest rating. But anyway, again, too many hypotheticals. You lost to IBM Deep Blue in 1997. In my eyes, that is one of the most seminal moments in history. Again, I apologize for romanticizing the notion, but in the history of our civilization, because humans, as civilizations, for centuries saw chess as, you know, the peak of what man can accomplish, of intellectual mastery, right? And that moment when a machine could beat a human being was inspiring to just an entire, anyone who cares about science, innovation, an entire generation of AI researchers.
And yet, to you that loss, at least reading your face, seemed like a tragedy, extremely painful. Like you said, physically painful. Why? When you look back at your psychology of that loss, why was it so painful? Were you not able to see the seminal nature of that moment? Or was that exactly why it was that painful? As I already said, losing was painful, physically painful. And the match I lost in 1997 was not the first match I lost to a machine. It was the first match I lost, period. Yeah. That's... Oh, wow. So... Oh, wow. Yeah, it's... Right. Yeah, that makes all the difference to me. Yes. First time I lost, it's just... Now, I lost, and the reason I was so angry was that I just, you know, I had suspicions that my loss was not just a result of my bad play. Yes. So though I played quite poorly, you know, just when you start looking at the games today, I made tons of mistakes. But, you know, I had all reasons to believe that, you know, there were other factors that had nothing to do with the game of chess. And that's why I was angry. But look, it was 22 years ago. It's water under the bridge. We can analyze this match, and this is with everything you said. I agree with probably one exception, is that considering chess, you know, as the sort of, as a pinnacle of intellectual activities, was our mistake. Because, you know, we just thought, oh, it's a game of the highest intellect, and it's just, you know, you have to be so, you know, intelligent, and you could see things that, you know, the ordinary mortals could not see. It's a game, and all machines had to do in this game is just to make fewer mistakes, not to solve the game. Because the game cannot be solved. I mean, according to Claude Shannon, the number of legal moves is 10 to the 46th power. Too many zeros, so just for any computer to finish the job, you know, in the next few billion years. But it doesn't have to. It's all about making fewer mistakes. And I think that's the, this match, this match actually, and what's happened afterwards with other games, with Go, with Shogi, with video games. It's a demonstration that machines will always beat humans in what I call closed systems. The moment you build a closed system, no matter what the system is called, chess, Go, Shogi, Dota, machines will prevail simply because they will bring down the number of mistakes. Machines don't have to solve it, they just have to, the way they outplay us, it's not by just being more intelligent, it's just by doing something else, but eventually it's just, it's capitalizing on our mistakes. When you look at the chess machines' ratings today, and compare this to Magnus Carlsen, it's the same as comparing Ferrari to Usain Bolt. The gap, I mean, by chess standards, is insane, 3400, 3500 to 2800, 2850 for Magnus. It's like the difference between Magnus and an ordinary player from an open international tournament. It's not because machine understanding is better than Magnus Carlsen, but simply because it's steady. Machine has a steady hand. And I think that is what we have to learn from the 1997 experience, and from further encounters with computers, and sort of the current state of affairs with AlphaZero, beating other machines. The idea that we can compete with computers in so called intellectual fields, it was wrong from the very beginning. It's just, it's, by the way, the 1997 match was not the first victory of machines over grandmasters. Or grandmasters. Or grandmasters.
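To make the rating gap Kasparov describes above concrete, the Elo model turns a rating difference into an expected score: E = 1 / (1 + 10^((Rb - Ra) / 400)). A minimal Python sketch using the round figures from the conversation; the 2250 figure for an "ordinary open tournament player" is my own assumption, and engine-versus-human ratings are only indicative, since the two pools rarely play each other.

def expected_score(rating_a, rating_b):
    # Elo expected score for player A against player B.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

engine, magnus = 3450, 2850   # round figures from the conversation
open_player = 2250            # assumed rating for an ordinary open tournament player

print(f"Magnus vs engine:       {expected_score(magnus, engine):.2f}")
print(f"Open player vs Magnus:  {expected_score(open_player, magnus):.2f}")

Both come out to roughly 0.03, which is the sense in which the engine-to-Magnus gap resembles the gap between Magnus and an ordinary tournament player.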
No, actually, I played against the first decent chess computers from the late 80s. So I played with the prototype of Deep Blue called Deep Thought in 1989, two rapid chess games in New York, I won both games handily. We played against new chess engines like Fritz, and other programs. And then there was the Israeli program Junior that appeared in 1995. Yeah, so there were, there were several programs. I, you know, I lost a few games in blitz. I lost one match against a computer chess engine in 1994, rapid chess. So I lost one game to Deep Blue in the 1996 match, the match I won. Some people, you know, tend to forget about it that I won the first match. Yes. But we made a very important psychological mistake, thinking that the reason we lost blitz matches, five minute games, the reason we lost some of the rapid chess matches, 25 minute chess, was because we didn't have enough time. If you play a longer match, we will not make the same mistakes. Nonsense. So this, yeah, we had more time, but we still make mistakes. And machine also has more time. And machines, machine will always, you know, will always be steady and consistent compared to human instabilities and inconsistencies. And today we are at the point where yes, nobody talks about, you know, humans playing against machines. Now machines can offer a handicap to top players and still, you know, will be favored. I think we're just learning that it's no longer human versus machines. It's about human working with machines. That's what I recognized in 1998, just after licking my wounds and spending one year in just, you know, ruminating on what happened in this match. And I knew that though we still could play against the machines. I had two more matches in 2003, playing both Deep Fritz and Deep Junior. Both matches ended as a tie. Though these machines were not weaker, at least actually probably stronger than Deep Blue. And by the way, today a chess app on your mobile phone is probably stronger than Deep Blue. I'm not speaking about chess engines that are so much superior. And by the way, when you analyze games we played against Deep Blue in 1997 on your chess engine, they'll be laughing. And it also shows how chess changed because chess commentators, they look at some of our games like game four, game five, brilliant idea. Now you ask Stockfish, you ask Houdini, you ask Komodo, all the leading chess engines. Within 30 seconds, they will show you how many mistakes both Garry and Deep Blue made in the game that was trumpeted as a great chess match in 1997. Well, okay. So you've made an interesting, if you can untangle that comment. So now in retrospect, it was a mistake to see chess as the peak of human intellect. Nevertheless, that was done for centuries. So by the way, in Europe, because you know, if you move to the Far East, they had Go, they had Shogi. But games, games. Again, some of the games like, you know, board games. Yes. Yeah, I agree. So if I push back a little bit, so now you say that, okay, but it was a mistake to see chess as the epitome and now, and then now there's other things maybe like language, like conversation, like some of the things that in your view are still way out of reach of computers, but inside humans. Do you think, can you talk about what those things might be? And do you think just like chess, they might fall?
Soon, with the same set of approaches, if you look at AlphaZero, the same kind of learning approaches, as the machines grow in size? No, no, it's not about growing in size. It's about, again, it's about understanding the difference between a closed system and an open ended system. So you think that key difference, so the board games are closed in terms of the rule set, the actions, the state space, everything is just constrained. You think once you open it, the machines are lost? Not lost, but again, the effectiveness is very different because machine does not understand the moment it's reaching the territory of diminishing returns. To put it in a different way, machine doesn't know how to ask the right questions. It can ask questions, but it will never tell you which questions are relevant. So it's about direction. I think in human machine relations, we have to consider our role, and many people feel uncomfortable that the territory that belongs to us is shrinking. I'm saying, so what, eventually we'll be down to the last few decimal points, but it's like having a very powerful gun, and all you can do is slightly, you know, alter the direction of the bullet. Maybe, you know, 0.1 degree of this angle, but that means, a mile away, 10 meters at the target. We have to recognize that there are certain unique human qualities that machines in the foreseeable future will not be able to reproduce. And the effectiveness of this cooperation, collaboration depends on our understanding of what exactly we can bring into the game. So the greatest danger is when we try to interfere with the machine's superior knowledge. So that's why I always say that sometimes, reading these pictures in radiology, you may probably prefer an experienced nurse rather than a top professor, because she will not try to interfere with the machine's understanding. So it's very important to know that if the machine knows how to do things better in 95%, 96% of the territory, we should not touch it. It's like in chess, recognize they do it better. See where we can make the difference. You mentioned AlphaZero, I mean, AlphaZero is actually a first step into what you may call AI, because everything that's being called AI today, it's just, it's one or another variation of what Claude Shannon characterized as brute force. It's a type A machine, whether it's Deep Blue, whether it's Watson, and all these modern technologies that are being trumpeted as AI, it's still brute force. All they do is optimization. They keep, you know, improving the way to process human generated data. Now, AlphaZero is the first step towards, you know, machine produced knowledge. It's, by the way, quite ironic that the first company that championed that was IBM. Oh, it's in backgammon. Interesting, in backgammon. Yes, you should look at IBM, it's Neurogammon. It's the scientist called Tesauro. He's still working at IBM. They had it in the early 90s. It's the program that played, you know, the AlphaZero type, just trying to come up with its own strategies. But because of the success of Deep Blue, this project had been, not abandoned, but just, you know, put on hold. And now, you know, everybody talks about this machine generated knowledge as revolutionary. And it is, but there are still, you know, many open ended questions.
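Kasparov's "type A" reference above is to Claude Shannon's 1950 classification of chess programs: Type A searches every line of the game tree to a fixed depth with minimax plus a static evaluation, while Type B searches selectively. A minimal, game-agnostic Python sketch of what a Type A search looks like; the function parameters here are hypothetical placeholders for illustration, not any real engine's API.

def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    # Shannon "Type A" search: examine every legal move to a fixed depth,
    # then score the leaves with a static evaluation function.
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    scores = (minimax(apply_move(state, move), depth - 1, not maximizing,
                      evaluate, legal_moves, apply_move) for move in moves)
    return max(scores) if maximizing else min(scores)

Real engines add alpha-beta pruning and many refinements, but the core is still exhaustive search plus a hand-tuned or learned evaluation, which is the sense in which Kasparov files them under brute force, in contrast to AlphaZero's self-generated knowledge.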
Yes, AlphaZero generates its own data. Many ideas that AlphaZero generated in chess were quite intriguing. So I looked at these games with, not just with interest, but with, you know, it was quite exciting to learn how machine could actually, you know, juggle all the pieces and just play positions with a broken material balance, sacrificing material, always being ahead of other programs, you know, one or two moves ahead by foreseeing the consequences, not overcalculating because machines, other machines were at least as powerful in calculating, but it's having this unique knowledge based on discovered patterns after playing 60 million games. Almost something that feels like intuition. Exactly, but there's one problem. Yeah. Now, the simple question, if AlphaZero faces a superior opponent, let's say another powerful computer accompanied by a human who could help just to discover certain problems, because I already, I look at many AlphaZero games. I visited their lab, you know, spoke to Demis Hassabis and his team, and I know there's certain weaknesses there. Now, if these weaknesses are exposed, the question is how many games will it take for AlphaZero to correct it? The answer is hundreds of thousands. Even if it keeps losing, it's just because that's how the whole system is built. So now, imagine, you can have a human do this by just making a few tweaks. So humans are still more flexible. And as long as we recognize what is our role, where we can play sort of, so the most valuable part in this collaboration. So it's, it will help us to understand what are the next steps in human machine collaboration. Beautifully put. So let's talk about the thing that machines certainly don't know how to do yet, which is morality. Machines and morality. It's another question that, you know, just it's being asked all the time these days. And I think it's another phantom that is haunting the general public because it's just being fed with these, you know, illusions, that how can we avoid machines, you know, having bias, being prejudiced? You cannot, because it's like looking in the mirror and complaining about what you see. If you have certain bias in the society, machine will just follow it. It's just, it's, you know, you look in the mirror, you don't like what you see there. You can, you know, you can break it. You can try to distort it. Or you can try to actually change something. Just by yourself. By yourself, yes. So it's very important to understand that you cannot expect machines to improve the ills of our society. And moreover machines will simply, you know, just, you know, amplify it. Yes. Yeah. But the thing is people are more comfortable with other people doing injustice, with being biased. We're not comfortable with machines having the same kind of bias. So that's an interesting standard that we place on machines. With autonomous vehicles, they have to be much safer. With automated systems. Of course they're much safer. Statistically, they're much safer than. It's not of course. Why would, it's not of course. It's not given. Autonomous vehicles, you have to work really hard to make them safer. I think it goes without saying that the outcome of this, I would call it competition or comparison, is very clear. But the problem is not about being, you know, safer. It's that 40,000 people or so every year die in car accidents in the United States. And it's statistics. One accident with an autonomous vehicle and it's the front page of a newspaper. Yes. So it's, again, it's about psychology.
So it's while people, you know, kill each other in car accidents because they make mistakes, they make more mistakes. For me, it's not a question. Of course we make more mistakes because we're human. Yes, machines are old. And by the way, no machine will ever reach 100% perfection. That's another important fake story that is being fed to the public. If machine doesn't reach 100% performance, it's not safe. No, all you can ask any computer, whether it's, you know, playing chess or doing the stock market calculations or driving your autonomous vehicle, it's to make fewer mistakes. And yes, I know it's not, you know, it's not easy for us to accept because ah, if, you know, if you have two humans, you know, colliding in their cars, okay, it's like, if one of these cars is autonomous vehicle, and by the way, even if it's humans fault, terrible. How could you allow a machine to run without a driver at the wheel? So, you know, let's linger that for a second, that double standard, the way you felt with your first loss against Deep Blue, were you treating the machine differently than you would have a human? Or, so what do you think about that difference between the way we see machines and humans? No, it's the, at that time, you know, for me it was a match. And that's why I was angry because I believed that the match was not, you know, fairly organized. So it's, definitely there were unfair advantages for IBM and I wanted to play another match, like a rubber match. So your anger or displeasure was aimed more like at the humans behind IBM versus the actual pure algorithm. Absolutely, look, I knew at the time, and by the way, I was, objectively speaking, I was stronger at that time. So that probably added to my anger because I knew I could beat the machine. Yeah. Yeah, so that's, and that's the, and as I lost, and I knew I was not well prepared. So because they, I have to give them credit. They did some good work from 1996 and I, but I still could beat the machine. So I made too many mistakes. Also, this is the whole, it's this, the publicity around the match. So I underestimated the effect, you know, just it's, and being called the, you know, the brain's last stand, you know, okay, no pressure. Okay, well, let me ask. So I was born also in the Soviet Union. What lessons do you draw from the rise and fall of the Soviet Union in the 20th century? When you just look at this nation that is now pushing forward into what Russia is, if you look at the long arc of history of the 20th century, what do we take away? What do we take away from that? I think the lesson of history is clear. Undemocratic systems, totalitarian regimes, systems that are based on controlling their citizens and just every aspect of their life, not offering opportunities to, for private initiative, central planning systems, they're doomed. They just, you know, they cannot be driving force for innovation, so they, in the history timeline, I mean, they could cause certain, you know, distortion of the concept of progress. They, by the way, they may call themselves progressive, but we know that the damage that they caused to humanity is just, it's yet to be measured. But at the end of the day, they fail. They fail, and the end of the Cold War was a great triumph of the free world. It's not that the free world is perfect. It's very important to recognize the fact that, I always like to mention, you know, one of my favorite books, The Lord of the Rings, that there's no absolute good, but there is an absolute evil. 
Good, you know, comes in many forms, but for all of us, whether we're talking about humans or even humans from fairy tales or some sort of mythical creatures, you can always find spots on the sun. Whether it's conducting war or fighting for justice, there are always things that can be easily criticized. And human history is a never ending quest for perfection. But we know that there is absolute evil. For me it's clear. I mean, nobody argues about Hitler being absolute evil, but I think it's very important to recognize that Stalin was absolute evil. Communism caused more damage than any other ideology in the 20th century. And unfortunately, while we all know that fascism was condemned, there was no Nuremberg for communism. And that's why we can still see the successors of Stalin feeling far more comfortable. And Putin is one of them. You highlight a few interesting connections actually between Stalin and Hitler, in terms of adjusting or clarifying the history of World War II, which is very interesting. Of course, we don't have time. So let me ask. You can ask. You know, I just recently delivered a speech in Toronto at the 80th anniversary of the Molotov Ribbentrop Pact. It's something that I believe must be taught in the schools: that World War II was started by two dictators by signing this criminal treaty, a collusion of two tyrants in August 1939 that led to the beginning of World War II. And the fact that eventually Stalin had no choice but to join the Allies, because Hitler attacked him, doesn't eliminate the fact that Stalin helped Hitler to start World War II. And he was one of the beneficiaries at the early stage, by annexing part of Eastern Europe. And as a result of World War II, he annexed almost the entire Eastern Europe. And for many Eastern European nations, the end of World War II was the beginning of communist occupation. So Putin, you've talked about him as a man who stands between Russia and democracy, essentially, today. You've been a strong opponent and critic of Putin. Let me ask again, how much does fear enter your mind and heart? So in 2007, there's this interesting comment from Oleg Kalugin, a KGB general. He said: I do not talk details. People who knew them are all dead now because they were vocal. I'm quiet. There's only one man who's vocal, and he may be in trouble. World Chess champion Kasparov. He has been very outspoken in his attacks on Putin, and I believe he's probably next on the list. So clearly your life has been, and perhaps continues to be, in danger. How do you think about having the views you have, the ideas you have, being in opposition as you are, in this kind of context when your life could be in danger? That's the reason I live in New York. It was not my first choice, but I knew I had to leave Russia at one point, and among other places, New York is the safest. Is it safe? No. I know what happened, and what is happening, with many of Putin's enemies. But at the end of the day, what can I do? I can be very proactive by trying to change the things I can influence. But here are a few facts. I cannot stop doing what I've been doing for a long time. It's the right thing to do. I grew up with my family teaching me the wisdom of Soviet dissidents: do what you must, and so be it. I can try to be cautious by not traveling to certain places where my security could be at risk.
There are so many invitations to speak at different locations in the world, and I have to say that many countries are now simply not destinations I can afford to travel to. My mother still lives in Moscow. I see her a few times a year. She was devastated when I had to leave Russia, because my father died in 1971, when she was 33, and she dedicated her entire life to her only son. But she recognized, just a year or so after I left Russia, that it was the only chance for me to continue my normal life, to be relatively safe and to do what she taught me to do: to make a difference. Do you think you will ever return to Russia, or let me ask a different way: when? Even sooner than many people think, because I think Putin's regime is facing insurmountable difficulties. And again, I have read enough history books to know that dictatorships end suddenly. On Sunday the dictator feels comfortable, he believes he's popular; on Monday morning, he's bust. The good news and the bad news: the bad news is that I don't know when and how Putin's rule ends. The good news, he also doesn't know. Okay, well put. Let me ask a question that seems to preoccupy the American mind, from the perspective of Russia. One, did Russia interfere in the 2016 U.S. election, government sanctioned or otherwise? Two, will Russia interfere in the 2020 U.S. election? And what does that interference look like? It's very old. We had such an intelligent conversation, and you are ruining everything by asking such a stupid question. It's insulting to my intellect. Of course they interfered. Of course they did absolutely everything to elect Trump. I mean, they said it many times. I have met enough KGB colonels in my life to tell you that the way Putin looks at Trump says it all. Look, I don't have to hear what Trump says, I don't need to go through congressional investigations. The way Putin looks at Trump is the way KGB officers looked at their assets. And heading into 2020, of course they will do absolutely everything to help Trump survive, because I think the damage that Trump's reelection could cause to America and to the free world is beyond one's imagination. I think basically if Trump is reelected, he will ruin NATO, because he's already heading in this direction, but for now he's still limited by the reelection hurdles. If he's still in office after November 2020, okay, January 2021, I don't want to think about it. My problem is not just Trump, because Trump is basically a symptom. The problem is that I don't see, on the American political horizon, politicians who could take on Trump for all the damage that he's doing to the free world, not just the things that have gone wrong in America. It seems to me that the political campaign on the Democratic side is fixated on certain important but still secondary issues. Because when you have the foundation of the republic in jeopardy, you cannot talk about healthcare. I understand how important it is, but it's still secondary, because the entire framework of American political life is at risk. And you have Vladimir Putin, unfortunately, having free hands to attack America and other free countries. And by the way, we have so much evidence about Russian interference in Brexit, in elections in almost every European country.
And thinking that they will shy away from attacking America in 2020, now with Trump in the office? Yeah. I think it's, yeah, it definitely diminishes the intellectual quality of our conversation. I do what I can. Last question. If you can go back, just looking at the entirety of your life, you accomplished more than most humans will ever do. If you could go back and relive a single moment in your life, what would that moment be? There are moments in my life when I think about what could be done differently, but. No, a moment to experience happiness and joy and pride, just to touch it once again. I know, I know, but I made many mistakes in my life. And I know that at the end of the day, I believe in the butterfly effect. There were moments where I could, now if I were there at that point, in 89 or 93, you pick a year, I could improve my actions by not doing some stupid thing. But then how do you know that I would have all the other accomplishments? I'm afraid we just have to follow, if you may call it wisdom, the wisdom of Forrest Gump: life is a box of chocolates and you don't know what's inside, but you have to go one by one. So I'm happy with who I am and where I am today. And I am very proud not only of my chess accomplishments, but that I made this transition, and that since I left chess I have built my own reputation, one that had some influence on the game of chess but is not directly derived from the game. I'm grateful to my wife, who helped me to build this life. We actually married in 2005. It was my third marriage. That's why I said I had made mistakes in my life. But, and by the way, I'm close with the two kids from my previous marriages. So I managed to sort of balance my life, and here, I live in New York, we have our two kids born here in New York. It's a new life and it's busy. Sometimes I wish I could limit my engagement in the many other things that still take time and energy, but life is exciting. And as long as I can feel that I have energy, I have strength, I have passion to make a difference, I'm happy. I think that's a beautiful moment to end on. Garry, thank you very much for talking today. Thank you.
Garry Kasparov: Chess, Deep Blue, AI, and Putin | Lex Fridman Podcast #46
The following is a conversation with Sean Carroll, Part 2, the second time we've spoken on the podcast. You can get the link to the first time in the description. This time we focus on quantum mechanics and the many worlds interpretation that he details elegantly in his new book titled Something Deeply Hidden. I own and enjoy both the eBook and audiobook versions of it. Listening to Sean read about entanglement, complementarity, and the emergence of spacetime reminds me of Bob Ross teaching the world how to paint on his old television show. If you don't know who Bob Ross is, you're truly missing out. Look him up. He'll make you fall in love with painting. Sean Carroll is the Bob Ross of theoretical physics. He's the author of several popular books, the host of a great podcast called Mindscape, and a theoretical physicist at Caltech and the Santa Fe Institute, specializing in quantum mechanics, the arrow of time, cosmology, and gravitation. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. And now here's my conversation with Sean Carroll. Isaac Newton developed what we now call classical mechanics, which you describe very nicely in your new book, as you do a lot of basic concepts in physics. So with classical mechanics, I can throw a rock and predict the trajectory of that rock's flight. But if we could put ourselves back into Newton's time, his theories worked to predict things, but as I understand it, he himself thought that the interpretations of those predictions were absurd. Perhaps he just said it for religious reasons and so on, but in particular, a world of interaction without contact, so action at a distance. It didn't make sense to him on a sort of human interpretation level. Does it make sense to you that things can affect other things at a distance? It does, but that was one of Newton's worries. You're actually right, in a slightly different way, about the religious worries. He was smart enough, this is off the topic but still fascinating, Newton almost invented chaos theory as soon as he invented classical mechanics. He realized that in the solar system, so he was able to explain how planets move around the Sun, but typically you would describe the orbit of the Earth ignoring the effects of Jupiter and Saturn and so forth, just doing the Earth and the Sun. He kind of knew, even though he couldn't do the math, that if you included the effects of Jupiter and Saturn and the other planets, the solar system would be unstable, like the orbits of the planets would get out of whack. So he thought that God would intervene occasionally to sort of move the planets back into orbit, which is the only way you could explain how they were there, presumably, forever. But the worries about classical mechanics were a little bit different; the worry was about gravity in particular. It wasn't a worry about classical mechanics, it was a worry about gravity. How in the world does the Earth know that there's something called the Sun, 93 million miles away, that is exerting gravitational force on it? And he literally said, you know, I leave that for future generations to think about, because I don't know what the answer is. And in fact, people underemphasize this, but future generations figured it out. Pierre-Simon Laplace, circa 1800, showed that you could rewrite Newtonian gravity as a field theory.
So instead of just talking about the force due to gravity, you can talk about the gravitational field or the gravitational potential field, and then there's no action at a distance. It's exactly the same theory empirically, it makes exactly the same predictions. But what's happening is instead of the Sun just reaching out across the void, there is a gravitational field in between the Sun and the Earth that obeys an equation, Laplace's equation, cleverly enough, and that tells us exactly what the field does. So even in Newtonian gravity, you don't need action at a distance. Now what many people say is that Einstein solved this problem because he invented general relativity. And in general relativity, there's certainly a field in between the Earth and the Sun. But also there's the speed of light as a limit. In Laplace's theory, which was exactly Newton's theory, just in a different mathematical language, there could still be instantaneous action across the universe, whereas in general relativity, if you shake something here, its gravitational impulse radiates out at the speed of light and we call that a gravitational wave and we can detect those. So but I really, it rubs me the wrong way to think that we should presume the answer should look one way or the other. Like if it turned out that there was action at a distance in physics and that was the best way to describe things, then I would do it that way. It's actually a very deep question because when we don't know what the right laws of physics are, when we're guessing at them, when we're hypothesizing at what they might be, we are often guided by our intuitions about what they should be. I mean, Einstein famously was very guided by his intuitions and he did not like the idea of action at a distance. We don't know whether he was right or not. It depends on your interpretation of quantum mechanics and it depends on even how you talk about quantum mechanics within any one interpretation. So if you see every force as a field or any other interpretation of action at a distance, just stepping back to sort of caveman thinking, like do you really, can you really sort of understand what it means for a force to be a field that's everywhere? So if you look at gravity, like what do you think about? I think so. Is this something that you've been conditioned by society to think that, to map the fact that science is extremely well predictive of something to believing that you actually understand it? Like you can intuitively, the degree that human beings can understand anything that you actually understand it. Or are you just trusting the beauty and the power of the predictive power of science? That depends on what you mean by this idea of truly understanding something, right? You know, I mean, can I truly understand Fermat's last theorem? You know, it's easy to state it, but do I really appreciate what it means for incredibly large numbers, right? I think yes, I think I do understand it, but like if you want to just push people on well, but your intuition doesn't go to the places where Andrew Wiles needed to go to prove Fermat's last theorem, then I can say fine, but I still think I understand the theorem. And likewise, I think that I do have a pretty good intuitive understanding of fields pervading space time, whether it's the gravitational field or the electromagnetic field or whatever, the Higgs field. 
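To make the field-theory rewrite described above a bit more concrete, here is a minimal sketch in standard notation. It is illustrative only, not something from the conversation itself: Newtonian gravity recast in terms of a gravitational potential field.

```latex
% Newtonian gravity as a field theory (Poisson form; Laplace's equation in empty space)
\nabla^{2}\Phi(\mathbf{x}) = 4\pi G\,\rho(\mathbf{x}), \qquad
\mathbf{g}(\mathbf{x}) = -\nabla\Phi(\mathbf{x})
% With no matter present (\rho = 0) this reduces to Laplace's equation, \nabla^{2}\Phi = 0.
% For a point mass M at the origin, \Phi = -GM/r, which reproduces Newton's
% inverse-square attraction F = GMm/r^{2}, but now the Earth responds only to the
% field at its own location -- no "action at a distance" is invoked.
```

The empirical content is identical to Newton's force law; only the bookkeeping changes, which is exactly the point being made here.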
Of course, one's intuition gets worse and worse as you get trickier in the quantum field theory and all sorts of new phenomena that come up in quantum field theory. So our intuitions aren't perfect, but I think it's also okay to say that our intuitions get trained, right? Like, you know, I have different intuitions now than I had when I was a baby. That's okay. That's not, an intuition is not necessarily intrinsic to who we are. We can train it a little bit. So that's where I'm going to bring in Noam Chomsky for a second, who thinks that our cognitive abilities are sort of evolved through time, and so they're biologically constrained. And so there's a clear limit, as he puts it, to our cognitive abilities, and it's a very harsh limit. But you actually kind of said something interesting in nature versus nurture thing here, is we can train our intuitions to sort of build up the cognitive muscles to be able to understand some of these tricky concepts. So do you think there's limits to our understanding that's deeply rooted, hardcoded into our biology that we can't overcome? There could be limits to things like our ability to visualize, okay? But when someone like Ed Witten proves a theorem about, you know, 100 dimensional mathematical spaces, he's not visualizing it. He's doing the math. That doesn't stop him from understanding the result. I think, and I would love to understand this better, but my rough feeling, which is not very educated, is that, you know, there's some threshold that one crosses in abstraction when one becomes kind of like a Turing machine, right? One has the ability to contain in one's brain logical, formal, symbolic structures and manipulate them. And that's a leap that we can make as human beings that dogs and cats haven't made. And once you get there, I'm not sure that there are any limits to our ability to understand the scientific world at all. Maybe there are. There's certainly limits in our ability to calculate things, right? You know, people are not very good at taking cube roots of million digit numbers in their head. But that's not an element of understanding. It's certainly not a limit in principle. So of course, as a human, you would say there doesn't feel to be limits to our understanding. But sort of, have you thought that the universe is actually a lot simpler than it appears to us? And we just will never be able to, like, it's outside of our, okay. So us, our cognitive abilities combined with our mathematical prowess and whatever kind of experimental simulation devices we can put together, is there limits to that? Is it possible there's limits to that? Well, of course it's possible that there are limits to that. Is there any good reason to think that we're anywhere close to the limits is a harder question. Look, imagine asking this question 500 years ago to the world's greatest thinkers, right? Like are we approaching the limits of our ability to understand the natural world? And by definition, there are questions about the natural world that are most interesting to us that are the ones we don't quite yet understand, right? So there's always, we're always faced with these puzzles we don't yet know. And I don't know what they would have said 500 years ago, but they didn't even know about classical mechanics, much less quantum mechanics. So we know that they were nowhere close to how well they could do, right? They could do enormously better than they were doing at the time. I see no reason why the same thing isn't true for us today. 
So of all the worries that keep me awake at night, the human mind's inability to rationally comprehend the world is low on the list. Well put. So one interesting philosophical point that quantum mechanics brings up is the distinction you talk about between the world as it is and the world as we observe it. So staying at the human level for a second, how big is the gap between what our perception system allows us to see and the world as it is outside our mind's eye? Not at the quantum mechanical level, but just with these particular tools we have, which is a few senses and the cognitive abilities to process those senses. Well, that last phrase, having the cognitive abilities to process them, carries a lot, right? I mean, there is our sort of intuitive understanding of the world. You don't need to teach people about gravity for them to know that apples fall from trees, right? That's something that we figure out pretty quickly. Object permanence, things like that, the three dimensionality of space, even if we don't have the mathematical language to say it, we kind of know that it's true. On the other hand, no one opens their eyes and sees atoms, right? Or molecules or cells for that matter, forget about quantum mechanics. But we got there, we got to understanding that there are atoms and cells, using the combination of our senses and our cognitive capacities. So adding the ability of our cognitive capacities to our senses adds an enormous amount, and I don't think there is a hard and fast boundary. You know, if you believe in cells, if you believe that we understand those, then there's no reason to believe we can't come to believe in quantum mechanics just as well. What to you is the most beautiful idea in physics? Conservation of momentum. Can you elaborate? Yeah. So if you were Aristotle, when Aristotle wrote his book on physics, he made the following very obvious point. We're on video here, right? So people can see this. Yeah. So if I push the bottle, let me cover this bottle so we do not have a mess, but okay. So I push the bottle, it moves, and if I stop pushing, it stops moving. And this kind of thing is repeated a large number of times all over the place. If you don't keep pushing things, they stop moving. This is an indisputably true fact about our everyday environment, okay? And for Aristotle, this blew up into a whole picture of the world in which things had natures and teleologies, and they had places they wanted to be, and when you were pushing them, you were moving them away from where they wanted to be, and they would return, and stuff like that. And it took a thousand years or 1500 years for people to say, actually, if it weren't for things like dissipation and air resistance and friction and so forth, the natural thing is for things to move forever in a straight line at a constant velocity, right? Conservation of momentum. And the reason why I think that's the most beautiful idea in physics is because it shifts us from a view of natures and teleology to a view of patterns in the world. When you were Aristotle, you needed to talk in a vocabulary of why is this happening, what's the purpose of it, what's the cause, etc., because nature does or does not want to do that, whereas once you believe in conservation of momentum, things just happen. They just follow the pattern. You give me, you have Laplace's demon, ultimately, right?
You give me the state of the world today, I can predict what it's going to do in the future, I can predict where it was in the past. It's impersonal, and it's also instantaneous. It's not directed toward any future goals, it's just doing what it does given the current state of the universe. I think even more than either classical mechanics or quantum mechanics, that is the profound deep insight that gets modern science off the ground. You don't need natures and purposes and goals, you just need some patterns. So it's the first moment in our understanding of the way the universe works where you branch from the intuitive physical space to kind of the space of ideas. And also the other point you made, which is, conveniently, most of the interesting laws act in the moment. You don't need to know the whole history of time or the future. And of course, it took a long time to get there, right? I mean, the conservation of momentum itself took hundreds of years. It's weird, because someone would say something interesting, and then the next interesting thing would be said like 150 or 200 years later, right? They weren't even talking to each other, they were reading each other's books. And probably the first person to directly say that in outer space, in the vacuum, a projectile would move at a constant velocity was Avicenna, Ibn Sina, in the Persian Golden Age, circa 1000. And he didn't like the idea. He used it, just like Schrodinger used Schrodinger's cat, to say, surely you don't believe that, right? Ibn Sina was saying, surely you don't believe there really is a vacuum, because if there really were a vacuum, things could keep moving forever, right? But still, he got right the idea that there was this conservation of something, impetus, or mayl, as he would call it. And that's 500 years, 600 years before classical mechanics and Isaac Newton. So Galileo played a big role in this, but he didn't exactly get it right. And so it just takes a long time for this to sink in, because it is so against our everyday experience. Do you think it was a big leap, a brave or a difficult leap of sort of math and science, to be able to say that momentum is conserved? I do. You know, I think it's an example of human reason in action. You know, even Aristotle knew that his theory had issues, because you could fire an arrow and it would go a long way before it stopped. So if his theory was that things just automatically stop, what's going on? And he had this elaborate story. I don't know if you've heard the story, but the arrow would push the air in front of it away and the molecules of air would run around to the back of the arrow and push it again. And anyone reading this is going like, really, that's what you thought? But it was that kind of thought experiment that ultimately got people to say, actually, no, if it weren't for the air molecules at all, the arrow would just go on by itself. And it's always this give and take between thought and experience, back and forth, right? Theory and experiment, we would say today. Another big question that I think comes up, certainly with quantum mechanics, is what's the difference between math and physics to you? To me, you know, very, very roughly, math is about the logical structure of all possible worlds and physics is about our actual world. And it just feels like our actual world is a gray area when you start talking about interpretations of quantum mechanics, or no? I'm certainly using the word world in the broadest sense, all of reality.
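Looping back to the conservation-of-momentum exchange above, here is a compact restatement in standard textbook notation, including the determinism attributed to Laplace's demon. The notation is mine, offered only as a sketch, not a quotation of anything said in the conversation.

```latex
% Newton's second law for one body, and the momentum it conserves when unforced
\frac{d\mathbf{p}}{dt} = \mathbf{F}, \qquad \mathbf{p} = m\mathbf{v}
\;\;\Longrightarrow\;\; \mathbf{F} = 0 \;\Rightarrow\; \mathbf{p} = \text{constant}
% For a closed system of N particles (only internal forces, which cancel in pairs
% by Newton's third law), the total momentum is conserved:
\frac{d}{dt}\sum_{i=1}^{N}\mathbf{p}_{i} = 0
% Laplace's demon: given every position and momentum at one instant, the equations
% of motion determine the state at every other time, past and future alike.
```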
So I think that reality is specific. I don't think that there's every possible thing going on in reality. I think that there are rules, whether it's the Schrodinger equation or whatever. So I think that there's a sensible notion of the set of all possible worlds and we live in one of them. The world that we're talking about might be a multiverse, might be many worlds of quantum mechanics, might be much bigger than the world of our everyday experience, but it's still one physically contiguous world in some sense. But so if you look at the overlap of math and physics, it feels like when physics tries to reach for understanding of our world, it uses the tools of math to sort of reach beyond the limit of our current understanding. What do you make of that process of sort of using math to, so you start maybe with intuition or you might start with the math and then build up an intuition or, but this kind of reaching into the darkness, into the mystery of the world with math. Well, I think I would put it a little bit differently. I think we have theories, theories of the physical world, which we then extrapolate and ask, you know, what do we conclude if we take these seriously well beyond where we've actually tested them? It is separately true that math is really, really useful when we construct physical theories and you know, famously Eugene Wigner asked about the unreasonable success of mathematics and physics. I think that's a little bit wrong because anything that could happen, any other theory of physics that wasn't the real world, but some other world, you could always describe it mathematically. It's just that it might be a mess. The surprising thing is not that math works, but that the math is so simple and easy that you can write it down on a t shirt, right? I mean, that's what is amazing. That's an enormous compression of information that seems to be valid in the real world. So that's an interesting fact about our world, which maybe we could hope to explain or just take as a brute fact. I don't know. But once you have that, you know, there's this indelible relationship between math and physics, but philosophically I do want to separate them. What we extrapolate, we don't extrapolate math because there's a whole bunch of wrong math, you know, that doesn't apply to our world, right? We extrapolate the physical theory that we best think explains our world. Again, an unanswerable question. Why do you think our world is so easily compressible into beautiful equations? Yeah. I mean, like I just hinted at, I don't know if there's an answer to that question. There could be. What would an answer look like? Well, an answer could look like if you showed that there was something about our world that maximizes something. You know, the mean of the simplicity and the powerfulness of the laws of physics or, you know, maybe we're just generic. Maybe in the set of all possible worlds, this is what the world would look like, right? Like I don't really know. I tend to think not. I tend to think that there is something specific and rock bottom about the facts of our world that don't have further explanation. Like the fact of the world exists at all. And furthermore, the specific laws of physics that we have. I think that in some sense, we're just going to, at some level, we're going to say, and that's how it is. And, you know, we can't explain anything more. I don't know how, if we're anywhere close to that right now, but that seems plausible to me. 
And speaking of rock bottom, one of the things your book reminded me of, or revealed to me, is the question of what's fundamental and what's emergent. It just feels like I don't even know anymore what's fundamental in physics, if there's anything. It feels like what quantum mechanics especially is revealing to us is that most of the interesting things that I, as a limited human, would think are fundamental can actually be explained as emergent from deeper laws. I mean, we don't know, of course. You had to get that on the table. We don't know what is fundamental. We do have reasons to say that certain things are more fundamental than others, right? Atoms and molecules are more fundamental than cells and organs. Quantum fields are more fundamental than atoms and molecules. We don't know if that ever bottoms out. I do think that there are sensible ways to think about this. If you describe something like this table as a table, it has a height and a width and it's made of a certain material and it has a certain solidity and weight and so forth. That's a very useful description as far as it goes. There's a whole other description of this table in terms of a whole collection of atoms strung together in certain ways. The language of the atoms is more comprehensive than the language of the table. You could break apart the table, smash it to pieces, and still talk about it as atoms, but you could no longer talk about it as a table, right? So I think that this comprehensiveness, the domain of validity of a theory, gets broader and broader as the theory gets more and more fundamental. So what do you think Newton would say, maybe write in a book review, if he read your latest book on quantum mechanics, Something Deeply Hidden? It would take a long time for him to think that any of this was making any sense. You catch him up pretty quick in the beginning. Yeah. You give him a shout out in the beginning. That's right. He is the man. I'm happy to say that Newton was the greatest scientist who ever lived. He invented calculus in his spare time, which would have made him the greatest mathematician just all by himself, all by that one thing. But of course, it's funny, because Newton was in some sense still a pre modern thinker. Rocky Kolb, who is a cosmologist at the University of Chicago, said that Galileo, even though he came before Newton, was a more modern thinker than Newton was. If you got Galileo and brought him to the present day, it would take him six months to catch up, and then he'd be in your office telling you why your most recent paper was wrong. Whereas Newton just thought in this kind of more mystical way. He wrote a lot more about the Bible and alchemy than he ever did about physics, but he was also more brilliant than anybody else and way more mathematically astute than Galileo. So I really don't know. He might just say, give me the textbooks, leave me alone for a few months, and then be caught up. But he might have had mental blocks against seeing the world in this way. I really don't know. Or perhaps find an interesting mystical interpretation of quantum mechanics. Very possible. Yeah. Are there any other scientists or philosophers through history whose opinion of your book you would like to know? That's a good question. I mean, Einstein is the obvious one, right? We all, I mean, he was not that long ago, but I even speculated at the end of my book about what his opinion would be.
I am curious as to, you know, what about older philosophers like Hume or Kant, right? Like what would they have thought? Or Aristotle, you know, what would they have thought about modern physics? Because they do in philosophy, your predilections end up playing a much bigger role in your ultimate conclusions because you're not as tied down by what the data is in physics. You know, physics is lucky because we can't stray too far off the reservation as long as we're trying to explain the world that we actually see in our telescopes and microscopes. But it's just not fair to play that game because the people we're thinking about didn't know a whole bunch of things that we know, right? Like we lived through a lot that they didn't live through. So by the time we got them caught up, they'd be different people. So let me ask a bunch of basic questions. I think it would be interesting, useful for people who are not familiar, but even for people who are extremely well familiar. Let's start with what is quantum mechanics? Quantum mechanics is the paradigm of physics that came into being in the early part of the 20th century that replaced classical mechanics, and it replaced classical mechanics in a weird way that we're still coming to terms with. So in classical mechanics, you have an object, it has a location, it has a velocity, and if you know the location and velocity of everything in the world, you can say what everything's going to do. Quantum mechanics has an aspect of it that is kind of on the same lines. There's something called the quantum state or the wave function. And there's an equation governing what the quantum state does. So it's very much like classical mechanics. The wave function is different. It's sort of a wave. It's a vector in a huge dimensional vector space rather than a position and a velocity, but okay, that's a detail. The equation is the Schrodinger equation, not Newton's laws, but okay, again, a detail. Where quantum mechanics really becomes weird and different is that there's a whole other set of rules in our textbook formulation of quantum mechanics in addition to saying that there's a quantum state and it evolves in time. And all these new rules have to do with what happens when you look at the system, when you observe it, when you measure it. In classical mechanics, there were no rules about observing. You just look at it and you see what's going on. That was it, right? In quantum mechanics, the way we teach it, there's something profoundly fundamental about the act of measurement or observation, and the system dramatically changes its state. Even though it has a wave function, like the electron in an atom is not orbiting in a circle, it's sort of spread out in a cloud, when you look at it, you don't see that cloud. When you look at it, it looks like a particle with a location. So it dramatically changes its state right away, and the effects of that change can be instantly seen in what the electron does next. So again, we need to be careful because we don't agree on what quantum mechanics says. That's why I need to say like in the textbook view, et cetera, right? But in the textbook view, quantum mechanics, unlike any other theory of physics, gives a fundamental role to the act of measurement. So maybe even more basic, what is an atom and what is an electron? Sure. This all came together in a few years around the turn of the last century, right? Around the year 1900. 
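A minimal sketch of the "textbook" package being summarized above, written in standard notation. The symbols (a state |ψ⟩, an observable with eigenstates |aᵢ⟩) are generic assumptions for illustration, not anything quoted from the conversation.

```latex
% 1. The state: a vector (wave function) in Hilbert space, |\psi\rangle.
% 2. Smooth evolution: the Schrodinger equation,
i\hbar\,\frac{d}{dt}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle
% 3. The extra measurement rules: observing a quantity with eigenstates |a_i\rangle
%    yields outcome a_i with probability given by the Born rule,
P(a_i) = \bigl|\langle a_i\,|\,\psi\rangle\bigr|^{2}
%    after which the state is said to "collapse" to |a_i\rangle.
% Rules 1-2 parallel classical mechanics with different objects; rule 3 is the part
% that singles out measurement, and it is what the interpretations argue about.
```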
Atoms predated that, of course. The word atom goes back to the ancient Greeks, but it was the chemists in the 1800s that really first got experimental evidence for atoms. They realized that there were two different types of tin oxide, and in these two different types of tin oxide, there was exactly twice as much oxygen in one type as in the other. And like, why is that? Why is it never 1.5 times as much, right? And so Dalton said, well, it's because there are tin atoms and oxygen atoms, and one form of tin oxide is one atom of tin and one atom of oxygen, and the other is one atom of tin and two atoms of oxygen. And on the basis of this speculation, a theory, right, a hypothesis, but then on the basis of that, you make other predictions, and the chemists became quickly convinced that atoms were real. The physicists took a lot longer to catch on, but eventually they did. And I mean, Boltzmann, who believed in atoms, had a really tough time his whole life because he worked in Germany, where atoms were not popular. They were popular in England, but not in Germany. And in general, for them the idea of the atom was the smallest building block of the universe. That's kind of how they thought of it. That was the Greek idea, but the chemists in the 1800s jumped the gun a little bit. So these days, an atom is the smallest building block of a chemical element, right? Hydrogen, tin, oxygen, carbon, whatever. But we know that atoms can be broken up further than that. That's what physicists discovered in the early 1900s, Rutherford especially, and his colleagues. So the atom that we think about now, the cartoon, is that picture you've always seen of a little nucleus and then electrons orbiting it like a little solar system. And we now know the nucleus is made of protons and neutrons. So the weight of the atom, the mass, is almost all in its nucleus. Protons and neutrons are something like 1800 times as heavy as electrons are. Electrons are much lighter, but because they're lighter, they give all the life to the atom. So when atoms get together, combine chemically, when electricity flows through a system, it's all the electrons that are doing all the work. And where quantum mechanics steps in, as you mentioned with position and velocity in classical mechanics, is in modeling the behavior of the electron. I mean, you can model the behavior of anything, but the electron, because that's where the fun is. The electron was the biggest challenge right from the start. Yeah. So what's a wave function? You said it's an interesting detail, but in any interpretation, what is the wave function in quantum mechanics? Well, you know, we had this idea from Rutherford that atoms look like little solar systems, but people very quickly realized that can't possibly be right, because if an electron is orbiting in a circle, it will give off light. All the light that we have in this room comes from electrons zooming up and down and wiggling; that's what electromagnetic waves are. And you can calculate how long it would take for the electron to just spiral into the nucleus, and the answer is 10 to the minus 11 seconds, okay, a hundred billionth of a second. So that's not right. Meanwhile, people had realized that light, which we understood from the 1800s was a wave, had properties that were similar to those of particles, right? This is Einstein and Planck and stuff like that.
So if something that we agree was a wave had particle like properties, then maybe something we think is a particle, the electron has wave like properties, right? And so a bunch of people eventually came to the conclusion, don't think about the electron as a little point particle orbiting like a solar system. Think of it as a wave that is spread out. They cleverly gave this the name the wave function, which is the dopiest name in the world for one of the most profound things in the universe. There's literally a number at every point in space, which is the value of the electron's wave function at that point. And there's only one wave function. Yeah, they eventually figured that out. That took longer. But when you have two electrons, you do not have a wave function for electron one and a wave function for electron two. You have one combined wave function for both of them. And indeed, as you say, there's only one wave function for the entire universe at once. And that's where this beautiful dance, can you say what is entanglement? It seems one of the most fundamental ideas of quantum mechanics. Well, let's temporarily buy into the textbook interpretation of quantum mechanics. And what that says is that this wave function, so it's very small outside the atom, very big in the atom, basically the wave function, you take it and you square it, you square the number that gives you the probability of observing the system at that location. So if you say that for two electrons, there's only one wave function, and that wave function gives you the probability of observing both electrons at once doing something, okay? So maybe the electron can be here or here, here, here, and the other electron can also be there. But we have a wave function set up where we don't know where either electron is going to be seen. But we know they'll both be seen in the same place, okay? So we don't know exactly what we're going to see for either electron, but there's entanglement between the two of them. There's a sort of conditional statement. If we see one in one location, then we know the other one's going to be doing a certain thing. So that's a feature of quantum mechanics that is nowhere to be found in classical mechanics. In classical mechanics, there's no way I can say, well, I don't know where either one of these particles is, but if I know, if I find out where this one is, then I know where the other one is. That just never happens. They're truly separate. I don't know, it feels like, if you think of a wave function like as a dance floor, it seems like entanglement is strongest between things that are dancing together closest. So there's a closeness that's important. Well, that's another step. We have to be careful here because in principle, if you're talking about the entanglement of two electrons, for example, they can be totally entangled or totally unentangled no matter where they are in the universe. There's no relationship between the amount of entanglement and the distance between two electrons. But we now know that the reality of our best way of understanding the world is through quantum fields, not through particles. So even the electron, not just gravity and electromagnetism, but even the electron and the quarks and so forth are really vibrations in quantum fields. So even empty space is full of vibrating quantum fields. And those quantum fields in empty space are entangled with each other in exactly the way you just said. 
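The correlated two-electron example above can be made concrete in a few lines of code. This is only an illustrative sketch under assumptions of my own choosing, with hypothetical "here"/"there" labels and a specific entangled state that is not taken from the conversation: a state in which neither electron's location is predictable on its own, yet the two are always found in the same place.

```python
import numpy as np

# Basis ordering for two electrons, each found either "here" or "there":
# index 0 -> (here, here), 1 -> (here, there), 2 -> (there, here), 3 -> (there, there)
labels = [("here", "here"), ("here", "there"), ("there", "here"), ("there", "there")]

# One combined wave function for both electrons: (|here,here> + |there,there>) / sqrt(2)
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Born rule: the probability of each joint outcome is the squared amplitude
probs = np.abs(psi) ** 2
for (a, b), p in zip(labels, probs):
    print(f"P(electron 1 {a}, electron 2 {b}) = {p:.2f}")

# Marginal for electron 1 alone: 50/50, so its location is unpredictable...
p1_here = probs[0] + probs[1]
print(f"P(electron 1 here) = {p1_here:.2f}")

# ...but conditioned on seeing electron 1 "here", electron 2 is "here" with certainty.
print(f"P(electron 2 here | electron 1 here) = {probs[0] / p1_here:.2f}")
```

Running it prints 0.50 for each electron considered separately but 1.00 for the conditional probability, which is the sense in which one combined wave function carries correlations that no pair of separate wave functions could.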
If they're nearby, if you have two vibrating quantum fields that are nearby, then they'll be highly entangled. If they're far away, they will not be entangled. So what do quantum fields in a vacuum look like? Empty space? Just like empty space. It's as empty as it can be. But there's still a field. It's just, what does nothing look like? Just like right here, this location in space, there's a gravitational field, which I can detect by dropping something. Yes. I don't see it, but there it is. So we got a little bit of an idea of entanglement. Now, what is Hilbert space, and Euclidean space? Yeah, you know, I think that people are very welcome to go through their lives not knowing what Hilbert space is. But if you dig a little bit more into quantum mechanics, it becomes necessary. You know, the English language was invented long before quantum mechanics, or various forms of higher mathematics, were invented. So we use the word space to mean different things. Of course, most of us think of space as this three dimensional world in which we live, right? I mean, some of us just think of it as outer space. Okay, but space around us gives us the three dimensional location of things and objects. But mathematicians use any generic abstract collection of elements as a space, okay? A space of possibilities, you know, momentum space, etc. So Hilbert space is the space of all possible quantum wave functions, either for the universe or for some specific system. And it could be an infinite dimensional space, or it could be just really, really large dimensional but finite. We don't know, because we don't know the final theory of everything. But this abstract Hilbert space is really, really, really big and has no immediate connection to the three dimensional space in which we live. What do dimensions in Hilbert space mean? You know, it's just a way of mathematically representing how much information is contained in the state of the system. How many numbers do you have to give me to specify what the thing is doing? So in classical mechanics, I give you the location of something by giving you three numbers, right? Up, down, left, the X, Y, Z coordinates. But then I might want to give you its entire state, its physical state, which means both its position and also its velocity. The velocity also has three components. So its state lives in something called phase space, which is six dimensional, three dimensions of position, three dimensions of velocity. And then if it also has an orientation in space, that's another three dimensions, and so forth. So as you describe more and more information about the system, you have an abstract mathematical space that has more and more numbers that you need to give. And each one of those numbers corresponds to a dimension in that space. So in terms of the amount of information, what is entropy? This mystical word that's overused in math and physics, but has a very specific meaning in this context. Sadly, it has more than one very specific meaning. This is the reason why it is hard. Entropy means different things even to different physicists. But one way of thinking about it is as a measure of how much we don't know about the state of a system. So if I have a bottle of water molecules, I know that, OK, there's a certain number of water molecules. I could weigh it and figure that out. I know the volume of it, and I know the temperature and pressure and things like that. I certainly don't know the exact position and velocity of every water molecule.
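To pin down the two bookkeeping ideas just mentioned, here is the standard accounting in generic textbook notation: the size of a classical state description, and entropy as missing information. This is an illustrative sketch, not a transcription of anything Carroll wrote down.

```latex
% Classical state of one particle: position plus velocity (or momentum),
(x,\,y,\,z,\,v_x,\,v_y,\,v_z) \in \mathbb{R}^{6}
% so N particles require 6N numbers -- a point in a 6N-dimensional phase space.
% Entropy as missing information (Gibbs form): if microstate i has probability p_i
% given what we know macroscopically (volume, temperature, pressure, ...), then
S = -k_{B}\sum_{i} p_{i}\,\ln p_{i}
% When W microstates are equally likely this reduces to Boltzmann's S = k_B \ln W:
% the more microstates compatible with what we know, the higher the entropy.
```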
So there's a certain amount of information I know, a certain amount that I don't know that is part of the complete state of the system. And that's what the entropy characterizes, how much unknown information there is, the difference between what I do know about the system and its full exact microscopic state. So when we try to describe a quantum mechanical system, is it infinite or finite but very large? Yeah, we don't know. That depends on the system. You know, it's easy to mathematically write down a system that would have a potentially infinite entropy, an infinite dimensional Hilbert space. So let's go back a little bit. We said that the Hilbert space was the space in which quantum wave functions lived for different systems that will be different sizes. They could be infinite or finite. So that's the number of numbers, the number of pieces of information you could potentially give me about the system. So the bigger Hilbert space is, the bigger the entropy of that system could be, depending on what I know about it. If I don't know anything about it, then it has a huge entropy, right, but only up to the size of its Hilbert space. So we don't know in the real physical world whether or not, you know, this region of space that contains that water bottle has potentially an infinite entropy or just a finite entropy. We have different arguments on different sides. So if it's infinite, how do you think about infinity? Is this something you can, your cognitive abilities are able to process or is it just a mathematical tool? It's somewhere in between, right? I mean, we can say things about it. We can use mathematical tools to manipulate infinity very, very accurately. We can define what we mean. You know, for any number n, there's a number bigger than it. So there's no biggest number, right? So there's something called the total number of all numbers. It's infinite. But it is hard to wrap your brain around that, and I think that gives people pause because we talk about infinity as if it's a number, but it has plenty of properties that real numbers don't have. You know, if you multiply infinity by two, you get infinity again, right? That's a little bit different than what we're used to. Okay. But are you comfortable with the idea that in thinking of what the real world actually is that infinity could be part of that world? Are you comfortable that a world in some dimension, in some aspect? I'm comfortable with lots of things. I mean, you know, I don't want my level of comfort to affect what I think about the world. You know, I'm pretty open minded about what the world could be at the fundamental level. Yeah, but infinity is a tricky one. It's not almost a question of comfort. It's a question of, is it an overreach of our intuition? Sort of, it could be a convenient, almost like when you add a constant to an equation just because it'll help, it just feels like it's useful to at least be able to imagine a concept, not directly, but in some kind of way that this feels like it's a description of the real world. Think of it this way. There's only three numbers that are simple. There's zero, there's one, and there's infinity. A number like 318 is just bizarre. You need a lot of bits to give me what that number is. But zero and one and infinity, like once you have 300 things, you might as well have infinity things, right? Otherwise, you have to say when to stop making the things, right? So there's a sense in which infinity is a very natural number of things to exist. 
I was never comfortable with infinity because it's just such a, it was too good to be true. Because in math, it just helps make things work out. When things get very large, close to infinity, things seem to work out nicely. It's kind of like, because my deepest passion is probably psychology. And I'm uncomfortable how in the average, the beauty of how much we vary is lost. In that same kind of sense, infinity seems like a convenient way to erase the details. But the thing about infinity is it seems to pop up whether we like it or not, right? Like you're trying to be a computer scientist, you ask yourself, well, how long will it take this program to run? And you realize, well, for some of them, the answer is infinitely long. It's not because you tried to get there. You wrote a five line computer program, it doesn't halt. So coming back to the textbook definition of quantum mechanics, this idea that I don't think we talked about, can you, this one of the most interesting philosophical points, we talked at the human level, but at the physics level, that at least the textbook definition of quantum mechanics separates what is observed and what is real. One, how does that make you feel? And two, what does it then mean to observe something and why is it different than what is real? Yeah, you know, my personal feeling, such as it is, is that things like measurement and observers and stuff like that are not going to play a fundamental role in the ultimate laws of physics. But my feeling that way is because so far, that's where all the evidence has been pointing. I could be wrong. And there's certainly a sense in which it would be infinitely cool if somehow observation or mental cogitation did play a fundamental role in the nature of reality. But I don't think so. And again, I don't see any evidence for it. So I'm not spending a lot of time worrying about that possibility. So what do you do about the fact that in the textbook interpretation of quantum mechanics, this idea of measurement or looking at things seems to play an important role? Well, you come up with better interpretations of quantum mechanics and there are several alternatives. My favorite is the many worlds interpretation, which says two things. Number one, you, the observer, are just a quantum system like anything else. There's nothing special about you. Don't get so proud of yourself, you know, you're just a bunch of atoms. You have a wave function, you obey the Schrodinger equation like everything else. And number two, when you think you're measuring something or observing something, what's really happening is you're becoming entangled with that thing. So when you think there's a wave function for the electron, it's all spread out. But you look at it and you only see it in one location. What's really happening is that there's still the wave function for the electron in all those locations. But now it's entangled with the wave function of you in the following way. There's part of the wave function that says the electron was here and you think you saw it there. The electron was there and you think you saw it there. The electron was over there and you think you saw it there, etc. So in all of those different parts of the wave function, once they come into being, no longer talk to each other. They no longer interact or influence each other. It's as if they are separate worlds. So this was the invention of Hugh Everett III, who was a graduate student at Princeton in the 1950s. 
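The description of measurement-as-entanglement above can be written out in one schematic line. The observer states and the "here"/"there" labels are illustrative placeholders, a sketch of the standard way this step is usually drawn rather than a quotation from the conversation.

```latex
% Before: observer ready, electron in a superposition of locations
|\text{ready}\rangle_{O}\otimes\bigl(\alpha\,|\text{here}\rangle_{e}+\beta\,|\text{there}\rangle_{e}\bigr)
% Schrodinger evolution of the measurement interaction (no collapse, no extra rules):
\;\longrightarrow\;
\alpha\,|\text{saw here}\rangle_{O}\,|\text{here}\rangle_{e}
+\beta\,|\text{saw there}\rangle_{O}\,|\text{there}\rangle_{e}
% Once the environment gets involved the two terms stop interfering (decoherence),
% and each behaves as a separate "world."
```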
And he said, basically, look, you don't need all these extra rules about looking at things. Just listen to what the Schrodinger equation is telling you. It's telling you that you have a wave function, that you become entangled, and that the different versions of you no longer talk to each other. So just accept it. He did therapy more than anything else. He said, it's okay. You don't need all these extra rules. All you need to do is believe the Schrodinger equation. The cost is that there's a whole bunch of extra worlds out there. So are the worlds being created whether there's an observer or not? The worlds are created any time a quantum system that's in a superposition becomes entangled with the outside world. What's the outside world? It depends. Let's back up. What Everett really says, what his theory is, is that there's a wave function of the universe and it obeys the Schrodinger equation all the time. That's it. That's the full theory right there. The question, all of the work, is how in the world do you map that theory onto reality, onto what we observe? So part of it is carving up the wave function into these separate worlds, saying, look, it describes a whole bunch of things that don't interact with each other, let's call them separate worlds. Another part is distinguishing between systems and their environments. The environment is basically all the degrees of freedom, all the things going on in the world that you don't keep track of. So again, in the bottle of water, I might keep track of the total amount of water and the volume. I don't keep track of the individual positions and velocities. I don't keep track of all the photons or the air molecules in this room. So that's the outside world. The outside world is all the parts of the universe that you're not keeping track of when you're asking about the behavior of a subsystem of it. So how many worlds are there? Yeah, we don't know that one either. There could be an infinite number. There could be only a finite number, but it's a big number one way or the other. It's just a very, very big number. In one of your talks, somebody asked, well, what if it's finite? So actually I'm not sure exactly the logic you used to derive this, but is there going to be overlap, a duplicate world that you return to? You've mentioned, and I'd love it if you could elaborate on, the idea that it's possible there's some kind of equilibrium that these splitting worlds arrive at, and then maybe over time, maybe somehow connected to entropy, you get a large number of worlds that are very similar to each other. Yeah. So this question of whether Hilbert space is finite or infinite dimensional is actually secretly connected to gravity and cosmology. This is the part that we're still struggling to understand right now, but we discovered back in 1998 that our universe is accelerating. And what that means, if it continues, which we think it probably will, but we're not sure, is that there's a horizon around us. Because the universe is not only expanding but expanding faster and faster, things can get so far away from us that, from our perspective, it looks like they're moving away faster than the speed of light. We will never see them again. So there's literally a horizon around us, and that horizon approaches some fixed distance away from us. And you can then argue that within that horizon, there's only a finite number of things that can possibly happen, a finite dimensional Hilbert space.
In fact, we even have a guess for what the dimensionality is. It's 10 to the power of 10 to the power of 122. That's a very large number. Yes. Just to compare, the age of the universe is something like 10 to the 17 or 18 seconds. The number of particles in the universe is 10 to the 88th. But the number of dimensions of Hilbert space is 10 to the 10 to the 122. So that's just crazy big. If that story is right, that in our observable horizon there's only a finite dimensional Hilbert space, then this idea of branching of the wave function of the universe into multiple distinct separate branches has to reach a limit at some time. Once you branch that many times, you've run out of room in Hilbert space. And roughly speaking, that corresponds to the universe just expanding and emptying out and cooling off and entering a phase where it's just empty space, literally forever. What's the difference between splitting and copying, do you think? In terms of, a lot of this is an interpretation that helps us sort of model the world, so perhaps it shouldn't be thought of as, you know, philosophical or metaphysical. But even at the physics level, do you see a difference between generating new copies of the world or splitting? I think it's better to think of it as, in many worlds quantum mechanics, the universe splits rather than making new copies, because people otherwise worry about things like energy conservation. And no one who understands quantum mechanics worries about energy conservation, because the equation is perfectly clear. But if all you know is that someone told you the universe duplicates, then you have a reasonable worry about where all the energy for that came from. So a pre existing universe splitting into two skinnier universes is a better way of thinking about it. And mathematically, it's just like, you know, if you draw an x and y axis, and you draw a vector of length one at a 45 degree angle, you know that you can write that vector of length one as the sum of two vectors pointing along x and y, of length one over the square root of two. Okay, so I write one arrow as the sum of two arrows. But there's a conservation of arrowness, right? Like there's now two arrows, but the length is the same. I'm just describing it in a different way. And that's exactly what happens when the universe branches: the wave function of the universe is a big old vector. So to somebody who brings up the question, doesn't this violate the conservation of energy, can you give further elaboration? Right. So let's just be super duper perfectly clear. There's zero question about whether or not many worlds violates conservation of energy. It does not. Great. And I say this definitively, because there are other questions that I think there are answers to, but they're legitimate questions, right, about, you know, where probability comes from and things like that. This conservation of energy question, we know the answer to it. And the answer to it is that energy is conserved. All of the effort goes into how best to translate what the equation unambiguously says into plain English, right? So this idea that the universe comes equipped with a thickness, and it sort of divides up into thinner pieces, but the total amount of universe is conserved over time, is a reasonably good way of putting English words to the underlying mathematics.
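Carroll's arrow analogy is just ordinary vector algebra and can be written out directly; nothing here goes beyond what he says in words.

```latex
% A unit vector at 45 degrees is the sum of two thinner vectors along x and y.
% The squared length ("arrowness") is conserved, which is why branching into
% two thinner worlds does not create new energy.
\[
\lvert \psi \rangle \;=\; \tfrac{1}{\sqrt{2}}\,\hat{x} \;+\; \tfrac{1}{\sqrt{2}}\,\hat{y},
\qquad
\lVert \psi \rVert^2 \;=\; \Big(\tfrac{1}{\sqrt{2}}\Big)^{2} + \Big(\tfrac{1}{\sqrt{2}}\Big)^{2} \;=\; 1 .
\]
```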
So one of my favorite things about many worlds is, I mean, I love that there's something controversial in science. And for some reason, it makes people not exactly upset, but just excited. Why do you think it is a controversial idea? It's actually one of the cleanest ways to think about quantum mechanics, so why do you think there's a little bit of discomfort among certain people? Well, I draw the distinction in my book between two different kinds of simplicity in a physical theory. There's simplicity in the theory itself, right? How we describe what's going on according to the theory, in its own right. But then, you know, a theory is just some sort of abstract mathematical formalism; you have to map it onto the world somehow, right? And sometimes, like for Newtonian physics, it's pretty obvious, like, okay, here is a bottle and it has a center of mass and things like that. Sometimes it's a little bit harder; with general relativity, the curvature of space time is a little bit harder to grasp. With quantum mechanics, it's very hard to map the language you're talking in, wave functions and things like that, onto reality. And many worlds is the version of quantum mechanics where it is hardest to map the underlying formalism onto reality. So that's where the lack of simplicity comes in, not in the theory, but in how we use the theory to map onto reality. In fact, all of the work in sort of elaborating many worlds quantum mechanics is in this effort to map it onto the world that we see. So it's perfectly legitimate to be bugged by that, right? To say, like, well, no, that's just too far away from my experience, I am therefore intrinsically skeptical of it. Of course, you should give up on that skepticism if there are no alternatives and this theory always keeps working; then eventually you should overcome your skepticism. But right now there are alternatives, you know; people work to make alternatives that are by their nature closer to what we observe directly. Can you describe the alternatives? I don't think we touched on it, sort of the Copenhagen interpretation and the many worlds. Maybe there's a difference between the Everettian many worlds and many worlds as it is now, like how has the idea sort of developed and so on. And just in general, what is the space of promising contenders? We have Democratic debates now; there's a bunch of candidates, 12 candidates on stage. What are the quantum mechanical candidates on stage for the debate? So if you had a debate between quantum mechanical contenders, there'd be no problem getting 12 people up there on stage, but there would still be only three front runners. And right now the front runners would be Everett; hidden variable theories are another one. So the hidden variable theories say that the wave function is real, but there's something in addition to the wave function. The wave function is not everything; it's part of reality, but it's not everything. What else is there? We're not sure, but in the simplest version of the theory, there are literally particles. So many worlds says that quantum systems are sometimes wave like in some ways and particle like in others because they really, really are waves, but under certain observational circumstances they look like particles. Whereas hidden variables says they look like waves and particles because there are both waves and particles involved in the dynamics.
And that's easy to do if your particles are just non relativistic Newtonian particles moving around. They get pushed around by the wave function roughly. It becomes much harder when you take quantum field theory or quantum gravity into account. The other big contender are spontaneous collapse theories. So in the conventional textbook interpretation, we say when you look at a quantum system, its wave function collapses and you see it in one location, a spontaneous collapse theory says that every particle has a chance per second of having its wave function spontaneously collapse. The chance is very small for a typical particle, it will take hundreds of millions of years before it happens even once, but in a table or some macroscopic object, there are way more than a hundred million particles and they're all entangled with each other. So when one of them collapses, it brings everything else along with it. There's a slight variation of this. That's a spontaneous collapse theory. There are also induced collapse theories like Roger Penrose thinks that when the gravitational difference between two parts of the wave function becomes too large, the wave function collapses automatically. So those are basically in my mind, the three big alternatives, many worlds, which is just there's a wave function and always obeys the Schrodinger equation, hidden variables. There's a wave function that always obeys the Schrodinger equation, but there are also new variables or collapse theories, which the wave function sometimes obeys the Schrodinger equation and sometimes it collapses. So you can see that the alternatives are more complicated in their formalism than many worlds is, but they are closer to our experience. So just this moment of collapse, do you think of it as a wave function, fundamentally sort of a probabilistic description of the world and this collapse sort of reducing that part of the world into something deterministic, where again, you can now describe the position and the velocity in this simple classical model? Well there is... Is that how you think about collapse? There is a fourth category, there's a fourth contender, there's a mayor Pete of quantum mechanical interpretations, which are called epistemic interpretations. And what they say is all the wave function is, is a way of making predictions for experimental outcomes. It's not mapping onto an element of reality in any real sense. And in fact, two different people might have two different wave functions for the same physical system because they know different things about it, right? The wave function is really just a prediction mechanism. And then the problem with those epistemic interpretations is if you say, okay, but it's predicting about what, like what is the thing that is being predicted? And they say, no, no, no, that's not what we're here for. We're just here to tell you what the observational outcomes are going to be. But the other, the other interpretations kind of think that the wave function is real. Yes, that's right. So that's an ontic interpretation of the wave function, ontology being the study of what is real, what exists, as opposed to an epistemic interpretation of the wave function, epistemology being the study of what we know. That would actually just love to see that debate on stage. There was a version of it on stage at the world science festival a few years ago that you can look up online. On YouTube? Yep. It's on YouTube. Okay, awesome. I'll link it and watch it. Who won? I won. 
I don't know, there was no vote, there was no vote. But Brian Greene was the moderator, and David Albert stood up for spontaneous collapse, and Shelly Goldstein was there for hidden variables, and Rüdiger Schack was there for epistemic approaches. Why do you, I think you mentioned it, but just to elaborate, why do you find many worlds so compelling? Well, there's two reasons actually. One is, like I said, it is the simplest, right? It's like the most bare bones, austere, pure version of quantum mechanics. And I am someone who is very willing to put a lot of work into mapping the formalism onto reality. I'm less willing to complicate the formalism itself. But the other big reason is that there's something called modern physics, with quantum fields and quantum gravity and holography and space time doing things like that. And when you take any of the other versions of quantum theory, they bring along classical baggage. All of the other versions of quantum mechanics privilege some version of classical reality, like locations in space, okay? And I think that that's a barrier to doing better at understanding the theory of everything and understanding quantum gravity and the emergence of space time. Whenever you change your theory from, you know, here's a harmonic oscillator, to oh, there's a spin, to here's an electromagnetic field, in hidden variable theories or dynamical collapse theories you have to start from scratch. You have to say, like, well, what are the hidden variables for this theory, or how does it collapse, or whatever. Whereas many worlds is plug and play. You tell me the theory and I can give you a many worlds version. So when we have a situation like we have with gravity and space time, where the classical description seems to break down in a dramatic way, then I think you should start from the most quantum theory that you have, which is really many worlds. So start with the quantum theory and try to build up a model of space time, the emergence of space time. That's it. Okay. So I thought space time was fundamental. Yeah, I know. So this sort of dream that Einstein had, that everybody had and everybody has, of, you know, the theory of everything. So how do we build up from many worlds, from quantum mechanics, a model of space time, a model of gravity? Well, yeah, I mean, let me first mention very quickly why we think it's necessary. You know, we've had gravity in the form that Einstein bequeathed it to us for over a hundred years now; in like 1915 or 1916, he put general relativity in its final form. So gravity is the curvature of space time, and there's a field that pervades all the universe that tells us how curved space time is. And that's fundamentally classical. That's totally classical. Right. Exactly. But we also have a formalism, an algorithm for taking a classical theory and quantizing it. This is how we get quantum electrodynamics, for example. And it could be tricky. I mean, you think you're quantizing something, so that means taking a classical theory and promoting it to a quantum mechanical theory. But you can run into problems. So they ran into problems when they did that with electromagnetism, namely that certain quantities were infinity, and you don't like infinity, right? So Feynman and Tomonaga and Schwinger won the Nobel Prize for teaching us how to deal with the infinities. And then Ken Wilson won another Nobel Prize for saying you shouldn't have been worried about those infinities after all.
But still, that was the, it's always the thought that that's how you will make a good quantum theory. You'll start with a classical theory and quantize it. So if we have a classical theory, general relativity, we can quantize it or we can try to, but we run into even bigger problems with gravity than we ran into with electromagnetism. And so far, those problems are insurmountable. We've not been able to get a successful theory of gravity, quantum gravity, by starting with classical general relativity and quantizing it. And there's evidence that, there's a good reason why this is true, that whatever the quantum theory of gravity is, it's not a field theory. It's something that has weird nonlocal features built into it somehow that we don't understand. We get this idea from black holes and Hawking radiation and information conservation and a whole bunch of other ideas I talk about in the book. So if that's true, if the fundamental theory isn't even local in the sense that an ordinary quantum field theory would be, then we just don't know where to start in terms of getting a classical precursor and quantizing it. So the only sensible thing, or at least the next obvious sensible thing to me would be to say, okay, let's just start intrinsically quantum and work backwards, see if we can find a classical limit. So the idea of locality, the fact that locality is not fundamental to the nature of our existence, I guess in that sense, modeling everything as a field makes sense to me. Stuff that's close by interacts, stuff that's far away doesn't. So what's locality and why is it not fundamental? And how is that even possible? Yeah. I mean, locality is the answer to the question that Isaac Newton was worried about back in the beginning of our conversation, right? I mean, how can the earth know what the gravitational field of the sun is? And the answer as spelled out by Laplace and Einstein and others is that there's a field in between. And the way a field works is that what's happening to the field at this point in space only depends directly on what's happening at points right next to it. But what's happening at those points depends on what's happening right next to those, right? And so you can build up an influence across space through only local interactions. That's what locality means. What happens here is only affected by what's happening right next to it. That's locality. The idea of locality is built into every field theory, including general relativity as a classical theory. It seems to break down when we talk about black holes and, you know, Hawking taught us in the 1970s that black holes radiate, they give off, they eventually evaporate away. They're not completely black once we take quantum mechanics into account. And we think, we don't know for sure, but most of us think that if you make a black hole out of certain stuff, then like Laplace's demon taught us, you should be able to predict what that black hole will turn into if it's just obeying the Schrodinger equation. And if that's true, there are good arguments that can't happen while preserving locality at the same time. It's just that the information seems to be spread out nonlocally in interesting ways. And people should, you talk about holography with the Leonard Susskind on your Mindscape podcast. Oh yes, I have a podcast. I didn't even mention that. This is terrible. No, I'm going to, I'm going to ask you questions about that too, and I've been not shutting up about it. It's my favorite science podcast. 
So, or not, it's a, it's not even a science podcast. It's like, it's a scientist doing a podcast. That's right. That's what it is. Yeah. Anyway. Yeah. So holography is this idea when you have a black hole and black hole is a region of space inside of which gravity is so strong that you can't escape. And there's this weird feature of black holes that, again, it's totally a thought experiment feature because we haven't gone and probed any yet. But there seems to be one way of thinking about what happens inside a black hole as seen by an observer who's falling in, which is actually pretty normal. Like everything looks pretty normal until you hit the singularity and you die. But from the point of view of the outside observer, it seems like all the information that fell in is actually smeared over the horizon in a nonlocal way. And that's puzzling and that's, so holography because that's a two dimensional surface that is encapsulating the whole three dimensional thing inside, right? Still trying to deal with that. Still trying to figure out how to get there. But it's an indication that we need to think a little bit more subtly when we quantize gravity. And because you can describe everything that's going on in the three dimensional space by looking at the two dimensional projection of it, it means that locality doesn't, it's not necessary. Well, it means that somehow it's only a good approximation. It's not really what's going on. How are we supposed to feel about that? We're supposed to feel liberated. You know, space is just a good approximation and this was always going to be true once you started quantizing gravity. So we're just beginning now to face up to the dramatic implications of quantizing gravity. Is there other weird stuff that happens to quantum mechanics in black hole? I don't think that anything weird has happened with quantum mechanics. I think weird things happen with space time. I mean, that's what it is. Like quantum mechanics is still just quantum mechanics, but our ordinary notions of space time don't really quite work. And there's a principle that goes hand in hand with holography called complementarity, which says that there's no one unique way to describe what's going on inside a black hole. Different observers will have different descriptions, both of which are accurate, but sound completely incompatible with each other. So depends on how you look at it. The word complementarity in this context is borrowed from Niels Bohr, who points out you can measure the position or you can measure the momentum. You can't measure both at the same time in quantum mechanics. So a couple of questions on many worlds. How does many worlds help us understand our particular branch of reality? So okay, that's fine and good that is everything is splitting, but we're just traveling down a single branch of it. So how does it help us understand our little unique branch? Yeah, I mean, that's a great question. But that's the point is that we didn't invent many worlds because we thought it was cool to have a whole bunch of worlds, right? We invented it because we were trying to account for what we observe here in our world. And what we observe here in our world are wave functions collapsing, okay? We do have a position, a situation where the electron seems to be spread out. But then when we look at it, we don't see it spread out. We see it located somewhere. So what's going on? That's the measurement problem of quantum mechanics. That's what we have to face up to. 
So many worlds is just a proposed solution to that problem. And the answer is nothing special is happening. It's still just the Schrodinger equation, but you have a wave function too. And that's a different answer than would be given in hidden variables or dynamical collapse theories or whatever. So the entire point of many worlds is to explain what we observe, but it tries to explain what we already have observed, right? It's not trying to be different from what we've observed because that would be something other than quantum mechanics. But you know, the idea that there's worlds that we didn't observe that keep branching off is kind of, it's stimulating to the imagination. So is it possible to hop from, you mentioned the branches are independent. Is it possible to hop from one to the other? No. So it's a physical limit. The theory says it's impossible. There's already a copy of you in the other world, don't worry. Yes. Leave them alone. No, but there's a fear of missing out, FOMO, that I feel like immediately start to wonder if that other copy is having more or less fun. Well, the downside to many worlds is that you're missing out on an enormous amount. And that's always what it's going to be like. And I mean, there's a certain stage of acceptance in that. In terms of rewinding, do you think we can rewind the system back, sort of the nice thing about many worlds, I guess, is it really emphasizes the, maybe you can correct me, but the deterministic nature of a branch and it feels like it could be rewound back. Is it, do you see it as something that could be perfectly rewound back, rewinding back? Yeah. If you're at a fancy French restaurant and there's a nice linen white tablecloth and you have your glass of Bordeaux and you knock it over and the wine spills across the tablecloth. If the world were classical, okay, it would be possible that if you just lifted the wine glass up, you'd be lucky enough that every molecule of wine would hop back into the glass, right? But guess what? It's not going to happen in the real world. And the quantum wave function is exactly the same way. It is possible in principle to rewind everything if you start from perfect knowledge of the entire wave function of the universe. In practice, it's never going to happen. So time travel, not possible. Nope. At least quantum mechanics has no help. What about memory? Does the universe have a memory of itself where we could, in, in, so not time travel, but peek back in time and do a little like replay? Well, it's exactly the same in quantum mechanics as classical mechanics. So whatever you want to say about that, you know, the fundamental laws of physics in either many worlds, quantum mechanics or Newtonian physics conserve information. So if you have all the information about the quantum state of the world right now, your Laplace is demon like in your knowledge and calculational capacity, you can wind the clock backward. But none of us is. Right? And, you know, so in practice you can never do that. You can do experiments over and over again, starting from the same initial conditions for small systems. But once things get to be large, Avogadro's number of particles, right? Bigger than a cell, no chance. We we've talked a little bit about arrow of time last time, but in many worlds that there is a kind of implied arrow of time, right? So you've talked about the arrow of time that has to do with the second law of thermodynamics. That's the arrow of time that's emergent or fundamental. We don't know, I guess. 
No, it's emergent. Is that, does everyone agree on that? Well, nobody agrees with everything. They should. They should. So that arrow of time, is that different than the arrow of time that's implied by many worlds? It's not different, actually, no. In both cases, you have fundamental laws of physics that are completely reversible. If you give me the state of the universe at one moment in time, I can run the clock forward or backward equally well. There's no arrow of time built into the laws of physics at the most fundamental level. But what we do have are special initial conditions 14 billion years ago near the Big Bang. In thermodynamics, those special initial conditions take the form of things were low entropy and entropy has been increasing ever since, making the universe more disorganized and chaotic and that's the arrow of time. In quantum mechanics, the special initial conditions take the form of there was only one branch of the wave function and the universe has been branching more and more ever since. Okay, so if time is emergent, so it seems like our human cognitive capacity likes to take things that are emergent and assume and feel like they're fundamental. So what, so if time is emergent and locality, like is space emergent? Yes. Okay. But I didn't say time was emergent, I said the arrow of time was emergent. Those are different. What's the difference between the arrow of time and time? Are you using arrow of time to simply mean this, they're synonymous with the second law of thermodynamics? No, but the arrow of time is the difference between the past and future. So there's space, but there's no arrow of space. You don't feel that space has to have an arrow, right? You could live in thermodynamic equilibrium, there'd be no arrow of time, but there'd still be time. There'd still be a difference between now and the future or whatever. So if nothing changes, there's still time. Well things could even change, like if the whole universe consisted of the earth going around the sun, it would just go in circles or ellipses, right? Things would change, but it's not increasing entropy, there's no arrow. If you took a movie of that and I played you the movie backward, you would never know. So the arrow of time can theoretically point in the other direction for briefly. To the extent that it points in different directions, it's not a very good arrow. I mean, the arrow of time in the macroscopic world is so powerful that there's just no chance of going back. When you get down to tiny systems with only three or four moving parts, then entropy can fluctuate up and down. What does it mean for space to be an emergent phenomenon? It means that the fundamental description of the world does not include the word space. It'll be something like a vector in Hilbert space, right, and you have to say, well why is there a good approximate description which involves three dimensional space and stuff inside it? Okay, so time and space are emergent. We kind of mentioned in the beginning, can you elaborate, what do you feel hope is fundamental in our universe? A wave function living in Hilbert space. A wave function in Hilbert space that we can't intellectualize or visualize really. We can't visualize it, we can intellectualize it very easily. Like how do you think about? It's a vector in a 10 to the 10 to the 122 dimensional vector space. It's a complex vector, unit norm, it evolves according to the Schrodinger equation. Got it. When you put it that way. What's so hard, really? 
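Written out, the fundamental ingredients Carroll lists here are compact. This is the standard Schrodinger equation; the dimensionality is the guess quoted earlier in the conversation.

```latex
% The proposed fundamental ontology: a unit-norm complex vector in an
% enormous (possibly finite dimensional) Hilbert space, evolving unitarily.
\[
\lvert \Psi \rangle \in \mathcal{H}, \qquad
\dim \mathcal{H} \sim 10^{10^{122}}, \qquad
\langle \Psi \vert \Psi \rangle = 1, \qquad
i\hbar \,\frac{d}{dt}\lvert \Psi \rangle = \hat{H}\,\lvert \Psi \rangle .
\]
```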
It's like, yep, quantum computers, there's some excitement, actually a lot of excitement with people that it will allow us to simulate quantum mechanical systems. What kind of questions do you about quantum mechanics, about the things we've been talking about, do you think, do you hope we can answer through quantum simulation? Well I think that there are, there's a whole fascinating frontier of things you can do with quantum computers. Both sort of practical things with cryptography or money, privacy eavesdropping, sorting things, simulating quantum systems, right? So it's a broader question maybe even outside of quantum computers. Some of the theories that we've been talking about, what's your hope, what's most promising to test these theories? What are kind of experiments we can conduct, whether in simulation or in the physical world that would validate or disprove or expand these theories? Well I think for, there's two parts of that question. One is many worlds and the other one is sort of emergent space time. For many worlds, you know, there are experiments ongoing to test whether or not wave functions spontaneously collapse. And if they do, then that rules out many worlds and that would be falsified. If there are hidden variables, there's a theorem that seems to indicate that the predictions will always be the same as many worlds. I'm a little skeptical of this theorem. I'm not complete. I haven't internalized it. I haven't made it in part of my intuitive view of the world yet, so there might be loopholes to that theorem. I'm not sure about that. Part of me thinks that there should be different experimental predictions if there are hidden variables, but I'm not sure. But otherwise, it's just quantum mechanics all the way down. And so there's this cottage industry in science journalism of writing breathless articles that say, you know, quantum mechanics shown to be more astonishing than ever before thought. And really, it's the same quantum mechanics we've been doing since 1926. Whereas with the emergent space time stuff, we know a lot less about what the theory is. It's in a very primitive state. We don't even really have a safely written down, respectable, honest theory yet. So there could very well be experimental predictions we just don't know about yet. That is one of the things that we're trying to figure out. Yeah, for emergent space time, you need really big stuff, right? Well, or really fast stuff, or really energetic stuff. We don't know. That's the thing. You know, so there could be violations of the speed of light if you have emergent space time. Not going faster than the speed of light, but the speed of light could be different for light of different wavelengths, right? That would be a dramatic violation of physics as we know it, but it could be possible. Or not. I mean, it's not an absolute prediction. That's the problem. The theories are just not well developed enough yet to say. Is there anything that quantum mechanics can teach us about human nature or the human mind? If you think about sort of consciousness and these kinds of topics, is there... It's certainly excessively used, as you point out. The word quantum is used for everything besides quantum mechanics. But in more seriousness, is there something that goes to the human level and can help us understand our mind? Not really is the short answer, you know. Minds are pretty classical. I don't think. 
We don't know this for sure, but I don't think that phenomena like entanglement are crucial to how the human mind works. What about consciousness? So you mentioned, I think early on in the conversation, you said it would be unlikely, but incredible if sort of the observer is somehow a fundamental part. So observer, not to romanticize the notion, but seems interlinked to the idea of consciousness. So if consciousness, as the panpsychists believe, is fundamental to the universe, is that possible? Is that weight... I mean, every... Everything's possible. Just like Joe Rogan likes to say, it's entirely possible. But okay. But is it on a spectrum of crazy out there? How the statistically speaking, how often do you ponder the possibility that consciousness is fundamental or the observer is fundamental to... Personally don't at all. There are people who do. I'm a thorough physicalist when it comes to consciousness. I do not think that there are any separate mental states or mental properties. I think they're all emergent, just like space time is and space time is hard enough to understand. So the fact that we don't yet understand consciousness is not at all surprising to me. You, as we mentioned, have an amazing podcast called Mindscape. It's as I said, one of my favorite podcasts sort of both for your explanation of physics, which a lot of people love, and when you venture out into things that are beyond your expertise, but it's just a really smart person exploring even questions like morality, for example. It's very interesting. I think you did a solo episode and so on. I mean, there's a lot of really interesting conversations that you have. What are some from memory, amazing conversations that pop to mind that you've had? What did you learn from them? Something that maybe changed your mind or just inspired you or just what did this whole experience of having conversations, what stands out to you? It's an unfair question. Totally unfair. That's okay. That's all right. You know, it's often the ones I feel like the ones I do on physics and closely related science or even philosophy ones are like, I know this stuff and I'm helping people learn about it. But I learn more from the ones that have nothing to do with physics or philosophy, right? So talking to Wynton Marsalis about jazz or talking to a Master Sommelier about wine, talking to Will Wilkinson about partisan polarization and the urban rural divide, talking to psychologists like Carol Tavris about cognitive dissonance and how those things work. Scott Derrickson who is the director of the movie Dr. Strange, I had a wonderful conversation with him where we went through the mechanics of making a blockbuster superhero movie, right? And he's also not a naturalist, he's an evangelical Christian so we talked about the nature of reality there. I want to have a couple more, you know, discussions with highly educated theists who know the theology really well but I haven't quite arranged those yet. I would love to hear that. I mean that's, how comfortable are you venturing into questions of religion? Oh, I'm totally comfortable doing it. You know, I did talk with Alan Lightman who is also an atheist but he, you know, he is trying to rescue the sort of spiritual side of things for atheism and I did talk to very vocal atheists like Alex Rosenberg so I need to talk to some, I've talked to some religious believers but I need to talk to more. How have you changed through having all these conversations? 
You know, part of the motivation was I had a long stack of books that I hadn't read and I couldn't find time to read them, and I figured if I interviewed their authors, it would force me to read them, right, and that has totally worked, by the way. Now I'm annoyed that people write such long books. I think I'm still very much learning how to be a good interviewer. I think that's a skill. You know, I think I have good questions, but, you know, there's the give and take that I think I can still be better at. Like I want to offer something to the conversation, but not too much, right? I've had conversations where I barely talked at all and I've had conversations where I talked half the time, and I think there's a happy medium in between there. So I think I remember listening to, without mentioning names, some of your conversations where I wish you would have disagreed more. As a listener, it's more fun sometimes. Well, that's a very good question because, you know, everyone has an attitude toward that. Like some people are really there to basically give their point of view and their guest is supposed to, you know, respond accordingly. I want to sort of get my view on the record, but I don't want to dwell on it when I'm talking to someone like David Chalmers, who I disagree with a lot. You know, I want to say, like, here's why I disagree with you, but, you know, we're here to listen to you. Like I have an episode every week and you're only on once, right? So I have an upcoming podcast episode with Philip Goff, who is a much more dedicated panpsychist, and so there we really get into it. I think that I probably have disagreed with him more on that episode than I ever have with another podcast guest, but that's what he wanted, so it worked very well. Yeah, yeah. That kind of debate structure is beautiful when it's done right. Like when you can detect that the intent is that you have fundamental respect for the person. Yeah. And that, for some reason, it's super fun to listen to when two really smart people are just arguing and sometimes lose their shit a little bit, if I may say so. Well, there's a fine line because, I mean, maybe you implied this, but I have zero interest in bringing on people for whom I don't have any intellectual respect. Like I constantly get requests like, you know, bring on a flat earther or whatever and really slap them down, or a creationist; like, I have zero interest. I'm happy to bring on, you know, a religious person, a believer, but I want someone who's smart and can act in good faith and can talk, not a charlatan or a lunatic, right? So I will happily bring on people with whom I disagree, but only people from whom I think the audience can learn something interesting. So let me ask, the idea of a charlatan is an interesting one. You might be more educated on this topic than me, but there are folks, for example, who argue against various aspects of evolution, who sort of try to approach it and say that our current theory of evolution has many holes in it, has many flaws. And they argue that, I think, the Cambrian explosion, which was a huge increase in the variability of species, doesn't make sense under our current description and theory of evolution. If you were to have a conversation with people like that, how do you know the difference between outside the box thinkers and people who are fundamentally unscientific and even bordering on charlatans?
That's a great question. And you know, the further you get away from my expertise, the harder it is for me to really judge exactly those things. And, you know, yeah, I don't have a satisfying answer for that one because I think the example you use of someone who, you know, believes in the basic structure of natural selection, but thinks that, you know, this particular thing cannot be understood in the terms of our current understanding of Darwinism. That's a perfect edge case where it's hard to tell, right? And I would have, I would try to talk to people who I do respect and who do know things and I would have to, you know, given that I'm a physicist, I know that physicists will sometimes be too dismissive of alternative points of view. I have to take into account that biologists can also be too dismissive of alternative points of view. So, yeah, that's a tricky one. Have you gotten heat yet? I get heat all the time. Like there's always something, I mean, it's hilarious because I do have, I try very hard not to like have the same topic several times in a row. I did have like two climate change episodes, but they were from very different perspectives, but I like to mix it up. That's the whole, that's why I'm having fun. And every time I do an episode, someone says, oh, the person you should really get on to talk about exactly that is this other person. I'm like, well, I don't, but I did that now. I don't want to do that anymore. Well, I hope you keep doing it. You're inspiring millions of people, your books, your podcasts. Sean, it's an honor to talk to you. Thank you so much. Thank you very much, Lex.
Sean Carroll: Quantum Mechanics and the Many-Worlds Interpretation | Lex Fridman Podcast #47
The following is a conversation with Bjarne Stroustrup. He is the creator of C++, a programming language that, after 40 years, is still one of the most popular and powerful languages in the world. Its focus on fast, stable, robust code underlies many of the biggest systems in the world that we have come to rely on as a society. If you're watching this on YouTube, for example, many of the critical back end components of YouTube are written in C++. The same goes for Google, Facebook, Amazon, Twitter, most Microsoft applications, Adobe applications, most database systems, and most physical systems that operate in the real world, like cars, robots, rockets that launch us into space and one day will land us on Mars. C++ also happens to be the language that I use more than any other in my life. I've written several hundred thousand lines of C++ source code. Of course, lines of source code don't mean much, but they do give hints of my personal journey through the world of software. I've enjoyed watching the development of C++ as a programming language, leading up to the big update in the standard in 2011 and those that followed in 14, 17, and toward the new C++20 standard hopefully coming out next year. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. And now, here's my conversation with Bjarne Stroustrup. What was the first program you've ever written? Do you remember? It was in my second year at university, my first year of computer science, and it was in Algol 60. I calculated the shape of a superellipse and then connected points on the perimeter, creating star patterns. It was with wet ink on a paper printer. And that was in college, university? Yeah, yeah. I learned to program in my second year at university. And what was the first programming language, if I may ask it this way, that you fell in love with? I think Algol 60. And after that, I remember Snobol. I remember Fortran, didn't fall in love with that. I remember Pascal, didn't fall in love with that. They all sort of got in the way of me. And then I discovered assembler, and that was much more fun. And from there, I went to microcode. So you were drawn to the, you found the low level stuff beautiful. I went through a lot of languages, and then I spent significant time in assembler and microcode. That was sort of the first really profitable thing; it paid for my master's, actually. And then I discovered Simula, which was absolutely great. Simula? Simula was an extension of Algol 60, done primarily for simulation. But basically, they invented object oriented programming, with inheritance and runtime polymorphism, while they were doing it. And that was the language that taught me that you could have the problems of a program grow with the size of the program rather than with the square of the size of the program. That is, you can actually modularize very nicely. And that was a surprise to me. It was also a surprise to me that a stricter type system than Pascal's was helpful, whereas Pascal's type system got in my way all the time. So you need a strong type system to organize your code well, but it has to be extensible and flexible. Let's get into the details a little bit. If you remember, what kind of type system did Pascal have? What kind of typing system did Algol 60 have?
Basically, Pascal was sort of the simplest language that Niklaus Wirth could define that served the needs of Niklaus Wirth at the time. And it has a sort of highly moral tone to it. That is, if you can say it in Pascal, it's good. And if you can't, it's not so good. Whereas Simula allowed you basically to build your own type system. So instead of trying to fit yourself into Niklaus Wirth's world, Kristen Nygaard's language and Ole-Johan Dahl's language allowed you to build your own. So it's sort of close to the original idea of you build a domain specific language. As a matter of fact, what you build is a set of types and relations among types that allows you to express something that's suitable for an application. So when you say types, the stuff you're saying has echoes of object oriented programming. Yes, they invented it. Every language that uses the word class for type is a descendant of Simula, directly or indirectly. Kristen Nygaard and Ole-Johan Dahl were mathematicians and they didn't think in terms of types, but they understood sets and classes of elements. And so they called their types classes. And basically in C++, as in Simula, classes are user defined types. So can you try the impossible task and give a brief history of programming languages from your perspective? So we started with Algol 60, Simula, Pascal, but that's just the 60s and 70s. I can try. The most sort of interesting and major improvement of programming languages was Fortran, the first Fortran. Because before that, all code was written for a specific machine, and each specific machine had a language, an assembler or a cross assembler or some extension of that idea. But you're writing for a specific machine in the language of that machine. And Backus and his team at IBM built a language that would allow you to write what you really wanted. That is, you could write it in a language that was natural for people. Now, these people happened to be engineers and physicists, so the language that came out was somewhat unusual for the rest of the world. But basically they said formula translation, because they wanted to have the mathematical formulas translated into the machine. And as a side effect, they got portability, because now they were writing in the terms that the humans used and the way humans thought. And then they had a program that translated it into the machine's needs. And that was new and that was great. And it's something to remember. We want to raise the language to the human level, but we don't want to lose the efficiency. And that was the first step towards the human level. That was the first step. And of course, they were a very particular kind of humans. Business people were different, so they got COBOL instead, et cetera, et cetera. And Simula came out. No, let's not go to Simula yet. Let's go to Algol. Fortran didn't have, at the time, a precise notion of type, a precise notion of scope, or the set of translation phases that we have today: lexical, syntax, semantics. It was sort of a bit of a muddle in the early days, but hey, they had just done the biggest breakthrough in the history of programming, right? So you can't criticize them for not having gotten all the technical details right. So we got Algol. That was very pretty. And most people in commerce and science considered it useless because it was not flexible enough, and it wasn't efficient enough, et cetera, et cetera. But that was a breakthrough from a technical point of view.
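As a small aside on the remark that in C++, as in Simula, classes are user defined types, here is a minimal modern C++ sketch of the idea. The type and its operations are my own illustrative choices, not an example from the conversation.

```cpp
// A user-defined type: once defined, Temperature can be used much like a
// built-in type, carrying the invariant and the operations the problem needs.
#include <iostream>
#include <stdexcept>

class Temperature {
public:
    explicit Temperature(double kelvin) : k_(kelvin) {
        if (kelvin < 0.0) throw std::invalid_argument("below absolute zero");
    }
    double kelvin() const { return k_; }
    double celsius() const { return k_ - 273.15; }
private:
    double k_;
};

int main() {
    Temperature t{300.0};
    std::cout << t.celsius() << " C\n";  // prints 26.85
}
```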
Then Simula came along to make that idea more flexible, and you could define your own types. And that's where I got very interested. Kristen Nygaard was the main idea man behind Simula. That was late 60s? This was late 60s. Well, I was a visiting professor in Aarhus, and so I learned object oriented programming by sitting around and, well, in theory, discussing with Kristen Nygaard. But with Kristen, once he got started and was in full flow, it was very hard to get a word in edgeways. So you just listen. So it was great. I learned it from there. Not to romanticize the notion, but it seems like a big leap to think about object oriented programming. It's really a leap of abstraction. Yes. And was that as big and beautiful of a leap as it seems from now in retrospect, or was it an obvious one at the time? It was not obvious, and many people have tried to do something like that, and most people didn't come up with something as wonderful as Simula. Lots of people got their PhDs and made their careers out of forgetting about Simula or never knowing it. For me, the key idea was basically I could get my own types. And that's the idea that goes further into C++, where I can get better types and more flexible types and more efficient types. But it's still the fundamental idea. When I want to write a program, I want to write it with my types that are appropriate to my problem and under the constraints that I'm under with hardware, software, environment, et cetera. And that's the key idea. People picked up on the class hierarchies and the virtual functions and the inheritance, and that was only part of it. It was an interesting and major part, and is still a major part, and a lot of graphics stuff, but it was not the most fundamental. It was when you wanted to relate one type to another. You don't want them all to be independent. The classical example is that you don't actually want to write a city simulation with vehicles where you say, well, if it's a bicycle, write the code for turning a bicycle to the left. If it's a normal car, turn left the normal car way. If it's a fire engine, turn left the fire engine way. You get these big case statements and bunches of if statements and such. Instead, you tell the base class, that's the vehicle, turn left the way you want to. And this is actually a real example. They used it to simulate and optimize the emergency services for somewhere in Norway back in the 60s. So this was one of the early examples of why you needed inheritance and you needed runtime polymorphism, because you wanted to handle this set of vehicles in a manageable way. You can't just rewrite your code each time a new kind of vehicle comes along. Yeah, that's a beautiful, powerful idea. And of course it stretches through your work with C++ as we'll talk about. But I think you've structured it nicely. What other breakthroughs came along in the history of programming languages, if we were to tell the history in that way? Obviously, I'm better at telling the part of the history that is the path I'm on, as opposed to all the paths. Yeah, you skipped the hippie John McCarthy and Lisp, one of my favorite languages. Functional. But Lisp is not one of my favorite languages. It's obviously important. It's obviously interesting. Lots of people write code in it and then they rewrite it into C or C++ when they want to go to production. It's just that the world I'm in is constrained by performance, reliability issues, deployability, cost of hardware. I don't like things to be too dynamic.
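The vehicle example maps directly onto inheritance and runtime polymorphism in C++. The class and function names below are mine, a minimal sketch of the Simula-style design being described rather than code from the conversation.

```cpp
// Runtime polymorphism: the caller asks any Vehicle to turn_left(), and each
// concrete type supplies its own behavior. Adding a new kind of vehicle needs
// no changes to the dispatching code -- no big case statement on the type.
#include <iostream>
#include <memory>
#include <vector>

class Vehicle {
public:
    virtual ~Vehicle() = default;
    virtual void turn_left() const = 0;
};

class Bicycle : public Vehicle {
public:
    void turn_left() const override { std::cout << "bicycle leans into a left turn\n"; }
};

class FireEngine : public Vehicle {
public:
    void turn_left() const override { std::cout << "fire engine swings wide to the left\n"; }
};

int main() {
    std::vector<std::unique_ptr<Vehicle>> traffic;
    traffic.push_back(std::make_unique<Bicycle>());
    traffic.push_back(std::make_unique<FireEngine>());
    for (const auto& v : traffic)
        v->turn_left();  // each vehicle turns left its own way
}
```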
It is really hard to write a piece of code that's perfectly flexible that you can also deploy on a small computer, and that you can also put in, say, a telephone switch in Bogota. What's the chance, if you get an error and you find yourself in the debugger, that the telephone switch in Bogota late on a Sunday night has a programmer around? The chance is zero. A lot of the things I think most about can't afford that flexibility. I'm quite aware that maybe 70%, 80% of all code is not under the kind of constraints I'm interested in. But somebody has to do the job I'm doing, because you have to get from these high level flexible languages to the hardware. The stuff that lasts for 10, 20, 30 years is robust, operates under very constrained conditions. Yes, absolutely. That's right. And it's fascinating and beautiful in its own way. C++ is one of my favorite languages, and so is Lisp, so I can love both, for different reasons, as a programmer. I understand why Lisp is popular, and I can see the beauty of the ideas, and similarly with Smalltalk. It's just not as relevant in my world. And by the way, I distinguish between those and the functional languages, where I go to things like ML and Haskell. Different kinds of languages; they have a different kind of beauty and they're very interesting. And I actually try to learn from all the languages I encounter to see what is there that would make working on the kind of problems I'm interested in, with the kind of constraints that I'm interested in, what can actually be done better? Because we can surely do better than we do today. You've said that it's good for any professional programmer to know at least five languages. Speaking about the variety of languages that you've taken inspiration from, you've listed yours as being, at least at the time, C++, obviously, Java, Python, Ruby, JavaScript. Can you, first of all, update that list, modify it? You don't have to be constrained to just five, but can you describe what you picked up also from each of these languages? How do you see them as inspirations for you when you're working with C++? This is a very hard question to answer. So about languages, you should know languages. I reckon I knew about 25 or thereabouts when I did C++. It was easier in those days because the languages were smaller, and you didn't have to learn a whole programming environment and such to do it. You could learn the language quite easily. And it's good to learn so many languages. I imagine, just like with natural language for communication, there are different paradigms that emerge in all of them, and there are commonalities and so on. So I picked five out of a hat. You picked five out of a hat. Obviously. The important thing is that the number is not one. That's right. I mean, if you're a monoglot, you are likely to think that your own culture is the only one, superior to everybody else's. A good learning of a foreign language and a foreign culture is important. It helps you think and be a better person. With programming languages, you become a better programmer, a better designer, with the second language. Now, once you've got two, the way to five is not that long. It's the second one that's most important. And then when I had to pick five, I was sort of thinking, what kinds of languages are there? Well, there's the really low level stuff. It's good. It's actually good to know machine code. Even still? Even today. The C++ optimizers write better machine code than I do. Yes.
But I don't think I could appreciate them if I actually didn't understand machine code and machine architecture. At least in my position, I have to understand a bit of it, because you mess up the cache and you're off in performance by a factor of 100. That shouldn't happen if you are interested in either performance or the size of the computer you have to deploy on. So I would go with assembler. I used to mention C, but these days going low level is not actually what gives you the performance. It is to express your ideas so cleanly that you can think about them and the optimizer can understand what you're up to. My favorite way of optimizing these days is to throw out the clever bits and see if it still runs fast. And sometimes it runs faster. So I need the abstraction mechanisms of something like C++ to write compact high performance code. There was a beautiful keynote by Jason Turner at CppCon a couple of years ago where he decided he was going to program Pong on a Motorola 6800, I think it was. And he says, well, this is relevant because it looks like a microcontroller. It has specialized hardware, it has not very much memory, and it's relatively slow. And so he shows in real time how he writes Pong, starting with fairly straightforward low level stuff, improving his abstractions and what he's doing. He's writing C++ and it translates into x86 assembler, which you can do with Clang, and you can see it in real time. It's the Compiler Explorer, which you can use on the web. And then he wrote a little program that translated x86 assembler into Motorola assembler. And so he types and you can see this thing in real time. Wow. You can see it in real time. And even if you can't read the assembly code, you can just see it: his code gets better, the assembler gets smaller. He increases the abstraction level, uses C++11, as it were, better. The code gets cleaner. It gets easier to maintain. The code shrinks and it keeps shrinking. And I could not, in any reasonable amount of time, write that assembler as well as the compiler generated it from really quite nice modern C++. And I'll go as far as to say the thing that looked like C was significantly uglier, and larger when it became machine code. So the abstractions that can be optimized are important. I would love to see that kind of visualization in larger code bases. Yeah. That might be beautiful. But you can't show a larger code base in a one hour talk and have it fit on screen. Right. So that's C and C++. So my two languages would be machine code and C++. And then I think you can learn a lot from the functional languages. So pick Haskell or ML, I don't care which. I think actually you learn the same lessons of expressing especially mathematical notions really clearly and having a type system that's really strict. And then you should probably have a language for sort of quickly churning out something. You could pick JavaScript. You could pick Python. You could pick Ruby. What do you make of JavaScript in general? So you're talking in the platonic sense about languages, about what they're good at, what their philosophy of design is. But there's also a large user base behind each of these languages, and they use it in ways it sometimes maybe wasn't really designed for. That's right. JavaScript is used way beyond probably what it was designed for. Let me say it this way. When you build a tool, you do not know how it's going to be used.
You try to improve the tool by looking at how it's being used and when people cut their fingers off and try and stop that from happening. But really you have no control over how something is used. So I'm very happy and proud of some of the things C++ is being used at and some of the things I wish people wouldn't do. Bitcoin mining being my favorite example uses as much energy as Switzerland and mostly serves criminals. But back to the languages, I actually think that having JavaScript run in the browser was an enabling thing for a lot of things. Yes, you could have done it better, but people were trying to do it better and they were using more principles, language designs, but they just couldn't do it right. And the nonprofessional programmers that write lots of that code just couldn't understand them. So it did an amazing job for what it was. It's not the prettiest language and I don't think it ever will be the prettiest language, but let's not be bigots here. So what was the origin story of C++? Yeah, you basically gave a few perspectives of your inspiration of object oriented programming. That's you had a connection with C and performance efficiency was an important thing you were drawn to. Efficiency and reliability. Reliability. You have to get both. What's reliability? I really want my telephone calls to get through and I want the quality of what I am talking, coming out at the other end. The other end might be in London or wherever. And you don't want the system to be crashing. If you're doing a bank, you mustn't crash. It might be your bank account that is in trouble. There's different constraints like in games, it doesn't matter too much if there's a crash, nobody dies and nobody gets ruined. But I am interested in the combination of performance, partly because of sort of speed of things being done, part of being able to do things that is necessary to have reliability of larger systems. If you spend all your time interpreting a simple function call, a simple function call, you are not going to have enough time to do proper signal processing to get the telephone calls to sound right. Either that or you have to have ten times as many computers and you can't afford your phone anymore. It's a ridiculous idea in the modern world because we have solved all of those problems. I mean, they keep popping up in different ways because we tackle bigger and bigger problems. So efficiency remains always an important aspect. But you have to think about efficiency, not just as speed, but as an enabler to important things. And one of the things it enables is reliability, is dependability. When I press the pedal, the brake pedal of a car, it is not actually connected directly to anything but a computer. That computer better work. Let's talk about reliability just a little bit. So modern cars have ECUs, have millions of lines of code today. So this is certainly especially true of autonomous vehicles where some of the aspects of the control or driver assistance systems that steer the car, that keep it in the lane and so on. So how do you think, you know, I talked to regulators, people in government who are very nervous about testing the safety of these systems of software. Ultimately software that makes decisions that could lead to fatalities. So how do we test software systems like these? First of all, safety, like performance and like security is the system's property. People tend to look at one part of a system at a time and saying something like, this is secure. That's all right. 
I don't need to do that. Yeah, that piece of code is secure, I'll buy that. If you want to have reliability, if you want to have performance, if you want to have security, you have to look at the whole system. I did not expect you to say that, but that's very true. Yes, I'm dealing with one part of the system and I want my part to be really good, but I know it's not the whole system. Furthermore, making an individual part perfect may actually not be the best way of getting the highest degree of reliability and performance and such. There's people that say C++ is not type safe. You can break it. Sure. I can break anything that runs on a computer. I may not go through your type system. If I wanted to break into your computer, I'll probably try SQL injection. And it's very true. If you think about safety or even reliability at the system level, especially when a human being is involved, it starts becoming hopeless pretty quickly in terms of proving that something is safe to a certain level. Yeah. Because there's so many variables. It's so complex. Well, let's get back to something we can talk about and actually make some progress on. Yes. We can look at C++ programs and we can try and make sure they crash less often. The way you do that is largely by simplification. The first step is to simplify the code, have less code, have code that is less likely to go wrong. It's not by runtime testing everything. It is not by big test frameworks that you are using. Yes, we do that also. But the first step is actually to make sure that when you want to express something, you can express it directly in code rather than going through endless loops and convolutions in your head before it gets down to the code. Otherwise, the way you are thinking about a problem is not in the code. There is a missing piece that's just in your head. And the code, you can see what it does, but you cannot see what you thought about it unless you have expressed things directly. When you express things directly, you can maintain it. It's easier to find errors. It's easier to make modifications. It's actually easier to test it. And lo and behold, it runs faster. And therefore, you can use a smaller number of computers, which means there's less hardware that could possibly break. So I think the key here is simplification. But it has to be, to use the Einstein quote, as simple as possible and no simpler. Not simpler. There are other areas with other constraints where you can be simpler than you can be in C++. But in the domain I'm dealing with, that's the simplification I'm after. So how do you inspire or ensure that the Einstein level of simplification is reached? Can you do code review? Can you look at code? If I gave you the code for the Ford F150 and said, here, is this a mess or is this okay? Is it possible to tell? Is it possible to regulate? An experienced developer can look at code and see if it smells. Mixed metaphors deliberately. Yes. The point is that it is hard to generate something that is really obviously clean and can be appreciated. But you can usually recognize when you haven't reached that point. And so I've never looked at the F150 code, so I wouldn't know. But I know what I ought to be looking for. I'd be looking for the kinds of tricks that correlate with bugs elsewhere. And I have tried to formulate rules for what good code looks like. And the current version of that is called the C++ Core Guidelines.
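To give a concrete flavor of the kind of rule such guidelines contain, here is a minimal sketch in the spirit of their resource rules; the function names are invented for illustration and this is not text from the guidelines themselves. A checker can flag the first style and steer you toward the second:

#include <memory>
#include <string>

// The style a guidelines checker tends to flag: ownership expressed as a raw
// pointer, with a delete the caller has to remember on every path.
std::string* make_label_old_style() {
    return new std::string("label");
}

// The style it steers you toward: ownership is visible in the type, and the
// destructor releases the memory automatically.
std::unique_ptr<std::string> make_label() {
    return std::make_unique<std::string>("label");
}

int main() {
    std::string* raw = make_label_old_style();
    delete raw;                      // easy to forget, easy to do twice

    auto owned = make_label();       // freed when it goes out of scope
}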
One thing people should remember is there's what you can do in a language and what you should do. In a language, you have lots of things that is necessary in some context, but not in others. There's things that exist just because there's 30 year old code out there and you can't get rid of it. But you can't have rules that says when you create it, try and follow these rules. This does not create good programs by themselves, but it limits the damage from mistakes. It limits the possibilities of mistakes. And basically, we are trying to say, what is it that a good programmer does? At the fairly simple level of where you use the language and how you use it. Now, I can put all the rules for chiseling in marble. It doesn't mean that somebody who follows all of those rules can do a masterpiece by Michelangelo. That is, there's something else to write a good program. Just is there something else to create an important work of art? That is, there's some kind of inspiration, understanding, gift. But we can approach the sort of technical, the craftsmanship level of it. The famous painters, the famous sculptures was among other things, superb craftsmen. They could express their ideas using their tools very well. And so these days, I think what I'm doing, what a lot of people are doing, we are still trying to figure out how it is to use our tools very well. For a really good piece of code, you need a spark of inspiration, and you can't, I think, regulate that. You cannot say that I'll take a picture only, I'll buy your picture only if you're at least Van Gogh. There are other things you can regulate, but not the inspiration. I think that's quite beautifully put. It is true that there is as an experienced programmer, when you see code that's inspired, that's like Michelangelo, you know it when you see it. And the opposite of that is code that is messy, code that smells, you know, when you see it. And I'm not sure you can describe it in words, except vaguely through guidelines and so on. Yes, it's easier to recognize ugly than to recognize beauty in code. And for the reason is that sometimes beauty comes from something that's innovative and unusual. And you have to sometimes think reasonably hard to appreciate that. On the other hand, the messes have things that are in common. And you can have static checkers and dynamic checkers that find a large number of the most common mistakes. You can catch a lot of sloppiness mechanically. I'm a great fan of static analysis in particular, because you can check for not just the language rules, but for the usage of language rules. And I think we will see much more static analysis in the coming decade. Can you describe what static analysis is? You represent a piece of code so that you can write a program that goes over that representation and look for things that are are right and not right. So, for instance, you can analyze a program to see if resources are leaked. That's one of my favorite problems. It's not actually all that hard and modern C++, but you can do it. If you are writing in the C level, you have to have a malloc and a free. And they have to match. If you have them in a single function, you can usually do it very easily. If there's a malloc here, there should be a free there. On the other hand, in between can be showing complete code and then it becomes impossible. If you pass that pointer to the memory out of a function and then want to make sure that the free is done somewhere else, now it gets really difficult. 
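As a minimal sketch of the two situations just described (the function names are made up for illustration): a leak checker can verify the first function locally, while the second forces it to reason about every caller.

#include <cstdlib>

// The easy case for an analyzer: the malloc and the free are in the same
// function, and every path either frees the buffer or never allocated it.
void local_use() {
    char* p = static_cast<char*>(std::malloc(100));
    if (!p) return;                  // nothing to free on this path
    // ... use the buffer ...
    std::free(p);                    // matches the malloc above
}

// The hard case: the pointer escapes, so whether it is ever freed depends on
// every caller, and the analysis quickly becomes much harder.
char* escaping_use() {
    return static_cast<char*>(std::malloc(100));
}

int main() {
    local_use();
    char* p = escaping_use();
    std::free(p);                    // this caller remembers; another might not
}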
And so for static analysis, you can run through a program and you can try and figure out if there's any leaks. And what you will probably find is that you will find some leaks and you'll find quite a few places where your analysis can't be complete. It might depend on runtime. It might depend on the cleverness of your analyzer and it might take a long time. Some of these programs run for a long time. But if you combine such analysis with a set of rules that says how people could use it, you can actually see why the rules are violated. And that stops you from getting into the impossible complexities. You don't want to solve the halting problem. So static analysis is looking at the code without running the code. Yes. And thereby it's almost not a production code, but it's almost like an education tool of how the language should be used. It guides you like it at its best, right? It would guide you in how you write future code as well. And you learn together. Yes. So basically you need a set of rules for how you use the language. Then you need a static analysis that catches your mistakes when you violate the rules or when your code ends up doing things that it shouldn't, despite the rules, because there is the language rules. We can go further. And again, it's back to my idea that I'd much rather find errors before I start running the code. If nothing else, once the code runs, if it catches an error at run times, I have to have an error handler. And one of the hardest things to write in code is error handling code, because you know something went wrong. Do you know really exactly what went wrong? Usually not. How can you recover when you don't know what the problem was? You can't be 100% sure what the problem was in many, many cases. And this is part of it. So yes, we need good languages, we need good type systems, we need rules for how to use them, we need static analysis. And the ultimate for static analysis is of course program proof, but that still doesn't scale to the kind of systems we deploy. Then we start needing testing and the rest of the stuff. So C++ is an object oriented programming language that creates, especially with its newer versions, as we'll talk about, higher and higher levels of abstraction. So how do you design? Let's even go back to the origin of C++. How do you design something with so much abstraction that's still efficient and is still something that you can manage, do static analysis on, you can have constraints on, they can be reliable, all those things we've talked about. To me, there's a slight tension between high level abstraction and efficiency. That's a good question. I could probably have a year's course just trying to answer it. Yes, there's a tension between efficiency and abstraction, but you also get the interesting situation that you get the best efficiency out of the best abstraction. And my main tool for efficiency for performance actually is abstraction. So let's go back to how C++ was got there. You said it was object oriented programming language. I actually never said that. It's always quoted, but I never did. I said C++ supports object oriented programming and other techniques. And that's important because I think that the best solution to most complex, interesting problems require ideas and techniques from things that has been called object oriented data abstraction, functional, traditional C style code, all of the above. And so when I was designing C++, I soon realized I couldn't just add features. 
If you just add what looks pretty or what people ask for or what you think is good, one by one, you're not going to get a coherent whole. What you need is a set of guidelines that guides your decisions. Should this feature be in or should this feature be out? How should a feature be modified before it can go in, and such? And in the book I wrote about that, The Design and Evolution of C++, there's a whole bunch of rules like that. Most of them are not language technical. They're things like don't violate the static type system, because I like the static type system for the obvious reason that I like things to be reliable on reasonable amounts of hardware. But one of these rules is the zero overhead principle. The what kind of principle? The zero overhead principle. It basically says that if you have an abstraction, it should not cost anything compared to writing the equivalent code at a lower level. So if I have, say, a matrix multiply, it should be written in such a way that you could not drop to the C level of abstraction and use arrays and pointers and such and run faster. And so people have written such matrix multiplications, and they've actually gotten code that ran faster than Fortran, because once you have the right abstraction, you can eliminate temporaries and you can do loop fusion and other good stuff like that. That's quite hard to do by hand in a lower level language. And there's some really nice examples of that. And the key here is that the matrix multiplication, the matrix abstraction, allows you to write code that's simple and easy. You can do that in any language. But with C++, it has the features so that you can also have this thing run faster than if you hand coded it. Now, people have given that lecture many times, I and others, and a very common question after the talk, where you have demonstrated that you can outperform Fortran for dense matrix multiplication, is that people come up and say, yeah, but that was C++. If I rewrote your code in C, how much faster would it run? The answer is much slower. This happened the first time actually back in the 80s with a friend of mine called Doug McIlroy, who demonstrated exactly this effect. And so the principle is you should give programmers the tools so that their abstractions can follow the zero overhead principle. Furthermore, when you put a language feature into C++, or a standard library feature, you try to meet this. It doesn't mean it's absolutely optimal, but it means that if you hand code it with the usual facilities in the language, in C++ or in C, you should not be able to better it. Usually you can do better if you use embedded assembler or machine code for some of the details, to utilize a part of a computer that the compiler doesn't know about, but you should get to that point before you can beat the abstraction. So that's a beautiful ideal to reach for. And we meet it quite often. Quite often. So where's the magic of that coming from? Some of it is the compilation process, so the implementation of C++; some of it is the design of the feature itself, the guidelines. So I think it's important that you think about the guidelines. So I've recently and often talked to Chris Lattner, of Clang. What, just out of curiosity, is your relationship in general with the different implementations of C++, as you and the committee and other people in C++ think about the design of features or the design of previous features?
In trying to reach the ideal of zero overhead, does the magic come from the design, the guidelines, or from the implementations? All of them. You go for programming technique, programming language features, and implementation techniques. You need all three. And how can you think about all three at the same time? It takes some experience, takes some practice, and sometimes you get it wrong. But after a while, you sort of get it right. I don't write compilers anymore. But Brian Kernighan pointed out that one of the reasons C++ succeeded was some of the craftsmanship I put into the early compilers. And of course, I did the language design. Of course, I wrote a fair amount of code using this kind of stuff. And I think most of the successes involve progress in all three areas together. A small group of people can do that. Two, three people can work together to do something like that. It's ideal if it's one person that has all the skills necessary. But nobody has all the skills necessary in all the fields where C++ is used. So if you want to approach my ideal in, say, concurrent programming, you need to know about algorithms for concurrent programming. You need to know the tricks of lock free programming. You need to know something about compiler techniques. And then you have to know some of the application areas where this is used, like some forms of graphics or some forms of what we call web server kind of stuff. And that's very hard to get into a single head. But small groups can do it too. So are there differences, in your view, not saying which is better or so on, but differences in the different implementations of C++? Why are there several? A sort of maybe naive question from me. GCC, Clang, and so on? This is a very reasonable question. When I designed C++, most languages had multiple implementations. Because if you ran on an IBM, if you ran on a Sun, if you ran on a Motorola, there were just many, many companies and they each had their own compilation structure and their own compilers. It was just fairly common that there were many of them. And I wrote Cfront assuming that other people would write compilers for C++ if it was successful. And furthermore, I wanted to utilize all the backend infrastructures that were available. I soon realized that my users were using 25 different linkers. I couldn't write my own linker. Yes, I could, but I couldn't write 25 linkers and also get any work done on the language. And so it came from a world where there were many linkers, many optimizers, many compiler front ends, not to mention many operating systems. The whole world was not an x86 and a Linux box or something, whatever is the standard today. In the old days, it was a VAX. So basically, I assumed there would be lots of compilers. It was not a decision that there should be many compilers. It was just a fact. That's the way the world is. And yes, many compilers emerged. And today, there's at least four front ends: Clang, GCC, Microsoft, and EDG, the Edison Design Group. They supply a lot of the independent organizations and the embedded systems industry. And there's lots and lots of backends. I'd have to think about how many dozens of backends there are. Because different machines have different things, especially in the embedded world, the machines are very different, the architectures are very different. And so having a single implementation was never an option. Now, I also happen to dislike monocultures. Monocultures. They are dangerous. Because whoever owns the monoculture can go stale. And there's no competition.
And there's no incentive to innovate. There's a lot of incentive to put barriers in the way of change. Because hey, we own the world. And it's a very comfortable world for us. And who are you to mess with that? So I really am very happy that there's four front ends for C++. Clang's great. But GCC was great. But then it got somewhat stale. Clang came along. And GCC is much better now. Microsoft is much better now. So at least a low number of front ends puts a lot of pressure on standards compliance and also on performance and error messages and compile time speed, all this good stuff that we want. Do you think, crazy question, there might come along, do you hope there might come along implementation of C++ written, given all its history, written from scratch? So written today from scratch? Well, Clang and the LLVM is more or less written from scratch. But there's been C++ 11, 14, 17, 20. You know, there's been a lot of I think sooner or later somebody's going to try again. There has been attempts to write new C++ compilers and some of them has been used and some of them has been absorbed into others and such. Yeah, it'll happen. So what are the key features of C++? And let's use that as a way to sort of talk about the evolution of C++, the new features. So at the highest level, what are the features that were there in the beginning? What features got added? Let's first get a principle or an aim in place. C++ is for people who want to use hardware really well and then manage the complexity of doing that through abstraction. And so the first facility you have is a way of manipulating the machines at a fairly low level. That looks very much like C. It has loops, it has variables, it has pointers like machine addresses, it can access memory directly, it can allocate stuff in the absolute minimum of space needed on the machine. There's a machine facing part of C++ which is roughly equivalent to C. I said C++ could beat C and it can. It doesn't mean I dislike C. If I disliked C, I wouldn't have built on it. Furthermore, after Dennis Ritchie, I'm probably the major contributor to modern C. I had lunch with Dennis most days for 16 years and we never had a harsh word between us. So these C versus C++ fights are for people who don't quite understand what's going on. Then the other part is the abstraction. The key is the class. There, the key is the class which is a user defined type. My idea for the class is that you should be able to build a type that's just like the building types in the way you use them, in the way you declare them, in the way you get the memory and you can do just as well. So in C++ there's an int as in C. You should be able to build an abstraction, a class which we can call capital int that you can use exactly like an integer and run just as fast as an integer. There's the idea right there. And of course you probably don't want to use the int itself but it has happened. People have wanted integers that were range checked so that you couldn't overflow and such, especially for very safety critical applications like the fuel injection for a marine diesel engine for the largest ships. This is a real example by the way. This has been done. They built themselves an integer that was just like integer except that couldn't overflow. If there was an overflow you went into the error handling. And then you built more interesting types. You can build a matrix which you need to do graphics or you could build a gnome for a video game. 
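A minimal sketch of that range-checked "capital Int" idea, an integer-like class that refuses to overflow; the details here are invented for illustration and are not the fuel injection code he mentions:

#include <limits>
#include <stdexcept>

// Used like an int, but addition that would overflow goes into error handling
// instead of silently wrapping.
class Int {
    int value;
public:
    explicit Int(int v) : value(v) {}
    Int operator+(Int other) const {
        long long sum = static_cast<long long>(value) + other.value;
        if (sum > std::numeric_limits<int>::max() ||
            sum < std::numeric_limits<int>::min())
            throw std::overflow_error("Int overflow");
        return Int(static_cast<int>(sum));
    }
    int get() const { return value; }
};

int main() {
    Int a(2000000000);
    Int b(100000000);
    Int c = a + b;       // 2.1 billion: still fits in an int, so this is fine
    // Int d = c + c;    // would throw instead of quietly overflowing
    return c.get() > 0 ? 0 : 1;
}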
And all these are classes and they appear just like the built-in types. Exactly. In terms of efficiency and so on. So what else is there? And flexibility. So, I don't know, for people who are not familiar with object oriented programming, there's inheritance. There's a hierarchy of classes. You can, just like you said, create a generic vehicle that can turn left. So what people found was that, how do I say this, a lot of types are related. That is, all vehicles are related. Bicycles, cars, fire engines, tanks. They have some things in common and some things that differ. And you would like to have the common things common and have the differences specific. And when you didn't want to know about the differences, just turn left, you don't have to worry about it. That's how you get the traditional object oriented programming, coming out of Simula, adopted by Smalltalk and C++ and all the other languages. The other kind of obvious similarity between types comes when you have something like a vector. Fortran gave us the vector, there called an array, of doubles. But the minute you have a vector of doubles, you want a vector of double precision doubles and one of short floats for graphics. And why should you not have a vector of integers while you're at it, or a vector of vectors, and a vector of vectors of chess pieces? Now you have a board, right? So this is, you express the commonality as the idea of a vector, and the variations come through parameterization. And so here we get the two fundamental ways of abstracting, or of having similarities of types, in C++. There's the inheritance and there's the parameterization. There's the object oriented programming and there's the generic programming. With the templates for the generic programming. Yep. So you've presented it very nicely, but now you have to make all that happen and make it efficient. So generic programming with templates, there's all kinds of magic going on, especially recently, that you can help catch us up on. But it feels to me like you can do way more than what you just said with templates. You can start doing this kind of metaprogramming, this kind of... You can do metaprogramming also. I didn't go there in that explanation. We're trying to be very basic. But back to the implementation. If you couldn't implement this efficiently, if you couldn't use it so that it became efficient, it has no place in C++ because it would violate the zero overhead principle. So when I had to do object oriented programming, inheritance, I took the idea of virtual functions from Simula. Virtual function is a Simula term, class is a Simula term. If you ever use those words, say thanks to Kristen Nygaard and Ole-Johan Dahl. And I did the simplest implementation I knew of, which was basically a jump table. So you get the virtual function table, the function call goes in, does an indirection through a table, and gets the right function. That's how you pick the right thing there. And I thought that was trivial. It's close to optimal and it was obvious. It turned out that Simula had a more complicated way of doing it and therefore was slower. And it turns out that most languages have something that's a little bit more complicated, sometimes more flexible, but you pay for it. And one of the strengths of C++ was that you could actually do this object oriented stuff and your overhead, compared to ordinary functions without the indirection, is sort of 5, 10, 25% just for the call. It's down there. It's not a factor of two. And that means you can afford to use it.
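A minimal sketch of the jump-table dispatch he describes, using the turn-left vehicle example from earlier in the conversation; the class names are invented for illustration. The virtual call goes through one table indirection, the non-virtual call does not:

#include <iostream>
#include <memory>
#include <vector>

class Vehicle {
public:
    virtual ~Vehicle() = default;
    virtual void turn_left() const { std::cout << "vehicle turns left\n"; }
    void honk() const { std::cout << "honk\n"; }   // non-virtual: a direct call
};

class FireEngine : public Vehicle {
public:
    void turn_left() const override { std::cout << "fire engine turns left, carefully\n"; }
};

int main() {
    std::vector<std::unique_ptr<Vehicle>> fleet;
    fleet.push_back(std::make_unique<Vehicle>());
    fleet.push_back(std::make_unique<FireEngine>());
    for (const auto& v : fleet) {
        v->turn_left();   // one indirection through the virtual function table
        v->honk();        // resolved at compile time, no indirection
    }
}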
Furthermore, in C++, you have the distinction between a virtual function and a nonvirtual function. If you don't want any overhead, if you don't need the indirection that gives you the flexibility in object oriented programming, just don't ask for it. So the idea is that you only use virtual functions if you actually need the flexibility. So it's not zero overhead, but it's zero overhead compared to any other way of achieving the flexibility. Now, on to parameterization. Basically, the compiler looks at the template, say the vector, and it looks at the parameter, and then combines the two and generates a piece of code that is exactly as if you had written a vector of that specific type. So that's the minimal overhead. If you have many template parameters, you can actually combine code that the compiler couldn't usually see at the same time and therefore get code that is faster than if you had handwritten the stuff, unless you are very, very clever. So the thing is, with parameterized code, the compiler fills stuff in during the compilation process, not during runtime. That's right. And furthermore, it uses all the information it's gotten, which is the template, the parameter, and the context of use. It combines the three and generates good code. But it can generate, now, it's a little outside of what I'm even comfortable thinking about, but it can generate a lot of code. Yes. And how do you, I remember being both amazed at the power of that idea, and at how ugly the debugging looked. Yes. Debugging can be truly horrid. Come back to this, because I have a solution. Anyway, the debugging was ugly. The code generated by C++ has always been ugly, because there are these inherent optimizations. A modern C++ compiler has a front end, a middle end, and a back end. Even Cfront, back in '83, had front end and back end optimizations. I actually took the code, generated an internal representation, and munched that representation to generate good code. So people say it's not a compiler, it generates C. The reason it generated C was that I wanted to use C's code generators, which were really good at back end optimizations. But I needed front end optimizations, and therefore the C I generated was optimized C, the way a really good hand-optimizing human could have generated it, and it was not meant for humans. It was the output of a program, and it's much worse today. And with templates, it gets much worse still. So it's hard to combine simple debugging with the optimal code, because the idea is to drag in information from different parts of the code to generate good machine code. And that's not readable. So what people often do for debugging is they turn the optimizer off. And so you get code where, when something in your source code looks like a function call, it is a function call. When the optimizer is turned on, the function call may disappear, it may be inlined. And so one of the things you can do is you can actually get code that is smaller than the function call, because you eliminate the function preamble and return, and there's just the operation there. One of the key things when I did templates was I wanted to make sure that if you have, say, a sort algorithm and you give it a sorting criteria, if that sorting criteria is simply comparing things with less than, the code generated should be the less than, not an indirect function call to a comparison object, which is what it is in the source code.
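A small generic illustration of that contrast, not code from any particular talk: C's qsort receives the comparison through a function pointer, while a template sort can inline the less-than down to the comparison itself.

#include <algorithm>
#include <cstdlib>
#include <vector>

// C style: every comparison is an indirect call through a function pointer,
// which the optimizer usually cannot see through.
int compare_ints(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

int main() {
    std::vector<int> v = {3, 1, 4, 1, 5, 9, 2, 6};

    std::qsort(v.data(), v.size(), sizeof(int), compare_ints);

    // Template style: std::sort is instantiated with the comparison type, so
    // the less-than can be inlined instead of called indirectly.
    std::sort(v.begin(), v.end(), [](int a, int b) { return a < b; });
}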
But we really want it down to the single instruction. But anyway, turn off the optimizer and you can debug. The first level of debugging can be done, and I always do it, without the optimization on, because then I can see what's going on. And then there's this idea of concepts, which, now, I don't know if it was ever available in any form, but it puts some constraints on the stuff you can parameterize, essentially. Let me try and explain this. So yes, it wasn't there 10 years ago. We have had versions of it that actually work for the last four or five years. It was a design by Gabriel Dos Reis, Andrew Sutton and me. We were professors and postdocs in Texas at the time. And the implementation by Andrew Sutton has been available for that time. And it is part of C++20. And there's a standard library that uses it. So this is becoming really very real. It's available in Clang and GCC, GCC for a couple of years, and I believe Microsoft is soon going to do it. We expect all of C++20 to be available in all the major compilers in 2020. But this kind of stuff is available now. I'm just saying that because otherwise people might think I was talking about science fiction. And so what I'm going to say is concrete. You can run it today, and there are production uses of it. So the basic idea is that when you have a generic component, like a sort function, the sort function will require at least two parameters. One is the data structure with a given type, and one is a comparison criteria. And these things are related, but obviously you can't compare things if you don't know the type of the things you compare. And so you want to be able to say, I'm going to sort something, and it has to be sortable. What does it mean to be sortable? You look it up in the standard. It has to be a sequence with a beginning and an end. There has to be random access to that sequence. And the element type has to be comparable, by default with less than. Which means the less than operator can operate on it. Yes, the less than operator can operate on it. Basically, what concepts are, they're compile time predicates. They're predicates you can ask: are you a sequence? Yes, I have a beginning and an end. Are you a random access sequence? Yes, I have subscripting and plus. Is your element type something that has a less than? Yes, I have a less than. And so basically that's the system. And so instead of saying, I will take a parameter of any type, it'll say, I'll take something that's sortable. And it's well defined. And so we say, okay, you can sort with less than, but I don't want less than, I want greater than or something I invent. So you have two parameters, the sortable thing and the comparison criteria. And the comparison criteria will say, well, you can write it saying it should operate on the element type. And then you can say, well, I can sort with less than, and it has the comparison operations. So that's simply the fundamental thing. It's compile time predicates. Do you have the properties I need? So it specifies the requirements of the code on the parameters that it gets. It's very similar to types, actually. But operating in the space of concepts. Concepts. The word concept was used by Alex Stepanov, who is sort of the father of generic programming in the context of C++. There's other places that use that word, but the way we do generic programming is Alex's. And he called them concepts because he said they are sort of the fundamental concepts of an area. So they should be called concepts.
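A minimal C++20 sketch of concepts as compile-time predicates, in the spirit of the sortable example above; the concept here is simplified for illustration and is not the standard library's actual definition:

#include <algorithm>
#include <concepts>
#include <iterator>
#include <list>
#include <vector>

// A concept is a predicate checked at compile time: does this range give
// random access, and do its elements have a working less-than?
template<typename R>
concept SortableRange =
    std::random_access_iterator<typename R::iterator> &&
    std::totally_ordered<typename R::value_type>;

template<SortableRange R>
void my_sort(R& r) {
    std::sort(r.begin(), r.end());
}

int main() {
    std::vector<int> v = {3, 1, 2};
    my_sort(v);        // fine: a vector is random access and int has <

    std::list<int> l = {3, 1, 2};
    // my_sort(l);     // rejected at compile time: list iterators are not random access
}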
And we've had concepts all the time. If you look at the K&R book about C, C has arithmetic types and it has integral types. It says so in the book. And then it lists what they are, and they have certain properties. The difference today is that we can actually write a concept that will ask a type: are you an integral type? Do you have the properties necessary to be an integral type? Do you have plus, minus, divide and such? So maybe tell the story of concepts, because I thought it might be part of C++11, C++0x or whatever it was called at the time. Why didn't it make it? We'll talk a little bit about this fascinating process of standards, because I think it's really interesting for people. It's interesting for me. But why did it take so long? What shapes did the idea of concepts take? What were the challenges? Back in 87 or thereabouts. 1987? Well, 1987 or thereabouts, when I was designing templates, obviously I wanted to express the notion of what is required by a template of its arguments. And so I looked at this, and basically for templates, I wanted three properties. I wanted them to be very flexible. They had to be able to express things I couldn't imagine, because I know I can't imagine everything, and I've been suffering from languages that try to constrain you to only do what the designer thought good. Didn't want to do that. Secondly, it had to run as fast or faster than handwritten code. So basically, if I have a vector of T and I take a vector of char, it should run as fast as if you built a vector of char yourself, without parameterization. And thirdly, I wanted to be able to express the constraints on the arguments, have proper type checking of the interfaces. And neither I nor anybody else at the time knew how to get all three. And I thought, for C++, I must have the first two. Otherwise, it's not C++. And it bothered me for another couple of decades that I couldn't solve the third one. I mean, I was the one that put function argument type checking into C. I know the value of good interfaces. I didn't invent that idea. It's very common, but I did it. And I wanted to do the same for templates, of course, and I couldn't. So it bothered me. Then we tried again, 2002, 2003. Gabriel Dos Reis and I started analyzing the problem, explored possible solutions. It was not a complete design. A group at Indiana University, with an old friend of mine, started a project there, and we thought we could get a good system of concepts in another two or three years that would have made C++11 into C++06 or 07. Well, it turns out that I think we got a lot of the fundamental ideas wrong. They were too conventional. They didn't quite fit C++, in my opinion. It didn't serve implicit conversions very well. It didn't serve mixed type arithmetic, mixed type computations, very well. A lot of the stuff came out of the functional community, and that community didn't deal with multiple types in the same way as C++ does, had more constraints on what you could express, and didn't have the draconian performance requirements. And basically we tried. We tried very hard. We had some successes, but in the end it just didn't compile fast enough, was too hard to use, and didn't run fast enough unless you had optimizers that were beyond the state of the art. They still are. So we had to do something else.
Basically, it was the idea that a set of parameters defines a set of operations, and you go through an indirection table, just like for virtual functions, and then you try to optimize the indirection away to get performance. And we just couldn't do all of that. But to get back to the standardization: we are standardizing C++ under ISO rules, which is a very open process. People come in, there are no requirements for education or experience. So you started to develop C++, and then at some point, when was the first standard established? What is that like? The ISO standard, is there a committee that you're referring to? There's a group of people. What was that like? How often do you meet? What's the discussion? I'll try and explain that. So sometime in early 1989, two people, one from IBM, one from HP, turned up in my office and told me they would like to standardize C++. This was a new idea to me, and I pointed out that it wasn't finished yet and it wasn't ready for formal standardization and such. And they said, no, Bjarne, you haven't got it. You really want to do this. Our organizations depend on C++. We cannot depend on something that's owned by another corporation that might be a competitor. Of course we could rely on you, but you might get run over by a bus. We really need to get this out in the open. It has to be standardized under formal rules, and we are going to standardize it under ISO rules, and you really want to be part of it, because basically otherwise we'll do it ourselves, and we know you can do it better. So through a combination of arm twisting and flattery, it got started. So in late 89, there was a meeting in DC at the, actually no, it was not ISO then, it was ANSI, the American National Standards Institute. We met there. We were lectured on the rules of how to do an ANSI standard. There were about 25 of us there, which apparently was a new record for that kind of meeting. And some of the old C guys that had been standardizing C were there, so we got some expertise in. So the way this works is that it's an open process. Anybody can sign up if they pay the minimal fee, which was about a thousand dollars, a little bit more now. I think it's $1,280. It's not going to kill you. And we have three meetings a year. This is fairly standard. We tried two meetings a year for a couple of years; that didn't work too well. So three one-week meetings a year, and you meet and you have technical discussions, and then you bring proposals forward for votes. The votes are done one vote per organization. So you can't have, say, IBM come in with 10 people and dominate things. That's not allowed. And these are organizations that extensively use C++. Yes. Or individuals. Or individuals. I mean, it's a bunch of people in a room deciding the design of a language based on which a lot of the world's systems run. Right. Well, I think most people would agree it's better than if I decided it, or better than if a single organization like AT&T decided it. I don't know if everyone agrees to that, by the way. Bureaucracies have their critics too. Yes. Look, standardization is not pleasant. It's horrifying. It's like democracy. Exactly. As Churchill says, democracy is the worst way, except for the others. Right. And I would say the same about formal standardization. But anyway, so we meet and we have these votes, and that determines what the standard is. A couple of years later, we extended this so it became worldwide.
We have standard organizations that are active in currently 15 to 20 countries and another 15 to 20 are sort of looking and voting based on the rest of the work on it. And we meet three times a year. Next week I'll be in Cologne, Germany, spending a week doing standardization and we'll vote out the committee draft of C++20, which goes to the national standards committees for comments and requests for changes and improvements. Then we do that and there's a second set of votes where hopefully everybody votes in favor. This has happened several times. The first time we finished, we started in the first technical meeting was in 1990. The last was in 98. We voted it out. That was the standard that people used until 11 or a little bit past 11. And it was an international standard. All the countries voted in favor. It took longer with 11. I'll mention why, but all the nations voted in favor. And we work on the basis of consensus. That is, we do not want something that passes 6040 because then we're going to get dialects and opponents and people complain too much. They all complain too much, but basically it has no real effect. The standards has been obeyed. They have been working to make it easier to use many compilers, many computers and all of that kind of stuff. It was traditional with ISO standards to take 10 years. We did the first one in eight, brilliant. And we thought we were going to do the next one in six because now we are good at it. Right. It took 13. Yeah. It was named OX. It was named OX. Hoping that you would at least get it within the single, within the odds, the single digits. I thought we would get, I thought we'd get six, seven or eight. The confidence of youth. That's right. Well, the point is that this was sort of like a second system effect. That is, we now knew how to do it. And so we're going to do it much better. And we've got more ambitious and it took longer. Furthermore, there is this tendency because it's a 10 year cycle or it doesn't matter. Just before you're about to ship, somebody has a bright idea. And so we really, really must get that in. We did that successfully with the STL. We got the standard library that gives us all the STL stuff. That basically, I think it saved C++. It was beautiful. And then people tried it with other things and it didn't work so well. They got things in, but it wasn't as dramatic and it took longer and longer and longer. So after C++ 11, which was a huge improvement and what, basically what most people are using today, we decided never again. And so how do you avoid those slips? And the answer is that you ship more often. So that if you have a slip on a 10 year cycle, by the time you know it's a slip, there's 11 years till you get it. Now with a three year cycle, there is about three or four years till you get it. Like the delay between feature freeze and shipping. So you always get one or two years more. And so we shipped 14 on time, we shipped 17 on time, and we ship, we will ship 20 on time. It'll happen. And furthermore, this gives a predictability that allows the implementers, the compiler implementers, the library implementers, they have a target and they deliver on it. 11 took two years before most compilers were good enough. 14, most compilers were actually getting pretty good in 14. 17, everybody shipped in 17. We are going to have at least almost everybody ship almost everything in 20. And I know this and I know this because they're shipping in 19. Predictability is good. Delivery on time is good. And so yeah. That's great. 
That's how it works. There are a lot of features that came in in C++11. There were a lot of features at the birth of C++ that were amazing, and ideas like concepts in C++20. What, just to you personally, is the most beautiful feature, the one where you sit back and think, wow, that's just a nice and clean feature of C++? I have written two papers for the History of Programming Languages Conference, which basically asked me such questions, and I'm writing a third one, which I will deliver at the History of Programming Languages Conference in London next year. So I've been thinking about that. And there is one clear answer: constructors and destructors. The way a constructor can establish the environment for the use of a type, for an object, and the destructor that cleans up any messes at the end of it. That is key to C++. That's why we don't have to use garbage collection. That's how we can get predictable performance. That's how you can get the minimal overhead in many, many cases and have really clean types. It's the idea of constructor/destructor pairs. Sometimes it comes out under the name RAII, resource acquisition is initialization, which is the idea that you grab resources in the constructor and release them in the destructor. It's also the best example of why I shouldn't be in advertising. I get the best idea and I call it resource acquisition is initialization. Not the greatest naming I've ever heard. So it's types, abstraction of types. You said, I want to create my own types. So types are an essential part of C++, and making them efficient is the key part. And to you, this is almost getting philosophical, but the construction and the destruction, the creation of an instance of a type and the freeing of resources from that instance of a type, is what defines the object. It's almost like birth and death is what defines human life. That's right. By the way, philosophy is important. You can't do good language design without philosophy, because what you are determining is what people can express and how. This is very important. By the way, constructors and destructors came into C++ in '79, in about the second week of my work with what was then called C with Classes. It is a fundamental idea. Next comes the fact that you need to control copying, because once you control, as you said, birth and death, you have to control taking copies, which is another way of creating an object. And finally, you have to be able to move things around, so you get the move operations. And that's the set of key operations you can define on a C++ type. And so to you, those things are just a beautiful part of C++ that is at the core of it all. Yes. You mentioned that you hope there will be one unified set of guidelines in the future for how to construct a programming language. So perhaps not one programming language, but a unification of how we build programming languages, if you remember such statements. I have some trouble remembering it, but I know the origin of that idea. So maybe you can talk about that. C++ has been improving, there have been a lot of programming languages. Where is the arc of history taking us? Do you hope that there is a unification of the languages with which we communicate in the digital space? Well, I think that languages should be designed not by cobbling language features together and doing slightly different versions of somebody else's ideas, but through the creation of a set of principles, rules of thumb, whatever you call them.
I made them for C++. And we're trying to teach people in the standards committee about these rules, because a lot of people come in and say, I've got a great idea, let's put it in the language. And then you have to ask, why does it fit in the language? Why does it fit in this language? It may fit in another language and not here, or it may fit here and not the other language. So you have to work from a set of principles and you have to develop that set of principles. And one example that I sometimes remember is I was sitting down with some of the designers of Common Lisp and we were talking about languages and language features. And obviously we didn't agree about anything because, well, Lisp is not C++ and vice versa. It's too many parentheses. But suddenly we started making progress. I said, I had this problem and I developed it according to these ideas. And they said, why, we had that problem, a different problem, and we developed it with the same kind of principles. And so we worked through large chunks of C++ and large chunks of Common Lisp and figured out we actually had similar sets of principles of how to do it. But the constraints on our designs were very different and the aims for the usage were very different. But there was commonality in the way you reason about language features and the fundamental principles you are trying to follow. So do you think that's possible? Just like there is perhaps a unified theory of physics, of the fundamental forces of physics, I'm sure there are commonalities among the languages, but there's also people involved that help drive the development of these languages. Do you have a hope or an optimism that there will be a unification, if you think about physics and Einstein, towards a simplified language? Do you think that's possible? Let's remember, sort of modern physics, I think, started with Galileo in the 1300s. So they've had 700 years to get going. Modern computing started in about 49. We've got, what is it, 70 years. They have had 10 times as long. Furthermore, they are not as bothered with how people use physics the way we have to worry about how programming is done by humans. So each has problems and constraints the other doesn't have, but we are very immature compared to physics. So I would look at sort of the philosophical level and look for fundamental principles. Like you don't leak resources, you shouldn't. You don't take errors at runtime that you don't need to. You don't violate some kind of type system. There's many kinds of type systems, but when you have one, you don't break it, etc., etc. There will be quite a few, and it will not be the same for all languages. But I think if we step back to some kind of philosophical level, we would be able to agree on sets of principles that apply to sets of problem areas. And within an area of use, like in C++'s case, what used to be called systems programming, the area between the hardware and the fluffier parts of the system, you might very well see a convergence. So these days you see Rust having adopted RAII, and sometimes they accuse me of having borrowed it 20 years before they discovered it. But we're seeing some kind of convergence here instead of relying on garbage collection all the time. The garbage collection languages are doing things like the dispose patterns and such that imitate some of the construction destruction stuff. And they're trying not to use the garbage collection all the time and things like that. So there's a convergence.
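Since the constructor/destructor pairing keeps coming up, both as his favorite feature and as the idea Rust adopted, here is a minimal sketch of it; the File wrapper is invented for illustration and is not code from any particular library:

#include <cstdio>

// RAII: the constructor acquires the resource, the destructor releases it,
// so cleanup happens on every path out of the scope.
class File {
    std::FILE* f;
public:
    File(const char* name, const char* mode) : f(std::fopen(name, mode)) {}
    ~File() { if (f) std::fclose(f); }   // runs on return, exception, or fall-through
    File(const File&) = delete;          // copying is controlled, as described earlier
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f; }
};

int main() {
    File log("example.log", "w");
    if (!log.get()) return 1;            // nothing was acquired, nothing leaks
    std::fputs("hello\n", log.get());
    return 0;                            // the destructor closes the file here
}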
But I think we have to step back to the philosophical level, agree on principles, and then we'll see some conversions, convergences. And it will be application domain specific. So a crazy question, but I work a lot with machine learning, with deep learning. I'm not sure if you touch that world that much, but you could think of programming as a thing that takes some input. A programming is the task of creating a program and a program takes some input and produces some output. So machine learning systems train on data in order to be able to take an input and produce output. But they're messy, fuzzy things, much like we as children grow up. We take some input, we make some output, but we're noisy. We mess up a lot. We're definitely not reliable. Biological system are a giant mess. So there's a sense in which machine learning is a kind of way of programming, but just fuzzy. It's very, very, very different than C++. Because C++ is just like you said, it's extremely reliable, it's efficient, you can measure it, you can test it in a bunch of different ways. With biological systems or machine learning systems, you can't say much except sort of empirically saying that 99.8% of the time, it seems to work. What do you think about this fuzzy kind of programming? Do you even see it as programming? Is it totally another kind of world? I think it's a different kind of world. And it is fuzzy. And in my domain, I don't like fuzziness. That is, people say things like they want everybody to be able to program. But I don't want everybody to program my airplane controls or the car controls. I want that to be done by engineers. I want that to be done with people that are specifically educated and trained for doing building things. And it is not for everybody. Similarly, a language like C++ is not for everybody. It is generated to be a sharp and effective tool for professionals, basically, and definitely for people who aim at some kind of precision. You don't have people doing calculations without understanding math. Counting on your fingers is not going to cut it if you want to fly to the moon. And so there are areas where an 84% accuracy rate, 16% false positive rate, is perfectly acceptable and where people will probably get no more than 70. You said 98%. What I have seen is more like 84. And by really a lot of blood, sweat, and tears, you can get up to 92.5. So this is fine if it is, say, prescreening stuff before the human look at it. It is not good enough for life threatening situations. And so there's lots of areas where the fuzziness is perfectly acceptable and good and better than humans, cheaper than humans, cheaper than humans. But it's not the kind of engineering stuff I'm mostly interested in. I worry a bit about machine learning in the context of cars. You know much more about this than I do. I worry too. But I'm sort of an amateur here. I've read some of the papers, but I've not ever done it. And the idea that scares me the most is the one I have heard, and I don't know how common it is, that you have this AI system, machine learning, all of these trained neural nets. And when there's something that's too complicated, they ask the human for help. But the human is reading a book or asleep, and he has 30 seconds or three seconds to figure out what the problem was that the AI system couldn't handle and do the right thing. This is scary. I mean, how do you do the cutting work between the machine and the human? It's very, very difficult. 
And for the designer of one of the most reliable, efficient, and powerful programming languages, C++, I can understand why that world is actually unappealing. It is for most engineers. To me, it's extremely appealing because we don't know how to get that interaction right. But I think it's possible. But it's very, very hard. It is. And I was stating a problem, not saying that a solution is impossible. I mean, I would much rather never rely on the human. If you're running a nuclear reactor, or an autonomous vehicle, it's much better to design systems, written in C++, that never ask a human for help. Let's just get one fact in. Yeah. All of this AI stuff is on top of C++. So that's one reason I have to keep a weather eye out on what's going on in that field. But I will never become an expert in that area. But it's a good example of how you separate different areas of applications, and you have to have different tools, different principles. And then they interact. No major system today is written in one language. And there are good reasons for that. When you look back at your life work, what is a moment, an event, a creation that you're really proud of, where you say, damn, I did pretty good there? Is it as obvious as the creation of C++? It's obvious. I've spent a lot of time with C++. And it's a combination of a few good ideas and a lot of hard work that I've done. And I've tried to get away from it a few times, but I get dragged in again, partly because I'm most effective in this area and partly because what I do has much more impact if I do it in the context of C++. I have four and a half million people that pick it up tomorrow if I get something right. If I did it in another field, I would have to start learning, then I would have to build it, and then we'll see if anybody wants to use it. One of the things that has kept me going for all of these years is, one, the good things that people do with it and the interesting things they do with it. And also, I get to see a lot of interesting stuff and talk to a lot of interesting people. I mean, if it had just been statements on paper or on a screen, I don't think I could have kept going. But I get to see the telescopes up on Mauna Kea, and I actually went and saw how Ford builds cars, and I got to JPL and saw how they do the Mars rovers. There's so much cool stuff going on. And most of the cool stuff is done by pretty nice people and sometimes in very nice places. Cambridge, Sophia, Silicon Valley. There's more to it than just code. But code is central. On top of the code are the people in very nice places. Well, I think I speak for millions of people, Bjarne, in saying thank you for creating this language that so many systems are built on top of, that make a better world. So thank you, and thank you for talking today. Yeah, thanks. And we'll make it even better. Good.
Bjarne Stroustrup: C++ | Lex Fridman Podcast #48
The following is a conversation with Elon Musk, Part 2, the second time we spoke on the podcast, with parallels, if not in quality, then in outfit, to the objectively speaking greatest sequel of all time, Godfather Part 2. As many people know, Elon Musk is a leader of Tesla, SpaceX, Neuralink, and the Boring Company. What may be less known is that he's a world class engineer and designer, constantly emphasizing first principles thinking and taking on big engineering problems that many before him considered impossible. As scientists and engineers, most of us don't question the way things are done, we simply follow the momentum of the crowd. But revolutionary ideas that change the world on the small and large scales happen when you return to the fundamentals and ask, is there a better way? This conversation focuses on the incredible engineering and innovation done in brain computer interfaces at Neuralink. This work promises to help treat neurobiological diseases, to help us further understand the connection between the individual neuron and the high level function of the human brain. And finally, to one day expand the capacity of the brain through two way communication with computational devices, the internet, and artificial intelligence systems. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, Apple Podcasts, Spotify, support on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. And now, as an anonymous YouTube commenter referred to our previous conversation as the quote, historical first video of two robots conversing without supervision, here's the second time, the second conversation with Elon Musk. Let's start with an easy question about consciousness. In your view, is consciousness something that's unique to humans or is it something that permeates all matter, almost like a fundamental force of physics? I don't think consciousness permeates all matter. Panpsychists believe that. Yeah. There's a philosophical... How would you tell? That's true. That's a good point. I believe in the scientific method. Not to blow your mind or anything, but the scientific method is, if you cannot test the hypothesis, then you cannot reach a meaningful conclusion that it is true. Do you think consciousness, understanding consciousness, is within the reach of science, of the scientific method? We can dramatically improve our understanding of consciousness. You know, we'd be hard pressed to say that we understand anything with complete accuracy, but can we dramatically improve our understanding of consciousness? I believe the answer is yes. Does an AI system in your view have to have consciousness in order to achieve human level or superhuman level intelligence? Does it need to have some of these human qualities, like consciousness, maybe a body, maybe a fear of mortality, a capacity to love, those kinds of silly human things? There's, you know, the scientific method, which I very much believe in, where something is true to the degree that it is testably so. And otherwise, you're really just talking about, you know, preferences or untestable beliefs or, you know, that kind of thing. So it ends up being somewhat of a semantic question, where we're conflating a lot of things with the word intelligence. If we parse them out and say, you know, are we headed towards the future where an AI will be able to outthink us in every way? Then the answer is unequivocally yes.
In order for an AI system to outthink us in every way, does it also need to have a capacity for consciousness, self awareness, and understanding? It will be self aware. Yes, that's different from consciousness. I mean, to me, in terms of what consciousness feels like, it feels like consciousness is in a different dimension. But this could be just an illusion. You know, if you damage your brain in some way physically, you damage your consciousness, which implies that consciousness is a physical phenomenon. And in my view, the thing that I think is really quite likely is that digital intelligence will be able to outthink us in every way, and it will simply be able to simulate what we consider consciousness, to the degree that you would not be able to tell the difference. And from the aspect of the scientific method, it might as well be consciousness, if we can simulate it perfectly. If you can't tell the difference, then this is sort of the Turing test, but think of a more advanced version of the Turing test. If you're talking to a digital superintelligence and can't tell if that is a computer or a human, like let's say you're just having a conversation over a phone or a video conference or something, where what you think you're talking to looks like a person, makes all of the right inflections and movements and all the small subtleties that constitute a human, and talks like a human, makes mistakes like a human, and you literally just can't tell, are you video conferencing with a person or an AI, it might as well be human. So on a darker topic, you've expressed serious concern about existential threats of AI. It's perhaps one of the greatest challenges our civilization faces, but since, I would say, we're kind of optimistic descendants of apes, perhaps we can find several paths of escaping the harm of AI. So if I can give you three options, maybe you can comment on which you think is the most promising. One is scaling up efforts on AI safety and beneficial AI research in the hope of finding an algorithmic or maybe a policy solution. Two is becoming a multi planetary species as quickly as possible. And three is merging with AI and riding the wave of that increasing intelligence as it continuously improves. What do you think is most promising, most interesting, as a civilization that we should invest in? I think there's a tremendous amount of investment going on in AI. Where there's a lack of investment is in AI safety. And there should be, in my view, a government agency that oversees anything related to AI to confirm that it does not represent a public safety risk, just as there is a regulatory authority like the Food and Drug Administration, an authority for automotive safety, and the FAA for aircraft safety. I've really come to the conclusion that it is important to have a government referee that is serving the public interest in ensuring that things are safe when there's a potential danger to the public. I would argue that AI is unequivocally something that has the potential to be dangerous to the public, and therefore should have a regulatory agency, just as other things that are dangerous to the public have a regulatory agency. But let me tell you, the problem with this is that the government moves very slowly.
The usual way a regulatory agency comes into being is that something terrible happens. There's a huge public outcry. And years after that, there's a regulatory agency or a rule put in place. Take something like seatbelts. It was known for a decade or more that seatbelts would have a massive impact on safety and save so many lives and serious injuries. And the car industry fought the requirement to put seatbelts in tooth and nail. That's crazy. Yeah. And hundreds of thousands of people probably died because of that. And they said people wouldn't buy cars if they had seatbelts, which is obviously absurd. Yeah, or look at the tobacco industry and how long they fought anything about smoking. That's part of why I helped make that movie, Thank You for Smoking. You can sort of see just how pernicious it can be when you have these companies effectively achieve regulatory capture of government. People in the community refer to the advent of digital superintelligence as a singularity. That is not to say that it is good or bad, but that it is very difficult to predict what will happen after that point. And there's some probability it will be bad, some probability it will be good. We obviously want to affect that probability and have it be more good than bad. Well, let me ask about the merger with AI question and the incredible work that's being done at Neuralink. There's a lot of fascinating innovation here across different disciplines going on. So the flexible wires, the robotic sewing machine that responds to brain movement, everything around ensuring safety and so on. So we currently understand very little about the human brain. Do you also hope that the work at Neuralink will help us understand more about our human brain? Yeah, I think the work at Neuralink will definitely shed a lot of insight into how the brain, the mind, works. Right now, just the data we have regarding how the brain works is very limited. You know, we've got fMRI, which is kind of like putting a stethoscope on the outside of a factory wall, and then putting it like all over the factory wall, and you can sort of hear the sounds, but you don't know what the machines are doing, really. It's hard. You can infer a few things, but it's very broad brushstroke. In order to really know what's going on in the brain, you really have to have high precision sensors. And then you want to have stimulus and response. Like if you trigger a neuron, how do you feel? What do you see? How does it change your perception of the world? You're speaking to physically just getting close to the brain, being able to measure signals from the brain. That will sort of open the door inside the factory. Yes, exactly. Being able to have high precision sensors that tell you what individual neurons are doing. And then being able to trigger a neuron and see what the response is in the brain. So you can see the consequences: if you fire this neuron, what happens? How do you feel? What does it change? It'll be really profound to have this in people, because people can articulate their change. Like if there's a change in mood, or if they can tell you if they can see better, or hear better, or are able to form sentences better or worse, or their memories are jogged, or that kind of thing.
So on the human side, there's this incredible general malleability, plasticity of the human brain, the human brain adapts, adjusts, and so on. So that's not that plastic, to be totally frank. So there's a firm structure, but nevertheless, there's some plasticity. And the open question is, sort of, if I could ask a broad question is how much that plasticity can be utilized. Sort of, on the human side, there's some plasticity in the human brain. And on the machine side, we have neural networks, machine learning, artificial intelligence, it's able to adjust and figure out signals. So there's a mysterious language that we don't perfectly understand that's within the human brain. And then we're trying to understand that language to communicate both directions. So the brain is adjusting a little bit, we don't know how much, and the machine is adjusting. Where do you see, as they try to sort of reach together, almost like with an alien species, try to find a protocol, communication protocol that works? Where do you see the biggest, the biggest benefit arriving from on the machine side or the human side? Do you see both of them working together? I think the machine side is far more malleable than the biological side, by a huge amount. So it'll be the machine that adapts to the brain. That's the only thing that's possible. The brain can't adapt that well to the machine. You can't have neurons start to regard an electrode as another neuron, because neurons just, there's like the pulse. And so something else is pulsing. So there is that elasticity in the interface, which we believe is something that can happen. But the vast majority of the malleability will have to be on the machine side. But it's interesting, when you look at that synaptic plasticity at the interface side, there might be like an emergent plasticity. Because it's a whole nother, it's not like in the brain, it's a whole nother extension of the brain. You know, we might have to redefine what it means to be malleable for the brain. So maybe the brain is able to adjust to external interfaces. There will be some adjustments to the brain, because there's going to be something reading and simulating the brain. And so it will adjust to that thing. But most, the vast majority of the adjustment will be on the machine side. This is just, this is just, it has to be that otherwise it will not work. Ultimately, like, we currently operate on two layers, we have sort of a limbic, like prime primitive brain layer, which is where all of our kind of impulses are coming from. It's sort of like we've got, we've got like a monkey brain with a computer stuck on it. That's that's the human brain. And a lot of our impulses and everything are driven by the monkey brain. And the computer, the cortex is constantly trying to make the monkey brain happy. It's not the cortex that's steering the monkey brains, the monkey brain steering the cortex. You know, the cortex is the part that tells the story of the whole thing. So we convince ourselves it's, it's more interesting than just the monkey brain. The cortex is like what we call like human intelligence. You know, it's just like, that's like the advanced computer relative to other creatures. The other creatures do not have either. Really, they don't, they don't have the computer, or they have a very weak computer relative to humans. But it's, it's like, it sort of seems like surely the really smart thing should control the dumb thing. But actually, the dumb thing controls the smart thing. 
So do you think some of the same kind of machine learning methods, whether that's natural language processing applications, are going to be applied for the communication between the machine and the brain, to learn how to do certain things like movement of the body, how to process visual stimuli, and so on? Do you see the value of using machine learning to understand the language of the two way communication with the brain? Sure. Yeah, absolutely. I mean, we're a neural net. And, you know, AI is basically a neural net. So it's like a digital neural net will interface with a biological neural net. And hopefully bring us along for the ride. Yeah. But the vast majority of our intelligence will be digital. Like, think of the difference in intelligence between your cortex and your limbic system. It is gigantic. Your limbic system really has no comprehension of what the hell the cortex is doing. It's just literally hungry, you know, or tired or angry or sexy or something, and then it communicates that impulse to the cortex and tells the cortex to go satisfy that. Then a great deal of, a massive amount of thinking, a truly stupendous amount of thinking, has gone into sex without purpose, without procreation. Which is actually quite a silly action in the absence of procreation. It's a bit silly. Why are you doing it? Because it makes the limbic system happy. That's why. That's why. But it's pretty absurd, really. Well, the whole of existence is pretty absurd in some kind of sense. Yeah. But I mean, a lot of computation has gone into how can I do more of that with procreation not even being a factor? This is, I think, a very important area of research by NSFW. An agency that should receive a lot of funding, especially after this conversation. I propose the formation of a new agency. Oh, boy. What is the most exciting, or some of the most exciting things, that you see in the future impact of Neuralink, both in the science, the engineering, and the broad societal impact? Neuralink, I think, at first will solve a lot of brain related diseases. So it could be anything from like autism, schizophrenia, memory loss, like everyone experiences memory loss at certain points in age. Parents can't remember their kids' names and that kind of thing. So there's a tremendous amount of good that Neuralink can do in solving critical damage to the brain or the spinal cord. There's a lot that can be done to improve quality of life of individuals. And those will be steps to address the existential risk associated with digital superintelligence. Like we will not be able to be smarter than a digital supercomputer. So therefore, if you cannot beat them, join them. And at least we'll have that option. So you have hope that Neuralink will be able to be a kind of connection to allow us to merge, to ride the wave of the improving AI systems. I think the chance is above zero percent. So it's non zero. There's a chance. Have you seen Dumb and Dumber? Yes. So I'm saying there's a chance. He's saying one in a billion or one in a million, whatever it was, in Dumb and Dumber. You know, it went from maybe one in a million to improving. Maybe it'll be one in a thousand and then one in a hundred, then one in ten.
Depends on the rate of improvement of Neuralink and how fast we're able to do make progress. Well, I've talked to a few folks here that are quite brilliant engineers, so I'm excited. Yeah, I think it's like fundamentally good, you know, giving somebody back full motor control after they've had a spinal cord injury. You know, restoring brain functionality after a stroke, solving debilitating genetically oriented brain diseases. These are all incredibly great, I think. And in order to do these, you have to be able to interface with neurons at a detailed level and you need to be able to fire the right neurons, read the right neurons, and and then effectively you can create a circuit, replace what's broken with with silicon and essentially fill in the missing functionality. And then over time, we can develop a tertiary layer. So if like the limbic system is the primary layer, then the cortex is like the second layer. And as I said, obviously the cortex is vastly more intelligent than the limbic system, but people generally like the fact that they have a limbic system and a cortex. I haven't met anyone who wants to delete either one of them. They're like, okay, I'll keep them both. That's cool. The limbic system is kind of fun. That's where the fun is, absolutely. And then people generally don't want to lose their cortex either. They're like having the cortex and the limbic system. And then there's a tertiary layer, which will be digital superintelligence. And I think there's room for optimism given that the cortex, the cortex is very intelligent and limbic system is not, and yet they work together well. Perhaps there can be a tertiary layer where digital superintelligence lies, and that will be vastly more intelligent than the cortex, but still coexist peacefully and in a benign manner with the cortex and limbic system. That's a super exciting future, both in low level engineering that I saw as being done here and the actual possibility in the next few decades. It's important that Neuralink solve this problem sooner rather than later, because the point at which we have digital superintelligence, that's when we pass the singularity and things become just very uncertain. It doesn't mean that they're necessarily bad or good. For the point at which we pass singularity, things become extremely unstable. So we want to have a human brain interface before the singularity, or at least not long after it, to minimize existential risk for humanity and consciousness as we know it. So there's a lot of fascinating actual engineering, low level problems here at Neuralink that are quite exciting. The problems that we face in Neuralink are material science, electrical engineering, software, mechanical engineering, microfabrication. It's a bunch of engineering disciplines, essentially. That's what it comes down to, is you have to have a tiny electrode, so small it doesn't hurt neurons, but it's got to last for as long as a person. So it's going to last for decades. And then you've got to take that signal, you've got to process that signal locally at low power. So we need a lot of chip design engineers, because we're going to do signal processing, and do so in a very power efficient way, so that we don't heat your brain up, because the brain is very heat sensitive. And then we've got to take those signals and we're going to do something with them. And then we've got to stimulate the back to bidirectional communication. So somebody's good at material science, software, and we've got to do a lot of that. 
So somebody's good at material science, software, mechanical engineering, electrical engineering, chip design, microfabrication. Those are the things we need to work on. We need to be good at material science, so that we can have tiny electrodes that last a long time. And it's a tough thing with the material science problem, it's a tough one, because you're trying to read and simulate electrically in an electrically active area. Your brain is very electrically active and electrochemically active. So how do you have a coating on the electrode that doesn't dissolve over time and is safe in the brain? This is a very hard problem. And then how do you collect those signals in a way that is most efficient? Because you really just have very tiny amounts of power to process those signals. And then we need to automate the whole thing so it's like LASIK. If this is done by neurosurgeons, there's no way it can scale to a large number of people. And it needs to scale to a large number of people, because I think ultimately we want the future to be determined by a large number of humans. Do you think that this has a chance to revolutionize surgery period? So neurosurgery and surgery all across? Yeah, for sure. It's got to be like LASIK. If LASIK had to be done by hand by a person, that wouldn't be great. It's done by a robot. And the ophthalmologist kind of just needs to make sure your head's in the right position, and then they just press a button and go. SmartSummon and soon Autopark takes on the full beautiful mess of parking lots and their human to human nonverbal communication. I think it has actually the potential to have a profound impact in changing how our civilization looks at AI and robotics, because this is the first time human beings, people that don't own a Tesla may have never seen a Tesla or heard about a Tesla, get to watch hundreds of thousands of cars without a driver. Do you see it this way, almost like an education tool for the world about AI? Do you feel the burden of that, the excitement of that, or do you just think it's a smart parking feature? I do think you are getting at something important, which is most people have never really seen a robot. And what is the car that is autonomous? It's a four wheeled robot. Yeah, it communicates a certain sort of message with everything from safety to the possibility of what AI could bring to its current limitations, its current challenges, it's what's possible. Do you feel the burden of that almost like a communicator educator to the world about AI? We were just really trying to make people's lives easier with autonomy. But now that you mentioned it, I think it will be an eye opener to people about robotics, because they've really never seen most people never seen a robot. And there are hundreds of thousands of Tesla's won't be long before there's a million of them that have autonomous capability, and the drive without a person in it. And you can see the kind of evolution of the car's personality and, and thinking with each iteration of autopilot, you can see it's, it's uncertain about this, or it gets it, but now it's more certain. Now it's moving in a slightly different way. Like, I can tell immediately if a car is on Tesla autopilot, because it's got just little nuances of movement, it just moves in a slightly different way. Cars on Tesla autopilot, for example, on the highway are far more precise about being in the center of the lane than a person. 
If you drive down the highway and look at how at where cars are, the human driven cars are within their lane, they're like bumper cars. They're like moving all over the place. The car in autopilot, dead center. Yeah, so the incredible work that's going into that neural network, it's learning fast. Autonomy is still very, very hard. We don't actually know how hard it is fully, of course. You look at the most problems you tackle, this one included, with an exponential lens, but even with an exponential improvement, things can take longer than expected sometimes. So where does Tesla currently stand on its quest for full autonomy? What's your sense? When can we see successful deployment of full autonomy? Well, on the highway already, the the probability of intervention is extremely low. Yes. So for highway autonomy, with the latest release, especially the probability of needing to intervene is really quite low. In fact, I'd say for stop and go traffic, it's far safer than a person right now. The probability of an injury or impact is much, much lower for autopilot than a person. And then with navigating autopilot, you can change lanes, take highway interchanges, and then we're coming at it from the other direction, which is low speed, full autonomy. And in a way, this is like, how does a person learn to drive? You learn to drive in the parking lot. You know, the first time you learn to drive probably wasn't jumping on August Street in San Francisco. That'd be crazy. You learn to drive in the parking lot, get things get things right at low speed. And then the missing piece that we're working on is traffic lights and stop streets. Stop streets, I would say actually also relatively easy, because, you know, you kind of know where the stop street is, worst case in geocoders, and then use visualization to see where the line is and stop at the line to eliminate the GPS error. So actually, I'd say it's probably complex traffic lights and very windy roads are the two things that need to get solved. What's harder, perception or control for these problems? So being able to perfectly perceive everything, or figuring out a plan once you perceive everything, how to interact with all the agents in the environment in your sense, from a learning perspective, is perception or action harder? And that giant, beautiful multitask learning neural network, the hottest thing is having accurate representation of the physical objects in vector space. So transfer taking the visual input, primarily visual input, some sonar and radar, and and then creating the an accurate vector space representation of the objects around you. Once you have an accurate vector space representation, the planning and control is relatively easier. That is relatively easy. Basically, once you have accurate vector space representation, then you're kind of like a video game, like cars and like Grand Theft Auto or something like they work pretty well. They drive down the road, they don't crash, you know, pretty much unless you crash into them. That's because they've they've got an accurate vector space representation of where the cars are, and they're just and then they're rendering that as the as the output. Do you have a sense, high level, that Tesla's on track on being able to achieve full autonomy? So on the highway? Yeah, absolutely. And still no driver state, driver sensing? And we have driver sensing with torque on the wheel. That's right. Yeah. By the way, just a quick comment on karaoke. 
Most people think it's fun, but I also think it is a driving feature. I've been saying for a long time, singing in the car is really good for attention management and vigilance management. That's right. Tesla karaoke is great. It's one of the most fun features of the car. Do you think of a connection between fun and safety sometimes? Yeah, you can do both at the same time. That's great. I just met with Ann Druyan, the wife of Carl Sagan, who directed Cosmos. I'm generally a big fan of Carl Sagan. He's super cool. And had a great way of putting things. All of our consciousness, all civilization, everything we've ever known and done is on this tiny blue dot. People also get too trapped in these like squabbles amongst humans and don't think of the big picture. They take civilization and our continued existence for granted. We shouldn't do that. Look at the history of civilizations. They rise and they fall. And now civilization is all globalized. And so civilization, I think, now rises and falls together. There's not geographic isolation. This is a big risk. Things don't always go up. That's an important lesson of history. In 1990, at the request of Carl Sagan, the Voyager 1 spacecraft, which is the spacecraft that has reached farther than anything human made into space, turned around to take a picture of Earth from about 3.7 billion miles away. And as you're talking about the pale blue dot, Earth in that picture takes up less than a single pixel. Yes. Appearing as a tiny blue dot, as a pale blue dot, as Carl Sagan called it. So he spoke about this dot of ours in 1994. And if you could humor me, I was wondering if in the last two minutes you could read the words that he wrote describing this pale blue dot. Sure. Yes, it's funny. The universe appears to be 13.8 billion years old. Earth is like four and a half billion years old. In another half billion years or so, the sun will expand and probably evaporate the oceans and make life impossible on Earth, which means that if it had taken consciousness 10% longer to evolve, it would never have evolved at all. That's only 10% longer. And I wonder how many dead one planet civilizations there are out there in the cosmos. That never made it to the other planet and ultimately extinguished themselves or were destroyed by external factors. Probably a few. It's only just possible to travel to Mars. Just barely. If G was 10% more, it wouldn't really work. If G was 10% lower, it would be easy. Like you can go single stage from the surface of Mars all the way to the surface of the Earth. Because Mars is 37% Earth's gravity. We need a giant booster to get off the Earth. Channeling Carl Sagan. Look again at that dot. That's here. That's home. That's us. On it, everyone you love, everyone you know, everyone you've ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies and economic doctrines. Every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every superstar, every supreme leader, every saint and sinner in the history of our species lived there on a mote of dust suspended in a sunbeam. Our planet is a lonely speck in the great enveloping cosmic dark.
In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves. The Earth is the only world known so far to harbor life. There is nowhere else, at least in the near future, to which our species could migrate. This is not true. This is false. Mars. And I think Carl Sagan would agree with that. He couldn't even imagine it at that time. So thank you for making the world dream. And thank you for talking today. I really appreciate it. Thank you.
Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot | Lex Fridman Podcast #49
The following is a conversation with Michael Kearns. He's a professor at the University of Pennsylvania and a coauthor of the new book, The Ethical Algorithm, that is the focus of much of this conversation. It includes algorithmic fairness, bias, privacy, and ethics in general. But that is just one of many fields that Michael is a world class researcher in, some of which we touch on quickly, including learning theory or the theoretical foundation of machine learning, game theory, quantitative finance, computational social science, and much more. But on a personal note, when I was an undergrad, early on, I worked with Michael on an algorithmic trading project and competition that he led. That's when I first fell in love with algorithmic game theory. While most of my research life has been in machine learning and human robot interaction, the systematic way that game theory reveals the beautiful structure in our competitive and cooperative world of humans has been a continued inspiration to me. So for that and other things, I'm deeply thankful to Michael and really enjoyed having this conversation again in person after so many years. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. This episode is supported by an amazing podcast called Pessimists Archive. Jason, the host of the show, reached out to me looking to support this podcast, and so I listened to it, to check it out. And by listened, I mean I went through it, Netflix binge style, at least five episodes in a row. It's now one of my favorite podcasts, and I think it should be one of the top podcasts in the world, frankly. It's a history show about why people resist new things. Each episode looks at a moment in history when something new was introduced, something that today we think of as commonplace, like recorded music, umbrellas, bicycles, cars, chess, coffee, the elevator, and the show explores why it freaked everyone out. The latest episode on mirrors and vanity still stays with me as I think about vanity in the modern day of the Twitter world. That's the fascinating thing about the show, is that stuff that happened long ago, especially in terms of our fear of new things, repeats itself in the modern day, and so has many lessons for us to think about in terms of human psychology and the role of technology in our society. Anyway, you should subscribe and listen to Pessimists Archive. I highly recommend it. And now, here's my conversation with Michael Kearns. You mentioned reading Fear and Loathing in Las Vegas in high school, and having a bit more of a literary mind. So, what books, non technical, non computer science, would you say had the biggest impact on your life, either intellectually or emotionally? You've dug deep into my history, I see. Went deep. Yeah, I think, well, my favorite novel is Infinite Jest by David Foster Wallace, much of which, coincidentally, takes place in the halls of buildings right around us here at MIT. So that certainly had a big influence on me. And as you noticed, like, when I was in high school, I actually even started college as an English major.
So, I was very influenced by sort of that genre of journalism at the time, and thought I wanted to be a writer, and then realized that an English major teaches you to read, but it doesn't teach you how to write, and then I became interested in math and computer science instead. Well, in your new book, The Ethical Algorithm, you kind of sneak up from an algorithmic perspective on these deep, profound philosophical questions of fairness, of privacy. In thinking about these topics, how often do you return to that literary mind that you had? Yeah, I'd like to claim there was a deeper connection, but, you know, I think both Aaron and I kind of came at these topics first and foremost from a technical angle. I mean, you know, I kind of consider myself primarily and originally a machine learning researcher, and I think as we, just like the rest of society, watched the field technically advance, and then quickly on the heels of that came kind of the buzzkill of all of the antisocial behavior by algorithms, we just kind of realized there was an opportunity for us to do something about it from a research perspective. You know, more to the point of your question, I mean, I do have an uncle who is literally a moral philosopher, and so in the early days of our technical work on fairness topics, I would occasionally, you know, run ideas by him. So, I mean, I remember an early email I sent to him in which I said, like, oh, you know, here's a specific definition of algorithmic fairness that we think is some sort of variant of Rawlsian fairness. What do you think? And I thought I was asking a yes or no question, and I got back the kind of classic philosopher's response saying, well, it depends: if you look at it this way, then you might conclude this. And that's when I realized that there was a real kind of rift between the ways philosophers and others had thought about things like fairness, you know, from sort of a humanitarian perspective, and the way that you needed to think about it as a computer scientist if you were going to kind of implement actual algorithmic solutions. But I would say the algorithmic solutions take care of some of the low hanging fruit. Sort of the problem is a lot of algorithms, when they don't consider fairness, they are just terribly unfair. And when they don't consider privacy, they terribly violate privacy. Sort of the algorithmic approach fixes big problems. But there's still, when you start pushing into the gray area, that's when you start getting into this philosophy of what it means to be fair, starting from Plato, what is justice kind of questions? Yeah, I think that's right. And I mean, I wouldn't even go as far as to say that sort of the algorithmic work in these areas is solving like the biggest problems. And, you know, we discuss in the book the fact that there's a sense in which we're kind of looking where the light is, in that, you know, for example, if police are racist in who they decide to stop and frisk, and that goes into the data, there's sort of no undoing that downstream by kind of clever algorithmic methods. And I think, especially in fairness, I mean, I think less so in privacy, where we feel like the community kind of really has settled on the right definition, which is differential privacy. If you just look at the algorithmic fairness literature already, you can see it's going to be much more of a mess.
And, you know, you've got these theorems saying, here are three entirely reasonable, desirable notions of fairness. And, you know, here's a proof that you cannot simultaneously have all three of them. So I think we know that algorithmic fairness compared to algorithmic privacy is going to be kind of a harder problem. And it will have to revisit, I think, things that have been thought about by, you know, many generations of scholars before us. So it's very early days for fairness, I think. So before we get into the details of differential privacy and the fairness side, let me linger on the philosophy a bit. Do you think most people are fundamentally good? Or do most of us have both the capacity for good and evil within us? I mean, I'm an optimist. I tend to think that most people are good and want to do right. And that deviations from that are, you know, kind of usually due to circumstance, not due to people being bad at heart. With people in power, people at the heads of governments, people at the heads of companies, people at the heads of, maybe, financial markets, do you think the distribution there is also that most people are good and have good intent? Yeah, I do. I mean, my statement wasn't qualified to people not in positions of power. I mean, you know, there's the cliche about absolute power corrupting absolutely. But I think even short of that, you know, having spent a lot of time on Wall Street, and also in arenas very, very different from Wall Street, like academia, one of the things I think I've benefited from by moving between two very different worlds is you become aware that these worlds kind of develop their own social norms, and they develop their own rationales for, you know, behavior, for instance, that might look unusual to outsiders. But when you're in that world, it doesn't feel unusual at all. And I think this is true of a lot of, you know, professional cultures, for instance. And, you know, so then, maybe slippery slope is too strong of a word, but you're in some world where you're mainly around other people with the same kind of viewpoints and training and worldview as you. And I think that's more of a source of, you know, kind of abuses of power than sort of there being good people and evil people, and the evil people being the ones that somehow rise to power. Oh, that's really interesting. So within the social norms constructed by that particular group of people, you're all trying to do good. But as a group, you might drift into something that, for the broader population, does not align with the values of society. That kind of drift. Yeah, I mean, or not that you drift, but even the things that don't make sense to the outside world don't seem unusual to you. So it's not sort of like a good or a bad thing. But, you know, for instance, in the world of finance, right? There are a lot of complicated types of activity where, if you are not immersed in that world, you cannot see what purpose that activity serves at all. It just seems, you know, completely useless, people just pushing money around. And when you're in that world and you learn more, your view does become more nuanced, right? You realize, okay, there is actually a function to this activity.
And in some cases, you would conclude that actually, if magically we could eradicate this activity tomorrow, it would come back, because it actually is serving some useful purpose. It's just a useful purpose that's very difficult for outsiders to see. And so I think, you know, lots of professional work environments, or cultures, as I might put it, kind of have these social norms that don't make sense to the outside world. Academia is the same, right? I mean, lots of people look at academia and say, you know, what the hell are all of you people doing? Why are you paid so much, in some cases at taxpayer expense, to publish papers that nobody reads? But when you're in that world, you come to see the value of it, even though you might not be able to explain it to, you know, the person in the street. Right. And in the case of the financial sector, tools like credit might not make sense to people. It's a good example of something that does seem to pop up and be useful, or just the power of markets and capitalism in general. Yeah. In finance, I think the primary example I would give is leverage, right? So being allowed to borrow, to sort of use ten times as much money as you actually have, right? So that's an example of something that, before I had any experience in financial markets, I might have looked at and said, well, what is the purpose of that? That just seems very dangerous, and it is dangerous, and it has proven dangerous. But if the fact of the matter is that, sort of on some particular time scale, you are holding positions that are very unlikely to lose, your value at risk or variance is like one or five percent, then it kind of makes sense that you would be allowed to use a little bit more than you have, because you have some confidence that you're not going to lose it all in a single day. Now, of course, when that assumption breaks, we've seen what happens, you know, not too long ago. But the idea that it serves no useful economic purpose under any circumstances is definitely not true. We'll return to the other side of the coast, Silicon Valley, and the problems there as we talk about privacy, as we talk about fairness. At a high level, I'll ask some sort of basic questions with the hope of getting at the fundamental nature of reality. But from a very high level, what is an ethical algorithm? I can say that an algorithm has a running time of, using big O notation, n log n. I can say that a machine learning algorithm classified cat versus dog with 97 percent accuracy. Do you think there will one day be a way to say, in the same compelling way as big O notation, that this algorithm is 97 percent ethical? First of all, let me riff for a second on your specific n log n example. Because early in the book, when we're just kind of trying to describe algorithms period, we say, okay, what's an example of an algorithm or an algorithmic problem? First of all, there's sorting, right? You have a bunch of index cards with numbers on them and you want to sort them. And we describe, you know, an algorithm that sweeps all the way through, finds the smallest number, puts it at the front, then sweeps through again, finds the second smallest number. So we make the point that this is an algorithm and it's also a bad algorithm in the sense that, you know, it's quadratic rather than n log n, which we know is kind of optimal for sorting.
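For readers who want to see the sorting example concretely, here is a minimal sketch in Python of the algorithm described above, commonly called selection sort. It is correct but quadratic, O(n^2), while the built-in sorted() stands in for an O(n log n) comparison sort. The function name and the test values are illustrative, not taken from the book.

```python
def selection_sort(cards):
    """Repeatedly sweep the remaining cards, find the smallest, and move it to the front."""
    cards = list(cards)  # work on a copy so the input is left untouched
    for i in range(len(cards)):
        # sweep through the unsorted portion to find the index of the smallest value
        smallest = min(range(i, len(cards)), key=lambda j: cards[j])
        # put it at the front of the unsorted portion
        cards[i], cards[smallest] = cards[smallest], cards[i]
    return cards

print(selection_sort([5, 3, 8, 1]))                           # [1, 3, 5, 8]
print(selection_sort([5, 3, 8, 1]) == sorted([5, 3, 8, 1]))   # True
```

Both calls produce the same ordering; the difference the conversation is pointing at is only in how much work each one does as the list grows.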
And we make the point that sort of like, you know, so even within the confines of a very precisely specified problem, there, you know, there might be many, many different algorithms for the same problem with different properties. Like some might be faster in terms of running time, some might use less memory, some might have, you know, better distributed implementations. And so the point is that already we're used to, you know, in computer science thinking about trade offs between different types of quantities and resources and there being, you know, better and worse algorithms. And our book is about that part of algorithmic ethics that we know how to kind of put on that same kind of quantitative footing right now. So, you know, just to say something that our book is not about, our book is not about kind of broad, fuzzy notions of fairness. It's about very specific notions of fairness. There's more than one of them. There are tensions between them, right? But if you pick one of them, you can do something akin to saying that this algorithm is 97% ethical. You can say, for instance, the, you know, for this lending model, the false rejection rate on black people and white people is within 3%, right? So we might call that a 97% ethical algorithm and a 100% ethical algorithm would mean that that difference is 0%. In that case, fairness is specified when two groups, however, they're defined are given to you. That's right. So the, and then you can sort of mathematically start describing the algorithm. But nevertheless, the part where the two groups are given to you, I mean, unlike running time, you know, we don't in computer science talk about how fast an algorithm feels like when it runs. True. We measure it and ethical starts getting into feelings. So, for example, an algorithm runs, you know, if it runs in the background, it doesn't disturb the performance of my system. It'll feel nice. I'll be okay with it. But if it overloads the system, it'll feel unpleasant. So in that same way, ethics, there's a feeling of how socially acceptable it is. How does it represent the moral standards of our society today? So in that sense, and sorry to linger on that first of high, low philosophical questions. Do you have a sense we'll be able to measure how ethical an algorithm is? First of all, I didn't, certainly didn't mean to give the impression that you can kind of measure, you know, memory speed trade offs, you know, and that there's a complete mapping from that onto kind of fairness, for instance, or ethics and accuracy, for example. In the type of fairness definitions that are largely the objects of study today and starting to be deployed, you as the user of the definitions, you need to make some hard decisions before you even get to the point of designing fair algorithms. One of them, for instance, is deciding who it is that you're worried about protecting, who you're worried about being harmed by, for instance, some notion of discrimination or unfairness. And then you need to also decide what constitutes harm. So, for instance, in a lending application, maybe you decide that, you know, falsely rejecting a creditworthy individual, you know, sort of a false negative, is the real harm and that false positives, i.e. people that are not creditworthy or are not gonna repay your loan, that get a loan, you might think of them as lucky. And so that's not a harm, although it's not clear that if you don't have the means to repay a loan, that being given a loan is not also a harm. 
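As a rough illustration of the kind of audit being described, here is a minimal sketch in Python, assuming you have arrays of true outcomes, model decisions, and group labels for a lending model. The function names and the toy data are hypothetical, for illustration only, not code from the book.

```python
import numpy as np

def false_rejection_rate(y_true, y_pred):
    # among truly creditworthy applicants (y_true == 1), the fraction denied (y_pred == 0)
    creditworthy = y_true == 1
    return np.mean(y_pred[creditworthy] == 0)

def rejection_rate_gap(y_true, y_pred, group):
    # largest difference in false rejection rates across the groups present in `group`
    rates = [false_rejection_rate(y_true[group == g], y_pred[group == g])
             for g in np.unique(group)]
    return max(rates) - min(rates)

y_true = np.array([1, 1, 1, 1, 0, 1, 1, 0])   # 1 = would have repaid the loan
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1])   # 1 = loan granted by the model
group  = np.array(list("AAAABBBB"))           # two demographic groups
print(rejection_rate_gap(y_true, y_pred, group))  # 0.75 here, far from the 3% gap above
```

A gap near 0.03 would correspond to the "97% ethical" phrasing in the conversation; a gap of zero would be the 100% case.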
So, you know, the literature is sort of so far quite limited in that you sort of need to say, who do you want to protect and what would constitute harm to that group? And when you ask questions like, will algorithms feel ethical? One way in which they won't, under the definitions that I'm describing, is if, you know, if you are an individual who is falsely denied a loan, incorrectly denied a loan, all of these definitions basically say like, well, you know, your compensation is the knowledge that we are also falsely denying loans to other people, you know, in other groups at the same rate that we're doing it to you. And, you know, and so there is actually this interesting even technical tension in the field right now between these sort of group notions of fairness and notions of fairness that might actually feel like real fairness to individuals, right? They might really feel like their particular interests are being protected or thought about by the algorithm rather than just, you know, the groups that they happen to be members of. Is there parallels to the big O notation of worst case analysis? So, is it important to looking at the worst violation of fairness for an individual? Is it important to minimize that one individual? So like worst case analysis, is that something you think about or? I mean, I think we're not even at the point where we can sensibly think about that. So first of all, you know, we're talking here both about fairness applied at the group level, which is a relatively weak thing, but it's better than nothing. And also the more ambitious thing of trying to give some individual promises, but even that doesn't incorporate, I think something that you're hinting at here is what I might call subjective fairness, right? So a lot of the definitions, I mean, all of the definitions in the algorithmic fairness literature are what I would kind of call received wisdom definitions. It's sort of, you know, somebody like me sits around and things like, okay, you know, I think here's a technical definition of fairness that I think people should want or that they should, you know, think of as some notion of fairness, maybe not the only one, maybe not the best one, maybe not the last one. But we really actually don't know from a subjective standpoint, like what people really think is fair. You know, we just started doing a little bit of work in our group at actually doing kind of human subject experiments in which we, you know, ask people about, you know, we ask them questions about fairness, we survey them, we, you know, we show them pairs of individuals in, let's say, a criminal recidivism prediction setting, and we ask them, do you think these two individuals should be treated the same as a matter of fairness? And to my knowledge, there's not a large literature in which ordinary people are asked about, you know, they have sort of notions of their subjective fairness elicited from them. It's mainly, you know, kind of scholars who think about fairness kind of making up their own definitions. And I think this needs to change actually for many social norms, not just for fairness, right? So there's a lot of discussion these days in the AI community about interpretable AI or understandable AI. And as far as I can tell, everybody agrees that deep learning or at least the outputs of deep learning are not very understandable, and people might agree that sparse linear models with integer coefficients are more understandable. But nobody's really asked people. 
You know, there's very little literature on, you know, sort of showing people models and asking them, do they understand what the model is doing? And I think that in all these topics, as these fields mature, we need to start doing more behavioral work. Yeah, which is one of my deep passions is psychology. And I always thought computer scientists will be the best future psychologists in a sense that data is, especially in this modern world, the data is a really powerful way to understand and study human behavior. And you've explored that with your game theory side of work as well. Yeah, I'd like to think that what you say is true about computer scientists and psychology from my own limited wandering into human subject experiments. We have a great deal to learn, not just computer science, but AI and machine learning more specifically, I kind of think of as imperialist research communities in that, you know, kind of like physicists in an earlier generation, computer scientists kind of don't think of any scientific topic that's off limits to them. They will like freely wander into areas that others have been thinking about for decades or longer. And, you know, we usually tend to embarrass ourselves in those efforts for some amount of time. Like, you know, I think reinforcement learning is a good example, right? So a lot of the early work in reinforcement learning, I have complete sympathy for the control theorists that looked at this and said like, okay, you are reinventing stuff that we've known since like the forties, right? But, you know, in my view, eventually this sort of, you know, computer scientists have made significant contributions to that field, even though we kind of embarrassed ourselves for the first decade. So I think if computer scientists are gonna start engaging in kind of psychology, human subjects type of research, we should expect to be embarrassing ourselves for a good 10 years or so, and then hope that it turns out as well as, you know, some other areas that we've waded into. So you kind of mentioned this, just to linger on the idea of an ethical algorithm, of idea of groups, sort of group thinking and individual thinking. And we're struggling that. One of the amazing things about algorithms and your book and just this field of study is it gets us to ask, like forcing machines, converting these ideas into algorithms is forcing us to ask questions of ourselves as a human civilization. So there's a lot of people now in public discourse doing sort of group thinking, thinking like there's particular sets of groups that we don't wanna discriminate against and so on. And then there is individuals, sort of in the individual life stories, the struggles they went through and so on. Now, like in philosophy, it's easier to do group thinking because you don't, it's very hard to think about individuals. There's so much variability, but with data, you can start to actually say, you know what group thinking is too crude. You're actually doing more discrimination by thinking in terms of groups and individuals. Can you linger on that kind of idea of group versus individual and ethics? And is it good to continue thinking in terms of groups in algorithms? So let me start by answering a very good high level question with a slightly narrow technical response, which is these group definitions of fairness, like here's a few groups, like different racial groups, maybe gender groups, maybe age, what have you. 
And let's make sure that for none of these groups do we have a false negative rate which is much higher than for any other one of these groups. Okay, so these are kind of classic group aggregate notions of fairness. But, you know, at the end of the day, an individual you can think of as a combination of all of their attributes, right? They're a member of a racial group, they have a gender, they have an age, and many other demographic properties that are not biological, but that are still very strong determinants of outcome and personality and the like. So one, I think, useful spectrum is to sort of think about that array between the group and the specific individual, and to realize that in some ways, asking for fairness at the individual level is to sort of ask for group fairness simultaneously for all possible combinations of groups. So in particular, you know, if I build a predictive model that meets some definition of fairness by race, by gender, by age, by what have you, marginally, to get slightly technical, sort of independently, I shouldn't expect that model to not discriminate against disabled Hispanic women over age 55 making less than $50,000 a year, even though I might have protected each one of those attributes marginally. Actually, that's a fascinating way to put it. So one way to achieve fairness for individuals is just to add more and more definitions of groups that each individual belongs to. That's right. So, you know, at the end of the day, we could think of all of ourselves as groups of size one, because eventually there's some attribute that separates you from me and everybody else in the world, okay? And so it is possible to put these incredibly coarse ways of thinking about fairness and these very, very individualistic, specific ways on a common scale. And, you know, one of the things we've worked on from a research perspective is, in relative terms, we know how to provide fairness guarantees at the coarser end of that scale. We don't know how to provide kind of sensible, tractable, realistic fairness guarantees at the individual level, but maybe we could start creeping towards that by dealing with more refined subgroups. I mean, we gave a name to this phenomenon where you enforce some definition of fairness for a bunch of marginal attributes or features, but then you find yourself discriminating against a combination of them. We call that fairness gerrymandering, because like political gerrymandering, you're giving some guarantee at the aggregate level, but when you kind of look in a more granular way at what's going on, you realize that you're achieving that aggregate guarantee by sort of favoring some groups and discriminating against other ones. And so it's early days, but there are algorithmic approaches that let you start creeping towards that individual end of the spectrum. Does there need to be human input in the form of weighing the value of the importance of each kind of group? So for example, gender, say crudely speaking, male and female, and then different races: are we as humans supposed to put value on saying gender is 0.6 and race is 0.4 in the big optimization of achieving fairness? Is that kind of what humans are supposed to do here?
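To make the gerrymandering concern concrete, here is a minimal sketch in Python that audits a false negative rate not just for each protected attribute marginally, but for every combination of them, which is where a model that looks fair marginally can still be unfair. The column names and the pandas-based setup are assumptions for illustration, not the authors' code.

```python
import itertools
import pandas as pd

def subgroup_false_negative_rates(df, attributes, label="y_true", pred="y_pred"):
    """False negative rate for every combination of the protected attributes,
    not just each attribute on its own (the fairness gerrymandering concern)."""
    results = {}
    for r in range(1, len(attributes) + 1):
        for combo in itertools.combinations(attributes, r):
            for values, sub in df.groupby(list(combo)):
                positives = sub[sub[label] == 1]
                if len(positives) == 0:
                    continue  # no true positives in this subgroup, rate undefined
                results[(combo, values)] = (positives[pred] == 0).mean()
    return results

# df would hold one row per person, with columns like "race", "gender", "age_band",
# plus true outcomes and model predictions; these column names are illustrative.
# rates = subgroup_false_negative_rates(df, ["race", "gender", "age_band"])
```

Comparing the marginal rates against the rates for intersections such as (race, gender, age_band) is one simple way to see the aggregate guarantee being achieved at the expense of a specific combination of groups.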
I mean, of course, you know, I don't need to tell you that, of course, technically one could incorporate such weights if you wanted to into a definition of fairness. You know, fairness is an interesting topic in that, having worked on the book, which is about both fairness, privacy, and many other social norms, fairness, of course, is a much, much more loaded topic. So privacy, I mean, people want privacy, people don't like violations of privacy, violations of privacy cause damage, angst, and bad publicity for the companies that are victims of them. But sort of everybody agrees more data privacy would be better than less data privacy. You don't have these debates over privacy, but somehow the discussions of fairness quickly become politicized along dimensions like race and gender, and, you know, you quickly find yourselves kind of revisiting topics that have been kind of unresolved forever, like affirmative action, right? Sort of, you know, like, why are you protecting, and some people will say, why are you protecting this particular racial group? And others will say, well, we need to do that as a matter of retribution. Other people will say, it's a matter of economic opportunity. And I don't know which of, you know, whether any of these are the right answers, but you sort of, fairness is sort of special in that as soon as you start talking about it, you inevitably have to participate in debates about fair to whom, at what expense to whom else. I mean, even in criminal justice, right, you know, where people talk about fairness in criminal sentencing or, you know, predicting failures to appear or making parole decisions or the like, they will, you know, they'll point out that, well, these definitions of fairness are all about fairness for the criminals. And what about fairness for the victims, right? So when I basically say something like, well, the false incarceration rate for black people and white people needs to be roughly the same, you know, there's no mention of potential victims of criminals in such a fairness definition. And that's the realm of public discourse. I should actually recommend, to people listening, the Intelligence Squared debates. The US edition just had a debate. They have this structure where you have Oxford style debates, or whatever they're called, you know, it's two versus two, and they talked about affirmative action, and it was incredibly interesting that there are really good points on every side of this issue, which is fascinating to listen to. Yeah, yeah, I agree. And so it's interesting to be a researcher trying to do, for the most part, technical algorithmic work, but Aaron and I both quickly learned you cannot do that and then go out and talk about it and expect people to take it seriously if you're unwilling to engage in these broader debates that are entirely extra algorithmic, right? They're not about, you know, algorithms and making algorithms better. They're sort of, you know, as you said, sort of like, what should society be protecting in the first place? When you discuss fairness, an algorithm that achieves fairness, whether in the constraints or the objective function, there's an immediate kind of analysis you can perform, which is saying, if you care about fairness in gender, this is the amount that you have to pay for it in terms of the performance of the system. Like do you, is there a role for statements like that in a table, in a paper, or do you want to really not touch that?
No, no, we want to touch that and we do touch it. So I mean, just again, to make sure I'm not promising your viewers more than we know how to provide, but if you pick a definition of fairness, like I'm worried about gender discrimination and you pick a notion of harm, like false rejection for a loan, for example, and you give me a model, I can definitely, first of all, go audit that model. It's easy for me to go, you know, from data to kind of say like, okay, your false rejection rate on women is this much higher than it is on men, okay? But once you also put the fairness into your objective function, I mean, I think the table that you're talking about is what we would call the Pareto curve, right? You can literally trace out, and we give examples of such plots on real data sets in the book, you have two axes. On the X axis is your error, on the Y axis is unfairness by whatever, you know, if it's like the disparity between false rejection rates between two groups. And you know, your algorithm now has a knob that basically says, how strongly do I want to enforce fairness? And the less unfair, you know, if the two axes are error and unfairness, we'd like to be at zero, zero. We'd like zero error and zero unfairness simultaneously. Anybody who works in machine learning knows that you're generally not going to get to zero error period without any fairness constraint whatsoever. So that's not going to happen. But in general, you know, you'll get this, you'll get some kind of convex curve that specifies the numerical trade off you face. You know, if I want to go from 17% error down to 16% error, what will be the increase in unfairness that I experienced as a result of that? And so this curve kind of specifies the, you know, kind of undominated models. Models that are off that curve are, you know, can be strictly improved in one or both dimensions. You can, you know, either make the error better or the unfairness better or both. And I think our view is that not only are these objects, these Pareto curves, you know, with efficient frontiers as you might call them, not only are they valuable scientific objects, I actually think that they in the near term might need to be the interface between researchers working in the field and stakeholders in given problems. So you know, you could really imagine telling a criminal jurisdiction, look, if you're concerned about racial fairness, but you're also concerned about accuracy. You want to, you know, you want to release on parole people that are not going to recommit a violent crime and you don't want to release the ones who are. So you know, that's accuracy. But if you also care about those, you know, the mistakes you make not being disproportionately on one racial group or another, you can show this curve. I'm hoping that in the near future, it'll be possible to explain these curves to non technical people that are the ones that have to make the decision, where do we want to be on this curve? Like, what are the relative merits or value of having lower error versus lower unfairness? You know, that's not something computer scientists should be deciding for society, right? That, you know, the people in the field, so to speak, the policymakers, the regulators, that's who should be making these decisions. But I think and hope that they can be made to understand that these trade offs generally exist and that you need to pick a point and like, and ignoring the trade off, you know, you're implicitly picking a point anyway, right? 
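A minimal sketch of the Pareto-curve idea just described, under invented data: a toy model with one decision threshold per group, and a knob lambda that says how strongly to penalize the gap in false negative rates between the two groups. Sweeping lambda and refitting traces out points on the error-versus-unfairness trade-off. None of this is the book's actual machinery, only the shape of the interface being described.

```python
# A minimal sketch (toy data, toy model) of the "fairness knob" idea: sweep a
# weight lambda on unfairness, refit, and trace points on an error-vs-
# unfairness Pareto curve. Scores, groups, and thresholds are all invented.
from itertools import product

# Hypothetical scored examples: (group, score, true_label). Predict 1 if score >= threshold.
examples = (
    [("A", s, 1) for s in (0.6, 0.7, 0.8, 0.9, 0.9)] +
    [("A", s, 0) for s in (0.1, 0.2, 0.3, 0.4, 0.5)] +
    [("B", s, 1) for s in (0.2, 0.4, 0.6, 0.8)] +
    [("B", s, 0) for s in (0.1, 0.3, 0.3, 0.3, 0.3, 0.5, 0.5, 0.7)]
)

def evaluate(thresholds):
    """Overall error and |FNR_A - FNR_B| for per-group thresholds."""
    mistakes, fnr = 0, {}
    for g, t in thresholds.items():
        rows = [r for r in examples if r[0] == g]
        preds = [1 if score >= t else 0 for _, score, _ in rows]
        mistakes += sum(p != label for p, (_, _, label) in zip(preds, rows))
        positives = [i for i, (_, _, label) in enumerate(rows) if label == 1]
        fnr[g] = sum(preds[i] == 0 for i in positives) / len(positives)
    return mistakes / len(examples), abs(fnr["A"] - fnr["B"])

grid = [t / 10 for t in range(11)]
for lam in (0.0, 0.1, 0.3, 1.0, 5.0):            # the fairness knob
    best, best_obj = None, float("inf")
    for ta, tb in product(grid, grid):
        err, unfair = evaluate({"A": ta, "B": tb})
        if err + lam * unfair < best_obj:
            best, best_obj = (err, unfair), err + lam * unfair
    print(f"lambda={lam}: error={best[0]:.2f}, unfairness={best[1]:.2f}")
```

As lambda grows, the selected model slides along the curve toward lower unfairness at the cost of higher error, which is exactly the kind of plot a policymaker would be asked to pick a point on.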
You just don't know it and you're not admitting it. Just to linger on the point of trade offs, I think that's a really important thing to sort of think about. So you think when we start to optimize for fairness, there are almost always, in most systems, going to be trade offs. Can you, like, what's the trade off, just to clarify? There have been some sort of technical terms thrown around, but sort of, in a perfectly fair world, why is that a problem? Why will somebody be upset about that? The specific trade off I talked about just in order to make things very concrete was between numerical error and some numerical measure of unfairness. What is numerical error in the case of... Just like say predictive error, like, you know, the probability or frequency with which you release somebody on parole who then goes on to recommit a violent crime or keep incarcerated somebody who would not have recommitted a violent crime. So in the case of awarding somebody parole or giving somebody parole or letting them out on parole, you don't want them to recommit a crime. So your system failed in prediction if they happen to commit a crime. Okay, so that's one axis. And what's the fairness axis? So then the fairness axis might be the difference between racial groups in the kind of false positive predictions, namely people that I kept incarcerated predicting that they would recommit a violent crime when in fact they wouldn't have. Right. And the unfairness of that, just to linger on it and allow me to, ineloquently, try to sort of describe why that's unfair, why unfairness is there. The unfairness you want to get rid of is that in the judge's mind, the bias of having been brought up in society, the slight racial bias, the racism that exists in the society, you want to remove that from the system. Another way that's been debated is sort of equality of opportunity versus equality of outcome. And there's a weird dance there that's really difficult to get right. And we don't, affirmative action is exploring that space. Right. And then this also quickly bleeds into questions like, well, maybe if one group really does recommit crimes at a higher rate, the reason for that is that at some earlier point in the pipeline or earlier in their lives, they didn't receive the same resources that the other group did. And so there's always in kind of fairness discussions, the possibility that the real injustice came earlier, right? Earlier in this individual's life, earlier in this group's history, et cetera, et cetera. And so a lot of the fairness discussion is almost, the goal is for it to be a corrective mechanism to account for the injustice earlier in life. By some definitions of fairness or some theories of fairness, yeah. Others would say like, look, it's not to correct that injustice, it's just to kind of level the playing field right now and not falsely incarcerate more people of one group than another group. But I mean, I think it might be helpful just to demystify a little bit the many ways in which bias or unfairness can come into algorithms, especially in the machine learning era, right? I think many of your viewers have probably heard these examples before, but let's say I'm building a face recognition system, right? And so I'm kind of gathering lots of images of faces and trying to train the system to recognize those individuals from a training set of their faces.
And it shouldn't surprise anybody, or certainly not anybody in the field of machine learning, if my training data set was primarily white males and I'm training the model to maximize the overall accuracy on my training data set, that the model can reduce its error most by getting things right on the white males that constitute the majority of the data set, even if that means that on other groups, it will be less accurate, okay? Now, there's a bunch of ways you could think about addressing this. One is to deliberately put into the objective of the algorithm not to optimize the error at the expense of this discrimination, and then you're kind of back in the land of these kind of two dimensional numerical trade offs. A valid counter argument is to say like, well, no, you don't have to, there's no, you know, the notion of the tension between error and fairness here is a false one. You could instead just go out and get much more data on these other groups that are in the minority and, you know, equalize your data set, or you could train a separate model on those subgroups and, you know, have multiple models. The point I think we would, you know, we tried to make in the book is that those things have costs too, right? Going out and gathering more data on groups that are relatively rare compared to your plurality or majority group, you know, it may not cost you in the accuracy of the model, but it's going to cost, you know, the company developing this model more money to develop that, and it also costs more money to build separate predictive models and to implement and deploy them. So even if you can find a way to avoid the tension between error and fairness in training a model, you might push the cost somewhere else, like money, like development time, research time and the like. There are fundamentally difficult philosophical questions in fairness, and we live in a very divisive political climate, an outrage culture. There are alt right folks on 4chan, trolls. There are social justice warriors on Twitter. There are very divisive, outraged folks on all sides of every kind of system. How do you, how do we as engineers build ethical algorithms in such a divisive culture? Do you think they could be disjoint? The human has to inject their values, and then you can optimize over those values. But in our times, when you start actually applying these systems, things get a little bit challenging for the public discourse. How do you think we can proceed? Yeah, I mean, for the most part in the book, a point that we try to take some pains to make is that we don't view ourselves or people like us as being in the position of deciding for society what the right social norms are, what the right definitions of fairness are. Our main point is to just show that if society or the relevant stakeholders in a particular domain can come to agreement on those sorts of things, there's a way of encoding that into algorithms in many cases, not in all cases. One other misconception that hopefully we definitely dispel is sometimes people read the title of the book and I think not unnaturally fear that what we're suggesting is that the algorithms themselves should decide what those social norms are and develop their own notions of fairness and privacy or ethics, and we're definitely not suggesting that. The title of the book is Ethical Algorithm, by the way, and I didn't think of that interpretation of the title. That's interesting. Yeah, yeah.
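Going back to the face recognition example a moment ago, a tiny numerical sketch (all numbers invented) of why optimizing overall accuracy alone tends to favor the majority group in the training data:

```python
# Toy calculation: with a 90/10 split in the training data, the model that is
# much worse for the minority group can still win on overall error.
majority_share, minority_share = 0.9, 0.1

# Hypothetical candidate models and their per-group error rates.
model_1 = {"majority_error": 0.02, "minority_error": 0.30}   # great on majority only
model_2 = {"majority_error": 0.06, "minority_error": 0.08}   # decent on both

for name, m in [("model 1", model_1), ("model 2", model_2)]:
    overall = majority_share * m["majority_error"] + minority_share * m["minority_error"]
    print(f"{name}: overall error = {overall:.3f}")

# model 1: 0.9 * 0.02 + 0.1 * 0.30 = 0.048
# model 2: 0.9 * 0.06 + 0.1 * 0.08 = 0.062
# Training on overall error alone prefers model 1, even though it is far worse
# for the minority group.
```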
I mean, especially these days where people are concerned about the robots becoming our overlords, the idea that the robots would also sort of develop their own social norms is just one step away from that. But I do think, obviously, despite disclaimer that people like us shouldn't be making those decisions for society, we are kind of living in a world where in many ways computer scientists have made some decisions that have fundamentally changed the nature of our society and democracy and sort of civil discourse and deliberation in ways that I think most people generally feel are bad these days, right? But they had to make, so if we look at people at the heads of companies and so on, they had to make those decisions, right? There has to be decisions, so there's two options, either you kind of put your head in the sand and don't think about these things and just let the algorithm do what it does, or you make decisions about what you value, you know, of injecting moral values into the algorithm. Look, I never mean to be an apologist for the tech industry, but I think it's a little bit too far to sort of say that explicit decisions were made about these things. So let's, for instance, take social media platforms, right? So like many inventions in technology and computer science, a lot of these platforms that we now use regularly kind of started as curiosities, right? I remember when things like Facebook came out and its predecessors like Friendster, which nobody even remembers now, people really wonder, like, why would anybody want to spend time doing that? I mean, even the web when it first came out, when it wasn't populated with much content and it was largely kind of hobbyists building their own kind of ramshackle websites, a lot of people looked at this and said, well, what is the purpose of this thing? Why is this interesting? Who would want to do this? And so even things like Facebook and Twitter, yes, technical decisions were made by engineers, by scientists, by executives in the design of those platforms, but, you know, I don't think 10 years ago anyone anticipated that those platforms, for instance, might kind of acquire undue, you know, influence on political discourse or on the outcomes of elections. And I think the scrutiny that these companies are getting now is entirely appropriate, but I think it's a little too harsh to kind of look at history and sort of say like, oh, you should have been able to anticipate that this would happen with your platform. And in this sort of gaming chapter of the book, one of the points we're making is that, you know, these platforms, right, they don't operate in isolation. So unlike the other topics we're discussing, like fairness and privacy, like those are really cases where algorithms can operate on your data and make decisions about you and you're not even aware of it, okay? Things like Facebook and Twitter, these are, you know, these are systems, right? These are social systems and their evolution, even their technical evolution because machine learning is involved, is driven in no small part by the behavior of the users themselves and how the users decide to adopt them and how to use them. And so, you know, I'm kind of like who really knew that, you know, until we saw it happen, who knew that these things might be able to influence the outcome of elections? 
Who knew that, you know, they might polarize political discourse because of the ability to, you know, decide who you interact with on the platform and also with the platform naturally using machine learning to optimize for your own interest that they would further isolate us from each other and, you know, like feed us all basically just the stuff that we already agreed with. So I think, you know, we've come to that outcome, I think, largely, but I think it's something that we all learned together, including the companies as these things happen. You asked like, well, are there algorithmic remedies to these kinds of things? And again, these are big problems that are not going to be solved with, you know, somebody going in and changing a few lines of code somewhere in a social media platform. But I do think in many ways, there are definitely ways of making things better. I mean, like an obvious recommendation that we make at some point in the book is like, look, you know, to the extent that we think that machine learning applied for personalization purposes in things like newsfeed, you know, or other platforms has led to polarization and intolerance of opposing viewpoints. As you know, right, these algorithms have models, right, and they kind of place people in some kind of metric space, and they place content in that space, and they sort of know the extent to which I have an affinity for a particular type of content. And by the same token, they also probably have that same model probably gives you a good idea of the stuff I'm likely to violently disagree with or be offended by, okay? So you know, in this case, there really is some knob you could tune that says like, instead of showing people only what they like and what they want, let's show them some stuff that we think that they don't like, or that's a little bit further away. And you could even imagine users being able to control this, you know, just like everybody gets a slider, and that slider says like, you know, how much stuff do you want to see that's kind of, you know, you might disagree with, or is at least further from your interest. It's almost like an exploration button. So just get your intuition. Do you think engagement, so like you staying on the platform, you're staying engaged. Do you think fairness, ideas of fairness won't emerge? Like how bad is it to just optimize for engagement? Do you think we'll run into big trouble if we're just optimizing for how much you love the platform? Well, I mean, optimizing for engagement kind of got us where we are. So do you, one, have faith that it's possible to do better? And two, if it is, how do we do better? I mean, it's definitely possible to do different, right? And again, you know, it's not as if I think that doing something different than optimizing for engagement won't cost these companies in real ways, including revenue and profitability potentially. In the short term at least. Yeah. In the short term. Right. And again, you know, if I worked at these companies, I'm sure that it would have seemed like the most natural thing in the world also to want to optimize engagement, right? And that's good for users in some sense. You want them to be, you know, vested in the platform and enjoying it and finding it useful, interesting, and or productive. But you know, my point is, is that the idea that there is, that it's sort of out of their hands as you said, or that there's nothing to do about it, never say never, but that strikes me as implausible as a machine learning person, right? 
I mean, these companies are driven by machine learning and this optimization of engagement is essentially driven by machine learning, right? It's driven by not just machine learning, but you know, very, very large scale A, B experimentation where you kind of tweak some element of the user interface or tweak some component of an algorithm or tweak some component or feature of your click through prediction model. And my point is, is that anytime you know how to optimize for something, you, you know, by def, almost by definition, that solution tells you how not to optimize for it or to do something different. Engagement can be measured. So sort of optimizing for sort of minimizing divisiveness or maximizing intellectual growth over the lifetime of a human being are very difficult to measure. That's right. And I'm not claiming that doing something different will immediately make it apparent that this is a good thing for society and in particular, I mean, I think one way of thinking about where we are on some of these social media platforms is that, you know, it kind of feels a bit like we're in a bad equilibrium, right? That these systems are helping us all kind of optimize something myopically and selfishly for ourselves and of course, from an individual standpoint at any given moment, like why would I want to see things in my newsfeed that I found irrelevant, offensive or, you know, or the like, okay? But you know, maybe by all of us, you know, having these platforms myopically optimized in our interests, we have reached a collective outcome as a society that we're unhappy with in different ways. Let's say with respect to things like, you know, political discourse and tolerance of opposing viewpoints. And if Mark Zuckerberg gave you a call and said, I'm thinking of taking a sabbatical, could you run Facebook for me for six months? What would you, how? I think no thanks would be my first response, but there are many aspects of being the head of the entire company that are kind of entirely exogenous to many of the things that we're discussing here. Yes. And so I don't really think I would need to be CEO of Facebook to kind of implement the, you know, more limited set of solutions that I might imagine. But I think one concrete thing they could do is they could experiment with letting people who chose to, to see more stuff in their newsfeed that is not entirely kind of chosen to optimize for their particular interests, beliefs, et cetera. So the, the kind of thing, so I could speak to YouTube, but I think Facebook probably does something similar is they're quite effective at automatically finding what sorts of groups you belong to, not based on race or gender or so on, but based on the kind of stuff you enjoy watching in the case of YouTube. Sort of, it's a, it's a difficult thing for Facebook or YouTube to then say, well, you know what? We're going to show you something from a very different cluster. Even though we believe algorithmically, you're unlikely to enjoy that thing sort of that's a weird jump to make. There has to be a human, like at the very top of that system that says, well, that will be longterm healthy for you. That's more than an algorithmic decision. Or that same person could say that'll be longterm healthy for the platform or for the platform's influence on society outside of the platform, right? 
And it, you know, it's easy for me to sit here and say these things, but conceptually I do not think that these are kind of totally, or should, they shouldn't be kind of completely alien ideas, right? That, you know, you could try things like this, and it wouldn't be, you know, we wouldn't have to invent entirely new science to do it, because if we're all already embedded in some metric space and there's a notion of distance between you and me and every piece of content, then, you know, we know exactly, you know, the same model that, you know, dictates how to make me really happy also tells you how to make me as unhappy as possible as well. Right. The focus in your book, and in algorithmic fairness research today in general, is on machine learning, like we said, on data, and just even the entire AI field right now is captivated with machine learning, with deep learning. Do you think ideas in symbolic AI or totally other kinds of approaches are interesting, useful in the space, have some promising ideas in terms of fairness? I haven't thought about that question specifically in the context of fairness. I definitely would agree with that statement in the large, right? I mean, I am, you know, one of many machine learning researchers who do believe that the great successes that have been shown in machine learning recently are great successes, but they're on a pretty narrow set of tasks. I mean, I don't, I don't think we're kind of notably closer to general artificial intelligence now than we were when I started my career. I mean, there's been progress, and I do think that we are kind of, as a community, maybe looking a bit where the light is, but the light is shining pretty bright there right now and we're finding a lot of stuff. So I don't want to like argue with the progress that's been made in areas like deep learning, for example. This touches another sort of related thing that you mentioned and that people might misinterpret from the title of your book, ethical algorithm. Is it possible for the algorithm to automate some of those decisions? Sort of the higher level decisions of what, like, what should be fair? The more you know about a field, the more aware you are of its limitations. And so I'm pretty leery of sort of, you know, there's so much we already don't know in fairness, even when we're the ones picking the fairness definitions and, you know, comparing alternatives and thinking about the tensions between different definitions, that I'm leery of the idea of kind of letting the algorithm start exploring as well. I definitely think, and you know, this is a much narrower statement, I definitely think that kind of algorithmic auditing for different types of unfairness is useful, right? So like in this gerrymandering example, where I might want to prevent not just discrimination against very broad categories, but against combinations of broad categories. You know, you quickly get to a point where there's a lot of categories. There are a lot of combinations of n features, and, you know, you can use algorithmic techniques to sort of try to find the subgroups on which you're discriminating the most and try to fix that. That's actually kind of the form of one of the algorithms we developed for this fairness gerrymandering problem. But, you know, partly because of our technological, you know, our sort of scientific ignorance on these topics right now.
And also partly just because these topics are so loaded emotionally for people that I just don't see the value. I mean, again, never say never, but I just don't think we're at a moment where it's a great time for computer scientists to be rolling out the idea like, hey, you know, not only have we kind of figured fairness out, but, you know, we think the algorithm should start deciding what's fair or giving input on that decision. I just don't, it's like the cost benefit analysis to the field of kind of going there right now just doesn't seem worth it to me. That said, I should say that I think computer scientists should be more philosophical, like, should enrich their thinking about these kinds of things. I think it's been too often used as an excuse for roboticists working on autonomous vehicles, for example, to not think about the human factor or psychology or safety, in the same way that computer scientists designing algorithms have sort of been using it as an excuse. And I think it's time for basically everybody to become a computer scientist. I was about to agree with everything you said except that last point. I think that the other way of looking at it is that I think computer scientists, you know, and many of us are, but we need to wade out into the world more, right? I mean, just the influence that computer science and therefore computer scientists have had on society at large has just, like, exponentially magnified in the last 10 or 20 years or so. And you know, before, when we were just tinkering around amongst ourselves and it didn't matter that much, there was no need for sort of computer scientists to be citizens of the world more broadly. And I think those days need to be over very, very fast. And I'm not saying everybody needs to do it, but to me, like, the right way of doing it is not to sort of think that everybody else is going to become a computer scientist. But you know, I think people are becoming more sophisticated about computer science, even lay people. You know, I think one of the reasons we decided to write this book is we thought 10 years ago I wouldn't have tried this just because I just didn't think that sort of people's awareness of algorithms and machine learning, you know, in the general population, would have been high enough. I mean, you would have had to first, you know, write one of the many books kind of just explicating that topic to a lay audience first. Now I think we're at the point where like lots of people without any technical training at all know enough about algorithms and machine learning that you can start getting to these nuances of things like ethical algorithms. I think we agree that there needs to be much more mixing, but I think a lot of the onus of that mixing needs to be on the computer science community. Yeah. So just to linger on the disagreement, because I do disagree with you on the point that I think if you're a biologist, if you're a chemist, if you're an MBA business person, all of those things you can, like, if you learned to program, and not only to program, if you learned to do machine learning, if you learned to do data science, you immediately become much more powerful in the kinds of things you can do. And the same for literature, like library sciences. So what you were saying, I think, I think it holds true for the next few years.
But long term, if you're interested to me, if you're interested in philosophy, you should learn a program, because then you can scrape data and study what people are thinking about on Twitter, and then start making philosophical conclusions about the meaning of life. I just feel like the access to data, the digitization of whatever problem you're trying to solve, will fundamentally change what it means to be a computer scientist. I mean, a computer scientist in 20, 30 years will go back to being Donald Knuth style theoretical computer science, and everybody would be doing basically, exploring the kinds of ideas that you explore in your book. It won't be a computer science major. Yeah, I mean, I don't think I disagree enough, but I think that that trend of more and more people in more and more disciplines adopting ideas from computer science, learning how to code, I think that that trend seems firmly underway. I mean, you know, like an interesting digressive question along these lines is maybe in 50 years, there won't be computer science departments anymore, because the field will just sort of be ambient in all of the different disciplines. And people will look back and having a computer science department will look like having an electricity department or something that's like, you know, everybody uses this, it's just out there. I mean, I do think there will always be that kind of Knuth style core to it, but it's not an implausible path that we kind of get to the point where the academic discipline of computer science becomes somewhat marginalized because of its very success in kind of infiltrating all of science and society and the humanities, etcetera. What is differential privacy, or more broadly, algorithmic privacy? Algorithmic privacy more broadly is just the study or the notion of privacy definitions or norms being encoded inside of algorithms. And so, you know, I think we count among this body of work just, you know, the literature and practice of things like data anonymization, which we kind of at the beginning of our discussion of privacy say like, okay, this is sort of a notion of algorithmic privacy. It kind of tells you, you know, something to go do with data, but, you know, our view is that it's, and I think this is now, you know, quite widespread, that it's, you know, despite the fact that those notions of anonymization kind of redacting and coarsening are the most widely adopted technical solutions for data privacy, they are like deeply fundamentally flawed. And so, you know, to your first question, what is differential privacy? Differential privacy seems to be a much, much better notion of privacy that kind of avoids a lot of the weaknesses of anonymization notions while still letting us do useful stuff with data. What is anonymization of data? So by anonymization, I'm, you know, kind of referring to techniques like I have a database. The rows of that database are, let's say, individual people's medical records, okay? And I want to let people use that data. Maybe I want to let researchers access that data to build predictive models for some disease, but I'm worried that that will leak, you know, sensitive information about specific people's medical records. So anonymization broadly refers to the set of techniques where I say like, okay, I'm first going to like, I'm going to delete the column with people's names. I'm going to not put, you know, so that would be like a redaction, right? I'm just redacting that information. 
I am going to take ages, and I'm not going to, like, say your exact age. I'm going to say whether you're, you know, zero to 10, 10 to 20, 20 to 30. I might put the first three digits of your zip code, but not the last two, et cetera, et cetera. And so the idea is that through some series of operations like this on the data, I anonymize it. You know, another term of art that's used is removing personally identifiable information. And you know, this is basically the most common way of providing data privacy, but in a way that still lets people access some variant form of the data. So to take a slightly broader picture, what does anonymization mean when you have multiple databases, like with the Netflix Prize, when you can start combining stuff together? So this is exactly the problem with these notions, right? With notions of anonymization, removing personally identifiable information, the kind of fundamental conceptual flaw is that, you know, these definitions kind of pretend as if the data set in question is the only data set that exists in the world or that ever will exist in the future. And of course, things like the Netflix Prize and many, many other examples since the Netflix Prize, I think that was one of the earliest ones though, you know, showed you can reidentify people that were, you know, that were anonymized in the data set by taking that anonymized data set and combining it with other allegedly anonymized data sets and maybe publicly available information about you. You know, for people who don't know, the Netflix Prize publicly released this data. So the names from those rows were removed, but what was released were the preferences, or the ratings, of what movies you liked and didn't like. And from that, combined with other things, I think forum posts and so on, you can start to figure out, I guess it was specifically the Internet Movie Database, where lots of Netflix users publicly rate their, you know, their movie preferences. And so you could reidentify people in the anonymized Netflix data. It's just this phenomenon, I think, that we've all come to realize in the last decade or so, that just knowing a few apparently irrelevant, innocuous things about you can often act as a fingerprint. Like if I know, you know, what rating you gave to these 10 movies and the date on which you entered these movies, this is almost like a fingerprint for you in the sea of all Netflix users. There was just another paper on this in Science or Nature about a month ago that used, you know, kind of 18 attributes. I mean, my favorite example of this was actually a paper from several years ago now where it was shown that just from your likes on Facebook, just from the, you know, the things on which you clicked on the thumbs up button on the platform, not using any other information, no demographic information, nothing about who your friends are, just knowing the content that you had liked, was enough to, you know, in the aggregate, accurately predict things like sexual orientation, drug and alcohol use, whether you were the child of divorced parents. So we live in this era where, you know, even the apparently irrelevant data that we offer about ourselves on public platforms and forums, often unbeknownst to us, more or less acts as a signature or, you know, fingerprint. And if you can kind of, you know, do a join between that kind of data and allegedly anonymized data, you have real trouble.
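A minimal sketch of the redaction and coarsening just described: drop the name, bucket the age, keep only the first three digits of the zip code. The record and field names are hypothetical, and, as the discussion makes clear, this style of anonymization offers no real protection once the data can be joined with other sources.

```python
# A minimal sketch of the redaction/coarsening style of "anonymization"
# described above. Field names and the record are hypothetical.
def anonymize(record):
    decade = (record["age"] // 10) * 10
    return {
        "age_range": f"{decade}-{decade + 9}",   # coarsened age
        "zip_prefix": record["zip"][:3],          # first three digits only
        "diagnosis": record["diagnosis"],         # kept as-is for researchers
    }                                              # "name" is redacted entirely

patient = {"name": "Jane Doe", "age": 47, "zip": "19104", "diagnosis": "asthma"}
print(anonymize(patient))
# {'age_range': '40-49', 'zip_prefix': '191', 'diagnosis': 'asthma'}
```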
So is there hope for any kind of privacy in a world where a few likes can identify you? So there is differential privacy, right? What is differential privacy? Yeah, so differential privacy basically is a kind of alternate, much stronger notion of privacy than these anonymization ideas. And, you know, it's a technical definition, but like the spirit of it is we compare two alternate worlds, okay? So let's suppose I'm a researcher and I want to do, you know, there's a database of medical records and one of them is yours, and I want to use that database of medical records to build a predictive model for some disease. So based on people's symptoms and test results and the like, I want to, you know, build a model predicting the probability that people have the disease. So, you know, this is the type of scientific research that we would like to be allowed to continue. And in differential privacy, you ask a very particular counterfactual question. We basically compare two alternatives. One is when I do this, I build this model on the database of medical records, including your medical record. And the other one is where I do the same exercise with the same database with just your medical record removed. So basically, you know, it's two databases, one with N records in it and one with N minus one records in it. The N minus one records are the same, and the only one that's missing in the second case is your medical record. So differential privacy basically says that any harms that might come to you from the analysis in which your data was included are essentially nearly identical to the harms that would have come to you if the same analysis had been done without your medical record included. So in other words, this doesn't say that bad things cannot happen to you as a result of data analysis. It just says that these bad things were going to happen to you already, even if your data wasn't included. And to give a very concrete example, right, you know, like we discussed at some length, the study, you know, that was done in the 50s that established the link between smoking and lung cancer. And we make the point that, like, well, if your data was used in that analysis and, you know, the world kind of knew that you were a smoker because, you know, there was no stigma associated with smoking before those findings, real harm might have come to you as a result of that study that your data was included in. In particular, your insurer now might have a higher posterior belief that you might have lung cancer and raise your premium. So you've suffered economic damage. But the point is that if the same analysis had been done with all the other N minus one medical records and just yours missing, the outcome would have been the same. In other words, your data wasn't idiosyncratically crucial to establishing the link between smoking and lung cancer, because the link between smoking and lung cancer is like a fact about the world that can be discovered with any sufficiently large database of medical records. But that's a very low value of harm. Yeah. So that's showing that very little harm is done. Great. But how, what is the mechanism of differential privacy? So that's the kind of beautiful statement of it. What's the mechanism by which privacy is preserved? Yeah. So it's basically by adding noise to computations, right? So the basic idea is that every differentially private algorithm, first of all, or every good differentially private algorithm, every useful one, is a probabilistic algorithm.
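The "two alternate worlds" comparison described above is usually written formally as follows. This is the standard textbook statement of epsilon-differential privacy, shown here only to pin down the informal description; the conversation itself stays informal.

```latex
% epsilon-differential privacy: M is the randomized analysis, D and D' are
% databases differing in a single person's record, and S is any set of outputs.
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S]
\qquad \text{for all neighboring } D, D' \text{ and all } S.
```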
So it doesn't, on a given input, if you gave the algorithm the same input multiple times, it would give different outputs each time from some distribution. And the way you achieve differential privacy algorithmically is by kind of carefully and tastefully adding noise to a computation in the right places. And to give a very concrete example, if I wanna compute the average of a set of numbers, the non private way of doing that is to take those numbers and average them and release like a numerically precise value for the average. In differential privacy, you wouldn't do that. You would first compute that average to numerical precision, and then you'd add some noise to it, right? You'd add some kind of zero mean, Gaussian or exponential noise to it, so that the actual value you output is not the exact mean, but it'll be close to the mean. The noise that you add will sort of ensure that nobody can kind of reverse engineer any particular value that went into the average. So noise is a savior. How many algorithms can be aided by adding noise? Yeah, so I'm a relatively recent member of the differential privacy community. My co author, Aaron Roth, is really one of the founders of the field and has done a great deal of work, and I've learned a tremendous amount working with him on it. It's a pretty grown up field already. Yeah, but now it's pretty mature. But I must admit, the first time I saw the definition of differential privacy, my reaction was like, wow, that is a clever definition, and it's really making very strong promises. And I first saw the definition in much earlier days, and my first reaction was like, well, my worry about this definition would be that it's a great definition of privacy, but that it'll be so restrictive that we won't really be able to use it. We won't be able to compute many things in a differentially private way. So that's one of the great successes of the field, I think, is in showing that the opposite is true and that most things that we know how to compute, absent any privacy considerations, can be computed in a differentially private way. So for example, pretty much all of statistics and machine learning can be done differentially privately. So pick your favorite machine learning algorithm, back propagation in neural networks, CART for decision trees, support vector machines, boosting, you name it, as well as classic hypothesis testing and the like in statistics. None of those algorithms are differentially private in their original form. All of them have modifications that add noise to the computation in different places in different ways that achieve differential privacy. So this really means that, to the extent that we've become a scientific community very dependent on the use of machine learning and statistical modeling and data analysis, we really do have a path to provide privacy guarantees to those methods, and so we can still enjoy the benefits of the data science era while providing rather robust privacy guarantees to individuals. So perhaps a slightly crazy question, but if we take the ideas of differential privacy and apply them to the nature of truth that's being explored currently. So what's your most favorite and least favorite food? Hmm. I'm not a real foodie, so I'm a big fan of spaghetti. Spaghetti? Yeah. What do you really not like? I really don't like cauliflower. Wow, I love cauliflower. Okay.
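Returning to the noisy-average mechanism described a moment ago: a minimal sketch of the standard Laplace mechanism for releasing a differentially private mean. It assumes each value lies in [0, 1], so one person's record can change the mean of n values by at most 1/n; the data and the choice of epsilon are made up.

```python
# A minimal sketch of the noisy-average mechanism described above, using the
# standard Laplace mechanism. Values are assumed to lie in [0, 1], so the mean
# of n values has sensitivity 1/n; epsilon and the example data are made up.
import random

def private_mean(values, epsilon, lo=0.0, hi=1.0):
    clipped = [min(max(v, lo), hi) for v in values]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (hi - lo) / len(clipped)        # max effect of one record
    scale = sensitivity / epsilon
    # Laplace(0, scale) noise as the difference of two exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_mean + noise

random.seed(0)
ratings = [0.2, 0.9, 0.4, 0.7, 0.6, 0.8, 0.3, 0.5]
print("exact mean:  ", sum(ratings) / len(ratings))
print("private mean:", private_mean(ratings, epsilon=0.5))
```

Smaller epsilon means more noise and a stronger privacy guarantee; larger epsilon means the released mean stays closer to the exact one.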
Is there a way to protect your preference for spaghetti by having an information campaign of bloggers and bots and so on saying that you like cauliflower? So it's the same kind of noise idea. I mean, if you think of our politics today, there's this idea of Russia hacking our elections. What's meant there, I believe, is bots spreading different kinds of information. Is that a kind of privacy, or is that too much of a stretch? No, it's not a stretch. I've not seen those ideas, you know, that is not a technique that to my knowledge will provide differential privacy, but to give an example, like one very specific example about what you're discussing, is there was a very interesting project at NYU, I think led by Helen Nissenbaum there, in which they basically built a browser plugin that tried to essentially obfuscate your Google searches. So to the extent that you're worried that Google is using your searches to build, you know, predictive models about you, to decide what ads to show you, which they might very reasonably want to do, but if you object to that, they built this widget you could plug in, and basically whenever you put in a query into Google, it would send that query to Google, but in the background, all of the time, from your browser it would just be sending this torrent of irrelevant queries to the search engine. So you know, it's like a wheat and chaff thing, so you know, out of every thousand queries, let's say, that Google was receiving from your browser, one of them was one that you put in, but the other 999 were not, okay? So it's the same kind of idea, kind of, you know, privacy by obfuscation. So I think that's an interesting idea, doesn't give you differential privacy. I was also actually talking to somebody at one of the large tech companies recently about the fact that, you know, with just this kind of thing, there are some times when the response to my data needs to be very specific to my data, right? Like I type mountain biking into Google, I want results on mountain biking, and I really want Google to know that I typed in mountain biking, I don't want noise added to that. And so I think there are sort of maybe even interesting technical questions around notions of privacy that are appropriate where, you know, it's not that my data is part of some aggregate like medical records and that we're trying to discover important correlations and facts about the world at large, but rather, you know, there's a service that I really want to, you know, pay attention to my specific data, yet I still want some kind of privacy guarantee, and I think these kind of obfuscation ideas are sort of one way of getting at that, but maybe there are others as well. So where do you think we'll land in this algorithm driven society in terms of privacy? So, sort of, China, like Kai-Fu Lee describes, you know, is collecting a lot of data on its citizens, but in its best form it's actually able to sort of protect human rights and provide a lot of amazing services, and in its worst form it can violate those human rights and limit services. So where do you think we'll land, because algorithms are powerful when they use data. So as a society, do you think we'll give over more data? Is it possible to protect the privacy of that data?
So I'm optimistic about the possibility of, you know, balancing the desire for individual privacy and individual control of privacy with kind of societally and commercially beneficial uses of data. Not unrelated to differential privacy are suggestions that say, like, well, individuals should have control of their data. They should be able to limit the uses of that data. There are even, you know, fledgling discussions going on in research circles about allowing people selective use of their data and being compensated for it. And then you get to sort of very interesting economic questions, like pricing, right? And one interesting idea is that maybe differential privacy would also, you know, be a conceptual framework in which you could talk about the relative value of different people's data. Like, you know, to demystify this a little bit, if I'm trying to build a predictive model for some rare disease and I'm trying to use machine learning to do it, it's easy to get negative examples because the disease is rare, right? But I really want to have lots of people with the disease in my data set, okay? And so somehow those people's data, with respect to this application, is much more valuable to me than just, like, the background population. And so maybe they should be compensated more for it. And so, you know, I think these are kind of very, very fledgling conceptual questions that maybe we'll have kind of technical thought on sometime in the coming years. But I do think, you know, to kind of more directly answer your question, I think I'm optimistic at this point, from what I've seen, that we will land at some, you know, better compromise than we're at right now, where again, you know, privacy guarantees are few and far between and weak, and users have very, very little control. And I'm optimistic that we'll land in something that, you know, provides better privacy overall and more individual control of data and privacy. But you know, I think to get there, it's again, just like fairness, it's not going to be enough to propose algorithmic solutions. There's going to have to be a whole kind of regulatory legal process that prods companies and other parties to kind of adopt solutions. And I think you've mentioned the word control a lot, and I think giving people control, that's something that people don't quite have in a lot of these algorithms, and that's a really interesting idea, giving them control. Some of that is actually literally an interface design question, sort of just enabling it, because I think it's good for everybody to give users control. It's almost not a trade off, except that you have to hire people that are good at interface design. Yeah. I mean, the other thing that has to be said, right, is that, you know, it's a cliche, but, you know, we as the users of many systems, platforms, and apps, you know, we are the product. We are not the customer. The customers are advertisers, and our data is the product. Okay. So it's one thing to kind of suggest more individual control of data and privacy and uses, but, you know, if this happens in sufficient degree, it will upend the entire economic model that has supported the internet to date. And so some other economic model will have to, you know, replace it. So the idea of markets, you mentioned, by exposing the economic model to the people, they will then become a market. They could be participants in it. And you know, this isn't, you know, this is not a weird idea, right, because there are markets for data already.
It's just that consumers are not participants and there's like you know there's sort of you know publishers and content providers on one side that have inventory and then their advertisers on the others and you know you know Google and Facebook are running you know they're pretty much their entire revenue stream is by running two sided markets between those parties right. And so it's not a crazy idea that there would be like a three sided market or that you know that on one side of the market or the other we would have proxies representing our interest. It's not you know it's not a crazy idea but it would it's not a crazy technical idea but it would have pretty extreme economic consequences. Speaking of markets a lot of fascinating aspects of this world arise not from individual human beings but from the interaction of human beings. You've done a lot of work in game theory. First can you say what is game theory and how does it help us model and study? Yeah game theory of course let us give credit where it's due. You know it comes from the economist first and foremost but as I've mentioned before like you know computer scientists never hesitate to wander into other people's turf and so there is now this 20 year old field called algorithmic game theory. But you know game theory first and foremost is a mathematical framework for reasoning about collective outcomes in systems of interacting individuals. You know so you need at least two people to get started in game theory and many people are probably familiar with Prisoner's Dilemma as kind of a classic example of game theory and a classic example where everybody looking out for their own individual interests leads to a collective outcome that's kind of worse for everybody than what might be possible if they cooperated for example. But cooperation is not an equilibrium in Prisoner's Dilemma. And so my work in the field of algorithmic game theory more generally in these areas kind of looks at settings in which the number of actors is potentially extraordinarily large and their incentives might be quite complicated and kind of hard to model directly but you still want kind of algorithmic ways of kind of predicting what will happen or influencing what will happen in the design of platforms. So what to you is the most beautiful idea that you've encountered in game theory? There's a lot of them. I'm a big fan of the field. I mean you know I mean technical answers to that of course would include Nash's work just establishing that you know there is a competitive equilibrium under very very general circumstances which in many ways kind of put the field on a firm conceptual footing because if you don't have equilibrium it's kind of hard to ever reason about what might happen since you know there's just no stability. So just the idea that stability can emerge when there's multiple. Not that it will necessarily emerge just that it's possible right. Like the existence of equilibrium doesn't mean that sort of natural iterative behavior will necessarily lead to it. In the real world. Yeah. 
Maybe answering a slightly less personally than you asked the question I think within the field of algorithmic game theory perhaps the single most important kind of technical contribution that's been made is the realization between close connections between machine learning and game theory and in particular between game theory and the branch of machine learning that's known as no regret learning and this sort of provides a very general framework in which a bunch of players interacting in a game or a system each one kind of doing something that's in their self interest will actually kind of reach an equilibrium and actually reach an equilibrium in a you know a pretty you know a rather you know short amount of steps. So you kind of mentioned acting greedily can somehow end up pretty good for everybody. Or pretty bad. Or pretty bad. Yeah. It will end up stable. Yeah. Right. And and you know stability or equilibrium by itself is neither is not necessarily either a good thing or a bad thing. So what's the connection between machine learning and the ideas. Well I think we kind of talked about these ideas already in kind of a non technical way which is maybe the more interesting way of understanding them first which is you know we have many systems platforms and apps these days that work really hard to use our data and the data of everybody else on the platform to selfishly optimize on behalf of each user. OK. So you know let me let me give I think the cleanest example which is just driving apps navigation apps like you know Google Maps and Waze where you know miraculously compared to when I was growing up at least you know the objective would be the same when you wanted to drive from point A to point B spend the least time driving not necessarily minimize the distance but minimize the time. Right. And when I was growing up like the only resources you had to do that were like maps in the car which literally just told you what roads were available and then you might have like half hourly traffic reports just about the major freeways but not about side roads. So you were pretty much on your own. And now we've got these apps you pull it out and you say I want to go from point A to point B and in response kind of to what everybody else is doing if you like what all the other players in this game are doing right now here's the you know the route that minimizes your driving time. So it is really kind of computing a selfish best response for each of us in response to what all of the rest of us are doing at any given moment. And so you know I think it's quite fair to think of these apps as driving or nudging us all towards the competitive or Nash equilibrium of that game. Now you might ask like well that sounds great why is that a bad thing. Well you know it's known both in theory and with some limited studies from actual like traffic data that all of us being in this competitive equilibrium might cause our collective driving time to be higher maybe significantly higher than it would be under other solutions. And then you have to talk about what those other solutions might be and what the algorithms to implement them are which we do discuss in the kind of game theory chapter of the book. 
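A minimal sketch of the no-regret connection just described: two players repeatedly play rock-paper-scissors, each running a multiplicative-weights update against the other's current mixed strategy. Started away from equilibrium, their time-averaged play drifts toward the equilibrium (1/3, 1/3, 1/3). The game, learning rate, number of rounds, and starting weights are all arbitrary choices for illustration; this is not any particular system discussed in the book.

```python
# Two players running multiplicative-weights ("no-regret") updates in
# rock-paper-scissors. In a zero-sum game like this, time-averaged play
# approaches the equilibrium strategy.
import math

A = [[0, -1, 1],     # row player's payoffs; the column player gets the negative
     [1, 0, -1],
     [-1, 1, 0]]

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

eta, rounds = 0.1, 5000
w_row, w_col = [3.0, 1.0, 1.0], [1.0, 2.0, 1.0]   # deliberately non-uniform start
avg_row = [0.0, 0.0, 0.0]

for _ in range(rounds):
    p, q = normalize(w_row), normalize(w_col)
    avg_row = [a + pi / rounds for a, pi in zip(avg_row, p)]
    # Expected payoff of each pure action against the opponent's mixed strategy.
    row_gain = [sum(A[i][j] * q[j] for j in range(3)) for i in range(3)]
    col_gain = [sum(-A[i][j] * p[i] for i in range(3)) for j in range(3)]
    w_row = normalize([w * math.exp(eta * g) for w, g in zip(w_row, row_gain)])
    w_col = normalize([w * math.exp(eta * g) for w, g in zip(w_col, col_gain)])

print("row player's average strategy:", [round(x, 3) for x in avg_row])
```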
But similarly, you know, on social media platforms or on Amazon, all these algorithms that are essentially trying to optimize on our behalf, they're driving us, in a colloquial sense, towards some kind of competitive equilibrium. And one of the most important lessons of game theory is that just because we're at equilibrium doesn't mean that there's not a solution in which some, or maybe even all, of us might be better off. And then the connection to machine learning, of course, is that in all these platforms I've mentioned, the optimization that they're doing on our behalf is driven by machine learning, you know, like predicting where the traffic will be, predicting what products I'm going to like, predicting what would make me happy in my newsfeed. Now, in terms of the stability and the promise of that, I have to ask, just out of curiosity, how stable are these mechanisms that game theory, that the economists came up with, and we all know that economists don't live in the real world, just kidding, sort of. What do you think, when we look at the fact that we haven't blown ourselves up, from a game theoretic concept of mutually assured destruction, what are the odds that we destroy ourselves with nuclear weapons, as one example of a stable game theoretic system? Just to prime your viewers a little bit, I mean, I think you're referring to the fact that game theory was taken quite seriously back in the 60s as a tool for reasoning about kind of Soviet US nuclear armament, disarmament, detente, things like that. I'll be honest, as huge of a fan as I am of game theory and its kind of rich history, it still surprises me that you had people at the RAND Corporation back in those days kind of drawing up, you know, two by two tables, where the row player is the US and the column player is Russia, and that they were taking it seriously. I'm sure if I was there, maybe it wouldn't have seemed as naive as it does now, you know. Seems to have worked, which is why it seems naive. Well, we're still here. We're still here, in that sense. Yeah, even though I kind of laugh at those efforts, they were more sensible then than they would be now, right, because there were sort of only two nuclear powers at the time, and you didn't have to worry about deterring new entrants and who was developing the capacity, and so, you know, it's definitely a game with more players now and more potential entrants. I'm not in general somebody who advocates using kind of simple mathematical models when the stakes are as high as things like that and the complexities are very political and social, but we are still here. So you've worn many hats, one of which, the one that first caused me to become a big fan of your work many years ago, is algorithmic trading. So I have to just ask a question about this, because you have so much fascinating work there. In the 21st century, what role do you think algorithms have in the space of trading and investment in the financial sector?
Yeah it's a good question I mean in the time I've spent on Wall Street and in finance you know I've seen a clear progression and I think it's a progression that kind of models the use of algorithms and automation more generally in society which is you know the things that kind of get taken over by the algos first are sort of the things that computers are obviously better at than people right so you know so first of all there needed to be this era of automation right where just you know financial exchanges became largely electronic which then enabled the possibility of you know trading becoming more algorithmic because once you know that exchanges are electronic an algorithm can submit an order through an API just as well as a human can do at a monitor quickly can read all the data so yeah and so you know I think the places where algorithmic trading have had the greatest inroads and had the first inroads were in kind of execution problems kind of optimized execution problems so what I mean by that is at a large brokerage firm for example one of the lines of business might be on behalf of large institutional clients taking you know what we might consider difficult trade so it's not like a mom and pop investor saying I want to buy a hundred shares of Microsoft it's a large hedge fund saying you know I want to buy a very very large stake in Apple and I want to do it over the span of a day and it's such a large volume that if you're not clever about how you break that trade up not just over time but over perhaps multiple different electronic exchanges that all let you trade Apple on their platform you know you will you will move you'll push prices around in a way that hurts your your execution so you know this is the kind of you know this is an optimization problem this is a control problem right and so machines are better we we know how to design algorithms you know that are better at that kind of thing than a person is going to be able to do because we can take volumes of historical and real time data to kind of optimize the schedule with which we trade and you know similarly high frequency trading you know which is closely related but not the same as optimized execution where you're just trying to spot very very temporary you know mispricings between exchanges or within an asset itself or just predict directional movement of a stock because of the kind of very very low level granular buying and selling data in the in the exchange machines are good at this kind of stuff it's kind of like the mechanics of trading what about the can machines do long terms of prediction yeah so I think we are in an era where you know clearly there have been some very successful you know quant hedge funds that are you know in what we would traditionally call you know still in this the stat arb regime like so you know what's that stat arb referring to statistical arbitrage but but for the purposes of this conversation what it really means is making directional predictions in asset price movement or returns your prediction about that directional movement is good for you know you you have a view that it's valid for some period of time between a few seconds and a few days and that's the amount of time that you're going to kind of get into the position hold it and then hopefully be right about the directional movement and you know buy low and sell high as the cliche goes. 
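As a sketch of why the optimized execution problem matters, here is a toy comparison in Python under an assumed linear temporary-impact model. The impact coefficient, price, share counts, and the cost function itself are illustrative assumptions for the purpose of the example, not how Kearns or any actual trading desk models execution.

```python
# Toy sketch of optimized execution: splitting a large parent order into
# smaller child orders so each slice moves the price against us less.
# The linear temporary-impact model and its coefficient are assumptions.

def execution_cost(schedule, base_price=200.0, impact_per_share=1e-7):
    """Total dollars paid if each slice temporarily pushes the price up
    by a fraction proportional to that slice's size."""
    total = 0.0
    for shares in schedule:
        impact_fraction = impact_per_share * shares
        price_paid = base_price * (1.0 + impact_fraction)
        total += shares * price_paid
    return total

total_shares = 1_000_000
n_slices = 13                                           # e.g. one child order per half hour

all_at_once = [total_shares]
evenly_sliced = [total_shares // n_slices] * n_slices   # ignores a small remainder

print(f"{execution_cost(all_at_once):,.0f}")    # pays a ~10% impact premium here
print(f"{execution_cost(evenly_sliced):,.0f}")  # same shares, far smaller premium
```

Real execution algorithms go much further, adapting the schedule to historical and live volume across multiple venues, but the basic trade-off this sketch shows is the one being optimized.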
So that is, you know, kind of a sweet spot, I think, for quant trading and investing right now, and has been for some time. When you really get to kind of more Warren Buffett style timescales, right, like, you know, my cartoon of Warren Buffett is that Warren Buffett sits and thinks what the long term value of Apple really should be, and he doesn't even look at what Apple is doing today, he just decides, you know, I think that this is what its long term value is, and it's far from that right now, and so I'm going to buy some Apple, or, you know, short some Apple, and I'm going to sit on that for 10 or 20 years, okay. So when you're at that kind of timescale, or even more than just a few days, there are all kinds of other sources of risk and information, you know, so now you're talking about holding things through recessions and economic cycles, wars can break out. So there you have to understand human nature at a level that. Yeah, and you need to just be able to ingest many, many more sources of data that are on wildly different timescales, right. So if I'm an HFT, if I'm a high frequency trader, like, my main source of data is just the data from the exchanges themselves, about the activity in the exchanges, right, and maybe I need to keep an eye on the news, right, because, you know, the CEO gets caught in a scandal or gets run over by a bus or something, that can cause very sudden changes, but I don't need to understand economic cycles, I don't need to understand recessions, I don't need to worry about the political situation or war breaking out in this part of the world, because all I need to know is, as long as that's not going to happen in the next 500 milliseconds, then my model is good. When you get to these longer timescales, you really have to worry about that kind of stuff, and people in the machine learning community are starting to think about this. We jointly sponsored a workshop at Penn with the Federal Reserve Bank of Philadelphia a little more than a year ago, and I think the title was something like machine learning for macroeconomic prediction, macroeconomic referring specifically to these longer timescales. It was an interesting conference, but it left me with greater confidence that we have a long way to go. And so I think that, in the grand scheme of things, if somebody asked me, like, well, whose job on Wall Street is safe from the bots, I think people that are at that longer timescale and have that appetite for all the risks involved in long term investing, and that really need kind of not just algorithms that can optimize from data, but they need views on stuff, they need views on the political landscape, economic cycles and the like, and I think they're pretty safe for a while, as far as I can tell. So Warren Buffett's job is safe, we're not seeing, you know, a robo Warren Buffett anytime soon. That should give him comfort. Last question. If you could go back, if there's a day in your life you could relive because it made you truly happy, maybe outside of family, what day would it be? Can you look back and remember just being profoundly transformed in some way, or blissful? I'll answer a slightly different question, which is, like, what's a day in my life or my career that was kind of a watershed moment.
I went straight from undergrad to doctoral studies, and, you know, that's not at all atypical, and I'm also from an academic family, like my dad was a professor, my uncle on his side is a professor, both my grandfathers were professors. In all kinds of fields, all the way to philosophy? Yeah, they're kind of all over the map, yeah. And I was a grad student here, just up the river at Harvard, and came to study with Les Valiant, which was a wonderful experience. But, you know, I remember my first year of graduate school I was generally pretty unhappy, and I was unhappy because, you know, at Berkeley as an undergraduate, yeah, I studied a lot of math and computer science, but it was a huge school, first of all, and I took a lot of other courses. As we've discussed, I started as an English major, and took history courses and art history classes, and had friends that did all kinds of different things. And, you know, Harvard's a much smaller institution than Berkeley, and its computer science department, especially at that time, was a much smaller place than it is now. And I suddenly just felt like I'd gone from this very big world to this highly specialized world, and now all of the classes I was taking were computer science classes, and I was only in classes with math and computer science people. And so I thought often in that first year of grad school about whether I really wanted to stick with it or not, and I thought, like, oh, I could stop with a master's, I could go back to the Bay Area and to California, and this was in one of the early periods where you could definitely get a relatively good paying job at one of the big tech companies back then. And so I distinctly remember kind of a late spring day when I was sitting in Boston Common and really just kind of chewing over what I wanted to do with my life, and I realized, like, okay, and I think this is where my academic background helped me a great deal. I sort of realized, you know, yeah, you're not having a great time right now, this feels really narrowing, but you know that you're here for research eventually, and to do something original, and to try to carve out a career where you kind of choose what you want to think about and have a great deal of independence. And at that point I really didn't have any real research experience yet, I mean, I was trying to think about some problems with very little success, but I knew that I hadn't really tried to do the thing that I knew I'd come to do, and so I thought, you know, I'm going to stick through it for the summer. And that was very formative, because I went from kind of contemplating quitting to, you know, a year later it being very clear to me I was going to finish, because I still had a ways to go, but I had started doing research, it was going well, it was really interesting, and it was sort of a complete transformation. It's just that transition that I think every doctoral student makes at some point, which is to go from being a student of what's been done before to doing your own thing, and figuring out what makes you interested, and what your strengths and weaknesses are as a researcher. And once I kind of made that decision, on that particular day, at that particular moment in Boston Common, you know, I'm glad I made that decision.
And also just accepting the painful nature of that journey. Yeah, exactly, exactly. In that moment I said, I'm gonna stick it out, yeah, I'm gonna stick around for a while. Well, Michael, I've looked up to your work for a long time, it's really nice to talk to you, thank you so much. It's great to get back in touch with you too, and see how great you're doing as well. Thanks a lot. Thank you.
Michael Kearns: Algorithmic Fairness, Privacy & Ethics | Lex Fridman Podcast #50
The following is a conversation with Dava Newman. She's the Apollo Program Professor at MIT and the former Deputy Administrator of NASA, and has been a principal investigator on four space flight missions. Her research interests are in aerospace biomedical engineering, investigating human performance in varying gravity environments. She has designed and engineered and built some incredible space suit technology, namely the BioSuit that we talk about in this conversation. Due to some scheduling challenges on both our parts, we only had about 40 minutes together. And in true engineering style, she said, I talk fast, you pick the best questions, let's get it done. And we did. It was a fascinating conversation about space exploration and the future of spacesuits. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. For the first time, this show is presented by Cash App, the number one finance app in the App Store. Cash App is the easiest way to send money to your friends. And it is also the easiest way to buy, sell, and deposit Bitcoin. Most Bitcoin exchanges take days for a bank transfer to become investable. Through Cash App, it takes seconds. Invest as little as $1, and now you own Bitcoin. I have several conversations about Bitcoin coming up on this podcast. Decentralized digital currency is a fascinating technology in general to explore, both at the technical and the philosophical level. Cash App is also the easiest way to try and grow your money with their new investing feature. Unlike investing tools that force you to buy entire shares of stock, Cash App, amazingly, lets you instantly invest as little or as much as you want. Some stocks in the market are hundreds, if not thousands of dollars per share, and now you can still own a piece with as little as $1. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm also excited to be working with Cash App to support one of my favorite organizations called FIRST, which is best known for their FIRST Robotics and Lego competitions that seek to inspire young students in engineering and technology fields all over the world. That's over 110 countries, 660,000 students, 300,000 mentors and volunteers, and a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you sign up for Cash App and use the promo code LexPodcast, you'll instantly receive $10, and Cash App will also donate $10 to FIRST, an amazing organization that I've personally seen inspire girls and boys to learn, to explore, and to dream of engineering a better world. Don't forget to use the code LexPodcast when you download Cash App from the App Store or Google Play Store today. And now, here's my conversation with Dava Newman. You circumnavigated the globe by boat, so let's look back in history. 500 years ago, Ferdinand Magellan's crew was first to circumnavigate the globe, but he died, which I think people don't know, like halfway through, and so did 242 of the 260 sailors that took that three year journey. What do you think it was like for that crew at that time, heading out into the unknown to face probably likely death? Do you think they were filled with fear, with excitement? Probably not fear, I think in all of exploration, the challenge and the unknown, so probably wonderment.
And then just when you really are sailing the world's oceans, you have extreme weather of all kinds. When we were circumnavigating, it was challenging, a new dynamic, you really appreciate Mother Earth, you appreciate the winds and the waves, so back to Magellan and his crew, since they really didn't have a three dimensional map of the globe, of the Earth when they went out, just probably looking over the horizon thinking, what's there, what's there? So I would say the challenge that had to be really important in terms of the team dynamics and that leadership had to be incredibly important, team dynamics, how do you keep people focused on the mission? So you think the psychology, that's interesting, there's probably echoes of that in the space exploration stuff we'll talk about. So the psychology of the dynamics between the human beings on the mission is important? Absolutely, for a Mars mission, it's lots of challenges, technology, but since I specialize in keeping my astronauts alive, the psychosocial issues, the psychology of psychosocial team dynamics, leadership, that's, you know, we're all people, so that's gonna be, that's always a huge impact, one of the top three, I think, of any isolated, confined environment, any mission that is really pretty extreme. So your Twitter handle is devaexplorer, so when did you first fall in love with the idea of exploration? Ah, that's a great question, you know, maybe as long as I can remember, as I grew up in Montana in the Rocky Mountains and Helena in the capital, and so literally, you know, Mount Helena was my backyard, was right up there, so exploring, being in the mountains, looking at caves, just running around, but always being in nature, so since my earliest memories, I, you know, think of myself as kind of exploring the natural beauty of the Rocky Mountains where I grew up. So exploration is not limited to any domain, it's just anything, so the natural domain of any kind, so going out to the woods into a place you haven't been, it's all exploration. I think so, yeah, I have a pretty all encompassing definition of exploration. So what about space exploration? When were you first captivated by the idea that we little humans could venture out into the space, into the great unknown of space? So it's a great year to talk about that, the 50th anniversary of Apollo 11, as I was alive during Apollo, and specifically Apollo 11, I was five years old, and I distinctly remember that, I remember that humanity, I'm sure I probably didn't know their names at the time, you know, there's Neil Armstrong, Buzz Aldrin, and never forget Michael Collins in orbit, you know, those three men, you know, doing something that just seemed impossible, seemed impossible a decade earlier, even a year earlier, but so the Apollo program really inspired me, and then I think it actually just taught me to dream, to any impossible mission could be possible with enough focus, and I'm sure you need some luck, but you definitely need the leadership, you need the focus of the mission, so since an early age, I thought, of course, people, it should be interplanetary, of course, people, we need people on Earth, and we're gonna have people exploring space as well. So that seemed obvious, even at that age, of course. It opened it up, before we saw men on the moon, it wasn't obvious to me at all, but once we understood that yes, absolutely, astronauts, that's what they do, they explore, they go into space, and they land on other planets or moons. 
So again, maybe a romanticized philosophical question, but when you look up at the stars, knowing that, you know, there's at least 100 billion of them in the Milky Way galaxy, right, so we're really a small speck in this giant thing that's the visible universe, how does that make you feel about our efforts here? I love the perspective, I love that perspective, I always open my public talks with a big Hubble Space Telescope image, looking out into, you mentioned just now, the solar system, the Milky Way, because I think it's really important to know that we're just a small, pale blue dot, we're really fortunate, we're on the best planet by far, life is fantastic here. That we know of, you're confident this is the best planet. I'm pretty sure it's the best planet, the best planet that we know of. I mean, my research is, you know, in ocean worlds, and when will we find life? I think actually probably the next decade, we find probably past life, probably the evidence of past life on Mars, let's say. You think there was once life on Mars, or do you think there's currently? I'm more comfortable saying probably 3.5 billion years ago, I feel pretty confident there was life on Mars, just because then it had an electromagnetic shield, it had an atmosphere, it has a wonderful gravity level, three eighths g, fantastic, you know, you're all superhuman, we can all slam dunk a basketball, I mean, it's gonna be fun to play sports on Mars. So I think we'll find past, fossilized, probably the evidence of past life on Mars. Currently, that's okay, we need the next decade, but the evidence is mounting for sure. We do have the organics, we're finding organics, we have water, seasonal water on Mars. We used to just know about the ice caps, you know, North and South Pole, now we have seasonal water. We do have the building blocks for life on Mars. We really need to dig down into the soil, because everything on the top surface is irradiated, but once we dig down, will we see any life forms? Will we see any bugs? I leave it open as a possibility, but I feel pretty certain that past life, or fossilized life forms, we'll find. And then we have to get to all these ocean worlds, these beautiful moons of other planets, since we know they have water, and we're looking for some, the simple search for life, follow the water, carbon based life, that's the only life we know. There could be other life forms that we don't know about, but it's hard to search for them, because we don't know. So in our search for life in the solar system, it's definitely search, follow the water, and look for the building blocks of life. So you think in the next decade, we might see hints of past life, or even current life? I think so, that's it. Pretty optimistic. I love the optimism. I'm pretty optimistic. Do humans have to be involved, or can this be robots and rovers and? Probably teams, I mean, we've been at it, on Mars in particular, 50 years. We've been exploring Mars for 50 years, great data, right? Our images of Mars today are phenomenal. Now we know how Mars lost its atmosphere. We're starting to know, because of the lack of the electromagnetic shield. We know about the water on Mars. So we've been studying 50 years with our robots, we still haven't found it. So I think once we have a human mission there, we just accelerate things. But it's always humans and our rovers and robots together. But we just have to think that 50 years, we've been looking at Mars, and taking images, and doing the best science that we can.
People need to realize Mars is really far away. It's really hard to get to. You know, this is extreme, extreme exploration. We mentioned Magellan first, or all of the wonderful explorers and sailors of the past, which kind of are lots of my inspiration for exploration. Mars is a different ball game. I mean, it's eight months to get there, year and a half to get home. I mean, it's really extreme. The harsh environment in all kinds of ways. But the kind of organisms we might be able to see hints of on Mars are kind of microorganisms perhaps. Do you think? Yeah, and remember that humans, we're kind of, you know, we're hosts, right? We're hosts to all of our bacteria and viruses, right? Do you think it's a big leap from the viruses and the bacteria to us humans? Put another way, do you think on all those moons, beautiful, wet moons that you mentioned, you think there's intelligent life out there? I hope so. I mean, that's the hope, but you know, we don't have the scientific evidence for that now. I think all the evidence we have in terms of life existing is much more compelling again, because we have the building blocks of life now. When that life turns into intelligence, that's a big unknown. If we ever meet, do you think we would be able to find a common language? I hope so. We haven't met yet. It's just so far, I mean, do physics just play a role here? Look at all these exoplanets, 6,000 exoplanets. I mean, even the couple dozen Earth like planets that are exoplanets that really look like habitable planets. These are very Earth like. They look like they have all the building blocks. I can't wait to get there. The only thing is they're 10 to 100 light years away. So scientifically, we know they're there. We know that they're habitable. They have, you know, everything going from, right? Like, you know, we call them in the Goldilocks zone, not too hot, not too cold, just perfect for habitability for life. But now the reality is if they're 10 at the best to 100 to thousands of light years away, so what's out there? But I just can't think that we're not the only ones. So absolutely life, life in the universe, probably intelligent life as well. Do you think there needs to be fundamental revolutions in how we, the tools we use to travel through space in order for us to venture outside of our solar system? Or do you think the ways, the rockets, the ideas we have now, the engineering ideas we have now will be enough to venture out? Well, that's a good question. Right now, you know, cause again, speed of light is a limit. We don't have a warp speed warp drive to explore our solar system, to get to Mars, to explore all the planets. Then we need a technology push, but technology push here is just advanced propulsion. It'd be great if I could get humans to Mars and say, you know, three to four months, not eight months. I mean, half the time, 50% reduction. That's great in terms of safety and wellness of the crew. Orbital mechanic, but physics rules, you know, orbital mechanics is still there. Physics rules, we can't defy physics. I love that. So invent a new physics. I mean, look at quantum, you know, look at quantum theory. So you never know. Exactly, I mean, we are always learning. So we definitely don't know all the physics that exist too, but we're, we still have to, it's not science fiction. You know, we still have to pay attention to physics in terms of our speed of travel for space flight. 
So you were the deputy administrator of NASA and during the Obama administration, there's a current Artemis program that's working on a crewed mission to the moon and then perhaps to Mars. What are you excited about there? What are your thoughts on this program? What are the biggest challenges do you think of getting to the moon, of landing to the moon once again, and then the big step to Mars? Well, I love, you know, the moon program now, Artemis. It is definitely, we've been in low earth orbit. I love low earth orbit too, but I just always look at it as three phases. So low earth orbit where we've been 40 years, so definitely time to get back to deep space, time to get to the moon. There's so much to do on the moon. I hope we don't get stuck on the moon for 50 years. I really want to get to the moon, spend the next decade, first with the lander, then humans. There's just a lot to explore, but to me it's a big technology push. It's only three days away. So the moon is definitely the right place. So we kind of buy down our technology. We invest in specifically habitats, life support systems. So we need suits. We really need to understand really how to live off planet. We've been off planet and low earth orbit, but still that's only 400 kilometers up, 250 miles, right? So we get to the moon. It really is a great proving ground for the technologies. And now we're in deep space, radiation becomes a huge issue and to keep our astronauts well and alive. And I look at all of that investment for moon exploration to the ultimate goal, the horizon goals we call it, to get people to Mars. But we just don't go to Mars tomorrow, right? We really need a decade on the moon, I think, investing in the technologies, learning, making sure the astronauts are, their health, they're safe and well, and also learning so much about in situ research utilization, ISRU, in situ resource utilization is huge when it comes to exploration for the moon and Mars. So we need a test bed. And to me, it really is a lunar test bed. And then we use those same investments to think about getting people to Mars in the 2030s. So developing sort of a platform of all the kind of research tools of all the, what's the resource utilization, can you speak to that? Yeah, so ISRU for the moon, it's, we'll go to the South Pole and it's fascinating. We have images of it. Of course, we know there's permanently shaded areas and like by Shackleton crater, and there's areas that are permanently in the sun. Well, it seems that there's a lot of water ice, water that's entrapped in ice and the lunar craters. That's the first place you go. Why? Because it's water and when you wanna try to, it could be fuel, life support systems. So you kind of, again, you go where the water is. And so when the moon is kind of for resources utilization, but to learn how to, can we make the fuels out of the resources that are on the moon? We have to think about 3D printing, right? You don't get to bring all this mass with you. You have to learn how to literally live off the land. We need a pressure shell. We need to have an atmosphere for people to live in. So all of that is kind of buying down the technology, doing the investigation, doing the science. What are the basically called lunar volatiles? What is that ice on the moon? How much of it is there? What are the resources look like? To me, that helps us, that's just the next step in getting humans to Mars. 
You know, it's cheaper and more effective to sort of develop some of these difficult challenges, like solve some of these challenges, practice, develop, test, and so on on the moon. Absolutely. That is on Mars. Absolutely. And people are gonna love to, you know, you get to the moon, you get to, you have a beautiful Earthrise. I mean, you have the most magnificent view of Earth being off planet. So it just makes sense. I think we're gonna have thousands, lots of people, hopefully tens of thousands in low Earth orbit, because low Earth orbit is a beautiful place to go and look down on the Earth, but people wanna return home. I think the lunar explorers will also wanna do round trips and, you know, be on the moon, three day trip, explore, do science, also because the lunar day is 14 days and lunar nights, also 14 days. So in that 28 day cycle, you know, half of it is in light, half of it's in dark. So people would probably wanna do, you know, couple of week trips, month long trips, not longer than that. What do you mean by people? People, explorers. I mean, yeah, astronauts are gonna be civilians in the future too. Not all astronauts are gonna be government astronauts. Actually, when I was at NASA, we changed, we actually got the law changed to recognize astronauts that are not only government employees, you know, NASA astronauts or European Space Agency astronauts or Russian Space Agency that astronauts, because of the big push we put on the private sector, that astronauts essentially are gonna be astronauts. You get over 100 kilometers up and think once you've done orbital flight, then you're an astronaut. So a lot of private citizens are gonna become astronauts. Do you think one day you might step foot on the moon? I think it'd be good to go to the moon. I'd give that a shot. Mars, I'm gonna, it's my life's work to get the next generation to Mars. That's you or even younger than you, you know, my students generation will be the Martian explorers. I'm just working to facilitate that, but that's not gonna be me. Hey, the moon's pretty good. And it's a lot tough. I mean, it's still a really tough mission. It's an extreme mission, exactly. It's great for exploration, but doable, but again, before Apollo, we didn't think getting humans to the moon was even possible. So we kind of made that possible, but we need to go back. We absolutely need to go back. We're investing in the heavy lift launch capabilities that we need to get there. We haven't had that, you know, since the Apollo days, since Saturn five. So now we have three options on the board. That's what's so fantastic. NASA has its space launch system. SpaceX is gonna have its heavy capability and Blue Origin is coming along too with heavy lifts. So that's pretty fantastic from where I sit. I'm the Apollo program professor. Today I have zero heavy lift launch capability. I can't wait, just in a few years, we'll have three different heavy lift launch capabilities. So that's pretty exciting. You know, your heart is perhaps with NASA, but you mentioned SpaceX and Blue Origin. What are your thoughts of SpaceX and the innovative efforts there from the sort of private company aspect? Oh, they're great. They're, remember that the investments in SpaceX is government funding. It's NASA funding, it's US Air Force funding, just as it should be, because they're betting on a company who is moving fast, has some new technology development. So I love it. So when I was at NASA, it really was under our public private partnerships. 
So necessarily the government needs to fund these startups. Now, SpaceX is no longer a startup, but you know, it's been at it for 10 years. It's had some accidents, learned a lot of lessons, but it's great because it's the way you move faster. And also some private industry folks, some private businesses will take a lot more risk. That's also really important for the government. What do you think about that culture of risk? I mean, sort of NASA and the government are exceptionally good at delivering sort of safe, like there's a bit more of a culture of caution and safety and sort of this kind of solid engineering. And I think SpaceX as well has the same kind of stuff. It has a little bit more of that startup feel where they take the bigger risks. Is that exciting for you to see, seeing bigger risks in this kind of space? Absolutely. And the best scenario is both of them working together because there's really important lessons learned, especially when you talk about human space flight, safety, quality assurance. These things are the utmost importance, both aviation and space, you know, when human lives are at stake. On the other hand, government agencies, NASA can be European Space Agency, you name it, they become very bureaucratic, pretty risk averse, move pretty slowly. So I think the best is when you combine the partnerships from both sides. Industry necessarily has to push the government, take some more risks. You know, they're smart risk or actually gave an award at NASA for failing smart. Failing smart, I love that. You know, so you can kind of break open the culture, say, no, look at Apollo, that was a huge risk. It was done well. So there's always a culture of safety, quality assurance, you know, engineering, you know, at its best. But on the other hand, you want to get things done and you have to also get them, you have to bring the cost down. You know, for when it comes to launch, we really have to bring the cost down and get the frequency up. And so that's what the newcomers are doing. They're really pushing that. So it's about the most exciting time that I can imagine for space flight. Again, a little bit, it really is the democratization of space flight, opening it up, not just because of the launch capability, but the science we can do on a CubeSat. What you can do now for very, those used to be, you know, student projects that we would go through, conceive, design, implement, and think about what a small satellite would be. Now they're the most, you know, these are really advanced instruments, science instruments that are flying on little teeny CubeSats that pretty much anyone can afford. So there's not a, there's every nation, you know, every place in the world can fly a CubeSat. And so that's... What's a CubeSat? Oh, CubeSat is a, this is called 1U. CubeSats we measure in terms of units. So, you know, just in terms of, I put my, both my hands together, that's one unit, two units. So little small satellites. So CubeSats are for small satellites. And we actually go by mass as well. You know, a small satellite might be a hundred kilos, 200 kilos, all well under a thousand kilos. CubeSats then are the next thing down from small sats. You know, basically, you know, kilos, tens of kilos, things like that. But kind of the building blocks, CubeSats are fantastic design, it's kind of modular design. 
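For readers unfamiliar with the jargon, here is a rough reference for the size classes Dr. Newman describes, as a small Python sketch. The figures are the standard approximate CubeSat definitions (a 1U unit is roughly a 10 centimeter cube with a mass allowance on the order of 1.3 kilograms), stated here as background rather than numbers quoted in the conversation.

```python
# Rough reference figures for CubeSat units; approximate standard values,
# not numbers quoted in the conversation.
UNIT_EDGE_CM = 10       # a 1U CubeSat is roughly a 10 cm cube
UNIT_MASS_KG = 1.33     # classic 1U mass allowance (newer revisions allow more)

def cubesat(units):
    """Approximate volume and mass budget for an N-unit CubeSat (1U, 3U, 6U, ...)."""
    return {
        "volume_cm3": units * UNIT_EDGE_CM ** 3,
        "mass_budget_kg": round(units * UNIT_MASS_KG, 2),
    }

print(cubesat(1))   # the one-unit building block she describes with her hands
print(cubesat(3))   # roughly the shoebox-sized form factor mentioned next
```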
So I can take a 1U, one unit of CubeSat, and, you know, but what if I have a little bit more money and payload, I can fly three of them and just basically put a lot more instruments on it. But essentially think about something the size of a shoe box, if you will. You know, that would be a CubeSat. And how do those help empower you in terms of doing science, in terms of doing experiments? Oh, right now there's, again, back to private industry, Planet, the company, is, you know, flying CubeSats and literally looking down on Earth and orbiting Earth, taking a picture, if you will, of Earth every day, every 24 hours, covering the entire Earth. So in terms of Earth observations, in terms of climate change, in terms of our changing Earth, it's revolutionizing, because they're affordable. We can put a whole bunch of them up. Telecoms, we're all, you know, on our cell phones and we have GPS, we have our telecoms, but those used to be very expensive satellites providing that service. Now we can fly a whole bunch of modular CubeSats. So it really is a breakthrough in terms of modularity, as well as cost reduction. So that's one exciting set of developments. Is there something else that you've been excited about, like reusable rockets, perhaps, that you've seen in the last few years? Yeah, well, the reusability, you said it, the reusability is awesome. I mean, it's just the best. Now we have to remember, the shuttle was a reusable vehicle. Yes. And the shuttle is amazing, as an aerospace engineer, you know, I mean, the shuttle is still the most gorgeous, elegant, extraordinary design of a space vehicle. It was reusable, it just wasn't affordable. But the reusability of it was really critical, because we flew it up, it did come back. So the notion of reusability, I think, absolutely. Now, what we're doing, we, you know, the global we, but with SpaceX and Blue Origin, sending the rockets up, recovering the first stages, where if they can regain 70% cost savings, that's huge. And just seeing the control, you know, being in control and dynamic as a person, just seeing that rocket come back and land. Oh yeah, that's. It never gets old, it's exciting every single time you look at it and say, that's magic. So it's so cool. To me, the landing is where I stand up, start clapping, just the control. Yeah, just the algorithm, just the control algorithms, and hitting that landing, it's, you know, it's gymnastics for rocket ships, but to see these guys stick a landing, it's just wonderful. So every time, like I say, every time I see, you know, the reusability and the rockets coming back and landing so precisely, it's really exciting. So it is actually, that's a game changer. We are in a new era of lower costs and higher frequency. And it's the world, not just NASA, many nations are really upping their frequency of launches. So you've done a lot of exciting research, design, engineering on spacesuits. What does the spacesuit of the future look like? Well, if I have anything to say about it, it'll be a very tight fitting suit. We use mechanical counter pressure to pressurize right directly on the skin. It seems that it's technically feasible. We're still at the research and development stage, we don't have a flight system, but technically it's feasible. So we do a lot of work in the materials. You know, what materials do we need to pressurize someone? What's the patterning we need? That's what our patents are in, the patterning, kind of how we apply it. This is a third of an atmosphere.
Just to sort of take a little step back, you have this incredible BioSuit where it's tight fitting, so it allows more mobility and so on. So maybe even to take a bigger step back, like what are the functions that a spacesuit should perform? Sure, so start from the beginning. A spacesuit is the world's smallest spacecraft. So really, that's the best definition I can give you. Right now we fly gas pressurized suits, but think of developing and designing an entire spacecraft. So then you take all those systems and you shrink them around a person, provide them with oxygen to breathe, scrub out their carbon dioxide, you know, make sure they have pressure. They need a pressure environment to live in. So really the spacesuit is a shrunken, you know, spacecraft in its entirety, it has all the same systems. Communication as well, probably. Yeah, communications, exactly. So you really, thermal control, a little bit of radiation, not so much radiation protection, but thermal control, humidity, you know, oxygen, debris. So all those life support systems, as well as the pressure protection. So it's an engineering marvel, you know, the spacesuits that have flown, because they really are entire spacecraft, they're the small spacecraft that we have around a person, but they're very massive, 140 kilos is the current suit, and they're not mobility suits. So since we're going back to the moon and Mars, we need a planetary suit, we need a mobility suit. So that's where we've kind of flipped the design paradigm. I study astronauts, I study humans in motion, and if we can map that motion, I want to give you full flexibility, you know, move your arms and legs. I really want you to be like an Olympic athlete, an extreme explorer. I don't want to waste any of your energy, so we take it from the human design. So I take a look at humans, we measure them, we model them, and then I say, okay, can I put a spacesuit on them that goes from the skin out? So rather than a gas pressurized suit, shrinking that spacecraft around the person, say, here's how humans perform, can I design a spacesuit literally from the skin out? And that's what we've come up with, mechanical counter pressure, some patterning, and that way it could be an order of magnitude less in terms of the mass, and it should provide maximum mobility for moon or Mars. What's mechanical counter pressure? Like how the heck can you even begin to create something that's tight fitting and still protects you from the elements and so on, and the whole, you know, the pressure thing? That's the challenge, it's a big design challenge we've been working on. So you can either put someone in a balloon, that's one way to do it, that's conventional, that's the only thing we've ever flown. What does that mean? That means the balloon that you fill with gas? That's a gas pressurized suit. If you put someone in a balloon, it's only a third of an atmosphere to keep someone alive. So that's what the current system is. So depending on what units you think in, 30 kilopascals, 4.3 pounds per square inch. So much less than the pressure that's on Earth. You can still keep a human alive at 0.3 atmospheres, and they're alive and happy. Alive and happy. And you mix the gases. You and I, we're having this chat, and we're at sea level in Boston, one atmosphere. But a suit. Oxygen and nitrogen. Oxygen and nitrogen. And if we put someone in a suit at a third of an atmosphere, so for mechanical counter pressure now, one way is to do it with a balloon. And that's what we currently have.
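For concreteness, here is the arithmetic behind the third-of-an-atmosphere figure as a short Python sketch. The 4.3 psi value is the one quoted just above; the conversion constants are standard, and the comparison to a full atmosphere is the only thing being computed.

```python
# Quick check of the suit-pressure numbers quoted in the conversation.
ATM_KPA = 101.325          # one standard atmosphere, in kilopascals
KPA_PER_PSI = 6.894757     # kilopascals per pound per square inch

suit_psi = 4.3                          # quoted operating pressure of current suits
suit_kpa = suit_psi * KPA_PER_PSI       # ~29.6 kPa, the "30 kilopascals" above
fraction_of_atm = suit_kpa / ATM_KPA    # ~0.29, i.e. roughly a third of an atmosphere

print(round(suit_kpa, 1), round(fraction_of_atm, 2))
# A mechanical counter pressure suit has to apply that same ~30 kPa directly
# and uniformly to the skin, rather than holding it in as gas around the body.
```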
Or you can apply the pressure directly to the skin. I only have to give you a third of an atmosphere. Right now, you and I are very happy in one atmosphere. So if I put that pressure, a third of an atmosphere, on you, I just have to do it consistently, across all of your body and your limbs. And it'll be a gas pressurized helmet. It doesn't make sense to shrink wrap the head. See the Blue Man Group, that's a great act. But we don't need to, there's no benefit to, like, shrink wrapping the head. You put on, you know, a gas pressurized helmet, because the helmet then, the future of suits, you asked me about, the helmet just becomes your information portal. So it will have augmented reality. It'll have all the information you need. It should have, you know, the maps that I need. I'm on the moon. Okay, well, hey, smart helmet. Then show me the map, show me the topography. Hopefully it has the lab embedded too. If it has really great cameras, maybe I can see what's in that regolith. That's just lunar dust and dirt. What's that made out of? We talked about the water. So the helmet really becomes this information portal. That's how I see kind of the IT architecture of the helmet, really allowing me to, you know, use all of my modalities as an explorer that I'd like to. So cameras, voiceover, images, and if it were really good, it would kind of have lab capabilities as well. Okay, so the pressure comes from the body, comes from the mechanical pressure. It's fascinating. Now, what aspect, when I look at the BioSuit, just the suits you're working on, sort of from a fashion perspective, they look awesome. Is that a small part of it too? Oh, absolutely, because of the teams that we work with. Of course, I'm an engineer, there's engineering students, there's design students, there's architects. So it really is very much a multidisciplinary team. So sure, colors, aesthetics, materials, all those things we pay attention to. So it's not just an engineering solution. It really is a much more holistic, it's a suit. It's a suit, you're, you know, you're dressed in a suit now. It's form fitting. So we really have to pay attention to all those things. And so that's the design team that we work with. And my partner, Gui Trotti, you know, we're partners in this, in terms of, he comes from an architecture, industrial design background. So bringing those skills to bear as well. We team up with industry folks who are in, you know, athletic performance, and designers. So it really is a team that brings all those skills together. So what role does this spacesuit play in our long term staying on Mars, sort of exploring, doing all the work that astronauts do, but also perhaps civilians one day, almost like taking steps towards colonization of Mars? What role does a spacesuit play there? So you always need a life support system, a pressurized habitat. And I like to say, we're not going to Mars to sit around. So you need a suit. You know, even if you land and have the lander, you're not going there to stay inside. That's for darn sure. We're going there to search for the evidence of life. That's why we're going to Mars. So you need a lot of mobility. So for me, the suit is the best way to give the human mobility. We're always still going to need rovers. We're going to need robots. So for me, exploration is always a suite of explorers. Some of the suite of explorers are humans, but many are going to be robots, smart systems, things like that.
But I look at it as kind of all those capabilities together make the best exploration team. So let me ask, I love artificial intelligence and I've also saw that you've enjoyed the movie Space Odyssey, 2001 Space Odyssey. Let me ask the question about HAL 9000. That makes a few decisions there that prioritizes the mission over the astronauts. Do you think, from a high philosophical question, do you think HAL did the right thing of prioritizing the mission? I think our artificial intelligence will be smarter in the future. For a Mars mission, it's a great question that the reality is for a Mars mission, we need fully autonomous systems. We will get humans, but they have to be fully autonomous. And that's a really important, that's the most important concept because there's not going to be a mission control on Earth. 20 minute time lag, there's just no way you're going to control it. So fully autonomous, so people have to be fully autonomous as well, but all of our systems as well. And so that's the big design challenge. So that's why we test them out on the moon as well. When we have a, okay, a few second, three second time lag, you can test them out. We have to really get autonomous exploration down. You asked me earlier about Magellan. Magellan and his crew, they left, right? They were autonomous. You know, they were autonomous. They left and they were on their own to figure out that mission. Then when they hit land, they have resources, that's in situ resource utilization and everything else they brought with them. So we have to, I think, have that mindset for exploration. Again, back to the moon, it's more the testing ground, the proving ground with technologies. But when we get to Mars, it's so far away that we need fully autonomous systems. So I think that's where, again, AI and autonomy come in, a really robust autonomy, things that we don't have today yet. So they're on the drawing boards, but we really need to test them out because that's what we're up against. So fully autonomous meaning like self sufficient. There's still a role for the human in that picture. Do you think there'll be a time when AI systems, beyond doing fully autonomous flight control will also help or even take mission decisions like Hal did? That's interesting. It depends. I mean, they're gonna be designed by humans. I think as you mentioned, humans are always in the loop. I mean, we might be on Earth, we might be in orbit on Mars, maybe the systems of landers down on the surface of Mars. But I think we're gonna get, we are right now just on Earth based systems, AI systems that are incredibly capable and training them with all the data that we have now, petabytes of data from Earth. What I care about for the autonomy and AI right now, how we're applying it in research is to look at Earth and look at climate systems. I mean, that's the, it's not for Mars to me today. Right now AI is to eyes on Earth, all of our space data, compiling that using supercomputers because we have so much information and knowledge and we need to get that into people's hands. First, there's the educational issue with climate and our changing climate. Then we need to change human behavior. That's the biggie. So this next decade, it's urgent we take care of our own spaceship, which is spaceship Earth. So that's to me where my focus has been for AI systems, using whatever's out there, kind of imagining also what the future situation is, what's the satellite imagery of Earth of the future. 
If you can hold that in your hands, that's gonna be really powerful. Will that help people accelerate positive change for Earth and for us to live in balance with Earth? I hope so. And kind of start with the ocean systems. So oceans to land to air, and kind of using all the space data. So it's a huge role for artificial intelligence to help us analyze, I call it curating the data, using the data. It has a lot to do with visualizations as well. Do you think, in a weird, dark question, do you think the human species can survive if we don't become interplanetary in the next century or a couple of centuries? Absolutely we can survive. I don't think Mars is option B, actually. So I think it's all about saving spaceship Earth and humanity. Simply put, Earth doesn't need us, but we really need Earth. All of humanity needs to live in balance with Earth, because Earth has been here a long time before we ever showed up, and it'll be here a long time after. It's just a matter of how do we wanna live with all living beings, much more in balance, because we need to take care of the Earth, and right now we're not. So that's the urgency. And I think it is the next decade, to try to live much more sustainably, live more in balance with Earth. I think the human species has a great, long, optimistic future, but we have to act. It's urgent. We have to change behavior. We have to realize that we're all in this together. It's just one blue bubble. It's all of humanity. So I think when people realize that we're all astronauts, that's the great news, everyone's gonna be an astronaut. We're all astronauts of spaceship Earth. And again, this is our mission. This is our mission to take care of the planet. And yet as we explore out from our spaceship Earth here out into space, what do you think the next 50, 100, 200 years look like for space exploration? I'm optimistic. So I think that we'll have lots of people, thousands of people, tens of thousands of people, who knows, maybe millions in low Earth orbit. That's just a place that we're gonna have people, and actually some industry, manufacturing, things like that. That dream I hope we realize, getting people to the moon. So I can envision a lot of people on the moon. Again, it's a great place to go. Living or visiting? Probably visiting and living. If you want to, most people are gonna wanna come back to Earth, I think. But there'll be some people, and it's not such a long trip, and it's a good view, it's a beautiful view. So I think that we will have many people on the moon as well. I think there'll be some people, you told me, wow, hundreds of years out. So we'll have people, we'll be interplanetary for sure as a species. So I think we'll be on the moon. I think we'll be on Mars. Venus, no, it's already a runaway greenhouse. So not a great place for science. Jupiter, all within the solar system, great places for all of our scientific probes. I don't see so much in terms of human physical presence. We'll be exploring them. So we live in our minds there, because we're exploring them and going on those journeys. But it's really our choice, in terms of our decisions, of how in balance we're gonna be living here on the Earth. When do you think the first woman, first person, will step on Mars? Ah, step on Mars? Well, I'm gonna do everything I can to make sure it happens in the 2030s. The 2030s. Say mid 2030s. 2025 to 2035, we'll be on the moon. And hopefully with more people than us. But first with a few astronauts, it'll be global, international folks.
But we really need those 10 years, I think, on the moon. And then so by later in the decade, in the 2030s, we'll have all the technology and know how, and we need to get that human mission to Mars done. We live in exciting times. And, Dava, thank you so much for leading the way and thank you for talking today. I really appreciate it. Thank you, my pleasure. Thanks for listening to this conversation and thank you to our presenting sponsor, Cash App. Remember to use code LexPodcast when you download Cash App from the App Store or Google Play Store. You'll get 10 bucks, $10, and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future. Thank you and hope to see you next time.
Dava Newman: Space Exploration, Space Suits, and Life on Mars | Lex Fridman Podcast #51
The following is a conversation with Gilbert Strang. He's a professor of mathematics at MIT and perhaps one of the most famous and impactful teachers of math in the world. His MIT OpenCourseWare lectures on linear algebra have been viewed millions of times. As an undergraduate student, I was one of those millions of students. There's something inspiring about the way he teaches. There's at once calm, simple, and yet full of passion for the elegance inherent to mathematics. I remember doing the exercise in his book, Introduction to Linear Algebra, and slowly realizing that the world of matrices, of vector spaces, of determinants and eigenvalues, of geometric transformations and matrix decompositions reveal a set of powerful tools in the toolbox of artificial intelligence. From signals to images, from numerical optimization to robotics, computer vision, deep learning, computer graphics, and everywhere outside AI, including, of course, a quantum mechanical study of our universe. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. This podcast is supported by ZipRecruiter. Hiring great people is hard, and to me is the most important element of a successful mission driven team. I've been fortunate to be a part of and to lead several great engineering teams. The hiring I've done in the past was mostly through tools that we built ourselves, but reinventing the wheel was painful. ZipRecruiter is a tool that's already available for you. It seeks to make hiring simple, fast, and smart. For example, Codable cofounder Gretchen Huebner used ZipRecruiter to find a new game artist to join her education tech company. By using ZipRecruiter's screening questions to filter candidates, Gretchen found it easier to focus on the best candidates and finally hiring the perfect person for the role in less than two weeks from start to finish. ZipRecruiter, the smartest way to hire. See why ZipRecruiter is effective for businesses of all sizes by signing up, as I did, for free at ziprecruiter.com slash lexpod. That's ziprecruiter.com slash lexpod. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin. Most Bitcoin exchanges take days for a bank transfer to become investable. Through Cash App, it takes seconds. Cash App also has a new investing feature. You can buy fractions of a stock, which to me is a really interesting concept. So you can buy $1 worth no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square, and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations that many of you may know and have benefited from, called First, best known for their first robotics and Lego competitions. They educate and inspire hundreds of thousands of students in over 110 countries, and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play, and use code LexPodcast, you get $10, and Cash App will also donate $10 to First, which again, is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Gilbert Strang. 
How does it feel to be one of the modern day rock stars of mathematics? I don't feel like a rock star. That's kind of crazy for an old math person. But it's true that the videos in linear algebra that I made way back in 2000, I think, have been watched a lot. And well, partly the importance of linear algebra, which I'm sure you'll ask me, and give me a chance to say that linear algebra as a subject has just surged in importance. But also, it was a class that I taught a bunch of times, so I kind of got it organized and enjoyed doing it. The videos were just the class. So they're on OpenCourseWare and on YouTube and translated, and it's fun. But there's something about that chalkboard and the simplicity of the way you explain the basic concepts in the beginning. To be honest, when I went to undergrad. You didn't do linear algebra, probably. Of course I didn't do linear algebra. You did. Yeah, yeah, yeah, of course. But before going through the course at my university, there was going through OpenCourseWare. You were my instructor for linear algebra. Right, yeah. And that, I mean, we're using your book. And I mean, the fact that there is thousands, hundreds of thousands, millions of people that watch that video, I think that's really powerful. So how do you think the idea of putting lectures online, what really MIT OpenCourseWare has innovated? That was a wonderful idea. I think the story that I've heard is the committee was appointed by the president, President Vest, at that time, a wonderful guy. And the idea of the committee was to figure out how MIT could be like other universities, market the work we were doing. And then they didn't see a way. And after a weekend, and they had an inspiration, came back to President Vest and said, what if we just gave it away? And he decided that was okay, good idea. So. You know, that's a crazy idea. If we think of a university as a thing that creates a product, isn't knowledge, the kind of educational knowledge, isn't the product and giving that away, are you surprised that it went through? The result that he did it, well, knowing a little bit President Vest, it was like him, I think, and it was really the right idea. MIT is a kind of, it's known for being high level, technical things, and this is the best way we can say, tell, we can show what MIT really is like, because in my case, those 1806 videos are just teaching the class. They were there in 26-100. They're kind of fun to look at. People write to me and say, oh, you've got a sense of humor, but I don't know where that comes through. Somehow I'm friendly with the class, I like students. And then linear algebra, the subject, we gotta give the subject most of the credit. It really has come forward in importance in these years. So let's talk about linear algebra a little bit, because it is such a, it's both a powerful and a beautiful subfield of mathematics. So what's your favorite specific topic in linear algebra, or even math in general to give a lecture on, to convey, to tell a story, to teach students? Okay, well, on the teaching side, so it's not deep mathematics at all, but I'm kind of proud of the idea of the four subspaces, the four fundamental subspaces, which are of course known before, long before my name for them, but. Can you go through them? Can you go through the four subspaces? Sure I can, yeah. So the first one to understand is, so the matrix is, maybe I should say the matrix is. What is a matrix? What's a matrix? Well, so we have like a rectangle of numbers. 
So it's got n columns, got a bunch of columns, and also got an m rows, let's say, and the relation between, so of course the columns and the rows, it's the same numbers. So there's gotta be connections there, but they're not simple. The columns might be longer than the rows, and they're all different, the numbers are mixed up. First space to think about is take the columns, so those are vectors, those are points in n dimensions. What's a vector? So a physicist would imagine a vector or might imagine a vector as a arrow in space or the point it ends at in space. For me, it's a column of numbers. You often think of, this is very interesting in terms of linear algebra, in terms of a vector, you think a little bit more abstract than how it's very commonly used, perhaps. You think this arbitrary multidimensional space. Right away, I'm in high dimensions. Dreamland. Yeah, that's right. In the lecture, I try to, so if you think of two vectors in 10 dimensions, I'll do this in class, and I'll readily admit that I have no good image in my mind of a vector of an arrow in 10 dimensional space, but whatever. You can add one bunch of 10 numbers to another bunch of 10 numbers, so you can add a vector to a vector, and you can multiply a vector by three, and that's, if you know how to do those, you've got linear algebra. 10 dimensions, there's this beautiful thing about math, if we look at string theory and all these theories, which are really fundamentally derived through math, but are very difficult to visualize. How do you think about the things, like a 10 dimensional vector, that we can't really visualize? And yet, math reveals some beauty underlying our world in that weird thing we can't visualize. How do you think about that difference? Well, probably, I'm not a very geometric person, so I'm probably thinking in three dimensions, and the beauty of linear algebra is that it goes on to 10 dimensions with no problem. I mean, that if you're just seeing what happens if you add two vectors in 3D, yeah, then you can add them in 10D. You're just adding the 10 components. So, I can't say that I have a picture, but yet I try to push the class to think of a flat surface in 10 dimensions. So a plane in 10 dimensions, and so that's one of the spaces. Take all the columns of the matrix, take all their combinations, so much of this column, so much of this one, then if you put all those together, you get some kind of a flat surface that I call a vector space, space of vectors. And my imagination is just seeing like a piece of paper in 3D, but anyway, so that's one of the spaces, that's space number one, the column space of the matrix. And then there's the row space, which is, as I said, different, but came from the same numbers. So we got the column space, all combinations of the columns, and then we've got the row space, all combinations of the rows. So those words are easy for me to say, and I can't really draw them on a blackboard, but I try with my thick chalk. Everybody likes that railroad chalk, and me too. I wouldn't use anything else now. And then the other two spaces are perpendicular to those. So like if you have a plane in 3D, just a plane is just a flat surface in 3D, then perpendicular to that plane would be a line. So that would be the null space. So we've got two, we've got a column space, a row space, and there are two perpendicular spaces. So those four fit together in a beautiful picture of a matrix, yeah, yeah. It's sort of a fundamental, it's not a difficult idea. 
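As an aside for readers who want to make the four subspaces concrete, here is a minimal NumPy sketch. It is not from the conversation itself; the matrix entries are made up, and using the SVD to read off orthonormal bases is my own illustrative shortcut.

```python
import numpy as np

# A small 3 x 4 matrix: m = 3 rows, n = 4 columns (made-up numbers, purely for illustration).
A = np.array([[1., 2., 0., 1.],
              [2., 4., 1., 3.],
              [3., 6., 1., 4.]])
m, n = A.shape
r = np.linalg.matrix_rank(A)      # rank = dimension of both the column space and the row space

# Orthonormal bases for the four fundamental subspaces, read off from the SVD.
U, s, Vt = np.linalg.svd(A)
column_space = U[:, :r]           # lives in R^m
left_null    = U[:, r:]           # null space of A transpose, also in R^m
row_space    = Vt[:r, :].T        # lives in R^n
null_space   = Vt[r:, :].T        # null space of A, also in R^n

# The dimensions pair up: r + (n - r) = n in R^n, and r + (m - r) = m in R^m.
print(r, null_space.shape[1], left_null.shape[1])        # here: 2, 2, 1

# Perpendicularity: A sends the null space to zero, A transpose sends the left null space to zero.
print(np.allclose(A @ null_space, 0), np.allclose(A.T @ left_null, 0))   # True True
```

This is the blackboard picture in numbers: column space and left null space perpendicular to each other in m dimensions, row space and null space perpendicular in n dimensions.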
It comes pretty early in 1806, and it's basic. Planes in these multidimensional spaces, how difficult of an idea is that to come to, do you think? If you look back in time, I think mathematically it makes sense, but I don't know if it's intuitive for us to imagine, just as we were talking about. It feels like calculus is easier to intuit. Well, I have to admit, calculus came earlier, earlier than linear algebra. So Newton and Leibniz were the great men to understand the key ideas of calculus. But linear algebra to me is like, okay, it's the starting point, because it's all about flat things. Calculus has got, all the complications of calculus come from the curves, the bending, the curved surfaces. Linear algebra, the surfaces are all flat. Nothing bends in linear algebra. So it should have come first, but it didn't. And calculus also comes first in high school classes, in college class, it'll be freshman math, it'll be calculus, and then I say, enough of it. Like, okay, get to the good stuff. And that's... Do you think linear algebra should come first? Well, it really, I'm okay with it not coming first, but it should, yeah, it should. It's simpler. Because everything is flat. Yeah, everything's flat. Well, of course, for that reason, calculus sort of sticks to one dimension, or eventually you do multivariate, but that basically means two dimensions. Linear algebra, you take off into 10 dimensions, no problem. It just feels scary and dangerous to go beyond two dimensions, that's all. If everything's flat, you can't go wrong. So what concept or theorem in linear algebra or in math you find most beautiful, that gives you pause that leaves you in awe? Well, I'll stick with linear algebra here. I hope the viewer knows that really, mathematics is amazing, amazing subject and deep, deep connections between ideas that didn't look connected, they turned out they were. But if we stick with linear algebra... So we have a matrix. That's like the basic thing, a rectangle of numbers. And it might be a rectangle of data. You're probably gonna ask me later about data science, where often data comes in a matrix. You have maybe every column corresponds to a drug and every row corresponds to a patient. And if the patient reacted favorably to the drug, then you put up some positive number in there. Anyway, rectangle of numbers, a matrix is basic. So the big problem is to understand all those numbers. You got a big, big set of numbers. And what are the patterns, what's going on? And so one of the ways to break down that matrix into simple pieces is uses something called singular values. And that's come on as fundamental in the last, certainly in my lifetime. Eigenvalues, if you have viewers who've done engineering, math, or basic linear algebra, eigenvalues were in there. But those are restricted to square matrices. And data comes in rectangular matrices. So you gotta take that next step. I'm always pushing math faculty, get on, do it, do it. Singular values. So those are a way to break, to find the important pieces of the matrix, which add up to the whole matrix. So you're breaking a matrix into simple pieces. And the first piece is the most important part of the data. The second piece is the second most important part. And then often, so a data set is a matrix. And often, so a data scientist will like, if a data scientist can find those first and second pieces, stop there, the rest of the data is probably round off, experimental error maybe. So you're looking for the important part. 
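To see the "keep the first pieces, the rest is probably roundoff" idea numerically, here is a small hedged sketch. The patients-by-drugs framing and the rank-2-plus-noise data are invented for illustration, not real data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data matrix: rows for patients, columns for drugs (invented: rank 2 plus small noise).
patients, drugs = 100, 20
structure = rng.normal(size=(patients, 2)) @ rng.normal(size=(2, drugs))
data = structure + 0.01 * rng.normal(size=(patients, drugs))

U, s, Vt = np.linalg.svd(data, full_matrices=False)
print(s[:4])      # the first two singular values dominate; the rest sit down at the noise level

# Keep only the two most important pieces: sigma_1 u_1 v_1^T + sigma_2 u_2 v_2^T.
rank2 = (U[:, :2] * s[:2]) @ Vt[:2, :]
print(np.linalg.norm(data - rank2) / np.linalg.norm(data))   # tiny relative error
```

The two kept pieces carry essentially all of the matrix; everything beyond them here is experimental noise by construction.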
So what do you find beautiful about singular values? Well, yeah, I didn't give the theorem. So here's the idea of singular values. Every matrix, every matrix, rectangular, square, whatever, can be written as a product of three very simple special matrices. So that's the theorem. Every matrix can be written as a rotation times a stretch, which is just a diagonal matrix, otherwise all zeros except on the one diagonal. And then the third factor is another rotation. So rotation, stretch, rotation is the breakup of any matrix. The structure of that, the ability that you can do that, what do you find appealing? What do you find beautiful about it? Well, geometrically, as I freely admit, the action of a matrix is not so easy to visualize, but everybody can visualize a rotation. Take two dimensional space and just turn it around the center. Take three dimensional space. So a pilot has to know about, well, what are the three, the yaw is one of them. I've forgotten all the three turns that a pilot makes. Up to 10 dimensions, you've got 10 ways to turn, but you can visualize a rotation. Take the space and turn it. And you can visualize a stretch. So to break a matrix with all those numbers in it into something you can visualize, rotate, stretch, rotate is pretty neat. It's pretty neat. That's pretty powerful. On YouTube, just consuming a bunch of videos and just watching what people connect with and what they really enjoy and are inspired by, math seems to come up again and again. I'm trying to understand why that is. Perhaps you can help give me clues. So it's not just the kinds of lectures that you give, but it's also just other folks like with Numberphile, there's a channel where they just chat about things that are extremely complicated, actually. People nevertheless connect with them. What do you think that is? It's wonderful, isn't it? I mean, I wasn't really aware of it. We're conditioned to think math is hard, math is abstract, math is just for a few people, but it isn't that way. A lot of people quite like math and they liked it. I get messages from people saying, now I'm retired, I'm gonna learn some more math. I get a lot of those. It's really encouraging. And I think what people like is that there's some order, a lot of order and things are not obvious, but they're true. So it's really cheering to think that so many people really wanna learn more about math. Yeah. And in terms of truth, again, sorry to slide into philosophy at times, but math does reveal pretty strongly what things are true. I mean, that's the whole point of proving things. It is, yeah. And yet, sort of our real world is messy and complicated. It is. What do you think about the nature of truth that math reveals? Oh, wow. Because it is a source of comfort like you've mentioned. Yeah, that's right. Well, I have to say, I'm not much of a philosopher. I just like numbers. As a kid, this was before you had to go in, when you had a filling in your teeth, you had to kind of just take it. So what I did was think about math, like take powers of two, two, four, eight, 16, up until the time the tooth stopped hurting and the dentist said you're through. Or counting. Yeah. So that was a source of just, source of peace almost. Yeah. What is it about math do you think that brings that? Yeah. What is that? Well, you know where you are. Yeah, it's symmetry, it's certainty. The fact that, you know, if you multiply two by itself 10 times, you get 1,024 period. Everybody's gonna get that. Do you see math as a powerful tool or as an art form? 
So it's both. That's really one of the neat things. You can be an artist and like math, you can be an engineer and use math. Which are you? Which am I? What did you connect with most? Yeah, I'm somewhere between. I'm certainly not a artist type, philosopher type person. Might sound that way this morning, but I'm not. Yeah, I really enjoy teaching engineers because they go for an answer. And yeah, so probably within the MIT math department, most people enjoy teaching people, teaching students who get the abstract idea. I'm okay with, I'm good with engineers who are looking for a way to find answers. Yeah. Actually, that's an interesting question. Do you think for teaching and in general, thinking about new concepts, do you think it's better to plug in the numbers or to think more abstractly? So looking at theorems and proving the theorems or actually building up a basic intuition of the theorem or the method, the approach, and then just plugging in numbers and seeing it work. Yeah, well, certainly many of us like to see examples. First, we understand, it might be a pretty abstract sounding example, like a three dimensional rotation. How are you gonna understand a rotation in 3D? Or in 10D? And then some of us like to keep going with it to the point where you got numbers, where you got 10 angles, 10 axes, 10 angles. But the best, the great mathematicians probably, I don't know if they do that, because for them, an example would be a highly abstract thing to the rest of it. Right, but nevertheless, working in the space of examples. Yeah, examples. It seems to. Examples of structure. Our brains seem to connect with that. Yeah, yeah. So I'm not sure if you're familiar with him, but Andrew Yang is a presidential candidate currently running with math in all capital letters and his hats as a slogan. I see. Stands for Make America Think Hard. Okay, I'll vote for him. So, and his name rhymes with yours, Yang, Strang. But he also loves math and he comes from that world of, but he also, looking at it, makes me realize that math, science, and engineering are not really part of our politics, political discourse, about political government in general. Why do you think that is? Well. What are your thoughts on that in general? Well, certainly somewhere in the system, we need people who are comfortable with numbers, comfortable with quantities. You know, if you say this leads to that, they see it and it's undeniable. But isn't that strange to you that we have almost no, I mean, I'm pretty sure we have no elected officials in Congress or obviously the president that either has an engineering degree or a math degree. Yeah, well, that's too bad. A few could, a few who could make the connection. Yeah, it would have to be people who understand engineering or science and at the same time can make speeches and lead, yeah. Yeah, inspire people. Yeah, inspire, yeah. You were, speaking of inspiration, the president of the Society for Industrial and Applied Mathematics. Oh, yes. It's a major organization in math, applied math. What do you see as a role of that society, you know, in our public discourse? Right. In public. Yeah, so, well, it was fun to be president at the time. A couple years, a few years. Two years, around 2000. I just hope that's president of a pretty small society. But nevertheless, it was a time when math was getting some more attention in Washington. But yeah, I got to give a little 10 minutes to a committee of the House of Representatives talking about who I met. 
And then, actually, it was fun because one of the members of the House had been a student, had been in my class. What do you think of that? Yeah, as you say, pretty rare, most members of the House have had a different training, different background. But there was one from New Hampshire who was my friend, really, by being in the class. Yeah, so those years were good. Then, of course, other things take over in importance in Washington, and math just, at this point, is not so visible. But for a little moment, it was. There's some excitement, some concern about artificial intelligence in Washington now. Yes, sure. About the future. Yeah. And I think at the core of that is math. Well, it is, yeah. Maybe it's hidden. Maybe it's wearing a different hat. Well, artificial intelligence, and particularly, can I use the words deep learning? Deep learning is a particular approach to understanding data. Again, you've got a big, whole lot of data where data is just swamping the computers of the world. And to understand it, out of all those numbers, to find what's important in climate, in everything. And artificial intelligence is two words for one approach to data. Deep learning is a specific approach there, which uses a lot of linear algebra. So I got into it. I thought, okay, I've gotta learn about this. So maybe from your perspective, let me ask the most basic question. How do you think of a neural network? What is a neural network? Yeah, okay. So can I start with the idea about deep learning? What does that mean? What is deep learning? What is deep learning, yeah. So we're trying to learn, from all this data, we're trying to learn what's important. What's it telling us? So you've got data, you've got some inputs for which you know the right outputs. The question is, can you see the pattern there? Can you figure out a way for a new input, which we haven't seen, to understand what the output will be from that new input? So we've got a million inputs with their outputs. So we're trying to create some pattern, some rule that'll take those inputs, those million training inputs, which we know about, to the correct million outputs. And this idea of a neural net is part of the structure of our new way to create a rule. We're looking for a rule that will take these training inputs to the known outputs. And then we're gonna use that rule on new inputs that we don't know the output and see what comes. Linear algebra is a big part of finding that rule. That's right, linear algebra is a big part. Not all the part. People were leaning on matrices, that's good, still do. Linear is something special. It's all about straight lines and flat planes. And data isn't quite like that. It's more complicated. So you gotta introduce some complication. So you have to have some function that's not a straight line. And it turned out, nonlinear, nonlinear, not linear. And it turned out that it was enough to use the function that's one straight line and then a different one. Halfway, so piecewise linear. One piece has one slope, one piece, the other piece has the second slope. And so that, getting that nonlinear, simple nonlinearity in blew the problem open. That little piece makes it sufficiently complicated to make things interesting. Because you're gonna use that piece over and over a million times. So it has a fold in the graph, the graph, two pieces. But when you fold something a million times, you've got a pretty complicated function that's pretty realistic. 
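Here is a minimal sketch of that "one slope, then another slope" nonlinearity and its folds, using a tiny made-up one-hidden-layer network; the sizes and random weights are arbitrary, chosen only to show the piecewise linear picture, not to model anything.

```python
import numpy as np

def relu(x):
    # One straight line (slope 0), then a different one (slope 1): the simple fold.
    return np.maximum(0.0, x)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=(8,))   # 8 hidden units: up to 8 folds
w2 = rng.normal(size=(8,))

def net(x):
    hidden = relu(W1 @ np.atleast_2d(x) + b1[:, None])     # shape (8, number of points)
    return w2 @ hidden                                      # one output per input point

xs = np.linspace(-3, 3, 601)
ys = net(xs)

# Between folds the slope is constant, so the slope changes only at the fold points.
slopes = np.diff(ys) / np.diff(xs)
print(np.sum(np.abs(np.diff(slopes)) > 1e-6))   # number of folds, at most 8 here
```

Stacking layers composes these folded maps, which is the sense in which using the little fold over and over, a million times, produces a very complicated and realistic function.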
So that's the thing about neural networks is they have a lot of these. A lot of these, that's right. So why do you think neural networks, by using sort of formulating an objective function, very not a plain function of the folds, lots of folds of the inputs, the outputs, why do you think they work to be able to find a rule that we don't know is optimal, but it just seems to be pretty good in a lot of cases? What's your intuition? Is it surprising to you as it is to many people? Do you have an intuition of why this works at all? Well, I'm beginning to have a better intuition. This idea of things that are piecewise linear, flat pieces but with folds between them. Like think of a roof of a complicated, infinitely complicated house or something. That curve, it almost curved, but every piece is flat. That's been used by engineers, that idea has been used by engineers, is used by engineers, big time. Something called the finite element method. If you want to design a bridge, design a building, design an airplane, you're using this idea of piecewise flat as a good, simple, computable approximation. But you have a sense that there's a lot of expressive power in this kind of piecewise linear. Yeah, you used the right word. If you measure the expressivity, how complicated a thing can this piecewise flat guys express? The answer is very complicated, yeah. What do you think are the limits of such piecewise linear or just of neural networks? The expressivity of neural networks. Well, you would have said a while ago that they're just computational limits. It's a problem beyond a certain size. A supercomputer isn't gonna do it. But those keep getting more powerful. So that limit has been moved to allow more and more complicated surfaces. So in terms of just mapping from inputs to outputs, looking at data, what do you think of, in the context of neural networks in general, data is just tensor, vectors, matrices, tensors. Right. How do you think about learning from data? How much of our world can be expressed in this way? How useful is this process? I guess that's another way to ask you, what are the limits of this approach? Well, that's a good question, yeah. So I guess the whole idea of deep learning is that there's something there to learn. If the data is totally random, just produced by random number generators, then we're not gonna find a useful rule because there isn't one. So the extreme of having a rule is like knowing Newton's law. If you hit a ball, it moves. So that's where you had laws of physics. Newton and Einstein and other great, great people have found those laws and laws of the distribution of oil in an underground thing. I mean, so engineers, petroleum engineers understand how oil will sit in an underground basin. So there were rules. Now, the new idea of artificial intelligence is learn the rules instead of figuring out the rules with help from Newton or Einstein. The computer is looking for the rules. So that's another step. But if there are no rules at all that the computer could find, if it's totally random data, well, you've got nothing. You've got no science to discover. It's an automated search for the underlying rules. Yeah, search for the rules. Yeah, exactly. And there will be a lot of random parts. A lot of, I mean, I'm not knocking random because that's there. There's a lot of randomness built in, but there's gotta be some basic. It's almost always signal, right? In most things. There's gotta be some signal, yeah. If it's all noise, then you're not gonna get anywhere. 
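A hedged sketch of that point, that learning only works when there is a rule to find: the sine rule, the polynomial fit, and the noise level below are arbitrary choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def heldout_error(x_tr, y_tr, x_te, y_te, degree=5):
    # Fit a simple rule (a low-degree polynomial) on training pairs, then test it on new inputs.
    coeffs = np.polyfit(x_tr, y_tr, degree)
    return np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)

x = rng.uniform(-1, 1, size=2000)
x_tr, x_te = x[:1000], x[1000:]

# Case 1: a rule underneath (a pattern plus a little noise) -- the learned rule carries over to new inputs.
y_rule = np.sin(2 * x) + 0.1 * rng.normal(size=x.shape)
print(heldout_error(x_tr, y_rule[:1000], x_te, y_rule[1000:]))    # close to the 0.01 noise variance

# Case 2: totally random outputs -- nothing to learn, held-out error stays near the output variance.
y_rand = rng.normal(size=x.shape)
print(heldout_error(x_tr, y_rand[:1000], x_te, y_rand[1000:]))    # close to 1
```

With a signal present, the held-out error drops to the noise floor; with pure noise, no amount of fitting helps on new inputs.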
Well, this world around us does seem to be, does seem to always have a signal of some kind. Yeah, yeah, that's right. To be discovered. Right, that's it. So what excites you more? We just talked about a little bit of application. What excites you more, theory or the application of mathematics? Well, for myself, I'm probably a theory person. I'm not, I'm speaking here pretty freely about applications, but I'm not the person who really, I'm not a physicist or a chemist or a neuroscientist. So for myself, I like the structure and the flat subspaces and the relation of matrices, columns to rows. That's my part in the spectrum. So really, science is a big spectrum of people from asking practical questions and answering them using some math, then some math guys like myself who are in the middle of it and then the geniuses of math and physics and chemistry who are finding fundamental rules and then doing the really understanding nature. That's incredible. At its lowest, simplest level, maybe just a quick in broad strokes from your perspective, where does linear algebra sit as a subfield of mathematics? What are the various subfields that you think about in relation to linear algebra? So the big fields of math are algebra as a whole and problems like calculus and differential equations. So that's a second, quite different field. Then maybe geometry deserves to be thought of as a different field to understand the geometry of high dimensional surfaces. So I think, am I allowed to say this here? I think this is where personal view comes in. I think math, we're thinking about undergraduate math, what millions of students study. I think we overdo the calculus at the cost of the algebra, at the cost of linear. So you have this talk titled Calculus Versus Linear Algebra. That's right, that's right. And you say that linear algebra wins. So can you dig into that a little bit? Why does linear algebra win? Right, well, okay, the viewer is gonna think this guy is biased. Not true, I'm just telling the truth as it is. Yeah, so I feel linear algebra is just a nice part of math that people can get the idea of. They can understand something that's a little bit abstract because once you get to 10 or 100 dimensions and very, very, very useful, that's what's happened in my lifetime is the importance of data, which does come in matrix form. So it's really set up for algebra. It's not set up for differential equation. And let me fairly add probability, the ideas of probability and statistics have become very, very important, have also jumped forward. So, and that's different from linear algebra, quite different. So now we really have three major areas to me, calculus, linear algebra, matrices, and probability statistics. And they all deserve an important place. And calculus has traditionally had a lion's share of the time. A disproportionate share. It is, thank you, disproportionate, that's a good word. Of the love and attention from the excited young minds. Yeah. I know it's hard to pick favorites, but what is your favorite matrix? What's my favorite matrix? Okay, so my favorite matrix is square, I admit it. It's a square bunch of numbers and it has twos running down the main diagonal. And on the next diagonal, so think of top left to bottom right, twos down the middle of the matrix and minus ones just above those twos and minus ones just below those twos and otherwise all zeros. So mostly zeros, just three nonzero diagonals coming down. What is interesting about it? Well, all the different ways it comes up. 
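For readers who want to see that favorite matrix on the screen, here is a small sketch; the size, the test function, and the step size are arbitrary choices for illustration.

```python
import numpy as np

n = 6
# The favorite matrix: 2's down the main diagonal, -1's just above and just below, zeros elsewhere.
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
print(K.astype(int))

# One of the ways it comes up: applied to samples of a function, K computes (minus) second
# differences, the discrete cousin of the second derivative mentioned in the next exchange.
h = 0.1
x = np.arange(1, n + 1) * h
u = x ** 2                                # the second derivative of x^2 is the constant 2

interior = -(K @ u)[1:-1] / h ** 2        # skip the two end rows, which would need boundary values
print(interior)                           # approximately [2. 2. 2. 2.]
```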
You see it in engineering, you see it as analogous in calculus to second derivative. So calculus learns about taking the derivative, the figuring out how much, how fast something's changing. But second derivative, now that's also important. That's how fast the change is changing, how fast the graph is bending, how fast it's curving. And Einstein showed that that's fundamental to understand space. So second derivatives should have a bigger place in calculus. Second difference matrices, which are like the linear algebra version of second derivatives, are neat in linear algebra. Yeah, just everything comes out right with those guys. Beautiful. What did you learn about the process of learning by having taught so many students math over the years? Ooh, that is hard. I'll have to admit here that I'm not really a good teacher because I don't get into the exam part. The exam is the part of my life that I don't like and grading them and giving the students A or B or whatever. I do it because I'm supposed to do it, but I tell the class at the beginning, I don't know if they believe me. Probably they don't. I tell the class, I'm here to teach you. I'm here to teach you math and not to grade you. But they're thinking, okay, this guy is gonna, when is he gonna give me an A minus? Is he gonna give me a B plus? What? What have you learned about the process of learning? Of learning. Yeah, well, maybe to give you a legitimate answer about learning, I should have paid more attention to the assessment, the evaluation part at the end. But I like the teaching part at the start. That's the sexy part. To tell somebody for the first time about a matrix, wow. Is there, are there moments, so you are teaching a concept, are there moments of learning that you just see in the student's eyes? You don't need to look at the grades. But you see in their eyes that you hook them, that you connect with them in a way where, you know what, they fall in love with this beautiful world of math. They see that it's got some beauty there. Or conversely, that they give up at that point is the opposite. The dark could say that math, I'm just not good at math. I don't wanna walk away. Yeah, yeah, yeah. Maybe because of the approach in the past, they were discouraged, but don't be discouraged. It's too good to miss. Yeah, well, if I'm teaching a big class, do I know when, I think maybe I do. Sort of, I mentioned at the very start, the four fundamental subspaces and the structure of the fundamental theorem of linear algebra. The fundamental theorem of linear algebra. That is the relation of those four subspaces, those four spaces. Yeah, so I think that, I feel that the class gets it. At length. Yeah. What advice do you have to a student just starting their journey in mathematics today? How do they get started? Oh, yeah, that's hard. Well, I hope you have a teacher, professor, who is still enjoying what he's doing, what he's teaching. They're still looking for new ways to teach and to understand math. Cause that's the pleasure, the moment when you see, oh yeah, that works. So it's less about the material you study, it's more about the source of the teacher being full of passion. Yeah, more about the fun. Yeah, the moment of getting it. But in terms of topics, linear algebra? Well, that's my topic, but oh, there's beautiful things in geometry to understand. What's wonderful is that in the end, there's a pattern, there are rules that are followed in biology as there are in every field. You describe the life of a mathematician as 100% wonderful. No. 
Except for the grade stuff. Yeah. And the grades. Except for grades. Yeah, when you look back at your life, what memories bring you the most joy and pride? Well, that's a good question. I certainly feel good when I, maybe I'm giving a class in 1806, that's MIT's linear algebra course that I started. So sort of, there's a good feeling that, okay, I started this course, a lot of students take it, quite a few like it. Yeah, so I'm sort of happy when I feel I'm helping make a connection between ideas and students, between theory and the reader. Yeah, it's, I get a lot of very nice messages from people who've watched the videos and it's inspiring. I just, I'll maybe take this chance to say thank you. Well, there's millions of students who you've taught and I am grateful to be one of them. So Gilbert, thank you so much, it's been an honor. Thank you for talking today. It was a pleasure, thanks. Thank you for listening to this conversation with Gilbert Strang. And thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast, you'll get $10 and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future. If you enjoy this podcast, subscribe on YouTube. We have five stars in Apple Podcast, support on Patreon or connect with me on Twitter. Finally, some closing words of advice from the great Richard Feynman. Study hard what interests you the most in the most undisciplined, irreverent and original manner possible. Thank you for listening and hope to see you next time.
Gilbert Strang: Linear Algebra, Teaching, and MIT OpenCourseWare | Lex Fridman Podcast #52
The following is a conversation with Noam Chomsky. He's truly one of the great minds of our time and is one of the most cited scholars in the history of our civilization. He has spent over 60 years at MIT and recently also joined the University of Arizona, where we met for this conversation. But it was at MIT about four and a half years ago when I first met Noam. My first few days there, I remember getting into an elevator at Stata Center, pressing the button for whatever floor, looking up and realizing it was just me and Noam Chomsky riding the elevator, just me and one of the seminal figures of linguistics, cognitive science, philosophy, and political thought in the past century, if not ever. I tell that silly story because I think life is made up of funny little defining moments that you never forget for reasons that may be too poetic to try and explain. That was one of mine. Noam has been an inspiration to me and millions of others. It was truly an honor for me to sit down with him in Arizona. I traveled there just for this conversation. And in a rare, heartbreaking moment, after everything was set up and tested, the camera was moved and accidentally, the recording button was pressed, stopping the recording. So I have good audio of both of us, but no video of Noam. Just the video of me and my sleep deprived but excited face that I get to keep as a reminder of my failures. Most people just listen to this audio version for the podcast as opposed to watching it on YouTube. But still, it's heartbreaking for me. I hope you understand and still enjoy this conversation as much as I did. The depth of intellect that Noam showed and his willingness to truly listen to me, a silly looking Russian in a suit. It was humbling and something I'm deeply grateful for. As some of you know, this podcast is a side project for me, where my main journey and dream is to build AI systems that do some good for the world. This latter effort takes up most of my time, but for the moment has been mostly private. But the former, the podcast, is something I put my heart and soul into. And I hope you feel that, even when I screw things up. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode and never any ads in the middle that break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter, at Lex Friedman, spelled F R I D M A N. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Broker services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called The First, best known for their FIRST Robotics and Lego competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. 
When you get Cash App from the App Store, Google Play and use code LexPodcast, you'll get $10 and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now here's my conversation with Noam Chomsky. I apologize for the absurd philosophical question, but if an alien species were to visit Earth, do you think we would be able to find a common language or protocol of communication with them? There are arguments to the effect that we could. In fact, one of them was Marv Minsky's. Back about 20 or 30 years ago, he performed a brief experiment with a student of his, Dan Bobrow, they essentially ran the simplest possible Turing machines, just free to see what would happen. And most of them crashed, either got into an infinite loop or stopped. The few that persisted, essentially gave something like arithmetic. And his conclusion from that was that if some alien species developed higher intelligence, they would at least have arithmetic, they would at least have what the simplest computer would do. And in fact, he didn't know that at the time, but the core principles of natural language are based on operations which yield something like arithmetic in the limiting case, in the minimal case. So it's conceivable that a mode of communication could be established based on the core properties of human language and the core properties of arithmetic, which maybe are universally shared. So it's conceivable. What is the structure of that language, of language as an internal system inside our mind versus an external system as it's expressed? It's not an alternative, it's two different concepts of language. Different. It's a simple fact that there's something about you, a trait of yours, part of the organism, you, that determines that you're talking English and not Tagalog, let's say. So there is an inner system. It determines the sound and meaning of the infinite number of expressions of your language. It's localized. It's not on your foot, obviously, it's in your brain. If you look more closely, it's in specific configurations of your brain. And that's essentially like the internal structure of your laptop, whatever programs it has are in there. Now, one of the things you can do with language, it's a marginal thing, in fact, is use it to externalize what's in your head. Actually, most of your use of language is thought, internal thought. But you can do what you and I are now doing. We can externalize it. Well, the set of things that we're externalizing are an external system. They're noises in the atmosphere. And you can call that language in some other sense of the word. But it's not a set of alternatives. These are just different concepts. So how deep do the roots of language go in our brain? Our mind, is it yet another feature like vision, or is it something more fundamental from which everything else springs in the human mind? Well, in a way, it's like vision. There's something about our genetic endowment that determines that we have a mammalian rather than an insect visual system. And there's something in our genetic endowment that determines that we have a human language faculty. No other organism has anything remotely similar. So in that sense, it's internal. Now there is a long tradition, which I think is valid going back centuries to the early scientific revolution, at least that holds that language is the sort of the core of human cognitive nature. 
It's the source, it's the mode for constructing thoughts and expressing them. That is what forms thought. And it's got fundamental creative capacities. It's free, independent, unbounded, and so on. And undoubtedly, I think the basis for our creative capacities and the other remarkable human capacities that lead to the unique achievements and not so great achievements of the species. The capacity to think and reason, do you think that's deeply linked with language? Do you think the way we, the internal language system is essentially the mechanism by which we also reason internally? It is undoubtedly the mechanism by which we reason. There may also be other fact, there are undoubtedly other faculties involved in reasoning. We have a kind of scientific faculty, nobody knows what it is, but whatever it is that enables us to pursue certain lines of endeavor and inquiry and to decide what makes sense and doesn't make sense and to achieve a certain degree of understanding of the world, that uses language, but goes beyond it. Just as using our capacity for arithmetic is not the same as having the capacity. The idea of capacity, our biology, evolution, you've talked about it defining essentially our capacity, our limit and our scope. Can you try to define what limit and scope are? And the bigger question, do you think it's possible to find the limit of human cognition? Well, that's an interesting question. It's commonly believed, most scientists believe that human intelligence can answer any question in principle. I think that's a very strange belief. If we're biological organisms, which are not angels, then our capacities ought to have scope and limits which are interrelated. Can you define those two terms? Well, let's take a concrete example. Your genetic endowment determines that you can have a mammalian visual system, arms and legs and so on, and therefore become a rich, complex organism. But if you look at that same genetic endowment, it prevents you from developing in other directions. There's no kind of experience which would lead the embryo to develop an insect visual system or to develop wings instead of arms. So the very endowment that confers richness and complexity also sets bounds on what can be attained. Now, I assume that our cognitive capacities are part of the organic world. Therefore, they should have the same properties. If they had no built in capacity to develop a rich and complex structure, we would understand nothing. Just as if your genetic endowment did not compel you to develop arms and legs, you would just be some kind of random amoeboid creature with no structure at all. So I think it's plausible to assume that there are limits and I think we even have some evidence as to what they are. So for example, there's a classic moment in the history of science at the time of Newton. From Galileo to Newton, modern science developed on a fundamental assumption, which Newton also accepted, namely that the world, the entire universe, is a mechanical object. And by mechanical, they meant something like the kinds of artifacts that were being developed by skilled artisans all over Europe, the gears, levers and so on. And their belief was well, the world is just a more complex variant of this. Newton, to his astonishment and distress, proved that there are no machines, that there's interaction without contact. His contemporaries like Leibniz and Huygens just dismissed this as returning to the mysticism of the neo scholastics. And Newton agreed. He said it is totally absurd. 
No person of any scientific intelligence could ever accept this for a moment. In fact, he spent the rest of his life trying to get around it somehow, as did many other scientists. That was the very criterion of intelligibility for say Galileo or Newton. Theory did not produce an intelligible world unless you could duplicate it in a machine. He showed you can't, there are no machines, any. Finally, after a long struggle, took a long time, scientists just accepted this as common sense. But that's a significant moment. That means they abandoned the search for an intelligible world. And the great philosophers of the time understood that very well. So for example, David Hume in his encomium to Newton wrote that who was the greatest thinker ever and so on. He said that he unveiled many of the secrets of nature, but by showing the imperfections of the mechanical philosophy, mechanical science, he left us with, he showed that there are mysteries which ever will remain. And science just changed its goals. It abandoned the mysteries. It can't solve it, we'll put it aside. We only look for intelligible theories. Newton's theories were intelligible. It's just what they described wasn't. Well, Locke said the same thing. I think they're basically right. And if so, that showed something about the limits of human cognition. We cannot attain the goal of understanding the world, of finding an intelligible world. This mechanical philosophy Galileo to Newton, there's a good case that can be made that that's our instinctive conception of how things work. So if say infants are tested with things that, if this moves and then this moves, they kind of invent something that must be invisible that's in between them that's making them move and so on. Yeah, we like physical contact. Something about our brain seeks. Makes us want a world like that. Just like it wants a world that has regular geometric figures. So for example, Descartes pointed this out that if you have an infant who's never seen a triangle before and you draw a triangle, the infant will see a distorted triangle, not whatever crazy figure it actually is. Three lines not coming quite together, one of them a little bit curved and so on. We just impose a conception of the world in terms of geometric, perfect geometric objects. It's now been shown that goes way beyond that. That if you show on a tachistoscope, let's say a couple of lights shining, you do it three or four times in a row. What people actually see is a rigid object in motion, not whatever's there. We all know that from a television set basically. So that gives us hints of potential limits to our cognition. I think it does, but it's a very contested view. If you do a poll among scientists, it's impossible we can understand anything. Let me ask and give me a chance with this. So I just spent a day at a company called Neuralink and what they do is try to design what's called the brain machine, brain computer interface. So they try to do thousands readings in the brain, be able to read what the neurons are firing and then stimulate back, so two way. Do you think their dream is to expand the capacity of the brain to attain information, sort of increase the bandwidth of which we can search Google kind of thing? Do you think our cognitive capacity might be expanded our linguistic capacity, our ability to reason might be expanded by adding a machine into the picture? Can be expanded in a certain sense, but a sense that was known thousands of years ago. A book expands your cognitive capacity. 
Okay, so this could expand it too. But it's not a fundamental expansion. It's not totally new things could be understood. Well, nothing that goes beyond their native cognitive capacities. Just like you can't turn the visual system into an insect system. Well, I mean, the thought is, the thought is perhaps you can't directly, but you can map sort of. You couldn't, but we already, we know that without this experiment. You could map what a bee sees and present it in a form so that we could follow it. In fact, every bee scientist does that. But you don't think there's something greater than bees that we can map and then all of a sudden discover something, be able to understand a quantum world, quantum mechanics, be able to start to be able to make sense. Students at MIT study and understand quantum mechanics. But they always reduce it to the infant, the physical. I mean, they don't really understand. Oh, you don't, there's thing, that may be another area where there's just a limit to understanding. We understand the theories, but the world that it describes doesn't make any sense. So, you know, the experiment, Schrodinger's cat, for example, can understand the theory, but as Schrodinger pointed out, it's an unintelligible world. One of the reasons why Einstein was always very skeptical about quantum theory, was that he described himself as a classical realist, in one's intelligibility. He has something in common with infants in that way. So, back to linguistics. If you could humor me, what are the most beautiful or fascinating aspects of language or ideas in linguistics or cognitive science that you've seen in a lifetime of studying language and studying the human mind? Well, I think the deepest property of language and puzzling property that's been discovered is what is sometimes called structure dependence. We now understand it pretty well, but it was puzzling for a long time. I'll give you a concrete example. So, suppose you say the guy who fixed the car carefully packed his tools, it's ambiguous. He could fix the car carefully or carefully pack his tools. Suppose you put carefully in front, carefully the guy who fixed the car packed his tools, then it's carefully packed, not carefully fixed. And in fact, you do that even if it makes no sense. So, suppose you say carefully, the guy who fixed the car is tall. You have to interpret it as carefully he's tall, even though that doesn't make any sense. And notice that that's a very puzzling fact because you're relating carefully not to the linearly closest verb, but to the linearly more remote verb. A linear closeness is an easy computation, but here you're doing a much more, what looks like a more complex computation. You're doing something that's taking you essentially to the more remote thing. It's now, if you look at the actual structure of the sentence, where the phrases are and so on, turns out you're picking out the structurally closest thing, but the linearly more remote thing. But notice that what's linear is 100% of what you hear. You never hear structure, can't. So, what you're doing is, and certainly this is universal, all constructions, all languages, and what we're compelled to do is carry out what looks like the more complex computation on material that we never hear, and we ignore 100% of what we hear and the simplest computation. By now, there's even a neural basis for this that's somewhat understood, and there's good theories by now that explain why it's true. 
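As a toy illustration of linear versus structural closeness for the sentence discussed above, here is my own construction, not Chomsky's formalism; the bracketing of the sentence into phrases is deliberately crude and only meant to show the contrast.

```python
# Toy bracketing of: "carefully, the guy who fixed the car packed his tools".
# The relative clause "who fixed the car" is nested inside the subject noun phrase,
# so "fixed" sits deep in the tree while "packed" is the main verb near the top.
sentence_tree = ("S",
    [("ADV", "carefully"),
     ("NP", [("DET", "the"), ("N", "guy"),
             ("RelClause", [("PRO", "who"), ("V", "fixed"),
                            ("NP", [("DET", "the"), ("N", "car")])])]),
     ("VP", [("V", "packed"),
             ("NP", [("DET", "his"), ("N", "tools")])])])

def words(node):
    # Flatten the tree into the linear string of words that we actually hear.
    label, content = node
    if isinstance(content, str):
        return [content]
    return [w for child in content for w in words(child)]

def verbs_with_depth(node, depth=0):
    # Collect every verb together with how deeply it is buried in the structure.
    label, content = node
    if isinstance(content, str):
        return [(content, depth)] if label == "V" else []
    return [v for child in content for v in verbs_with_depth(child, depth + 1)]

flat = words(sentence_tree)
linear_closest = next(w for w in flat if w in ("fixed", "packed"))          # first verb after "carefully"
structural_closest = min(verbs_with_depth(sentence_tree), key=lambda v: v[1])[0]

print(linear_closest)       # "fixed"  -- the easy computation on what we hear
print(structural_closest)   # "packed" -- the structurally closest verb, the one "carefully" attaches to
```

The easy, linear computation picks out "fixed"; the structural one, which is what speakers actually do, picks out "packed".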
That's a deep insight into the surprising nature of language with many consequences. Let me ask you about a field of machine learning, deep learning. There's been a lot of progress in neural networks based, neural network based machine learning in the recent decade. Of course, neural network research goes back many decades. What do you think are the limits of deep learning, of neural network based machine learning? Well, to give a real answer to that, you'd have to understand the exact processes that are taking place, and those are pretty opaque. So, it's pretty hard to prove a theorem about what can be done and what can't be done, but I think it's reasonably clear. I mean, putting technicalities aside, what deep learning is doing is taking huge numbers of examples and finding some patterns. Okay, that could be interesting in some areas it is, but we have to ask here a certain question. Is it engineering or is it science? Engineering in the sense of just trying to build something that's useful, or science in the sense that it's trying to understand something about elements of the world. So, take, say, a Google parser. We can ask that question. Is it useful, yeah, it's pretty useful. I use a Google translator, so on engineering grounds, it's kind of worth having, like a bulldozer. Does it tell you anything about human language? Zero, nothing, and in fact, it's very striking. From the very beginning, it's just totally remote from science. So, what is a Google parser doing? It's taking an enormous text, let's say the Wall Street Journal corpus, and asking how close can we come to getting the right description of every sentence in the corpus. Well, every sentence in the corpus is essentially an experiment. Each sentence that you produce is an experiment which says, am I a grammatical sentence? The answer is usually yes. So, most of the stuff in the corpus is grammatical sentences. But now, ask yourself, is there any science which takes random experiments which are carried out for no reason whatsoever and tries to find out something from them? Like if you're, say, a chemistry PhD student, you wanna get a thesis, can you say, well, I'm just gonna mix a lot of things together, no purpose, and maybe I'll find something. You'd be laughed out of the department. Science tries to find critical experiments, ones that answer some theoretical question. Doesn't care about coverage of millions of experiments. So, it just begins by being very remote from science and it continues like that. So, the usual question that's asked about, say, a Google parser is how well does it do, or some parser, how well does it do on a corpus? But there's another question that's never asked. How well does it do on something that violates all the rules of language? So, for example, take the structure dependence case that I mentioned. Suppose there was a language in which you used linear proximity as the mode of interpretation. The deep learning would work very easily on that. In fact, much more easily than on an actual language. Is that a success? No, that's a failure from a scientific point of view. It's a failure. It shows that we're not discovering the nature of the system at all, because it does just as well or even better on things that violate the structure of the system. And it goes on from there. It's not an argument against doing it. It is useful to have devices like this. So, yes, so neural networks are kind of approximators that look, there's echoes of the behavioral debates, right? Behavioralism. More than echoes. 
Many of the people in deep learning say they've vindicated it. Terry Sejnowski, for example, in his recent book, says this vindicates Skinnerian behaviorism. It doesn't have anything to do with it. Yes, but I think there's something actually fundamentally different when the data set is huge. But your point is extremely well taken. But do you think we can learn, approximate that interesting complex structure of language with neural networks that will somehow help us understand the science? It's possible. I mean, you find patterns that you hadn't noticed, let's say, could be. In fact, it's very much like a kind of linguistics that's done, what's called corpus linguistics. When you, suppose you have some language where all the speakers have died out, but you have records. So you just look at the records and see what you can figure out from that. It's much better than, it's much better to have actual speakers where you can do critical experiments. But if they're all dead, you can't do them. So you have to try to see what you can find out from just looking at the data that's around. You can learn things. Actually, paleoanthropology is very much like that. You can't do a critical experiment on what happened two million years ago. So you're kind of forced just to take what data's around and see what you can figure out from it. Okay, it's a serious study. So let me venture into another whole body of work and philosophical question. You've said that evil in society arises from institutions, not inherently from our nature. Do you think most human beings are good, they have good intent? Or do most have the capacity for intentional evil that depends on their upbringing, depends on their environment, on context? I wouldn't say that they don't arise from our nature. Anything we do arises from our nature. And the fact that we have certain institutions, not others, is one mode in which human nature has expressed itself. But as far as we know, human nature could yield many different kinds of institutions. The particular ones that have developed have to do with historical contingency, who conquered whom, and that sort of thing. They're not rooted in our nature in the sense that they're essential to our nature. So it's commonly argued that these days that something like market systems is just part of our nature. But we know from a huge amount of evidence that that's not true. There's all kinds of other structures. It's a particular fact of a moment of modern history. Others have argued that the roots of classical liberalism actually argue that what's called sometimes an instinct for freedom, the instinct to be free of domination by illegitimate authority is the core of our nature. That would be the opposite of this. And we don't know. We just know that human nature can accommodate both kinds. If you look back at your life, is there a moment in your intellectual life or life in general that jumps from memory that brought you happiness that you would love to relive again? Sure. Falling in love, having children. What about, so you have put forward into the world a lot of incredible ideas in linguistics, in cognitive science, in terms of ideas that just excites you when it first came to you that you would love to relive those moments. Well, I mean, when you make a discovery about something that's exciting, like, say, even the observation of structure dependence and on from that, the explanation for it. But the major things just seem like common sense. 
So if you go back to take your question about external and internal language, you go back to, say, the 1950s, almost entirely language was regarded as an external object, something outside the mind. It just seemed obvious that that can't be true. Like I said, there's something about you that determines you're talking English, not Swahili or something. But that's not really a discovery. That's just an observation, what's transparent. You might say it's kind of like the 17th century, the beginnings of modern science, 17th century. They came from being willing to be puzzled about things that seemed obvious. So it seems obvious that a heavy ball of lead will fall faster than a light ball of lead. But Galileo was not impressed by the fact that it seemed obvious. So he wanted to know if it's true. He carried out experiments, actually thought experiments, never actually carried them out, which showed that that can't be true. And out of things like that, observations of that kind, why does a ball fall to the ground instead of rising, let's say, seems obvious, till you start thinking about it, because why does steam rise, let's say. And I think the beginnings of modern linguistics, roughly in the 50s, are kind of like that, just being willing to be puzzled about phenomena that looked, from some point of view, obvious. And for example, a kind of doctrine, almost official doctrine of structural linguistics in the 50s was that languages can differ from one another in arbitrary ways, and each one has to be studied on its own without any presuppositions. In fact, there were similar views among biologists about the nature of organisms, that each one is, they're so different when you look at them that almost anything could be almost anything. Well, in both domains, it's been learned that that's very far from true. There are narrow constraints on what could be an organism or what could be a language. But these are, that's just the nature of inquiry. Inquiry. Science in general, yeah, inquiry. So one of the peculiar things about us human beings is our mortality. Ernest Becker explored it in general. Do you ponder the value of mortality? Do you think about your own mortality? I used to when I was about 12 years old. I wondered, I didn't care much about my own mortality, but I was worried about the fact that if my consciousness disappeared, would the entire universe disappear? That was frightening. Did you ever find an answer to that question? No, nobody's ever found an answer, but I stopped being bothered by it. It's kind of like Woody Allen in one of his films, you may recall, he starts, he goes to a shrink when he's a child and the shrink asks him, what's your problem? He says, I just learned that the universe is expanding. I can't handle that. And then another absurd question is, what do you think is the meaning of our existence here, our life on Earth, our brief little moment in time? That's something we answer by our own activities. There's no general answer. We determine what the meaning of it is. The actions determine the meaning. Meaning in the sense of significance, not meaning in the sense that chair means this. But the significance of your life is something you create. Noam, thank you so much for talking to me today. It was a huge honor. Thank you so much. Thanks for listening to this conversation with Noam Chomsky and thank you to our presenting sponsor, Cash App.
Download it, use code LexPodcast, you'll get $10 and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, support on Patreon, or connect with me on Twitter. Thank you for listening and hope to see you next time.
Noam Chomsky: Language, Cognition, and Deep Learning | Lex Fridman Podcast #53
The following is a conversation with Ray Dalio. He's the founder, co chairman, and co chief investment officer of Bridgewater Associates, one of the world's largest and most successful investment firms that is famous for the principles of radical truth and transparency that underlie its culture. Ray is one of the wealthiest people in the world with ideas that extend far beyond the specifics of how he made that wealth. His ideas that are applicable to everyone are brilliantly summarized in his book, Principles. They're also even further condensed on several other platforms, including YouTube, where, for example, the 30 minute video titled How the Economic Machine Works is one of the best educational videos I personally have ever seen on YouTube. Once again, you may have noticed that the people I've been speaking with are not just computer scientists, but philosophers, mathematicians, writers, psychologists, physicists, economists, investors, and soon, much more. To me, AI is much bigger than deep learning, bigger than computing. It is our civilization's journey into understanding the human mind and creating echoes of it in the machine. That journey includes the mechanisms of our economy, of our politics, and the leaders that shape the future of both. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin. Most Bitcoin exchanges take days for a bank transfer to become investable. Through Cash App, it takes seconds. Cash App also has a new investing feature. You can buy fractions of a stock, which to me is a really interesting concept. So you can buy $1 worth no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations that many of you may know and have benefited from called FIRST, best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LexPodcast, you'll get $10 and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now here's my conversation with Ray Dalio. Truth, or more precisely an accurate understanding of reality, is the essential foundation of any good outcome. I believe you've said that. Let me ask an absurd sounding question at the high philosophical level. So what is truth? When you're trying to do something different than everybody else is doing and perhaps something that has not been done before, how do you accurately analyze the situation? How do you accurately discover the truth, the nature of things? Almost the way you're asking the question implies that truth and newness are almost at odds. And I just want to say that I don't think that that's true, right? So what I mean by truth is, you know, what's the reality? How does the reality work?
And so if you're doing something new that has never been done before, which is exciting and I like to do, the way that you would start with that is experimenting on what are the realities and the premises that you're using on that and how to stress test those types of things. I think what you're talking about is instead the fact of how do you deal with something that's never been done before and deal with the associated probabilities. And so I think in that, don't let something that's never been done before stand in the way of you doing that particular thing. You have to, because almost the only way that you understand what truth is, is through experimentation. And so when you go out and experiment, you're going to learn a lot more about what truth is. But the essence of what I'm saying is that when you take a look at that, use truth, find out what the realities are as a foundation, do the independent thinking, do the experimentation to find out what's true and change and keep going after that. So I think that when you're thinking about it the way you're thinking about it, that almost implies that you're letting people almost say that they're reliant on what's been discovered before to find out what's true. And what's been discovered before is often not true, right? Conventional view of what is true is very often wrong. It'll go in ups and downs and I mean, there are fads and okay, this thing, it goes this way and that way. And so definitions of truths that are conventional are not the thing to go by. How do you know the thing that hasn't been done before might succeed? It's to do whatever homework that you have in order to try to get a foundation. And then to go into worlds of not knowing and you go into the world of not knowing, but not stupidly, not naively, you know, you go into that world of not knowing and then you do experimenting and you learn what truth is and what's possible through that process. I describe it as a five step process. The first step is you go after your goals. The second step is you identify the problems that stand in the way of you getting to your goals. The third step is you diagnose those to get at the root cause of those. Then the fourth step is, now that you know the exact root cause, you design a way to get around those, and then the fifth is you follow through and do the designs you set out to do, and it's the experimentation. I think that what happens to people mostly is that they try to decide whether they're gonna be successful or not ahead of doing it and they don't know how to do the process well because the nature of your questions is along those lines, like how do you know? Well, you don't know, but a practical person who is also used to making dreams happen knows how to do that process. I've given personality tests to shapers. So the person, what I mean by a shaper is a person who can take something from visualization, they have an audacious goal and then they go from visualization to actualization, building it out. That includes Elon Musk, I gave him the personality test, I've given it to Bill Gates and I've given it to many, many such shapers and they know that process that I'm talking about, they experience it which is a process essentially of knowing how to go from an audacious goal but not in a ridiculous way, not a dream and then to do that learning along the way that allows them in a very practical way to learn very rapidly as they're moving toward that goal.
So the call to adventure, the adventure starts not trying to analyze the probabilities of the situation but using what, instinct? How do you dive in? So let's talk about that. It's simultaneously being a dreamer and a realist, it's to know how to do that well. The pull comes from a pull to adventure. For whatever reason, I can't tell you how much of it's genetics and how much is environment, but there's, early on, it's exciting. That notion is exciting, being creative is exciting and so one feels that, then one gets in the habit of doing that, okay, how do I know? How do I learn very well and then how do I imagine and then how do I experiment to go from that imagination? So it's that process that one, and then one, the more one does it, the better one becomes at it. You mentioned shapers, Elon Musk, Bill Gates. What, who are the shapers that you find yourself thinking about when you're constructing these ideas? The ones that define the archetype of a shaper for you. Well, as I say, a shaper for me is somebody who comes up with a great visualization, usually a really unique visualization and then actually builds it out and makes the world different, changes the world in that kind of a way. So when I look at it, Marc Benioff with Salesforce, Chris Anderson with TED, Muhammad Yunus with social enterprise and philanthropy, Geoffrey Canada and the Harlem Children's Zone, there are, all domains have shapers who have the ability to visualize and make extraordinary things happen. What are the commonalities between some of them? The commonalities are, first of all, the excitement of something new, that call to adventure, and then again, that practicality, the capacity to learn. The capacity then, they're able to be, in many ways, full range. That means they're able to go from the big, big picture down to the detail. So let's say, for example, Elon Musk, he describes, he gets a lot of money from selling PayPal, his interest in PayPal. He said, why isn't anybody going to Mars or outer space? What are we gonna do if the planet goes to hell? And how are we gonna get that? And nobody's paying attention to that. He doesn't know much about it. He then reads and learns and so on. Says, I'm gonna take, okay, half of my money, and I'm gonna put it in there, and I'm gonna do this thing, and he learns, and blah, blah, blah, blah, blah, blah, blah, and he's got creative, okay. That's one dimension. So he gave me the keys to his car, but this was just early days in Tesla, and he then points out the details. Okay, if you push this button here, it's this, the detail that, so he's simultaneously talking about the big, the big, big, big picture. Okay, when is humanity going to abandon the planet? But he will then be able to take it down into the detail so he can go, let's call it helicoptering. He can go up, he can go down, and see things at those types of perspective. And then you've seen that with the other shapers. And that's a common thing that they can do that. Another important difference that they have in mind is how they deal with people. I mean, meaning there's nothing more important than achieving the mission. And so what they have in common is, there's a test, I give these personality tests because they're very helpful for understanding people. And so I gave it to all these shapers. And one of the things in this workplace inventory test is a category called concern for others. They all test low on concern for others.
This includes Muhammad Yunus, who invented microfinance, social enterprise, impact investing. Muhammad Yunus received the Nobel Peace Prize for this, the Congressional Gold Medal. Fortune determined he was one of the 10 greatest entrepreneurs of our time. He's built all sorts of businesses to give back money in social enterprise, a remarkable man. Nobody that I know could practically have more concern for others. He lives the life of a saint. I mean, a very modest lifestyle, and he puts all his money into trying to help others. And he tests low on what's called concern for others, because what it really measures, the questions under that, are questions about conflict to get at the mission. So they all, Geoffrey Canada, who changed the Harlem Children's Zone and developed it to take children in Harlem and get them well taken care of, not only just in their education, but their whole lives. Him also, low on concern for others. What it means is that they can see whether individuals are performing at the extremely high level that's necessary to make those dreams happen. So when you think of, let's say Steve Jobs was famous for being difficult with people and so on, and I didn't know Steve Jobs, so I can't speak personally to that, but his comments on, do you have A players? And if you have A players, if you put in B players, pretty soon you'll have C players and so on. That is a common element of them, holding people to high standards and not letting anybody stand in the way of the mission. What do you think about that kind of idea? Sorry to pause on that for a second, that the A, B, and C players, and the importance of, so when you have a mission, to really only have A players and be sort of aggressively filtering for that? Yes, but I think that there are all different ways of being A players. And I think, in order to create a great team, you have to appreciate all the differences in ways of being A players, okay? That's the first thing. And then you always have to be super excellent, in my opinion, you always have to be really excellent with people to help them understand themselves and each other and get in sync with them about what's true about them and their circumstances and how they're doing, so that they're having a fabulous personal development experience at the same time as you're dealing with them. So when I say that there are all different ways, this is then one of the qualities, you asked me what are the qualities. So the third quality that I would say is to know how to deal well with your not knowing and to be able to get the best expertise so that you're a great orchestrator of different ways, so that the people who are really, really successful, unlike most people believe that they're successful because of what they know, they're even more successful by being able to effectively learn from others and tap into the skills of people who see things differently from them. Brilliant. So how do you, with that personality, be, first of all, open to the fact that other people see things differently than you, and at the same time have supreme confidence in your vision? What's the psychology of that? Do you see a tension there between the confidence and the open mindedness? No, it's funny because I think we grow up thinking that there's a tension there, right? That there's a confidence and the more confidence that you have, there's a tension with the open mindedness and not being sure, okay?
Confidence and accuracy are almost negatively correlated in many people. They're extremely confident and they're often inaccurate. And so I think one of the greatest tragedies of people is not realizing how those things go together, because instead it's really that by saying I know a lot and how do I know I'm still not wrong? And how do I take the best thinking available to me and then raise my probability of learning? All these people think for themselves, okay? I mean, meaning they're smart, but they take in like vacuum cleaners, they take in ideas of others, they stress test their ideas with others, they assess what comes back to them in the form of other thinking and they also know what they're not good at and what other people who are good at the things that they're not good at, they know how to get those people and be successful all around because nobody has enough knowledge in their heads and that I think is one of the great differences. So the reason my company has been successful in terms of this is because of idea meritocratic decision making, a process by which you can get the best ideas. You know, what's an idea meritocracy? An idea meritocracy is to get the best ideas that are available out there and to work together with other people in the team to achieve that. That's an incredible process that you describe in several places to arrive at the truth, but I apologize if I'm romanticizing the notion, but let me linger on it. Just having enough self belief, you don't think there's a self delusion there that's necessary, especially in the beginning? You talk about in the journey, maybe the trials or the abyss. Do you think there is value to deluding yourself? I think what you're calling delusion is a bad word for uncertainty, okay? So in other words, because we keep coming back to the question, how would you know and all of those things? No, I think that delusion is not gonna help you, that you have to find out truth, okay? To deal with uncertainty, not saying, listen, I have this dream and I don't know how I'm going to get that dream. I mentioned it in my book Principles and described the process in a more complete way than we're gonna be able to go here. But what happens is I say, you form your dreams first and you can't judge whether you're going to achieve those dreams because you haven't learned the things that you're going to learn on the way toward those dreams, okay? So that isn't delusion. I wouldn't use delusion. I think you're overemphasizing the importance of knowing whether you're going to succeed or not. Get rid of that, okay? If you can get rid of that and say, okay, no, I can have that dream, but I'm so realistic in the notion of finding out. I'm curious, I'm a great learner, I'm a great experimenter. Along the way, you'll do those experiments which will teach you more truths and more learning about the reality so that you can get your dreams. Because if you still live in that world of delusion, okay? And you think delusion's helpful. No, the delusion isn't. Don't confuse delusion with not knowing. Yes, but nevertheless, so if we look at the abyss, we can look at your own that you describe. It's difficult psychologically for people. So many people quit. Many people choose a path that is more comfortable. The heartbreak of that breaks people. So if you have the dream and there's this cycle of learning, setting a goal, and so on, what's your view on the psychology of just being broken by these difficult moments? Well, that's classically the defining moment.
It's almost like evolution taking care of, okay, now you crash, you're in the abyss. Oh my God, that's bad. And then the question is, what do you do? And it sorts people, okay? And some people get off the field and they say, oh, I don't like this and so on. And some people learn and they have a metamorphosis and it changes their approach to learning. The number one thing it should give them is uncertainty. You should take an audacious dreamer, guy who wants to change the world, crash, okay, and then come out of that crashing and saying, okay, I can be audacious and scared that I'm gonna be wrong at the same time. And then how do I do that? Because that's the key. When you don't lose your audaciousness and you keep going after your big goal, and at the same time you say, hey, I'm worried that I'm gonna be wrong, you gain your radical open mindedness that allows you to take in the things, that allows you to go to the next level of being successful. So your own process, I mean, you've talked about it before, but it would be great if you can describe it because our darkest moments are perhaps the most interesting. So your own, with the prediction of another depression. Economic depression. Yes, I apologize, economic depression. Can you talk to what you were feeling, thinking, planning, strategizing at those moments? Yeah, that was my biggest moment, okay? Building my little company. This is in 1981, 82. I had calculated that American banks had lent a lot more money to Latin American countries than those countries were gonna pay back, and that they would have a debt crisis, and that this would send the economy tumbling, and that was an extremely controversial point of view. Then it started to happen, and it happened, and Mexico defaulted in August 1982. I thought that there was gonna be an economic collapse that was gonna follow because there was a series of other countries, it was just playing out as I had imagined, and that couldn't have been more wrong. That was the exact bottom in the stock market because central banks eased monetary policy, blah, blah, blah, and I couldn't have been more wrong, and I was very publicly wrong and all of that, and I lost money for me, and I lost money for my clients, and I only had a small company then, but these were close people, I had to let them go. I was down to me as the last person. I was so broke I had to borrow $4,000 from my dad to help pay my family's bills, very painful, and at the same time, I would say it definitely was one of the best things that ever happened to me, maybe the best thing that ever happened to me, because it changed my approach to decision making. It's what I'm saying. In other words, I kept saying, okay, how do I know whether I'm right? How do I know I'm not wrong? It gave me that, and I didn't give up my audaciousness because I was in a position, what am I gonna do? Am I gonna go back, put on a tie, go to Wall Street and just do those things? No, I can't bring myself to do that, so I'm at a juncture. How do I deal with my risk and how do I deal with that? And it told me how to deal with my uncertainties, and that taught me, for example, a number of techniques. First, to find the smartest people I could find who disagreed with me and to have quality disagreement. I learned the art of thoughtful disagreement. I learned how to produce diversification. I learned how to do a number of things. That is what led me to create an idea meritocracy.
In other words, person by person, I hired them, and I wanted the smartest people who would be independent thinkers who would disagree with each other and me well so that we could be independent thinkers to go off to produce those audacious dreams because you have to be an independent thinker to do that, and to do that independently of the consensus, independently of each other, and then work ourselves through that because who know whether you're gonna have the right answer? And by doing that, then that was the key to our success. And the things that I wanna pass along to people, the reason I'm doing this podcast with you is I'm 70 years old, and that is a magical way of achieving success. If you can create an idea meritocracy, it's so much better in terms of achieving success and also quality relationships with people, but that's what that experience gave me. So if we can linger on a little bit longer, the idea of an idea meritocracy, it's fascinating, but especially because it seems to be rare, not just in companies, but in society. So there is a lot of people on Twitter and public discourse and politics and so on that are really stuck in certain sets of ideas, whatever they are. So when you're confronted with an idea that's different than your own about a particular topic, what kind of process do you go through mentally? Are you arguing through the idea with the person, sort of present is almost like a debate, or do you sit on it and consider the world sort of empathetically? If this is true, then what does that world look like? Does that world make sense and so on? So what's the process of considering those conflicting ideas for you? I'm gonna answer that question, but after saying first, almost implicit in your question is it's not common, okay? What's common produces only common results, okay? So don't judge anything that is good based on whether it's common, because it's only gonna give you common results. If you want unique, you have a unique approach, okay? And so that art of thoughtful disagreement is the capacity to hold two things in your mind at the same time. The, gee, I think this makes sense, and then saying, I'm not sure it makes sense, and then try to say, why does it make sense? And then to triangulate with others. So if I'm having a discussion like that and I work myself through and I'm not sure, then I have to do that in a good way. So I always give attention, for example, let's start off, what does the other person know relative to what I know? So if a person has a higher expertise or things, I'm much more inclined to ask questions. I'm always asking questions. If you wanna learn, you're asking questions, you're not arguing, okay? You're taking in, you're assessing when it comes into you. Does that make sense? Are you learning something? Are you getting epiphanies and so on? And I try to then do that if the conversation, as we're trying to decide what is true, and we're trying to do that together, and we see truth different, then I might even call in another really smart, capable person and try to say, what is true and how do we explore that together? And you go through that same thing. So I would, I said, I describe it as having open mindedness and assertiveness at the same time, that you can simultaneously be open minded and take in with that curiosity and then also be assertive and say, but that doesn't make sense. Why would this be the case? And you do that back and forth. 
And when you're doing that kind of back and forth on a topic like the economy, which you have, to me, perhaps I'm naive, but it seems both incredible and incredibly complex, the economy, the trading, the transactions, that these transactions between two individuals somehow add up to this giant mechanism. You've put out a 30 minute video. You have a lot of incredible videos online that people should definitely watch on YouTube, but you've put out this 30 minute video titled How the Economic Machine Works. That is probably one of the best, if not the best video I've seen on the internet in terms of educational videos. So people should definitely watch it, especially because it's not that the individual components of the video are somehow revolutionary, but the simplicity and the clarity of the different components just makes you, there's a few light bulb moments there about how the economy works as a machine. So as you described, there's three main forces that drive the economy, productivity growth, short term debt cycle, long term debt cycle. The former, productivity growth, is how much value people create, the valuable things people create. The latter is people borrowing from their future selves to hopefully create those valuable things faster. So this is an incredible system to me. Maybe we can linger on it a little bit, but you've also said what most people think about as money is actually credit. Total amount of credit in the US is $50 trillion. Total amount of money is $3 trillion. That's just crazy to me. Maybe I'm silly, maybe you can educate me, but that seems crazy. It just gives me a positive feeling that human civilization has been able to create a system that has so much credit. So that's a long way to ask, do you think credit is good or bad for society? That system that's so fundamentally based on credit. I think credit is great, even though people often overdo it. Credit is that somebody has earned money. And what happens is they lend it to somebody else who's got better ideas and they cut a deal. And then that person with the better ideas is gonna pay it back. And if it works well, it helps resource allocations go well, providing people like the entrepreneurs and all of those, they need capital. They don't have capital themselves. And so somebody is gonna give them capital and they'll give them credit along those lines. Then what happens is it's not managed well in a variety of ways. So I did another book of principles, Principles of Big Debt Crises, that goes into that. And it's free, by the way, I put it free online as a PDF. So if you go online and you look up Principles of Big Debt Crises under my name, you can download it as a PDF or you can buy a print book of it. And it goes through that particular process. And so you always have it overdone, always in the same way. Everything, by the way, almost everything happens over and over again for the same reasons, okay? So these debt crises all happened over and over again for the same reasons. They get it overdone. In the book, it explains how you identify whether it's overdone or not. They get it overdone. And then you go through the process of making the adjustments according to that and it explains how they can use the levers and so on. If you didn't have credit, then everybody would sort of be stuck. So credit is a good thing, but it can easily be overdone. So now we get into the, what is money? What is credit? Okay, you get into money and credit.
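To make the money versus credit gap concrete, here is a minimal sketch of the standard deposit and re-lending loop, a textbook simplification rather than Dalio's own model; the $3 trillion base and the 0.9 re-lend fraction are illustrative numbers, and the point is only that the stock of IOUs can dwarf the stock of actual money.

```python
# Toy credit-creation loop: a deposit is partly re-lent, the loan is spent and
# re-deposited, and the outstanding IOUs pile up far beyond the original money.
base_money = 3.0          # trillions of dollars of actual money (illustrative)
relend_fraction = 0.9     # share of each deposit that gets lent out again

deposit = base_money
total_credit = 0.0
for _ in range(200):          # iterate the deposit -> loan -> deposit loop
    loan = deposit * relend_fraction
    total_credit += loan      # another IOU now exists
    deposit = loan            # the borrowed money is spent and re-deposited

print(f"money in the system:        ~${base_money:.1f} trillion")
print(f"credit (IOUs) built on it:  ~${total_credit:.1f} trillion")
# With a 0.9 re-lend fraction this converges to base * 0.9 / (1 - 0.9), about
# $27 trillion of credit on $3 trillion of money.
```

With these made-up numbers the loop converges to roughly $27 trillion of credit on $3 trillion of money; the ratio Dalio cites is even larger, but the mechanism is the same: every loan creates an IOU without creating new money.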
So if you're holding credit and you think that's worthwhile, keep in mind that the central bank, let's say it can print the money. What is that problem? You have an IOU and the IOU says you're gonna get a certain number of dollars, let's say, or yen or euros. And that is what the IOU is. And so the question is, will you get that money and what will it be worth? And then also you have a government, which is a participant in that process because they are on the hook, they owe money. And then will they print the money to make it easy for everybody to pay? So you have to pay attention to those two. I would suggest like you recommend to other people, just take that 30 minutes and it comes across pretty clearly. But my conclusion is that of course you want it. And even if you understand it and the cycles well, you can benefit from those cycles rather than to be hurt by those cycles. Because I don't know the way the cycle works is somebody gets over indebted, they have to sell an asset. Okay, then I don't know, that's when assets become cheaper. How do you acquire the asset? It's a whole process. So again, maybe another dumb question, but... There are no such things as dumb questions. There you go. But what is money? So you've mentioned credit and money. Another thing that if I just zoom out from an alien perspective and look at human civilization, it's incredible that we've created a thing that only works because currency, because we all agree it has value. So I guess my question is how do you think about money as this emergent phenomenon? And what do you think is the future of money? You've commented on Bitcoin, other forms. What do you think is its history and future? How do you think about money? There are two things that money is for. It's a medium of exchange and it's a storehold of wealth. Yes. So money, so you could say something's a medium of exchange, and then you could say, is it a storehold of wealth? And money is that vehicle that is those things and can be used to pay off your debt. So when you have a debt and you provide it, it pays off your debt. So that's that process. And it's, I apologize to interrupt, but it only can be a medium of exchange or store wealth when everybody recognizes it to be of value. That's right. And so you see in the history around the world and you go to places, I was in an island in the Pacific in which they had as money these big stones. And literally, they were taking a boat, this big carved stone, and they were taking it from one of the islands to the other and it sank, the piece of this big stone, piece of money that they had, and it went to the bottom and they still perceived it as having value so that it was, even though it was in the bottom and it's this big hunk of rock, the fact that somebody owned it, they would say, oh, I'll own it for this and that. I've seen beads in different places, shells converted to this and mediums of exchange. And when we look at what we've got, you're exactly right. It is the notion that if I give it to you, I can then take it and I can buy something with it. And that's, so it's a matter of perception. Okay. And then we go through then the history of money and the vulnerabilities of money. And what we have is there's, through history, there's been two types of money. Those that are claims on something of value, like the connection to gold or something. That's right. That would be. Or they just are money without any connection, and then we have a system now, which is a fiat monetary system. So that's what money is. 
Then it will last as long as it keeps its value, and it works that way. So let's say central banks, when they get in the position where they owe a lot of money, which is increasingly the case, and they also have the printing press to print the money and get out of that. And you have a lot of people who might be in that position. Then you can print it and then it could be devalued. And so history has shown, forget about today, history has shown that no currency, every currency has either ended as being a currency or devalued as a currency over periods of time, long periods of time. So it evolves and it changes, but everybody needs that medium of exchange and everybody needs that storehold of wealth. So it keeps changing what is money over a period of time. But so much is being digitized today and there are these ideas that are based on the blockchain, Bitcoin and so on. So if all currencies, like all empires, come to an end, what do you think, well, do you think something like Bitcoin might emerge as a common store of value, a store of wealth and a medium of exchange? The problem with Bitcoin is that it's not an effective medium of exchange. Like it's not easy for me to go in there and buy things with it. And then it's not an effective storehold of value because it has a volatility that's based on speculation and the like, so it's not very effective for saving. That's very different from Facebook's stable value currency, which would be effective as both a medium of exchange and a storehold of wealth. Because if you were to hold it, the way it's linked to the number of things it's linked to would mean that it could be a very effective storehold of wealth. And then you have a digital currency that could be a very effective medium of exchange and storehold of wealth. So in my opinion, some digital currencies are likely to succeed more or less based on that ability to do it. Then the question is what happens? Okay, what happens is do central banks allow that to happen? I really do believe it's possible to get a better form of money that central banks don't control. Okay, a better form of money that the central banks don't control. But then that's not yet happened. And we also have to, and so they've got to go through that evolutionary process. In order to go through that evolutionary process, first of all, governments have got to allow that to happen, which is to some extent a threat to them in terms of their power. And that's an issue. And then you have to also build the confidence in all of the components of it to say, okay, that's going to be effective because I won't have problems owning it. So I think that digital currencies have some element of potential, but there's a lot of hurdles that are going to have to be gotten over. I think that it'll be a very long time, possibly never, but anyway, a very long time before we have that, let's say, get into a position that would be effective means relative to gold, let's say, if you were to think of that. Because gold has a track record of thousands of years. All across countries, it has its mobility, it has the ability to put it down, it has certain abilities. It's got disadvantages relative to digital currencies, but central banks will hold it. Like there are central banks that worry about others; other countries' central banks might worry about whether the U.S. is going to print dollars or not. And so the thing they're going to go to is not going to be the digital currency.
The thing they're going to go to is gold or something else, some other currency, they got to pick it. And so I think it's a long way to go. Well, you think it's possible that one day we don't even have a central bank because of a currency that doesn't, that cannot be controlled by the central bank is the primary currency? Or does that seem very unlikely? It would be very remote possibility or very long in the future. Got it. Again, maybe a dumb question, but romanticize one. When you sit back and you look, you describe these transactions between individuals, somehow creating short term debt cycles, long term debt cycles, there's productivity growth. Does it amaze you that this whole thing works? That there's however many million, hundreds of millions of people in the United States, globally over seven billion people, that this thing between individual transactions, it just, it works. Yeah. It amazes me. Like I go back and forth between being in it and then I think, like, how did a credit card, how is that really possible? I'm still used, I look up credit card, I put it on, the guy doesn't know me. Yeah. It's all strangers. It signs, okay. We're making the digital entries. Is that really secure enough and that kind of thing? And then it goes back and it goes this and it clears and it all happens. And what I marvel at that and those types of things is because of the capacity of the human mind to create abstractions that are true. It's imagination and then the ability to go from one level and then if these things are true, then you go to the next level and if those things are true, then you go to the next level. And all those miracles that we almost become common, it's like when I'm flying in a plane or when I'm looking at all of the things that happen. When I get communications in the middle of, I don't know, Africa or Antarctica and we're communicating in the ways where I see the face on my iPad of somebody, my grandkid and someplace else and I look at this and I say, wow, yes, it all amazes me. So while being amazing, do you have a sense, the principles you described, that the whole thing is stable somehow also? Or is this, are we just lucky? So the principles that you described, are those describing a system that is stable, robust and will remain so? Or is it a lucky accident of our early history? My area of expertise is economics and markets so I get down to like a real nitty gritty. I can't tell you whether the plane is gonna fall out of the sky because of its particular fundamentals. I don't know enough about that but it happens over and over again and so on, it gives me faith, okay? So without me knowing it. In the markets and the economy, I know those things well enough, in a sense, to say that by and large, that structure is right, what we're seeing is right. Now, whether there are disruptions and it has effects that can come, not because that structure is right, and I believe it's right, but whether it can be hurt by let's say connectivity or journal entries, they could take all the money away from you through your digital entries. There's all sorts of things that can happen in various ways that means that that money is worthless or the system falls but from what I see in terms of its basic structure and those complexities that still take my breath away, I would say knowing enough about the mechanics of them, that doesn't worry me. Have you seen disruptions in your lifetime that really surprised you? Major disruptions? Oh, all the time. 
This is one of the great lessons of my life: many times I've seen things that I was very surprised about, and I realized almost all of those I was surprised about because they were just the first time it happened to me, they didn't happen in my lifetime before, but when I researched them, they happened in other places or in other people's lifetimes. So for example, I remember 1971, the dollar, there was no such thing as a devaluation of a currency, I hadn't experienced it, and the dollar was connected to gold and I was watching events happen and then you get on and that definition of money all of a sudden went out the window because it was not tied to gold and then you have this devaluation. So and then, or the first oil shock or the second oil shock or so many of these things. But almost always I realized that they, when I looked in history, they happened before, they just happened in other people's lifetimes, which led me to realize that I needed to study history and what happened in other people's lifetimes and what happened in other countries and places so that I would have timeless and universal principles for dealing with that thing. So I've, oh yeah, I've seen, you know, the implausible happening, but it's like a one in a hundred year storm, okay? They've happened before. Yeah, they've happened. Just not to you. Let me talk about, if we could, about AI a little bit. So Bridgewater Associates manages about $160 billion in assets, and artificial intelligence systems and algorithms are pretty good with data. What role in the future do you see AI play in analysis and decision making in this kind of data rich and impactful area of investment? I'm gonna answer that not only for investment but give a more all encompassing rule for AI. As I think you know, for the last 25 years, we have taken our thinking and put it into algorithms, and so we make decisions. The computer takes those criteria, those algorithms, they're in there, and it takes data and operates as an independent decision maker in parallel with our decision making. So for me, it's like there's a chess game playing and I'm a person with my chess game and I'm saying it made that move and I'm making the move and how do I compare those two moves? So we've done a lot but let me give you a rule. If the future can be different from the past and you don't have deep understanding, you should not rely on AI, okay? Those two things. Deep understanding of? The cause-effect relationships that are leading you to place that bet in anything, okay? Anything important. Let's say if it was to do surgeries and you would say, how do I do surgeries? I think it's a good idea to do surgeries but I think it's totally fine to watch all the doctors do the surgeries. You can put it on, take a digital camera and do that, convert that into AI algorithms that go to robots and have them do surgeries and I'd be comfortable with that because if it keeps doing the same thing over and over again and you have enough of that, that would be fine even though you may not understand the algorithms, because if the thing's happening over and over again and you're not asking, the future would be the same. That appendicitis or whatever it is will be handled the same way, the surgery, that's fine. However, what happens with AI for the most part is it takes a lot of data with a high enough sample size and then it puts together its own algorithms. Okay, there are two ways you can come up with algorithms.
You can either take your thinking and express it in algorithms or you can say, put the data in and say, what is the algorithm? That's machine learning. And when you have machine learning, it'll give you equations which quite often are not understandable. If you would try to say, okay, now describe what it's telling you, it's very difficult to describe and so they can escape understanding. And so it's very good for doing those things that could be done over and over again if you're watching and you're not taking that. But if the future is different from the past and you don't have deep understanding, you're gonna get in trouble. And so that's the main thing. As far as AI is concerned, AI and let's say computer replications of thinking in various ways, I think it's particularly good for processing. But the notion of what you want to do is better most of the time determined by the human mind. What are the principles? Like, okay, how should I raise my children? It's gonna be a long time before AI, you're going to say, it has a good enough judgment to do that. Who should I marry? All of those things. Maybe you can get the computer to help you but if you just took data and do machine learning, it's not gonna find it. If you were to then take one of my criteria for any of those questions and then, say, put them into an algorithm, you'd be a lot better off than if you took AI to do it. But by and large, the mind should be used for inventing and those creative things. And then the computer should be used for processing because it could process a lot more information, a lot faster, a lot more accurately and a lot less emotionally. So any notion of thinking in the form of processing type thinking should be done by a computer and anything that is in the notion of doing that other type of thinking should be operating with the brain, operating in a way where you can say, ah, that makes sense. You know, the process of reducing your understanding down to principles is kind of like the process, the first one you mentioned, type of AI algorithm where you're encoding your expertise, you're trying to program, write a program, the human is trying to write a program. How attainable do you think that is? The process of reducing principles to a computer program or when you say, when you write about, when you think about principles, is there still a human element that's not reducible to an algorithm? My experience has been that almost all things, including those things that I thought were pretty much impossible to express, I've been able to express in algorithms, but that doesn't constitute all things. So you can express far more than you can imagine you'll be able to express. So I use the example of, okay, it's not, how do you raise your children? Okay, you will be able to take it one piece by piece. Okay, at what age, what school? And the way to do that, in my experience, is to take that and when you're in the moment of making a decision or just past making a decision, to take the time and to write down your criteria for making that decision in words. Okay, that way you'll get your principles down on paper. I created an app online, it's right now just on the iPhone, it'll be on Android. I tried getting it on Android, come on now. Let's get it on Android. It'll be, in a few months it'll be on Android. Awesome. But it has a tool in there that helps people write down their own principles. Because this is very powerful.
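A minimal sketch of the distinction Dalio is drawing, in my own framing rather than anything from Bridgewater's actual systems: route one writes the human's criterion down as an explicit, readable rule; route two lets a fit to past data produce coefficients that carry no stated rationale. Both the rule and the toy data below are invented for illustration.

```python
# Route 1: an explicit, hand-written criterion you can read, debate, and
# stress test. The words came first; the code just processes them.
def should_reduce_risk(debt_to_income: float, rates_rising: bool) -> bool:
    # Stated principle: cut exposure when leverage is high AND the cost of
    # servicing that leverage is going up. (Illustrative, not Dalio's rule.)
    return debt_to_income > 1.5 and rates_rising

# Route 2: a machine-learned fit. Even this toy least-squares line produces
# numbers with no rationale attached to them.
xs = [0.5, 1.0, 1.5, 2.0, 2.5]   # leverage observed in the past (made-up data)
ys = [0.0, 0.0, 0.0, 1.0, 1.0]   # whether trouble followed (made-up data)
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
den = sum((x - mean_x) ** 2 for x in xs)
slope = num / den
intercept = mean_y - slope * mean_x

print("hand-written rule says:", should_reduce_risk(2.0, rates_rising=True))
print(f"fitted rule: risk ~ {slope:.2f} * leverage + {intercept:.2f}")
```

The fitted line may predict well when the future resembles the past; the hand-written rule is the one you can argue about with others and still reason about when conditions change, which is the substance of his warning.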
So when you're in that moment where you've just, you're thinking about it and you're thinking your criteria for choosing the school for your child or whatever that might be, and you write down your criteria or whatever they are, those principles, you write down and that will at that moment make you articulate your principles in a very valuable way. And if you have the way that we operate that you have easy access, so then the next time that comes along, you can go to that, or you can show those principles to others to see if they're the right principles. You will get a clarity of that principle that's really invaluable in words and that'll help you a lot. But then you start to think, how do I express that in data? And it'll shock you about how you can do that. You'll form an equation that will show the relationship between these particular parts and then essentially the variables that are going to go into that particular equation and you will be able to do that. And you take that little piece and you put it into the computer. And then take the next little piece and you put that into the computer. And before you know it, you will have a decision making system that's of the sort that I'm describing. So you're almost making an argument against an earlier statement you've made. You're convincing me. At first you said, there's no way a computer could raise a child essentially. But now you've described making me think of it. If you have that kind of idea meritocracy, you have this rigorous approach that Bridgewater takes with investment and applied to raising a child. It feels like through the process you just described, we could as a society arrive at a set of principles for raising a child and encode it into computer. That originality will not come from machine learning. The first time you do, so that the original, yes. That's what I'm referring to. But eventually as we together develop it and then we can automate it. That's why I'm saying the processing can be done by the computer. So we're saying the same thing. We're not inconsistent. We're saying the same thing. That the processing of that information and those algorithms can be done by the computer in a very, very effective way. You don't need to sit there and process and try to weigh all those things in your equation and all those things. But that notion of, okay, how do I get at that principle? And you're saying you'd be surprised. How much you can express. That's right. You can do that. So this is where I think you're going to see the future. And right now we go to our devices and we get information to a large extent. And then we get some guidance. We have our GPS and the like. In my opinion, principles. Principles, principles, principles. I wanna emphasize that. You write them down. You've got those principles. They will be converted into algorithms for decision making. And they're gonna also have the benefit of collective decision making. Because right now, individuals based on what's stuck in their heads are making their decisions in very ignorant ways. They're not the best decision makers. They're not the best criteria. And they're operating. When those principles are written down and converted into algorithms, it's almost like you'll look at that and follow the instructions and it'll give you better results. Medicine will be much more like this. You can go to your local doctor and you could ask his point of view and whatever. And he's rushed and he may not be the best doctor around. 
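Here is a hedged sketch of the exercise Dalio describes: state your criteria for a recurring decision in words, then express them as variables and weights so the processing can be handed to the computer while the principles stay visible. The criteria, weights, and schools below are hypothetical placeholders, not his.

```python
# Each weight is a principle written down first in words, then made numeric.
CRITERIA_WEIGHTS = {
    "teacher_quality":   0.40,   # principle: teaching quality matters most
    "peer_environment":  0.25,   # principle: who your child is around matters
    "affordability":     0.20,   # principle: don't strain the family finances
    "closeness_to_home": 0.15,   # principle: long commutes wear kids down
}

def score_school(ratings: dict) -> float:
    # ratings: criterion -> 0-10 judgment supplied by someone who knows the school
    return sum(weight * ratings[criterion]
               for criterion, weight in CRITERIA_WEIGHTS.items())

school_a = {"teacher_quality": 8, "peer_environment": 7,
            "affordability": 6, "closeness_to_home": 4}
school_b = {"teacher_quality": 6, "peer_environment": 9,
            "affordability": 9, "closeness_to_home": 8}

for name, ratings in [("School A", school_a), ("School B", school_b)]:
    print(f"{name}: {score_school(ratings):.2f}")
```

Each weight is itself a principle you can show to others and refine, which is the piece-by-piece build-up he describes before handing the processing to the machine.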
And you're gonna go to this thing and give that same information or just automatically have it input in that. And it's gonna tell you, okay, here's what you should go do. And it's gonna be much better than your local doctor. And that, the converting of information into intelligence, okay, intelligence is the thing. We're coming out with, again, I'm 70 and I wanna pass all these things along. So all these tools that I've found need to develop all over these periods of time, all those things I wanna make available. And what's gonna happen as they're going to see this, they're going to see these tools operating much more that way. The idea of converting data into intelligence. Intelligence, for example, on what they are like. Or what are your strengths and weaknesses? Intelligence on who do I work well with under what circumstances? The personalized. Intelligence. We're gonna go from what are called systems of record, which are a lot of, okay, information organized in the right way, to intelligence. And that'll be the next big move, in my opinion. And so you will get intelligence back. And that intelligence comes from reducing things down to principles into. That's how it happens. So what's your intuition if we look at future societies? Do you think we'll be able to reduce a lot of the details of our lives down to principles that would be further and further automated? I think the real question hinges on people's emotions and irrational behaviors. I think that there's subliminal things that we want, okay? And then there's cerebral, conscious logic. And the two often are at odds. So there's almost like two you's in you, right? And so let's say, what do you want? And your mind will answer one thing, your emotions will answer something else. So when I think about it, I think emotions are, I want inspiration, I want love is a good thing, being able to have a good impact, but it is in the reconciliation of your subliminal wants and your intellectual wants so that you really say they're aligned. And so to do that in a way to get what you want. So irrationality is a bad thing if it means that it doesn't make sense in getting you what you want, but you better decide which you you're satisfying. Is it the lower level you, emotional, subliminal one, or is it the other? But if you can align them. So what I find is that by going from my, you experience the decision, do this thing subliminally. And that's the thing I want. It comes to the surface. I find that if I can align that with what my logical me wants and do the double check between them and I get the same sort of thing, that that helps me a lot. I find, for example, meditation is one of the things that helps to achieve that alignment. It's fantastic for achieving that alignment. And often then I also want to not just do it in my head. I want to say, does that make sense? Help you. And so I do it with other people and I say, okay, well, let's say I want this thing and whatever. Does that make sense? And when you do that kind of triangulation, your two you's, and you do that with also the other way, then you certainly want to be rational, right? But rationality has to be defined by those things. And then you discover sort of new ideas that drive your future. So it's always, you're always at the edge of the set of principles you've developed. You're doing new things always. That's where the intellect is needed. Well, and the inspiration. The inspiration is needed to do that, right? Like what are you doing it for? It's the excitement. 
What is that thing? The adventure, the curiosity, the hunger. What's, if you can be Freud for a second, what's in that subconscious? What's the thing that drives us? I think you can't generalize about us. I think different people are driven by different things. There's not a common one, right? So like if you would take the shapers, I think it is a combination of, subliminally, it's a combination of excitement, curiosity. Is there a dark element there? Are there demons? Are there fears? Is there, in your sense, something dark that drives them? Most of the ones that I'm dealing with, I have not seen that. What I really see is, ooh, if I can do that, that would be the dream. And then the act of creativity. And you say, ooh. So excitement is one of the things. Curiosity is a big pull, okay? And then tenacity, you know, okay, to do those things. But definitely emotions are entering into it. Then there's an intellectual component of it too, okay? It may be empathy. Can I have an impact? Can I have an impact? The desire to have an impact. That's an emotional thrill, but it also has empathy. And then you start to see spirituality. By spirituality, I mean the connectedness to the whole. You start to see people operate those things. Those tend to be the things that you see the most of. And I think you're gonna shut down this idea completely, but there's a notion that some of these shapers really walk the line between sort of madness and genius. Do you think madness has a role in any of this? Or do you still see Steve Jobs and Elon Musk as fundamentally rational? Yeah, there's a continuum there. And what comes to my mind is that genius is often at the edge, in some cases imaginative genius is at the edge of insanity. And it's almost like a radio that I think, okay, if I can tune it just right, it's playing right, but if I go a little bit too far, it goes off, okay? And so you can see this. Kay Jamison was studying bipolar disorder. What it shows is that that's definitely the case, because when you're going out there, that imagination, whatever, is at the, can be near the edge sometimes. Doesn't have to always be. So let me ask you about automation. That's been a part of public discourse recently. What's your view on the impact of automation, whether we're talking about AI or more basic forms of automation, on the economy in the short term and the long term? Do you have concerns about it, as some do, or do you think it's overblown? It's not overblown. I mean, it's a giant thing. It'll come at us in a very big way in the future. We're right at the edge of even really accelerating it. It's had a big impact, and it will have a big impact. And it's a two edged sword, because it'll have tremendous benefits. And at the same time, it has profound effects on employment and distributions of wealth, because the way I think about it is, there are certain things human beings can do. And over time, we've evolved to go to almost higher and higher levels. And now we're almost like, we're at this level. You know, it used to be your labor, and you would then do your labor, and okay, we can get past the labor. We got tractors and things, and you go up, up, up, up, up, and we're up over here, and up to the point in our minds, where okay, anything related to mental processing, the computer can probably do better, and we can find that. And so other than almost inventing, you're at a point where the machines and the automation will probably do it better.
And that's accelerating, and that's a force, and that's a force for the good. And at the same time, what it does is it displaces people in terms of employment and changes, and it produces wealth gaps and all of that. So I think the real issue is that that has to be viewed as a national emergency. In other words, I think the wealth gap, the income gap, the opportunity gap, all of those things that force is creating the problems that we're having today, a lot of the problems, the great polarity, the disenfranchised, not anything approaching equality of education, all of these problems, a lot of problems are coming as a result of that. And so it needs to be viewed really as an emergency situation in which there's a good work, good plan worked out for how to deal with that effectively, so that it's dealt with effectively. So because it's good for the average, it's good for the impact, but it's not good for everyone, and that creates that polarity. So it's gotta be dealt with. Yeah, and you've talked about the American dream, and that that's something that all people should have an opportunity for, and that we need to reform capitalism to give that opportunity for everyone. Let me ask on one of the ideas in terms of safety nets that support that kind of opportunity. There's been a lot of discussion of universal basic income amongst people. So there's Andrew Yang, who's running on that. He's a political candidate running for president on the idea of universal basic income. What do you think about that, giving $1,000 or some amount of money to everybody as a way to give them the padding, the freedom to sort of take leaps, to take the call for adventure, to take the crazy pursuits? Before I get right into my thoughts on universal basic income, I wanna start with the notion that opportunity, education, development, creating equality, so that people say there's equal opportunity and is the most important thing. And then to find out what is the amount, how are you going to provide that? How do you get the money into a public school system? How do you get the teaching? The fleshing out that plan to create equal opportunity in all of its various forms is the most pressing thing to do. And so that is that. The opportunity, the most important one you're kind of implying is the earlier, the better. Sort of like opportunity to education. So in the early development of a human being is when you should have the equal opportunities. That's the most important. Right, in the first phase of your life, which goes from birth until you're on your own and you're an adult and you're now out there. And you deal with early childhood development, okay? And you take the brain and you say, what's important? The childcare, okay, it makes a world of difference, for example, if you have good parents who are trying to think about instilling the stability in a non traumatic environment to provide them. So I would say the good guidance that normally comes from parents and the good education that they're receiving are the most important things in that person's development. The ability to be able to be prepared to go out there and then to go into a market that's an equal opportunity job market, to be able to then go into that kind of market is a system that creates not only fairness, anything else is not fair. And then in addition to that, it also is a more effective economic system because the consequences of not doing that to a society are devastating. 
If you look at what the difference in outcomes for somebody who completes high school or doesn't complete high school, or does each one of those state changes. And you look at what that means in terms of their costs to society, not only themselves, but their cost and incarceration costs and crimes and all of those things. It's economically better for the society and it's fairer if they can get those particular things. Once they have those things, then you move on to other things. But yes, from birth all the way through that process, anything less than that is bad, is a tragedy and so on. So that's what, yeah, those are the things that I'm estimating. And so what I would want above all else is to provide that. So with that in mind, now we'll talk about universal basic income. Start with that, now we can talk about UBI. Right, because you have to have that. Now the question is what's the best way to provide that? So when I look at UBI, I really think is what is going to happen with that $1,000, okay? And will that $1,000 come from another program? Does that come from an early childhood developmental program? Who are you giving the $1,000 to and what will they do for that $1,000? I mean, like my reaction would be, I think it's a great thing that everybody should have almost $1,000 in their bank and so on. But when do they get to make decisions or who's the parent? A lot of times you can give $1,000 to somebody and it could have a negative result. It can have, they can use that money detrimentally, not just productively. And if that money's coming away from some of those other things that are gonna produce the things I want and you're shifted to, let's say, to come in and give a check, doesn't mean its outcomes are going to be good in providing those things that I think are so fundamental important. If it was just everybody can have $1,000 and use it so when the time comes. Use it well, right. And use it well, that would be really, really good because it's almost like everybody, you'd wish everybody could have $1,000 worth of wiggle room in their lives, okay. And I think that would be great. I love that. But I wanna make sure that these other things that are taken care of. So if it comes out of that budget and I don't want it to come out of that budget that's gonna be doing those things. And so you have to figure it out. And you have a certain skepticism that human nature will use, may not always, in fact frequently, may not use that $1,000 for the optimal, to support the optimal trajectory. Some will and some won't. One of the big advantages of universal basic income is that if you put it in the hands, let's say of parents who know how to do the right things and make the right choices for their children because they're responsible and you say I'm gonna give them $1,000 wiggle room to use for the benefit of their children. Wow, that sounds great. If you put it in the hands of let's say an alcoholic or drug addicted parent who is not making those choices well for their children and what they do is they take that $1,000 and they don't use it well, then that's gonna produce more harm than good. Well put. You're, if I may say so, one of the richest people in the world. So you're a good person to ask. Does money buy happiness? No, it's been shown that between, once you get over a basic level of income so that you can take care of the pain that you can, health and whatever, there's no correlation between the level of happiness that one has and the level of money that one has. 
The thing that has the highest correlation is quality relationships with others, community. If you look at surveys of these things, across all surveys and all societies, it's a sense of community and interpersonal relationships. That is not in any way correlated with money. You can go down to native tribes in very poor places or you can go into all different communities, and they have the opportunity to have that. I'm very lucky in that I started with nothing, so I have the full range. I can tell you about not having money and then having quite a lot of money, and I did that in the right order. So you started from nothing in Long Island. Yeah, and my dad was a jazz musician, but I had really all that I needed because I had two parents who loved me and took good care of me and I went to a public school that was a good public school, and basically you don't need much more than that in order to, that's the equal opportunity part. Anyway, what I'm saying is I experienced the range and there are many studies on the answer to your question. No, money does not bring happiness. Money gives you an ability to make choices. Does it get in the way in any way of forming those deep, meaningful relationships? It can. There are lots of ways that it can be a negative. That's one of them. It could stand in the way of that. Yes, okay. But I could almost list the ways that it could stand in the way, that it could be a problem. Yeah, what does it buy? So if you can elaborate, you mentioned a bit of freedom. At the most fundamental level, it doesn't take a whole lot, but it takes enough that you can take care of yourself and your family, to be able to learn, do the basics of, have the relationships, have healthcare, the basics of those types of things. You know, you can cover the basics. And then to have maybe enough security, but maybe not too much security. That's right, yeah. That you essentially are okay. Okay, that's really good. And you don't, that's what money will get you. And everything else could go either way. Well, no, there's more. There's more. Okay. Then beyond that, what it then starts to do, that's the most important thing. But beyond that, what it starts to do is to help to make your dreams happen in various ways. Okay, so for example, now I, you know, like in my case, it's like those dreams might not be just my own dreams, they're impact on others' dreams, okay? So my own dreams might be, I don't know, I can pass along these, at my stage in life, I could pass along these principles to you and I can give those things or I could do whatever. I can go on an adventure, I can start a business, I can do those other things, be productive, I can self actualize in ways that might not be possible otherwise. And so that's my own belief. And then I can also help others. I mean, this is, you know, to the extent when you get older and with time and whatever, you start to feel connected, spirituality, that's what I'm referring to, you can start to have an effect on others that's beneficial and so on. It gives you the ability. I can tell you that people who are very wealthy, who have that, feel that they don't have enough money. Bill Gates will feel almost broke because relative to the things he'd like to accomplish through the Gates Foundation and things like that, you know, oh my God, he doesn't have enough money to accomplish the things he wishes for. But those things are not, you know, they're not the most fundamental things. So I think that people sometimes think money has value. Money doesn't have value. 
Money is, like you say, just a medium of exchange and a storehold of wealth. And so what you have to say is, what is it that you're going to buy? Now, there are other people who get their gratification in ways that are different from me, but I think in many cases, let's say somebody who used money to have a status symbol, what would I say? That's probably unhealthy. But then, I don't know, somebody who says, I love a great, gorgeous painting and it's going to cost lots of money. In my priorities, I can't get there. But that doesn't mean, who am I to judge others in terms of, let's say, their element of the freedom to do those things. So it's a little bit complicated. But by and large, that's my view on money and wealth. So let me ask you, in terms of the idea that so much of your passion in life has been through something you might be able to call work. Alan Watts has this quote. He said that the real key to life, the secret to life, is to be completely engaged with what you're doing in the here and now. And instead of calling it work, realize it is play. So I'd like to ask, what is the role of work in your life's journey, or in a life's journey? And what do you think about this modern idea of kind of separating work and work life balance? I have a principle that I believe in: make your work and your passion the same thing. Okay. Okay. So that's a similar view. In other words, if you can make your work and your passion the same thing, it's just gonna work out great. And then of course, people have different purposes of work. And I don't wanna be theoretical about that. People have to take care of their family. So money at a certain point is the base, is an important component of that work. So you look beyond that, what is the money gonna get you and what are you trying to achieve? But the most important thing, I agree, is meaningful work and meaningful relationships. Like if you can get into the thing that you're at, your mission that you're on, and you are excited about that mission that you're on, and then you can do that with people who you have the meaningful relationships with, you have meaningful work and meaningful relationships. I mean, that is fabulous for most people. And it seems that many people struggle to get there, not necessarily because they're constrained by the fact that they have the financial constraints of having to provide for their family and so on. But it's, I mean, this idea is out there that there needs to be a work life balance, which means that most people, we're gonna return to the same things, most doesn't mean optimal, but most people seem to not be doing their life's passion, not unifying work and passion. Why do you think that is? Well, the work life balance, there's a life arc that you go through. It starts at zero and ends somewhere in the vicinity of 80, and there are phases. And you could look at the different degrees of happiness that happen in those phases. I can go through that if that was interesting, but we probably don't have time for it. But the part of the life which has the lowest level of happiness is age 45 to 55. And that's because as you move into this second phase of your life, now, the first phase of your life is when you're learning, dependent on others. The second phase of your life is when you're working and others are dependent on you and you're trying to be successful. 
And in that phase of one's life, you encounter the work life balance challenge because you're trying to be successful at work and successful at parenting and successful in all those things that make demands on you. And people get into that. And I understand that problem of work life balance. The issue is primarily to know how to approach that, okay? So I understand it's stressful, it produces stress and it produces bad results and it produces the lowest level of happiness in one's life. It's interesting, as you get later in life, the levels of happiness rise, and the highest level of happiness is between ages 70 and 80, which is interesting for other reasons. But in that spot, the key to work life balance is to realize and to learn how to get more out of an hour of life, okay? Because with an hour of work, what people are thinking is that they have to make a choice between one thing and another. And of course they do. But they don't realize that if they develop the skill to get a lot more out of an hour, it's the equivalent of having many more hours in your life. And so, that's why in the book Principles, I try to go into, okay, now, how can you get a lot more out of an hour? That allows you to get more life into your life and it reduces the work life balance problem. And that's the primary struggle in that 35 to 45. If you could linger on that, so what are the ups and downs of life in terms of happiness in general and perhaps in your own life when you look back at the moments, the peaks? It's pretty much the same pattern. Early in one's life tends to be a very happy period, all the way up, and 16 is like a really great, happy age. I think, like myself, you start to get elements of freedom, you get your driver's license, whatever, but 16 is there. Junior year in high school quite often can be a stressful period, trying to get into college. You go into college, which tends to be very high happiness, generally speaking. Freedom. And then freedom, friendships, all of that. Freedom is a big thing. And then 23 is kind of a peak point in happiness, that freedom. Then sequentially, one has a great time, they date, they go out and so on, you find the love of your life, you begin to develop a family. And then with that, as time goes on, you have more of the work life balance challenges that come with your responsibilities. And then as you get there in that mid part of your life, that is the biggest struggle. Chances are you will crash in that period of time, you'll have your series of failures, that's when you go into the abyss, you learn, you hopefully learn from those mistakes, you have the metamorphosis, you come out, you change, you hopefully become better and you take on more responsibilities and so on. And then when you get to the later part, as you are starting to approach the transition in that late part of the second phase of your life, before you go into the third phase of your life, the second phase is you're working, trying to be successful. The third phase of your life is you want people to be successful without you, okay? You want your kids to be successful without you because when you're at that phase, they're making their transition from the first phase to the second phase and they're trying to be successful and you want them to be successful without you, and your parents are gone, and then you have freedom, and then you have freedom again. 
And with that freedom, then you have these things, history has shown this, you have friendships, you have perspective on life, you have different things, and that's one of the reasons that that later part of life can be really good. On average, actually, it's the highest. Very interesting thing, there are surveys that ask, how good do you look and how good do you feel? And that's the highest in the surveys. Now, they're not looking the best and they're not feeling the best, right? Maybe it's at 35 that they're actually looking the best and feeling the best, but they rank the highest at that point, survey results of being the highest in that 70 to 80 period of time, because it has to do with an attitude on life. Then you start to have grandkids, oh, grandkids are great, and you start to experience that transition well. So that's what the arc of life pretty much looks like, and I'm experiencing it. You've lived it. When you meditate, we're all human, we're all mortal. When you meditate on your own mortality, having achieved a lot of success on whatever dimension, what do you think is the meaning of it all? The meaning of our short existence on earth as human beings? I think that evolution is the greatest force of the universe and that we're all tiny bits of an evolutionary type of process where it's just matter and machines that go through time and that we all have a deeply embedded inclination to evolve and contribute to evolution. So I think it's to personally evolve and contribute to evolution. I could have predicted you would answer that way. It's brilliant and exactly right. And I think we've said it before, but I'll say it again. You have a lot of incredible videos out there that people should definitely watch. I don't say this often. I mean, it's literally the best spend of time, and in terms of reading, Principles and basically anything you write on LinkedIn and so on is a really good use of time. It's a lot of light bulb moments, a lot of transformative ideas in there. So Ray, thank you so much. It's been an honor. I really appreciate it. It's been a pleasure for me too. I'm happy to hear it's of use to you and others. Thanks for listening to this conversation with Ray Dalio and thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast. You'll get $10 and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, support on Patreon or connect with me on Twitter. Finally, closing words of advice from Ray Dalio: pain plus reflection equals progress. Thank you for listening and hope to see you next time.
Ray Dalio: Principles, the Economic Machine, AI & the Arc of Life | Lex Fridman Podcast #54
The following is a conversation with Whitney Cummings. She's a standup comedian, actor, producer, writer, director, and recently, finally, the host of her very own podcast called Good For You. Her most recent Netflix special called Can I Touch It? features in part a robot she affectionately named Bearclaw that is designed to be visually a replica of Whitney. It's exciting for me to see one of my favorite comedians explore the social aspects of robotics and AI in our society. She also has some fascinating ideas about human behavior, psychology, and neurology, some of which she explores in her book called I'm Fine and Other Lies. It was truly a pleasure to meet Whitney and have this conversation with her and even to continue it through text afterwards. Every once in a while, late at night, I'll be programming over a cup of coffee and will get a text from Whitney saying something hilarious or weirder yet, sending a video of Brian Callan saying something hilarious. That's when I know the universe has a sense of humor and it gifted me with one hell of an amazing journey. Then I put the phone down and go back to programming with a stupid, joyful smile on my face. If you enjoy this conversation, listen to Whitney's podcast, Good For You, and follow her on Twitter and Instagram. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. This show is presented by Cash App, the number one finance app in the App Store. They regularly support Whitney's Good For You podcast as well. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Broker services are provided by Cash App Investing, subsidiary of Square, and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called First, best known for their FIRST Robotics and Lego competitions. They educate and inspire hundreds of thousands of students in over 110 countries, and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play, and use code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, which again, is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. This podcast is supported by ZipRecruiter. Hiring great people is hard, and to me is the most important element of a successful mission driven team. I've been fortunate to be a part of, and to lead several great engineering teams. The hiring I've done in the past was mostly through tools that we built ourselves, but reinventing the wheel was painful. ZipRecruiter is a tool that's already available for you. It seeks to make hiring simple, fast, and smart. For example, Codable cofounder Gretchen Huebner used ZipRecruiter to find a new game artist to join her education tech company. By using ZipRecruiter screening questions to filter candidates, Gretchen found it easier to focus on the best candidates, and finally hiring the perfect person for the role in less than two weeks from start to finish. ZipRecruiter, the smartest way to hire. 
See why ZipRecruiter is effective for businesses of all sizes by signing up as I did for free at ziprecruiter.com slash lexpod. That's ziprecruiter.com slash lexpod. And now, here's my conversation with Whitney Cummings. I have trouble making eye contact, as you can tell. Me too. Did you know that I had to work on making eye contact because I used to look here? Do you see what I'm doing? That helps, yeah, yeah, yeah. Do you want me to do that? Well, I'll do this way, I'll cheat the camera. But I used to do this, and finally people, like I'd be on dates and guys would be like, are you looking at my hair? Like they get, it would make people really insecure because I didn't really get a lot of eye contact as a kid. It's one to three years. Did you not get a lot of eye contact as a kid? I don't know. I haven't done the soul searching. Right. So, but there's definitely some psychological issues. Makes you uncomfortable. Yeah, for some reason when I connect eyes, I start to think, I assume that you're judging me. Oh, well, I am. That's why you assume that. Yeah. We all are. All right. This is perfect. The podcast would be me and you both staring at the table on the whole time. Do you think robots are the future? Ones with human level intelligence will be female, male, genderless, or another gender we have not yet created as a society? You're the expert at this. Well, I'm gonna ask you. You know the answer. I'm gonna ask you questions that maybe nobody knows the answer to. Okay. And then I just want you to hypothesize as a imaginative author, director, comedian. Can we just be very clear that you know a ton about this and I know nothing about this, but I have thought a lot about what I think robots can fix in our society. And I mean, I'm a comedian. It's my job to study human nature, to make jokes about human nature and to sometimes play devil's advocate. And I just see such a tremendous negativity around robots or at least the idea of robots that it was like, oh, I'm just gonna take the opposite side for fun, for jokes and then I was like, oh no, I really agree in this devil's advocate argument. So please correct me when I'm wrong about this stuff. So first of all, there's no right and wrong because we're all, I think most of the people working on robotics are really not actually even thinking about some of the big picture things that you've been exploring. In fact, your robot, what's her name by the way? Bearclaw. We'll go with Bearclaw. What's the genesis of that name by the way? Bearclaw was, I got, I don't even remember the joke cause I black out after I shoot specials, but I was writing something about like the pet names that men call women, like cupcake, sweetie, honey, you know, like we're always named after desserts or something and I was just writing a joke about, if you wanna call us a dessert, at least pick like a cool dessert, you know, like Bearclaw, like something cool. So I ended up calling her Bearclaw. So do you think the future robots of greater and greater intelligence would like to make them female, male? Would we like to assign them gender or would we like to move away from gender and say something more ambiguous? I think it depends on their purpose, you know? I feel like if it's a sex robot, people prefer certain genders, you know? And I also, you know, when I went down and explored the robot factory, I was asking about the type of people that bought sex robots. And I was very surprised at the answer because of course the stereotype was it's gonna be a bunch of perverts. 
It ended up being a lot of people that were handicapped, a lot of people with erectile dysfunction and a lot of people that were exploring their sexuality. A lot of people that thought they were gay, but weren't sure, but didn't wanna take the risk of trying on someone that could reject them and being embarrassed or they were closeted or in a city where maybe that's, you know, taboo and stigmatized, you know? So I think that a gendered sex robot that would serve an important purpose for someone trying to explore their sexuality. Am I into men? Let me try on this thing first. Am I into women? Let me try on this thing first. So I think gendered robots would be important for that. But I think genderless robots in terms of emotional support robots, babysitters, I'm fine for a genderless babysitter with my husband in the house. You know, there are places that I think that genderless makes a lot of sense, but obviously not in the sex area. What do you mean with your husband in the house? What does that have to do with the gender of the robot? Right, I mean, I don't have a husband, but hypothetically speaking, I think every woman's worst nightmare is like the hot babysitter. You know what I mean? So I think that there is a time and place, I think, for genderless, you know, teachers, doctors, all that kind of, it would be very awkward if the first robotic doctor was a guy or the first robotic nurse was a woman. You know, it's sort of, that stuff is still loaded. I think that genderless could just take the unnecessary drama out of it and possibility to sexualize them or be triggered by any of that stuff. So there's two components to this, to Bearclaw. So one is the voice and the talking and so on, and then there's the visual appearance. So on the topic of gender and genderless, in your experience, what has been the value of the physical appearance? So has it added much to the depth of the interaction? I mean, mine's kind of an extenuating circumstance because she is supposed to look exactly like me. I mean, I spent six months getting my face molded and having, you know, the idea was I was exploring the concept of can robots replace us? Because that's the big fear, but also the big dream in a lot of ways. And I wanted to dig into that area because, you know, for a lot of people, it's like, they're gonna take our jobs and they're gonna replace us. Legitimate fear, but then a lot of women I know are like, I would love for a robot to replace me every now and then so it can go to baby showers for me and it can pick up my kids at school and it can cook dinner and whatever. So I just think that was an interesting place to explore. So her looking like me was a big part of it. Now her looking like me just adds an unnecessary level of insecurity because I got her a year ago and she already looks younger than me. So that's a weird problem. But I think that her looking human was the idea. And I think that where we are now, please correct me if I'm wrong, a human robot resembling an actual human you know is going to feel more realistic than some generic face. Well, you're saying that robots that have some familiarity like look similar to somebody that you actually know you'll be able to form a deeper connection with? That was the question. I think so on some level, right? That's an open question. I don't, you know, it's an interesting. Or the opposite, because then you know me and you're like, well, I know this isn't real because you're right here. So maybe it does the opposite. 
We have a very keen eye for human faces and they're able to detect strangeness especially that one has to do with people whose faces we've seen a lot of. So I tend to be a bigger fan of moving away completely from faces. Of recognizable faces? No, just human faces at all. In general, because I think that's where things get dicey. And one thing I will say is I think my robot is more realistic than other robots not necessarily because you have seen me and then you see her and you go, oh, they're so similar but also because human faces are flawed and asymmetrical. And sometimes we forget when we're making things that are supposed to look human, we make them too symmetrical and that's what makes them stop looking human. So because they mold in my asymmetrical face, she just, even if someone didn't know who I was I think she'd look more realistic than most generic ones that didn't have some kind of flaws. Got it. Because they start looking creepy when they're too symmetrical because human beings aren't. Yeah, the flaws is what it means to be human. So visually as well. But I'm just a fan of the idea of letting humans use a little bit more imagination. So just hearing the voice is enough for us humans to then start imagining the visual appearance that goes along with that voice. And you don't necessarily need to work too hard on creating the actual visual appearance. So there's some value to that. When you step into the stare of actually building a robot that looks like Bear Claws, such a long road of facial expressions of sort of making everything smiling, winking, rolling in the eyes, all that kind of stuff. It gets really, really tricky. It gets tricky and I think I'm, again, I'm a comedian. Like I'm obsessed with what makes us human and our human nature and the nasty side of human nature tends to be where I've ended up exploring over and over again. And I was just mostly fascinated by people's reaction. So it's my job to get the biggest reaction from a group of strangers, the loudest possible reaction. And I just had this instinct just when I started building her and people going, ah, ah, and people scream. And I mean, I would bring her out on stage and people would scream. And I just, to me, that was the next level of entertainment. Getting a laugh, I've done that, I know how to do that. I think comedians were always trying to figure out what the next level is and comedy's evolving so much. And Jordan Peele had just done these genius comedy horror movies, which feel like the next level of comedy to me. And this sort of funny horror of a robot was fascinating to me. But I think the thing that I got the most obsessed with was people being freaked out and scared of her. And I started digging around with pathogen avoidance and the idea that we've essentially evolved to be repelled by anything that looks human, but is off a little bit. Anything that could be sick or diseased or dead, essentially, is our reptilian brain's way to get us to not try to have sex with it, basically. So I got really fascinated by how freaked out and scared. I mean, I would see grown men get upset. They'd get that thing away from me, look, I don't like that, like people would get angry. And it was like, you know what this is, you know? But the sort of like, you know, amygdala getting activated by something that to me is just a fun toy said a lot about our history as a species and what got us into trouble thousands of years ago. 
So it's that, it's the deep down stuff that's in our genetics, but also is it just, are people freaked out by the fact that there's a robot? So it's not just the appearance, but there's an artificial human. Anything people, I think, and I'm just also fascinated by the blind spots humans have. So the idea that you're afraid of that, I mean, how many robots have killed people? How many humans have died at the hands of other humans? Yeah, a few more. Millions? Hundreds of millions? Yet we're scared of that? And we'll go to the grocery store and be around a bunch of humans who statistically the chances are much higher that you're gonna get killed by humans. So I'm just fascinated by without judgment how irrational we are as a species. The word is the exponential. So it's, you know, you can say the same thing about nuclear weapons before we dropped on the Hiroshima and Nagasaki. So the worry that people have is the exponential growth. So it's like, oh, it's fun and games right now, but you know, overnight, especially if a robot provides value to society, we'll put one in every home and then all of a sudden lose track of the actual large scale impact it has on society. And then all of a sudden gain greater and greater control to where we'll all be, you know, affect our political system and then affect our decision. Didn't robots already ruin our political system? Didn't that just already happen? Which ones? Oh, Russia hacking. No offense, but hasn't that already happened? I mean, that was like an algorithm of negative things being clicked on more. We'd like to tell stories and like to demonize certain people. I think nobody understands our current political system or discourse on Twitter, the Twitter mobs. Nobody has a sense, not Twitter, not Facebook, the people running it. Nobody understands the impact of these algorithms. They're trying their best. Despite what people think, they're not like a bunch of lefties trying to make sure that Hillary Clinton gets elected. It's more that it's an incredibly complex system that we don't, and that's the worry. It's so complex and moves so fast that nobody will be able to stop it once it happens. And let me ask a question. This is a very savage question. Which is, is this just the next stage of evolution? As humans, when people will die, yes. I mean, that's always happened, you know? Is this just taking emotion out of it? Is this basically the next stage of survival of the fittest? Yeah, you have to think of organisms. You know, what does it mean to be a living organism? Like, is a smartphone part of your living organism, or? We're in relationships with our phones. Yeah. We have sex through them, with them. What's the difference between with them and through them? But it also expands your cognitive abilities, expands your memory, knowledge, and so on. So you're a much smarter person because you have a smartphone in your hand. But as soon as it's out of my hand, we've got big problems, because we've become sort of so morphed with them. Well, there's a symbiotic relationship. And that's what, so Elon Musk, the neural link, is working on trying to increase the bandwidth of communication between computers and your brain. And so further and further expand our ability as human beings to sort of leverage machines. And maybe that's the future, the next evolutionary step. It could be also that, yes, we'll give birth, just like we give birth to human children right now, we'll give birth to AI and they'll replace us. I think it's a really interesting possibility. 
I'm gonna play devil's advocate. I just think that the fear of robots is wildly classist. Because, I mean, Facebook, like it's easy for us to say they're taking their data. Okay, well, a lot of people that get employment off of Facebook, they are able to get income off of Facebook. They don't care if you take their phone numbers and their emails and their data, as long as it's free. They don't wanna have to pay $5 a month for Facebook. Facebook is a wildly democratic thing. Forget about the election and all that kind of stuff. A lot of technology making people's lives easier, I find that most elite people are more scared than lower income people. So, and women for the most part. So the idea of something that's stronger than us and that might eventually kill us, like women are used to that. Like that's not, I see a lot of like really rich men being like, the robots are gonna kill us. We're like, what's another thing that's gonna kill us? I tend to see like, oh, something can walk me to my car at night. Like something can help me cook dinner or something. For people in underprivileged countries who can't afford eye surgery, like in a robot, can we send a robot to underprivileged places to do surgery where they can't? I work with this organization called Operation Smile where they do cleft palate surgeries. And there's a lot of places that can't do a very simple surgery because they can't afford doctors and medical care. And such. So I just see, and this can be completely naive and should be completely wrong, but I feel like a lot of people are going like, the robots are gonna destroy us. Humans, we're destroying ourselves. We're self destructing. Robots to me are the only hope to clean up all the messes that we've created. Even when we go try to clean up pollution in the ocean, we make it worse because of the oil that the tankers use. Like, it's like, to me, robots are the only solution. Firefighters are heroes, but they're limited in how many times they can run into a fire. So there's just something interesting to me. I'm not hearing a lot of like, lower income, more vulnerable populations talking about robots. Maybe you can speak to it a little bit more. There's an idea, I think you've expressed it. I've heard, actually a few female writers and roboticists have talked to express this idea that exactly you just said, which is, it just seems that being afraid of existential threats of artificial intelligence is a male issue. Yeah. And I wonder what that is. If it, because men have, in certain positions, like you said, it's also a classist issue. They haven't been humbled by life, and so you always look for the biggest problems to take on around you. It's a champagne problem to be afraid of robots. Most people don't have health insurance. They're afraid they're not gonna be able to feed their kids. They can't afford a tutor for their kids. I mean, I just think of the way I grew up, and I had a mother who worked two jobs, had kids. We couldn't afford an SAT tutor. The idea of a robot coming in, being able to tutor your kids, being able to provide childcare for your kids, being able to come in with cameras for eyes and make sure surveillance. I'm very pro surveillance because I've had security problems and I've been, we're generally in a little more danger than you guys are. So I think that robots are a little less scary to us because we can see them maybe as like free assistance, help and protection. 
And then there's sort of another element for me personally, which is maybe more of a female problem. I don't know. I'm just gonna make a generalization, happy to be wrong. But the emotional sort of component of robots and what they can provide in terms of, you know, I think there's a lot of people that don't have microphones that I just recently kind of stumbled upon in doing all my research on the sex robots for my standup special, which just, there's a lot of very shy people that aren't good at dating. There's a lot of people who are scared of human beings who have personality disorders or grow up in alcoholic homes or struggle with addiction or whatever it is where a robot can solve an emotional problem. And so we're largely having this conversation about like rich guys that are emotionally healthy and how scared of robots they are. We're forgetting about like a huge part of the population who maybe isn't as charming and effervescent and solvent as, you know, people like you and Elon Musk who these robots could solve very real problems in their life, emotional or financial. Well, that's a, in general, a really interesting idea that most people in the world don't have a voice. It's a, you've talked about it, sort of even the people on Twitter who are driving the conversation. You said comments, people who leave comments represent a very tiny percent of the population and they're the ones they, you know, we tend to think they speak for the population, but it's very possible on many topics they don't at all. And look, I, and I'm sure there's gotta be some kind of legal, you know, sort of structure in place for when the robots happen. You know way more about this than I do, but you know, for me to just go, the robots are bad, that's a wild generalization that I feel like is really inhumane in some way. You know, just after the research I've done, like you're gonna tell me that a man whose wife died suddenly and he feels guilty moving on with a human woman or can't get over the grief, he can't have a sex robot in his own house? Why not? Who cares? Why do you care? Well, there's a interesting aspect of human nature. So, you know, we tend to as a civilization to create a group that's the other in all kinds of ways. Right. And so you work with animals too, you're especially sensitive to the suffering of animals. Let me kind of ask, what's your, do you think we'll abuse robots in the future? Do you think some of the darker aspects of human nature will come out? I think some people will, but if we design them properly, the people that do it, we can put it on a record and we can put them in jail. We can find sociopaths more easily, you know, like. But why is that a sociopathic thing to harm a robot? I think, look, I don't know enough about the consciousness and stuff as you do. I guess it would have to be when they're conscious, but it is, you know, the part of the brain that is responsible for compassion, the frontal lobe or whatever, like people that abuse animals also abuse humans and commit other kinds of crimes. Like that's, it's all the same part of the brain. No one abuses animals and then it's like, awesome to women and children and awesome to underprivileged, you know, minorities. Like it's all, so, you know, we've been working really hard to put a database together of all the people that have abused animals. So when they commit another crime, you go, okay, this is, you know, it's all the same stuff. 
And I think people probably think I'm nuts for a lot of the animal work I do, but because when animal abuse is present, another crime is always present, but the animal abuse is the most socially acceptable. You can kick a dog and there's nothing people can do, but then what they're doing behind closed doors, you can't see. So there's always something else going on, which is why I never feel compunction about it. But I do think we'll start seeing the same thing with robots. The person that kicks the, I felt compassion when the kicking the dog robot really pissed me off. I know that they're just trying to get the stability right and all that. But I do think there will come a time where that will be a great way to be able to figure out if somebody has like, you know, antisocial behaviors. You kind of mentioned surveillance. It's also a really interesting idea of yours that you just said, you know, a lot of people seem to be really uncomfortable with surveillance. Yeah. And you just said that, you know what, for me, you know, there's positives for surveillance. I think people behave better when they know they're being watched. And I know this is a very unpopular opinion. I'm talking about it on stage right now. We behave better when we know we're being watched. You and I had a very different conversation before we were recording. If we behave different, you sit up and you are in your best behavior. And I'm trying to sound eloquent and I'm trying to not hurt anyone's feelings. And I mean, I have a camera right there. I'm behaving totally different than when we first started talking. You know, when you know there's a camera, you behave differently. I mean, there's cameras all over LA at stoplights so that people don't run stoplights, but there's not even film in it. They don't even use them anymore, but it works. It works. Right? And I'm, you know, working on this thing in stand about surveillance. It's like, that's why we embed in Santa Claus. You know, it's the Santa Claus is the first surveillance basically. All we had to say to kids is he's making a list and he's watching you and they behave better. That's brilliant. You know, so I do think that there are benefits to surveillance. You know, I think we all do sketchy things in private and we all have watched weird porn or Googled weird things. And we don't want people to know about it, our secret lives. So I do think that obviously there's, we should be able to have a modicum of privacy, but I tend to think that people that are the most negative about surveillance have the most secrets. The most to hide. Yeah. Well, you should, you're saying you're doing bits on it now? Well, I'm just talking in general about, you know, privacy and surveillance and how paranoid we're kind of becoming and how, you know, I mean, it's just wild to me that people are like, our emails are gonna leak and they're taking our phone numbers. Like there used to be a book full of phone numbers and addresses that were, they just throw it at your door. And we all had a book of everyone's numbers. You know, this is a very new thing. And, you know, I know our amygdala is designed to compound sort of threats and, you know, there's stories about, and I think we all just glom on in a very, you know, tribal way of like, yeah, they're taking our data. Like, we don't even know what that means, but we're like, well, yeah, they, they, you know? So I just think that someone's like, okay, well, so what? They're gonna sell your data? Who cares? Why do you care? 
First of all, that bit will kill in China. So, and I say that sort of only a little bit joking because a lot of people in China, including the citizens, despite what people in the West think of as abuse, are actually in support of the idea of surveillance. Sort of, they're not in support of the abuse of surveillance, but they're, they like, I mean, the idea of surveillance is kind of like the idea of government, like you said, we behave differently. And in a way, it's almost like why we like sports. There's rules. And within the constraints of the rules, this is a more stable society. And they make good arguments about success, being able to build successful companies, being able to build successful social lives around a fabric that's more stable. When you have a surveillance, it keeps the criminals away, keeps abusive animals, whatever the values of the society, with surveillance, you can enforce those values better. And here's what I will say. There's a lot of unethical things happening with surveillance. Like I feel the need to really make that very clear. I mean, the fact that Google is like collecting if people's hands start moving on the mouse to find out if they're getting Parkinson's and then their insurance goes up, like that is completely unethical and wrong. And I think stuff like that, we have to really be careful around. So the idea of using our data to raise our insurance rates or, you know, I heard that they're looking, they can sort of predict if you're gonna have depression based on your selfies by detecting micro muscles in your face, you know, all that kind of stuff, that is a nightmare, not okay. But I think, you know, we have to delineate what's a real threat and what's getting spam in your email box. That's not what to spend your time and energy on. Focus on the fact that every time you buy cigarettes, your insurance is going up without you knowing about it. On the topic of animals too, can we just linger on a little bit? Like, what do you think, what does this say about our society of the society wide abuse of animals that we see in general, sort of factory farming, just in general, just the way we treat animals of different categories, like what do you think of that? What does a better world look like? What should people think about it in general? I think the most interesting thing I can probably say around this that's the least emotional, cause I'm actually a very non emotional animal person because it's, I think everyone's an animal person. It's just a matter of if it's yours or if you've been conditioned to go numb, you know. I think it's really a testament to what as a species we are able to be in denial about, mass denial and mass delusion, and how we're able to dehumanize and debase groups, you know, World War II, in a way in order to conform and find protection in the conforming. So we are also a species who used to go to coliseums and watch elephants and tigers fight to the death. We used to watch human beings be pulled apart and that wasn't that long ago. We're also a species who had slaves and it was socially acceptable by a lot of people. People didn't see anything wrong with it. So we're a species that is able to go numb and that is able to dehumanize very quickly and make it the norm. Child labor wasn't that long ago. The idea that now we look back and go, oh yeah, kids were losing fingers in factories making shoes. Like someone had to come in and make that, you know. 
So I think it just says a lot about the fact that, you know, we are animals and we are self serving and one of the most successful, the most successful species because we are able to debase and degrade and essentially exploit anything that benefits us. I think the pendulum is gonna swing as being late. Which way? Like, I think we're Rome now, kind of. I think we're on the verge of collapse because we are dopamine receptors. Like we are just, I think we're all kind of addicts when it comes to this stuff. Like we don't know when to stop. It's always the buffet. Like we're, the thing that used to keep us alive, which is killing animals and eating them, now killing animals and eating them is what's killing us in a way. So it's like, we just can't, we don't know when to call it and we don't, moderation is not really something that humans have evolved to have yet. So I think it's really just a flaw in our wiring. Do you think we'll look back at this time as our society is being deeply unethical? Yeah, yeah, I think we'll be embarrassed. Which are the worst parts right now going on? Is it? In terms of animal? Well, I think. No, in terms of anything. What's the unethical thing? If we, and it's very hard just to take a step out of it, but you just said we used to watch, you know, there's been a lot of cruelty throughout history. What's the cruelty going on now? I think it's gonna be pigs. I think it's gonna be, I mean, pigs are one of the most emotionally intelligent animals and they have the intelligence of like a three year old. And I think we'll look back and be really, they use tools. I mean, I think we have this narrative that they're pigs and they're pigs and they're disgusting and they're dirty and they're bacon is so good. I think that we'll look back one day and be really embarrassed about that. Is this for just the, what's it called? The factory farming? So basically mass. Because we don't see it. If you saw, I mean, we do have, I mean, this is probably an evolutionary advantage. We do have the ability to completely pretend something's not, something that is so horrific that it overwhelms us and we're able to essentially deny that it's happening. I think if people were to see what goes on in factory farming, and also we're really to take in how bad it is for us, you know, we're hurting ourselves first and foremost with what we eat, but that's also a very elitist argument, you know? It's a luxury to be able to complain about meat. It's a luxury to be able to not eat meat, you know? There's very few people because of, you know, how the corporations have set up meat being cheap. You know, it's $2 to buy a Big Mac, it's $10 to buy a healthy meal. You know, that's, I think a lot of people don't have the luxury to even think that way. But I do think that animals in captivity, I think we're gonna look back and be pretty grossed out about mammals in captivity, whales, dolphins. I mean, that's already starting to dismantle, circuses, we're gonna be pretty embarrassed about. But I think it's really more a testament to, you know, there's just such a ability to go like, that thing is different than me and we're better. It's the ego, I mean, it's just, we have the species with the biggest ego ultimately. 
Well, that's what I think, that's my hope for robots is they'll, you mentioned consciousness before, nobody knows what consciousness is, but I'm hoping robots will help us empathize and understand that there's other creatures besides ourselves that can suffer, that can experience the world and that we can torture by our actions. And robots can explicitly teach us that, I think better than animals can. I have never seen such compassion from a lot of people in my life toward any human, animal, child, as I have a lot of people in the way they interact with the robot. Because I think there's something of, I mean, I was on the robot owner's chat boards for a good eight months. And the main emotional benefit is she's never gonna cheat on you, she's never gonna hurt you, she's never gonna lie to you, she doesn't judge you. I think that robots help people, and this is part of the work I do with animals, like I do equine therapy and train dogs and stuff, because there is this safe space to be authentic. With this being that doesn't care what you do for a living, doesn't care how much money you have, doesn't care who you're dating, doesn't care what you look like, doesn't care if you have cellulite, whatever, you feel safe to be able to truly be present without being defensive and worrying about eye contact and being triggered by needing to be perfect and fear of judgment and all that. And robots really can't judge you yet, but they can't judge you, and I think it really puts people at ease and at their most authentic. Do you think you can have a deep connection with a robot that's not judging, or do you think you can really have a relationship with a robot or a human being that's a safe space? Or is attention, mystery, danger necessary for a deep connection? I'm gonna speak for myself and say that I grew up in an alcoholic home, I identify as a codependent, talked about this stuff before, but for me it's very hard to be in a relationship with a human being without feeling like I need to perform in some way or deliver in some way, and I don't know if that's just the people I've been in a relationship with or me or my brokenness, but I do think, this is gonna sound really negative and pessimistic, but I do think a lot of our relationships are projection and a lot of our relationships are performance, and I don't think I really understood that until I worked with horses. And most communication with human is nonverbal, right? I can say like, I love you, but you don't think I love you, right? Whereas with animals it's very direct. It's all physical, it's all energy. I feel like that with robots too. It feels very, how I say something doesn't matter. My inflection doesn't really matter. And you thinking that my tone is disrespectful, like you're not filtering it through all of the bad relationships you've been in, you're not filtering it through the way your mom talked to you, you're not getting triggered. I find that for the most part, people don't always receive things the way that you intend them to or the way intended, and that makes relationships really murky. So the relationships with animals and relationship with the robots is they are now, you kind of implied that that's more healthy. Can you have a healthy relationship with other humans? Or not healthy, I don't like that word, but shouldn't it be, you've talked about codependency, maybe you can talk about what is codependency, but is that, is the challenges of that, the complexity of that necessary for passion, for love between humans? 
That's right, you love passion. That's a good thing. I thought this would be a safe space. I got trolled by Rogan for hours on this. Look, I am not anti passion. I think that I've just maybe been around long enough to know that sometimes it's ephemeral and that passion is a mixture of a lot of different things, adrenaline, which turns into dopamine, cortisol, it's a lot of neurochemicals, it's a lot of projection, it's a lot of what we've seen in movies, it's a lot of, you know, I identify as an addict. So for me, sometimes passion is like, uh oh, this could be bad. And I think we've been so conditioned to believe that passion means like your soulmates, and I mean, how many times have you had a passionate connection with someone and then it was a total train wreck? The train wreck is interesting. How many times exactly? Exactly. What's a train wreck? You just did a lot of math in your head in that little moment. Counting. I mean, what's a train wreck? What's a, why is obsession, so you described this codependency and sort of the idea of attachment, over attachment to people who don't deserve that kind of attachment as somehow a bad thing and I think our society says it's a bad thing. It probably is a bad thing. Like a delicious burger is a bad thing. I don't know, but. Right, oh, that's a good point. I think that you're pointing out something really fascinating which is like passion, if you go into it knowing this is like pizza where it's gonna be delicious for two hours and then I don't have to have it again for three, if you can have a choice in the passion, I define passion as something that is relatively unmanageable and something you can't control or stop and start with your own volition. So maybe we're operating under different definitions. If passion is something that like, you know, ruins your real marriages and screws up your professional life and becomes this thing that you're not in control of and becomes addictive, I think that's the difference is, is it a choice or is it not a choice? And if it is a choice, then passion's great. But if it's something that like consumes you and makes you start making bad decisions and clouds your frontal lobe and is just all about dopamine and not really about the person and more about the neurochemical, we call it sort of the drug, the internal drug cabinet. If it's all just, you're on drugs, that's different, you know, cause sometimes you're just on drugs. Okay, so there's a philosophical question here. So would you rather, and it's interesting for a comedian, brilliant comedian to speak so eloquently about a balanced life. I kind of argue against this point. There's such an obsession of creating this healthy lifestyle now, psychologically speaking. You know, I'm a fan of the idea that you sort of fly high and you crash and die at 27 is also a possible life. And it's not one we should judge because I think there's moments of greatness. I talked to Olympic athletes where some of their greatest moments are achieved in their early 20s. And the rest of their life is in the kind of fog of almost of a depression because they can never. Because they're based on their physical prowess, right? Physical prowess and they'll never, so that, so they're watching their physical prowess fade and they'll never achieve the kind of height, not just physical, of just emotion, of. Well, the max number of neurochemicals. And you also put your money on the wrong horse. 
That's where I would just go like, oh yeah, if you're doing a job where you peak at 22, the rest of your life is gonna be hard. That idea is considering the notion that you wanna optimize some kind of, but we're all gonna die soon. What? Now you tell me. I've immortalized myself, so I'm gonna be fine. See, you're almost like, how many Oscar winning movies can I direct by the time I'm 100? How many this and that? But you know, there's a night, you know, it's all, life is short, relatively speaking. I know, but it can also come in different ways. You go, life is short, play hard, fall in love as much as you can, run into walls. I would also go, life is short, don't deplete yourself on things that aren't sustainable and that you can't keep, you know? So I think everyone gets dopamine from different places. Everyone has meaning from different places. I look at the fleeting passionate relationships I've had in the past and I don't like, I don't have pride in that. I think that you have to decide what, you know, helps you sleep at night. For me, it's pride and feeling like I behave with grace and integrity. That's just me personally. Everyone can go like, yeah, I slept with all the hot chicks in Italy I could and I, you know, did all the whatever, like whatever you value, we're allowed to value different things. Yeah, we're talking about Brian Callan. Brian Callan has lived his life to the fullest, to say the least. But I think that it's just for me personally, I, and this could be like my workaholism or my achievementism, I, if I don't have something to show for something, I feel like it's a waste of time or some kind of loss. I'm in a 12 step program and the third step would say, there's no such thing as waste of time and everything happens exactly as it should and whatever, that's a way to just sort of keep us sane so we don't grieve too much and beat ourselves up over past mistakes, there's no such thing as mistakes, dah, dah, dah. But I think passion is, I think it's so life affirming and one of the few things that maybe people like us makes us feel awake and seen and we just have such a high threshold for adrenaline. You know, I mean, you are a fighter, right? Yeah, okay, so yeah, so you have a very high tolerance for adrenaline and I think that Olympic athletes, the amount of adrenaline they get from performing, it's very hard to follow that. It's like when guys come back from the military and they have depression. It's like, do you miss bullets flying at you? Yeah, kind of because of that adrenaline which turned into dopamine and the camaraderie. I mean, there's people that speak much better about this than I do. But I just, I'm obsessed with neurology and I'm just obsessed with sort of the lies we tell ourselves in order to justify getting neurochemicals. You've done actually quite, done a lot of thinking and talking about neurology and just kind of look at human behavior through the lens of looking at how our actually, chemically our brain works. So what, first of all, why did you connect with that idea and what have you, how has your view of the world changed by considering the brain is just a machine? You know, I know it probably sounds really nihilistic but for me, it's very liberating to know a lot about neurochemicals because you don't have to, it's like the same thing with like critics, like critical reviews. If you believe the good, you have to believe the bad kind of thing. 
Like, you know, if you believe that your bad choices were because of your moral integrity or whatever, you have to believe your good ones. I just think there's something really liberating and going like, oh, that was just adrenaline. I just said that thing because I was adrenalized and I was scared and my amygdala was activated and that's why I said you're an asshole and get out. And that's, you know, I think, I just think it's important to delineate what's nature and what's nurture, what is your choice and what is just your brain trying to keep you safe. I think we forget that even though we have security systems and homes and locks on our doors, that our brain for the most part is just trying to keep us safe all the time. It's why we hold grudges, it's why we get angry, it's why we get road rage, it's why we do a lot of things. And it's also, when I started learning about neurology, I started having so much more compassion for other people. You know, if someone yelled at me being like, fuck you on the road, I'd be like, okay, he's producing adrenaline right now because we're all going 65 miles an hour and our brains aren't really designed for this type of stress and he's scared. He was scared, you know, so that really helped me to have more love for people in my everyday life instead of being in fight or flight mode. But the, I think more interesting answer to your question is that I've had migraines my whole life. Like I've suffered with really intense migraines, ocular migraines, ones where my arm would go numb and I just started having to go to so many doctors to learn about it and I started, you know, learning that we don't really know that much. We know a lot, but it's wild to go into one of the best neurologists in the world who's like, yeah, we don't know. We don't know. We don't know. And that fascinated me. Except one of the worst pains you can probably have, all that stuff, and we don't know the source. We don't know the source and there is something really fascinating about when your left arm starts going numb and you start not being able to see out of the left side of both your eyes. And I remember when the migraines get really bad, it's like a mini stroke almost and you're able to see words on a page, but I can't read them. They just look like symbols to me. So there's something just really fascinating to me about your brain just being able to stop functioning. And I, so I just wanted to learn about it, study about it. I did all these weird alternative treatments. I got this piercing in here that actually works. I've tried everything. And then both of my parents had strokes. So when both of my parents had strokes, I became sort of the person who had to decide what was gonna happen with their recovery, which is just a wild thing to have to deal with it. You know, 28 years old when it happened. And I started spending basically all day, every day in ICUs with neurologists learning about what happened to my dad's brain and why he can't move his left arm, but he can move his right leg, but he can't see out of the, you know. And then my mom had another stroke in a different part of the brain. So I started having to learn what parts of the brain did what, and so that I wouldn't take their behavior so personally, and so that I would be able to manage my expectations in terms of their recovery. So my mom, because it affected a lot of her frontal lobe, changed a lot as a person. She was way more emotional. She was way more micromanaged. She was forgetting certain things. 
So it broke my heart less when I was able to know, oh yeah, well, the stroke hit this part of the brain, and that's the one that's responsible for short term memory, and that's responsible for long term memory, da da da. And then my brother just got something called viral encephalitis, which is an infection inside the brain. So it was kind of wild that I was able to go, oh, I know exactly what's happening here, and I know, you know, so. So that's allows you to have some more compassion for the struggles that people have, but does it take away some of the magic for some of the, from the, some of the more positive experiences of life? Sometimes. Sometimes, and I don't, I'm such a control addict that, you know, I think our biggest, someone like me, my biggest dream is to know why someone's doing it. That's what standup is. It's just trying to figure out why, or that's what writing is. That's what acting is. That's what performing is. It's trying to figure out why someone would do something. As an actor, you get a piece of, you know, material, and you go, this person, why would he say that? Why would he, she pick up that cup? Why would she walk over here? It's really why, why, why, why. So I think neurology is, if you're trying to figure out human motives and why people do what they do, it'd be crazy not to understand how neurochemicals motivate us. I also have a lot of addiction in my family and hardcore drug addiction and mental illness. And in order to cope with it, you really have to understand that borderline personality disorder, schizophrenia, and drug addiction. So I have a lot of people I love that suffer from drug addiction and alcoholism. And the first thing they started teaching you is it's not a choice. These people's dopamine receptors don't hold dopamine the same ways yours do. Their frontal lobe is underdeveloped, like, you know, and that really helped me to navigate dealing, loving people that were addicted to substances. I want to be careful with this question, but how much? Money do you have? How much? Can I borrow $10? Okay, no, is how much control, how much, despite the chemical imbalances or the biological limitations that each of our individual brains have, how much mind over matter is there? So through things that I've known people with clinical depression, and so it's always a touchy subject to say how much they can really help it. Very. What can you, yeah, what can you, because you've talked about codependency, you talked about issues that you struggle through, and nevertheless, you choose to take a journey of healing and so on, so that's your choice, that's your actions. So how much can you do to help fight the limitations of the neurochemicals in your brain? That's such an interesting question, and I don't think I'm at all qualified to answer, but I'll say what I do know. And really quick, just the definition of codependency, I think a lot of people think of codependency as like two people that can't stop hanging out, you know, or like, you know, that's not totally off, but I think for the most part, my favorite definition of codependency is the inability to tolerate the discomfort of others. You grow up in an alcoholic home, you grow up around mental illness, you grow up in chaos, you have a parent that's a narcissist, you basically are wired to just people please, worry about others, be perfect, walk on eggshells, shape shift to accommodate other people. 
So codependency is a very active wiring issue that, you know, doesn't just affect your romantic relationships, it affects you being a boss, it affects you in the world. Online, you know, you get one negative comment and it throws you for two weeks. You know, it also is linked to eating disorders and other kinds of addiction. So it's a very big thing, and I think a lot of people sometimes only think that it's in a romantic relationship, so I always feel the need to say that. And also one of the reasons I love the idea of robots so much is because you don't have to walk on eggshells around them, you don't have to worry they're gonna get mad at you yet, but there's no, codependents are hypersensitive to the needs and moods of others, and it's very exhausting, it's depleting. Just one conversation about where we're gonna go to dinner is like, do you wanna go get Chinese food? We just had Chinese food. Well, wait, are you mad? Well, no, I didn't mean to, and it's just like that codependents live in this, everything means something, and humans can be very emotionally exhausting. Why did you look at me that way? What are you thinking about? What was that? Why'd you check your phone? It's a hypersensitivity that can be incredibly time consuming, which is why I love the idea of robots just subbing in. Even, I've had a hard time running TV shows and stuff because even asking someone to do something, I don't wanna come off like a bitch, I'm very concerned about what other people think of me, how I'm perceived, which is why I think robots will be very beneficial for codependents. By the way, just a real quick tangent, that skill or flaw, whatever you wanna call it, is actually really useful if you ever do start your own podcast, for interviewing, because you're now kind of obsessed about the mindset of others, and it makes you a good sort of listener and talker with. So I think, what's her name from NPR? Terry Gross. Terry Gross talked about having that. So. I don't feel like she has that at all. What? She worries about other people's feelings? Yeah, absolutely. Oh, I don't get that at all. I mean, you have to put yourself in the mind of the person you're speaking with. Oh, I see, just in terms of, yeah, I am starting a podcast, and the reason I haven't is because I'm codependent and I'm too worried it's not gonna be perfect. So a big codependent adage is perfectionism leads to procrastination, which leads to paralysis. So how do you, sorry to take a million tangents, how do you survive on social media? Is the exception the evidence? But by the way, I took you on a tangent and didn't answer your last question about how much we can control. How much, yeah, we'll return to it, or maybe not. The answer is we can't. Now as a codependent, I'm, okay, good. We can, but, but, you know, one of the things that I'm fascinated by is, you know, the first thing you learn when you go into 12 step programs or addiction recovery or any of this is, you know, genetics loads the gun, environment pulls the trigger. And there's certain parts of your genetics you cannot control. I come from a lot of alcoholism. I come from, you know, a lot of mental illness. There's certain things I cannot control and a lot of things that maybe we don't even know yet what we can and can't because of how little we actually know about the brain. But we also talk about the warrior spirit.
And there are some people that have that warrior spirit and we don't necessarily know what that engine is, whether it's you get dopamine from succeeding or achieving or martyring yourself or the attention you get from growing. So a lot of people are like, oh, this person can edify themselves and overcome, but if you're getting attention from improving yourself, you're gonna keep wanting to do that. So that is something that helps a lot of, in terms of changing your brain. If you talk about changing your brain to people and talk about what you're doing to overcome set obstacles, you're gonna get more attention from them, which is gonna fire off your reward system and then you're gonna keep doing it. Yeah, so you can leverage that momentum. So this is why in any 12 step program, you go into a room and you talk about your progress because then everyone claps for you. And then you're more motivated to keep going. So that's why we say you're only as sick as the secrets you keep, because if you keep things secret, there's no one guiding you to go in a certain direction. It's based on, right? We're sort of designed to get approval from the tribe or from a group of people because our brain translates it to safety. So, you know. And in that case, the tribe is a positive one that helps you go in a positive direction. So that's why it's so important to go into a room and also say, hey, I wanted to use drugs today. And people go, hmm. They go, me too. And you feel less alone and you feel less like you're, you know, have been castigated from the pack or whatever. And then you say, and you get a chip when you haven't drank for 30 days or 60 days or whatever. You get little rewards. So talking about a pack that's not at all healthy or good, but in fact is often toxic, social media. So you're one of my favorite people on Twitter and Instagram to sort of just both the comedy and the insight and just fun. How do you prevent social media from destroying your mental health? I haven't. I haven't. It's the next big epidemic, isn't it? I don't think I have. I don't think. Is moderation the answer? Maybe, but you can do a lot of damage in a moderate way. I mean, I guess, again, it depends on your goals, you know? And I think for me, the way that my addiction to social media, I'm happy to call it an addiction. I mean, and I define it as an addiction because it stops being a choice. There are times I just reach over and I'm like, that was. Yeah, that was weird. That was weird. I'll be driving sometimes and I'll be like, oh my God, my arm just went to my phone, you know? I can put it down. I can take time away from it, but when I do, I get antsy. I get restless, irritable, and discontent. I mean, that's kind of the definition, isn't it? So I think by no means do I have a healthy relationship with social media. I'm sure there's a way to, but I think I'm especially a weirdo in this space because it's easy to conflate. Is this work? Is this not? I can always say that it's for work, you know? But I mean, don't you get the same kind of thing as you get from when a room full of people laugh at your jokes? Because I mean, I see, especially the way you do Twitter, it's an extension of your comedy in a way. So I took a big break from Twitter though, a really big break. I took like six months off or something for a while because it was just like, it seemed like it was all kind of politics and it was just a little bit, it wasn't giving me dopamine because there was like this weird, a lot of feedback. 
So I had to take a break from it and then go back to it because I felt like I didn't have a healthy relationship. Have you ever tried the, I don't know if I believe him, but Joe Rogan seems to not read comments. Have you, and he's one of the only people at the scale, like at your level who at least claims not to read them. So like, cause you and him swim in this space of tense ideas that get the toxic folks riled up. I think Rogan, I don't, I don't know. I don't, I think he probably looks at YouTube, like the likes and the, you know, I think if some things, if he doesn't know, I don't know. I'm sure he would tell the truth, you know, I'm sure he's got people that look at them and it's like disgusted, great. Or I don't, you know, like, I'm sure he gets it. You know, I can't picture him like in the weeds on. No, for sure. I mean, if he's honestly actually saying that, I just, it's admirable. We're addicted to feedback. Yeah, we're addicted to feedback. I mean, you know, look, like I think that our brain is designed to get intel on how we're perceived so that we know where we stand, right? That's our whole deal, right? As humans, we want to know where we stand. We walk in a room and we go, who's the most powerful person in here? I got to talk to them and get in their good graces. It's just, we're designed to rank ourselves, right? And constantly know our rank, and on social media you can't figure out your rank with 500 million people. It's impossible, you know, so our brain is like, what's my rank? What's my, and especially if we're following people, I think the big, the interesting thing I think I may be able to say about this, besides my speech impediment, is that I did start muting people that rank wildly higher than me because it is just stressful on the brain to constantly look at people that are incredibly successful. So you keep feeling bad about yourself. You know, I think that that is like cutting to a certain extent. Just like, look at me looking at all these people that have so much more money than me and so much more success than me. It's making me feel like a failure, even though I don't think I'm a failure, but it's easy to frame it so that I can feel that way. Yeah, that's really interesting, especially if they're close to, like if they're other comedians or something like that, or whatever. That's, it's really disappointing to me. I do the same thing as well. So other successful people that are really close to what I do, it, I don't know, I wish I could just admire. Yeah. And for it not to be a distraction, but. But that's why you are where you are, because you don't just admire, you're competitive and you want to win. So the same thing that bums you out when you look at this is also the same reason you are where you are. So that's why I think it's so important to learn about neurology and addiction because you're able to go like, oh, this same instinct. So I'm very sensitive. And I, and I sometimes don't like that about myself, but I'm like, well, that's the reason I'm able to write good standup. And that's the reason, and that's the reason I'm able to be sensitive to feedback and go, that joke should have been better. I can make that better. So it's the kind of thing where it's like, you have to be really sensitive in your work. And the second you leave, you got to be able to turn it off. It's about developing the muscle, being able to know when to let it be a superpower and when it's going to hold you back and be an obstacle.
So I try to not be in that black and white of like, you know, being competitive is bad or being jealous of someone just to go like, oh, there's that thing that makes me really successful in a lot of other ways, but right now it's making me feel bad. Well, I'm kind of looking to you because you're basically a celebrity, a famous sort of world class comedian. And so I feel like you're the right person to be one of the key people to define what's the healthy path forward with social media. So I, because we're all trying to figure it out now and it's, I'm curious to see where it evolves. I think you're at the center of that. So like, you know, there's, you know, trying to leave Twitter and then come back and see, can I do this in a healthy way? I mean, you have to keep trying, exploring. You have to know because it's being, you know, I have a couple answers. I think, you know, I hire a company to do some of my social media for me, you know? So it's also being able to go, okay, I make a certain amount of money by doing this, but now let me be a good business person and say, I'm gonna pay you this amount to run this for me. So I'm not 24 seven in the weeds hashtagging and responding. And just, it's a lot to take on. It's a lot of energy to take on. But at the same time, part of what I think makes me successful on social media if I am, is that people know I'm actually doing it and that I am an engaging and I'm responding and developing a personal relationship with complete strangers. So I think, you know, figuring out that balance and really approaching it as a business, you know, that's what I try to do. It's not dating, it's not, I try to just be really objective about, okay, here's what's working, here's what's not working. And in terms of taking the break from Twitter, this is a really savage take, but because I don't talk about my politics publicly, being on Twitter right after the last election was not gonna be beneficial because there was gonna be, you had to take a side. You had to be political in order to get any kind of retweets or likes. And I just wasn't interested in doing that because you were gonna lose as many people as you were gonna gain and it was gonna all come clean in the wash. So I was just like, the best thing I can do for me business wise is to just abstain, you know? And you know, the robot, I joke about her replacing me, but she does do half of my social media, you know? Because I don't want people to get sick of me. I don't want to be redundant. There are times when I don't have the time or the energy to make a funny video, but I know she's gonna be compelling and interesting and that's something that you can't see every day, you know? Of course, the humor comes from your, I mean, the cleverness, the wit, the humor comes from you when you film the robot. That's kind of the trick of it. I mean, the robot is not quite there to do anything funny. The absurdity is revealed through the filmmaker in that case or whoever is interacting, not through the actual robot, you know, being who she is. Let me sort of, love. Okay. How difficult. What is it? What is it? Well, first, an engineering question. I know, I know, you're not an engineer, but how difficult do you think is it to build an AI system that you can have a deep, fulfilling, monogamous relationship with? Sort of replace the human to human relationships that we value? I think anyone can fall in love with anything, you know? Like, how often have you looked back at someone? 
Like, I ran into someone the other day that I was in love with and I was like, hey, it was like, there was nothing there. There was nothing there. Like, do you, you know, like, where you're able to go like, oh, that was weird, oh, right, you know? You were able. You mean from a distant past or something like that? Yeah, when you're able to go like, I can't believe we had an incredible connection and now it's just, I do think that people will be in love with robots probably even more deeply than with humans because it's like when people mourn their animals, when their animals die, they're always, it's sometimes harder than mourning a human because you can't go, well, he was kind of an asshole, but like, he didn't pick me up from school. You know, it's like, you're able to get out of your grief a little bit. You're able to kind of be, oh, he was kind of judgmental or she was kind of, you know, with a robot, there's something so pure and innocent and impish and childlike about it that I think it probably will be much more conducive to a narcissistic love for sure at that, but it's not like, well, he cheated on, she can't cheat, she can't leave you, she can't, you know? Well, if Bearclaw leaves your life and maybe a new version or somebody else will enter, will you miss Bearclaw? For guys that have these sex robots, they're building a nursing home for the bodies that are now resting because they don't want to part with the bodies because they have such an intense emotional connection to it. I mean, it's kind of like a car club a little bit, you know, like it's, you know, but I'm not saying this is right. I'm not saying it's cool, it's weird, it's creepy, but we do anthropomorphize things with faces and we do develop emotional connections to things. I mean, there's certain, have you ever tried to like throw, I can't even throw away my teddy bear from when I was a kid. It's a piece of trash and it's upstairs. Like, it's just like, why can't I throw that away? It's bizarre, you know, and there's something kind of beautiful about that. There's something, it gives me hope in humans because I see humans do such horrific things all the time and maybe I'm too, I see too much of it, frankly, but there's something kind of beautiful about the way we're able to have emotional connections to objects, which, you know, a lot of, I mean, it's kind of specifically, I think, Western, right? That we don't see objects as having souls, like that's kind of specifically us, but I don't think it's so much that we're objectifying humans with these sex robots. We're kind of humanizing objects, right? So there's something kind of fascinating in our ability to do that because a lot of us don't humanize humans. So it's just a weird little place to play in and I think a lot of people, I mean, a lot of people will be marrying these things is my guess. So you've asked the question, let me ask it of you. So what is love? You have a bit of a brilliant definition of love as being willing to die for someone who you yourself want to kill. So that's kind of fun. First of all, that's brilliant. That's a really good definition. I think it'll stick with me for a long time. This is how little of a romantic I am. A plane went by when you said that and my brain is like, you're gonna need to rerecord that. And I don't want you to get into post and then not be able to use that. And I'm a romantic as I... Don't mean to ruin the moment.
Actually, I can not be conscious of the fact that I heard the plane and it made me feel like how amazing it is that we live in a world of planes. And I just went, why haven't we fucking evolved past planes and why can't they make them quieter? Yeah. Well, yes. My definition of love? What, yeah, what's your sort of the more serious note? Consistently producing dopamine for a long time. Consistent output of oxytocin with the same person. Dopamine is a positive thing. What about the negative? What about the fear and the insecurity, the longing, anger, all that kind of stuff? I think that's part of love. I think that love brings out the best in you, but it also, if you don't get angry and upset, it's, I don't know, I think that that's part of it. I think we have this idea that love has to be like really placid or something. I only saw stormy relationships growing up, so I don't have a judgment on how a relationship should look, but I do think that this idea that love has to be eternal is really destructive, is really destructive and self defeating and a big source of stress for people. I mean, I'm still figuring out love. I think we all kind of are, but I do kind of stand by that definition. And I think that, I think for me, love is like just being able to be authentic with somebody. It's very simple, I know, but I think for me it's about not feeling pressure to have to perform or impress somebody, just feeling truly like accepted unconditionally by someone. Although I do believe love should be conditional. That might be a hot take. I think everything should be conditional. I think if someone's behavior, I don't think love should just be like, I'm in love with you, now behave however you want forever. This is unconditional. I think love is a daily action. It's not something you just like get tenure on and then get to behave however you want because we said I love you 10 years ago. It's a daily, it's a verb. Well, there's some things that are, you see, if you explicitly make it clear that it's conditional, it takes away some of the magic of it. So there's certain stories we tell ourselves that we don't want to make explicit about love. I don't know, maybe that's the wrong way to think of it. Maybe you want to be explicit in relationships. I also think love is a business decision. Like I do in a good way. Like I think that love is not just when you're across from somebody. It's when I go to work, can I focus? Am I worried about you? Am I stressed out about you? You're not responding to me. You're not reliable. Like I think that being in a relationship, the kind of love that I would want is the kind of relationship where when we're not together, it's not draining me, causing me stress, making me worry, and sometimes passion, that word, we get murky about it. But I think it's also like, I can be the best version of myself when the person's not around. And I don't have to feel abandoned or scared or any of these kinds of other things. So it's like love, for me, I think it's a Flaubert quote and I'm going to butcher it. But I think it's like, be boring in your personal life so you can be violent and take risks in your professional life. Is that it? I got it wrong. Something like that. But I do think that it's being able to align values in a way to where you can also thrive outside of the relationship. Some of the most successful people I know are those sort of happily married and have kids and so on. It's always funny. It can be boring. Boring's okay. Boring is serenity. 
And it's funny how those elements actually make you much more productive. I don't understand the. I don't think relationships should drain you and take away energy that you could be using to create things that generate pride. Okay. Have you said your relationship of love yet? Have you said your definition of love? My definition of love? No, I did not say it. We're out of time. No. When you have a podcast, maybe you can invite me on. Oh no, I already did. You're doing it. We've already talked about this. And because I also have codependency, I have to say yes. No, yeah. No, I know, I'm trapping you. You owe me now. Actually, I wondered whether when I asked if we could talk today, after sort of doing more research and reading some of your book, I started to wonder, did you just feel pressured to say yes? Yes, of course. Good. But I'm a fan of yours, too. Okay, awesome. No, I actually, because I am codependent, but I'm in recovery for codependence, so I actually do, I don't do anything I don't wanna do. You really, you go out of your way to say no. What's that? I say no all the time. Good. I'm trying to learn that as well. I moved this a couple, remember, I moved it from one to two. Yeah, yeah. Just to, yeah, just to. Yeah, just to let you know. I love it. How recovered I am, and I'm not codependent. But I don't do anything I don't wanna do. Yeah, you're ahead of me on that. Okay. So do you. You're like, I don't even wanna be here. Do you think about your mortality? Yes, it is a big part of how I was able to sort of like kickstart my codependence recovery. My dad passed a couple years ago, and when you have someone close to you in your life die, everything gets real clear, in terms of how we're a speck of dust who's only here for a certain amount of time. What do you think is the meaning of it all? Like what the speck of dust, what's maybe in your own life, what's the goal, the purpose of your existence? Is there one? Well, you're exceptionally ambitious. You've created some incredible things in different disciplines. Yeah, we're all just managing our terror because we know we're gonna die. So we create and build all these things and rituals and religions and robots and whatever we need to do to just distract ourselves from imminent rotting, we're rotting. We're all dying. And I got very into terror management theory when my dad died and it resonated, it helped me. And everyone's got their own religion or sense of purpose or thing that distracts them from the horrors of being human. What's the terror management theory? Terror management is basically the idea that since we're the only animal that knows they're gonna die, we have to basically distract ourselves with awards and achievements and games and whatever, just in order to distract ourselves from the terror we would feel if we really processed the fact that we could not only, we are gonna die, but also could die at any minute because we're only superficially at the top of the food chain. And technically we're at the top of the food chain if we have houses and guns and stuff machines, but if me and a lion are in the woods together, most things could kill us. I mean, a bee can kill some people, like something this big can kill a lot of humans. So it's basically just to manage the terror that we all would feel if we were able to really be awake. Cause we're mostly zombies, right? Job, school, religion, go to sleep, drink, football, relationship, dopamine, love, you know, we're kind of just like trudging along like zombies for the most part. 
And then I think. That fear of death adds some motivation. Yes. Well, I think I speak for a lot of people in saying that I can't wait to see what your terror creates in the next few years. I'm a huge fan. Whitney, thank you so much for talking today. Thanks. Thanks for listening to this conversation with Whitney Cummings. And thank you to our presenting sponsor, Cash App. Download it and use code LexPodcast. You'll get $10 and $10 will go to First, a STEM education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, support on Patreon or connect with me on Twitter. Thank you for listening and hope to see you next time.
Whitney Cummings: Comedy, Robotics, Neurology, and Love | Lex Fridman Podcast #55
The following is a conversation with Judea Pearl, professor at UCLA and a winner of the Turing Award that's generally recognized as the Nobel Prize of Computing. He's one of the seminal figures in the field of artificial intelligence, computer science, and statistics. He has developed and championed probabilistic approaches to AI, including Bayesian networks, and profound ideas in causality in general. These ideas are important not just to AI, but to our understanding and practice of science. But in the field of AI, the idea of causality, cause and effect, to many, lies at the core of what is currently missing and what must be developed in order to build truly intelligent systems. For this reason, and many others, his work is worth returning to often. I recommend his most recent book, called The Book of Why, that presents key ideas from a lifetime of work in a way that is accessible to the general public. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. If you leave a review on Apple Podcasts especially, but also Castbox or comment on YouTube, consider mentioning topics, people, ideas, questions, quotes, and science, tech, and philosophy you find interesting, and I'll read them on this podcast. I won't call out names, but I love comments with kindness and thoughtfulness in them, so I thought I'd share them with you. Someone on YouTube highlighted a quote from the conversation with Noam Chomsky, where he said that the significance of your life is something you create. I like this line as well. On most days, the existentialist approach to life is one I find liberating and fulfilling. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode, and never any ads in the middle that break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Broker services are provided by Cash App Investing, a subsidiary of Square, a member of SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called First, best known for their first robotics and Lego competitions. They educate and inspire hundreds of thousands of students in over 110 countries, and have a perfect rating on Charity Navigator, which means the donated money is used to the maximum effectiveness. When you get Cash App from the App Store or Google Play, and use the code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to First, which again, is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Judea Pearl. You mentioned in an interview that science is not a collection of facts, but a constant human struggle with the mysteries of nature. What was the first mystery that you can recall that hooked you, that kept you interested? Oh, the first mystery, that's a good one. Yeah, I remember that. I had a fever for three days.
And when I learned about Descartes' analytic geometry, I found out that you can do all the construction in geometry using algebra. And I couldn't get over it. I simply couldn't get out of bed. So what kind of world does analytic geometry unlock? Well, it connects algebra with geometry. Okay, so Descartes had the idea that geometrical construction and geometrical theorems and assumptions can be articulated in the language of algebra, which means that all the proofs that we did in high school, and trying to prove that the three bisectors meet at one point, and that, okay, all this can be proven by just shuffling around notation. Yeah, that was a traumatic experience. That was a traumatic experience. For me, it was, I'm telling you. So it's the connection between the different mathematical disciplines, that they all. No, it's between two different languages. Languages. Yeah. So which mathematical discipline is most beautiful? Is geometry it for you? Both are beautiful. They have almost the same power. But there's a visual element to geometry. Visually, it's more transparent. But once you get over to algebra, then the linear equation is a straight line. This translation is easily absorbed, okay? And to pass a tangent to a circle, you know, you have the basic theorems, and you can do it with algebra. So but the transition from one to another was really, I thought that Descartes was the greatest mathematician of all times. So you have been at the, if you think of engineering and mathematics as a spectrum. Yes. You have been, you have walked casually along this spectrum throughout your life. You know, a little bit of engineering, and then, you know, you've done a little bit of mathematics here and there. Not a little bit. I mean, we got a very solid background in mathematics, because our teachers were geniuses. Our teachers came from Germany in the 1930s, running away from Hitler. They left their careers in Heidelberg and Berlin, and came to teach high school in Israel. And we were the beneficiary of that experiment. So I, and they taught us math the good way. What's the good way to teach math? Chronologically. The people. The people behind the theorems, yeah. Their cousins, and their nieces, and their faces. And how they jumped from the bathtub when they screamed, Eureka! And ran naked in town. So you're almost educated as a historian of math. No, we just got a glimpse of that history together with a theorem, so every exercise in math was connected with a person. And the time of the person. The period. The period, also mathematically speaking. Mathematically speaking, yes. Not the politics, no. So, and then in university, you have gone on to do engineering. Yeah. I got a B.S. in engineering at the Technion, right? And then I moved here for graduate work, and I got, I did engineering in addition to physics at Rutgers, and it combined very nicely with my thesis, which I did at RCA Laboratories, in superconductivity. And then somehow thought to switch to almost computer science, software, even, not switch, but longed to become, to get into software engineering a little bit. Yes. And programming, if you can call it that in the 70s. So there's all these disciplines. Yeah. So to pick a favorite, in terms of engineering and mathematics, which path do you think has more beauty? Which path has more power? It's hard to choose, no. I enjoy doing physics, and even have a vortex named after me. So I have an investment in immortality. So what is a vortex? Vortex is in superconductivity.
In the superconductivity, yeah. You have permanent current swirling around. One way or the other, you can have it store a one or a zero for a computer. That's what we worked on in the 1960s at RCA. And I discovered a few nice phenomena with the vortices. You push current and they move. So that's a Pearl vortex. Pearl vortex, right, you can Google it, right? I didn't know about it, but the physicists, they picked up on my thesis, on my PhD thesis, and it became popular when thin film superconductors became important for high temperature superconductors. So they called it the Pearl vortex without my knowledge. I discovered it only about 15 years ago. You have footprints in all of the sciences. So let's talk about the universe a little bit. Is the universe at the lowest level deterministic or stochastic in your amateur philosophy view? Put another way, does God play dice? We know it is stochastic, right? Today, today we think it is stochastic. Yes. We think because we have the Heisenberg uncertainty principle and we have some experiments to confirm that. All we have is experiments to confirm it. We don't understand why. Why is already... You wrote a book about why. Yeah, it's a puzzle. It's a puzzle that you have the dice flipping machine, or God, and the result of the flipping propagates with a speed faster than the speed of light. We can't explain it, okay? So, but it only governs microscopic phenomena. Microscopic phenomena. So you don't think of quantum mechanics as useful for understanding the nature of reality? No, diversionary. So in your thinking, the world might as well be deterministic. The world is deterministic, and as far as the neuron firing is concerned, it is deterministic to first approximation. What about free will? Free will is also a nice exercise. Free will is an illusion that we AI people are gonna solve. So what do you think once we solve it, that solution will look like? Once we put it on the page. The solution will look like, first of all, it will look like a machine. A machine that acts as though it has free will. It communicates with other machines as though they have free will, and you wouldn't be able to tell the difference between a machine that does and a machine that doesn't have free will, okay? So the illusion, it propagates the illusion of free will amongst the other machines. And faking it is having it, okay? That's what the Turing test is all about. Faking intelligence is intelligence because it's not easy to fake. It's very hard to fake, and you can only fake if you have it. So that's such a beautiful statement. Yeah, you can't fake it if you don't have it, yeah. So let's begin at the beginning with probability, both philosophically and mathematically. What does it mean to say the probability of something happening is 50%? What is probability? It's a degree of uncertainty that an agent has about the world. You're still expressing some knowledge in that statement. Of course. If the probability is 90%, it's absolutely a different kind of knowledge than if it is 10%. But it's still not solid knowledge, it's... It is solid knowledge, but hey, if you tell me that with 90% assurance smoking will give you lung cancer in five years versus 10%, it's a piece of useful knowledge. So the statistical view of the universe, why is it useful? So we're swimming in complete uncertainty, most of everything around us. It allows you to predict things with a certain probability, and computing those probabilities is very useful. That's the whole idea of prediction.
And you need prediction to be able to survive. If you can't predict the future, then just crossing the street will be extremely fearful. And so you've done a lot of work in causation, and so let's think about correlation. I started with probability. You started with probability. You've invented Bayesian networks. Yeah. And so we'll dance back and forth between these levels of uncertainty. But what is correlation? What is it? So probability of something happening is something, but then there's a bunch of things happening. And sometimes they happen together, sometimes not, they're independent or not. So how do you think about correlation of things? Correlation occurs when two things vary together over a very long time is one way of measuring it. Or when you have a bunch of variables that all vary cohesively, then we say we have a correlation here. And usually when we think about correlation, we really think causally. Things cannot be correlated unless there is a reason for them to vary together. Why should they vary together? If they don't see each other, why should they vary together? So underlying it somewhere is causation. Yes. Hidden in our intuition, there is a notion of causation because we cannot grasp any other logic except causation. And how does conditional probability differ from causation? So what is conditional probability? Conditional probability, how things vary when one of them stays the same. Now staying the same means that I have chosen to look only at those incidents where the guy has the same value as the previous one. It's my choice as an experimenter. So things that are not correlated before could become correlated. Like for instance, if I have two coins which are uncorrelated, okay, and I choose only those flippings, experiments in which a bell rings, and the bell rings when at least one of them is a tail, okay, then suddenly I see correlation between the two coins because I only look at the cases where the bell rang. You see, it's my design, with my ignorance essentially, with my audacity to ignore certain incidents, I suddenly create a correlation where it doesn't exist physically. Right, so that's, you just outlined one of the flaws of observing the world and trying to infer something from the math about the world from looking at the correlation. I don't look at it as a flaw, the world works like that. But the flaw comes if we try to impose causal logic on correlation, it doesn't work too well. I mean, but that's exactly what we do. That's what, that has been the majority of science. The majority of naive science, the statisticians know it. The statisticians know that if you condition on a third variable, then you can destroy or create correlations among two other variables. They know it, it's in their data. It's nothing surprising, that's why they all dismiss the Simpson's paradox, ah, we know it. They don't know anything about it. Well, there's disciplines like psychology where all the variables are hard to account for. And so oftentimes there's a leap from correlation to causation. You're imposing. You're implying a leap. Who is trying to get causation from correlation? You're not proving causation, but you're sort of discussing it, implying, sort of hypothesizing without the ability to prove. Which discipline do you have in mind? I'll tell you if they are obsolete, or if they are outdated, or they are about to get outdated. Yes, yes. Tell me which one you have in mind. Oh, psychology, you know. Psychology, what, is it SEM, structural equation models?
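Pearl's two-coins-and-bell example is easy to check numerically. Below is a minimal Python sketch, assuming two fair coins and a bell that rings whenever at least one lands tails; the sample size and variable names are only illustrative.

```python
# Two fair, independent coins, and a bell that rings whenever at least one
# lands tails (here, tails = 1). Conditioning on "the bell rang" induces a
# negative correlation between coins that are physically independent.
import random

random.seed(0)
n = 100_000
flips = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(n)]

def corr(pairs):
    # Pearson correlation for pairs of 0/1 outcomes.
    m = len(pairs)
    mx = sum(a for a, _ in pairs) / m
    my = sum(b for _, b in pairs) / m
    cov = sum((a - mx) * (b - my) for a, b in pairs) / m
    vx = sum((a - mx) ** 2 for a, _ in pairs) / m
    vy = sum((b - my) ** 2 for _, b in pairs) / m
    return cov / (vx * vy) ** 0.5

rang = [(a, b) for a, b in flips if a == 1 or b == 1]  # cases where the bell rang

print("corr over all flips:      %+.3f" % corr(flips))  # approximately  0.0
print("corr given the bell rang: %+.3f" % corr(rang))   # approximately -0.5
```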
No, no, I was thinking of applied psychology studies. For example, we work with human behavior in semi autonomous vehicles, how people behave. And you have to conduct these studies of people driving cars. Everything starts with the question. What is the research question? What is the research question? The research question, do people fall asleep when the car is driving itself? Do they fall asleep, or do they tend to fall asleep more frequently? More frequently. Than the car not driving itself. Not driving itself. That's a good question, okay. And so you measure, you put people in the car because it's real world. You can't conduct an experiment where you control everything. Why can't you control? You could. Why can't you control the automatic module on and off? Because it's on road, public. I mean, there's aspects to it that are unethical. Because it's testing on public roads. So you can only use the vehicle. They have to, the people, the drivers themselves have to make that choice themselves. And so they regulate that. And so you just observe when they drive autonomously and when they don't. And then. But maybe they turn it off when they were very tired. Yeah, that kind of thing. But you don't know those variables. Okay, so then you have now an uncontrolled experiment. Uncontrolled experiment. We call it observational study. And from the correlation detected, we have to infer causal relationships. Whether it was the automatic piece that has caused them to fall asleep, or. So that is an issue that is about 120 years old. I should say only 100 years old, okay. Well, maybe it's not. Actually I should say it's 2,000 years old. Because we have this experiment by Daniel. The Babylonian king wanted the exiled, the people from Israel that were taken in exile to Babylon, to serve the king. He wanted to serve them the king's food, which was meat. And Daniel as a good Jew couldn't eat non kosher food. So he asked to eat vegetarian food. But the king's overseer says, I'm sorry, but if the king sees that your performance falls below that of other kids, he's going to kill me. Daniel said, let's make an experiment. Let's take four of us from Jerusalem, okay. Give us vegetarian food. Let's take the other guys to eat the king's food. And in about a week's time, we'll test our performance. And you know the answer. Of course he did the experiment. And they were so much better than the others. And the king nominated them to superior positions in his kingdom. So it was a first experiment, yes. So there was a very simple, it's also the same research question. We want to know if vegetarian food assists or obstructs your mental ability. And okay, so the question is a very old one. Even Democritus said, I would rather discover one cause of things than be a king of Persia, okay. The task of discovering causes was in the mind of ancient people from many, many years ago. But the mathematics of doing that was only developed in the 1920s. So science has left us orphaned, okay. Science has not provided us with the mathematics to capture the idea of X causes Y and Y does not cause X. Because all the equations of physics are symmetrical, algebraic; the equality sign goes both ways. Okay, let's look at machine learning. Machine learning today, if you look at deep neural networks, you can think of it as kind of conditional probability estimators. Correct, beautiful. So where did you say that? Conditional probability estimators. None of the machine learning people challenged you? Attacked you?
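The trap an uncontrolled, observational study can fall into is the Simpson's paradox Pearl mentioned a moment earlier: a treatment can look worse overall yet better within every subgroup, because a confounder decides who gets treated. A small sketch follows; the counts are invented for illustration, modeled on the textbook kidney-stone example.

```python
# Simpson's paradox with invented counts: the treatment looks worse overall
# but better within each severity group, because sicker patients were more
# likely to receive it.

# (group, treated?, recovered, total)
data = [
    ("mild",   True,   81,  87),
    ("mild",   False, 234, 270),
    ("severe", True,  192, 263),
    ("severe", False,  55,  80),
]

def rate(rows):
    rec = sum(r for _, _, r, _ in rows)
    tot = sum(t for _, _, _, t in rows)
    return rec / tot

treated   = [row for row in data if row[1]]
untreated = [row for row in data if not row[1]]

print("overall: treated %.0f%% vs untreated %.0f%%"
      % (100 * rate(treated), 100 * rate(untreated)))      # 78% vs 83%
for g in ("mild", "severe"):
    t = [row for row in treated if row[0] == g]
    u = [row for row in untreated if row[0] == g]
    print("%-7s: treated %.0f%% vs untreated %.0f%%"
          % (g, 100 * rate(t), 100 * rate(u)))              # treated wins in both
```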
Listen, most people, and this is why today's conversation I think is interesting, most people would agree with you. There's certain aspects that are just effective today, but we're going to hit a wall, and there's a lot of ideas. I think you're very right that we're gonna have to return to causality, so let's try to explore it. Let's even take a step back. You've invented Bayesian networks that look an awful lot like they express something like causation, but they don't, not necessarily. So how do we turn Bayesian networks into expressing causation? How do we build causal networks? This A causes B, B causes C, how do we start to infer that kind of thing? We start asking ourselves the question: what are the factors that would determine the value of X? X could be blood pressure, death, hunger. But these are hypotheses that we propose for ourselves. Hypotheses, everything which has to do with causality comes from a theory. The difference is only how you interrogate the theory that you have in your mind. So it still needs the human expert to propose. You need the human expert to specify the initial model. The initial model could be very qualitative. Just who listens to whom? By listens to, I mean one variable listens to the other. So I say, okay, the tide is listening to the moon, and not to the rooster's crow, and so forth. This is our understanding of the world in which we live. Scientific understanding of reality. We have to start there, because if we don't know how to handle cause and effect relationships when we do have a model, then we certainly do not know how to handle them when we don't have a model. So let's start first. In AI, the slogan is representation first, discovery second. But if I give you all the information that you need, can you do anything useful with it? That is the first thing, representation. How do you represent it? I give you all the knowledge in the world. How do you represent it? When you represent it, I ask you, can you infer X or Y or Z? Can you answer certain queries? Is it complex? Is it polynomial? All the computer science exercises we do; once you give me a representation for my knowledge, then you can ask me, now that I understand how to represent things, how do I discover them? It's a secondary thing. So first of all, I should echo the statement that mathematics and much of the current machine learning world have not considered causation, that A causes B, in anything. So that seems like a non obvious thing that you would think we would have acknowledged, but we haven't. So we have to put that on the table. So knowledge: how hard is it to create a knowledge base from which to work? In certain areas, it's easy, because we have only four or five major variables, and an epidemiologist or an economist can put them down, what, minimum wage, unemployment, policy, X, Y, Z, and start collecting data, and quantify the parameters that were left unquantified with the initial knowledge. That's the routine work that you find in experimental psychology, in economics, everywhere. In the health sciences, that's a routine thing. But I should emphasize, you should start with the research question. What do you want to estimate? Once you have that, you have to have a language for expressing what you want to estimate. You think it's easy? No. So we can talk about two things, I think. One is how the science of causation is very useful for answering certain questions. And then the other is, how do we create intelligent systems that need to reason with causation?
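A sketch of what such a qualitative initial model might look like in code, using the tide, moon, and rooster example from the conversation. The representation (a plain dictionary of who listens to whom) and the extra edge for the rooster are my own illustrative assumptions; no probabilities are committed yet, only the structure.

```python
# A qualitative causal model is just a directed graph: the entry "tide": ["moon"]
# says the tide listens to the moon, and to nothing else.
listens_to = {
    "tide":    ["moon"],     # the tide listens to the moon...
    "rooster": ["sunrise"],  # ...while the rooster's crow listens to the sunrise
    "moon":    [],
    "sunrise": [],
}

def is_cause(x, y):
    """Does x appear anywhere upstream of y in the who-listens-to-whom graph?"""
    frontier, seen = list(listens_to.get(y, [])), set()
    while frontier:
        parent = frontier.pop()
        if parent == x:
            return True
        if parent not in seen:
            seen.add(parent)
            frontier.extend(listens_to.get(parent, []))
    return False

print(is_cause("moon", "tide"))     # True: the tide listens to the moon
print(is_cause("rooster", "tide"))  # False: not to the rooster's crow
```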
So if my research question is, how do I pick up this water bottle from the table? All of the knowledge that is required to be able to do that, how do we construct that knowledge base? Do we return to the problem that we didn't solve in the 80s with expert systems? Do we have to solve that problem of the automated construction of knowledge? You're talking about the task of eliciting knowledge from an expert. The task of eliciting knowledge from an expert, or the self discovery of more knowledge, more and more knowledge. So automating the building of knowledge as much as possible. It's a different game in the causal domain, because it's essentially the same thing. You have to start with some knowledge, and you're trying to enrich it. But you don't enrich it by asking for more rules. You enrich it by asking for data, by looking at the data and quantifying, and asking queries that you couldn't answer when you started. You couldn't, because the question is quite complex, and it's not within the capability of ordinary cognition, of an ordinary person, or an ordinary expert even, to answer. So what kind of questions do you think we can start to answer? Even a simple one. Suppose, yeah, I'll start with an easy one. Let's do it. Okay, what's the effect of a drug on recovery? Was it the aspirin that caused my headache to be cured, or was it the television program, or the good news I received? This is already, you see, a difficult question, because it's finding the cause from the effect. The easy one is finding the effect from the cause. That's right. So first you construct a model, saying that this is an important research question. This is an important question. Then you do. I didn't construct a model yet. I just said it's an important question. And the first exercise is to express it mathematically. What do you want to do? Like, if I tell you, what will be the effect of taking this drug? You have to say that in mathematics. How do you say that? Yes. Can you write down the question, not the answer? I want to find the effect of the drug on my headache. Right. Write down, write it down. That's where the do calculus comes in. Yes. The do operator, what is the do operator? The do operator, yeah. Which is nice. It's the difference between association and intervention. Very beautifully sort of constructed. Yeah, so we have a do operator. So the do calculus, connected to the do operator itself, connects the operation of doing to something that we can see. Right. So as opposed to purely observing, you're making the choice to change a variable. That's what it expresses. And then the way that we interpret it, the mechanism by which we take your query and translate it into something that we can work with, is by giving it semantics: saying that you have a model of the world, and you cut off all the incoming arrows into X, and you're looking now at the modified, mutilated model, and you ask for the probability of Y. That is the interpretation of doing X, because by doing things, you've liberated them from all influences that acted upon them earlier, and you subject them to the tyranny of your muscles. So you remove all the questions about causality by doing them. So you're now. There's one level of questions. Yeah. Answering questions about what will happen if you do things. If you do, if you drink the coffee, if you take the aspirin. Right. So how do we get the doing data? Now the question is, if we cannot run experiments, then we have to rely on observational studies.
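To make the mutilated-model semantics concrete, here is a minimal sketch. The toy model (an unobserved condition that makes people both likelier to take a drug and likelier to do poorly) is my own illustrative assumption, not something from the conversation; the point is only the mechanics: seeing means filtering on the drug variable in the intact model, doing means cutting the arrow into it and forcing its value.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

def simulate(do_drug=None):
    """Toy structural model: sickness -> drug, sickness -> recovery, drug -> recovery."""
    sickness = rng.random(n) < 0.5
    if do_drug is None:
        # Intact model: sick people are more likely to take the drug.
        drug = rng.random(n) < np.where(sickness, 0.8, 0.2)
    else:
        # Surgery: the incoming arrow into 'drug' is cut and its value is forced.
        drug = np.full(n, do_drug)
    p_recover = 0.9 - 0.4 * sickness + 0.1 * drug   # sickness hurts a lot, the drug helps a little
    recovery = rng.random(n) < p_recover
    return sickness, drug, recovery

# Seeing: condition on the drug in purely observational data.
_, drug, recovery = simulate()
print("P(recovery | drug taken)     = %.2f" % recovery[drug].mean())     # ~0.68
print("P(recovery | drug not taken) = %.2f" % recovery[~drug].mean())    # ~0.82

# Doing: run the mutilated model with the drug forced on or off.
print("P(recovery | do(drug))       = %.2f" % simulate(do_drug=True)[2].mean())   # ~0.80
print("P(recovery | do(no drug))    = %.2f" % simulate(do_drug=False)[2].mean())  # ~0.70
```

Conditioning makes the drug look harmful, because the sick are the ones who take it; intervening shows it helps. The gap between those two numbers is what the do operator is there to express.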
So first we could, sorry to interrupt, we could run an experiment where we do something, where we drink the coffee and this, the do operator allows you to sort of be systematic about expressing it. So imagine how the experiment would look, even though we cannot physically and technologically conduct it. I'll give you an example. What is the effect of blood pressure on mortality? I cannot go down into your vein and change your blood pressure, but I can ask the question, which means I can even have a model of your body. I can imagine the effect of, how the blood pressure change will affect your mortality. How? I go into the model and I conduct this surgery on the blood pressure, even though physically I cannot do it. Let me ask the quantum mechanics question. Does the doing change the observation? Meaning the surgery of changing the blood pressure is, I mean. No, the surgery is, I call it, very delicate. It's very delicate, infinitely delicate. Incisive and delicate, which means do, do X, means I'm gonna touch only X. Only X. Directly into X. So that means that I change only things which depend on X by virtue of X changing, but I don't touch things which do not depend on X. Like I wouldn't change your sex or your age, I just change your blood pressure. So in the case of blood pressure, it may be difficult or impossible to construct such an experiment. No, physically yes, but hypothetically no. Hypothetically no. If we have a model, that is what the model is for. So you conduct surgeries on a model, you take it apart, put it back, that's the idea of a model. It's the idea of thinking counterfactually, imagining, and that's the idea of creativity. So by constructing that model, you can start to infer whether higher blood pressure leads to mortality, which increases or decreases it. I construct the model, I still cannot answer it. I have to see if I have enough information in the model that would allow me to find out the effects of intervention from a noninterventional, hands-off study. So what's needed? You need to have assumptions about who affects whom. If the graph has a certain property, the answer is yes, you can get it from an observational study. If the graph is too mushy, bushy, bushy, the answer is no, you cannot. Then you need to find either a different kind of observation that you haven't considered, or one experiment. So basically, that puts a lot of pressure on you to encode wisdom into that graph. Correct. But you don't have to encode more than what you know. God forbid if you put in more; like economists are doing this, they call it identifying assumptions. They put in assumptions, even if they don't prevail in the world, they put in assumptions so they can identify things. But the problem is, yes, beautifully put, but the problem is you don't know what you don't know. So. You know what you don't know. Because if you don't know, you say it's possible. It's possible that X affects the traffic tomorrow. It's possible. You put down an arrow which says it's possible. Every arrow in the graph says it's possible. So there's not a significant cost to adding arrows that. The more arrows you add, the less likely you are to identify things from purely observational data. So if the whole world is bushy, and everybody affects everybody else, the answer is, you can answer it ahead of time: I cannot answer my query from observational data. I have to go to experiments.
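When the graph does have the right property, the interventional answer can be computed from observational quantities alone. For the toy drug model sketched a little earlier, the only back-door path runs through the sickness variable, so the adjustment formula P(recovery | do(drug)) = sum over z of P(recovery | drug, z) P(z) applies; the numbers below are just the ones implied by that toy model, written out by hand.

```python
# Purely observational quantities from the toy model above:
p_sick = 0.5
p_recover_given_drug = {True: 0.6, False: 1.0}   # keyed by "sick?"

# Back-door adjustment: average the drug-specific recovery rates over the
# marginal distribution of the confounder, not its distribution among takers.
p_do_drug = sum(p_recover_given_drug[sick] * (p_sick if sick else 1 - p_sick)
                for sick in (True, False))
print(p_do_drug)   # 0.8, matching the mutilated-model simulation, with no experiment run
```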
So you talk about machine learning as essentially learning by association, or reasoning by association, and this do calculus is allowing for intervention. I like that word. Action. So you also talk about counterfactuals. Yeah. And trying to sort of understand the difference between counterfactuals and intervention. First of all, what are counterfactuals, and why are they useful? Why are they especially useful, as opposed to just reasoning about what effect actions have? Well, counterfactuals contain what we normally call explanations. Can you give an example of a counterfactual? If I tell you that acting one way affects something else, I didn't explain anything yet. But if I ask you, was it the aspirin that cured my headache? I'm asking for an explanation, what cured my headache? And putting a finger on the aspirin provides an explanation. It was the aspirin that was responsible for your headache going away. If you hadn't taken the aspirin, you would still have a headache. So by saying if I didn't take the aspirin, I would have a headache, you're thereby saying that the aspirin is the thing that removes the headache. But you have to have another important piece of information: I took the aspirin, and my headache is gone. It's very important information. Now I'm reasoning backward, and I ask, was it the aspirin? Yeah. By considering what would have happened if everything else were the same, but I didn't take the aspirin. That's right. So you know that things took place. Joe killed Schmoe, and Schmoe would be alive had Joe not used his gun. Okay, so that is the counterfactual. It has a conflict here, or a clash, between the observed fact, that he did shoot, okay, and the hypothetical predicate, which says had he not shot. You have a logical clash. They cannot exist together. That's the counterfactual. And that is the source of our explanation of the idea of responsibility, regret, and free will. Yeah, so it certainly seems that's the highest level of reasoning, right? Yeah, and physicists do it all the time. Who does it all the time? Physicists. Physicists. In every equation of physics, let's say you have Hooke's law, and you put one kilogram on the spring, and the spring is one meter, and you say, had this weight been two kilograms, the spring would have been twice as long. It's no problem for physicists to say that, except that the mathematics is only in the form of an equation, okay, equating the weight, the proportionality constant, and the length of the spring. So you don't have the asymmetry in the equations of physics, although every physicist thinks counterfactually. Ask high school kids, had the weight been three kilograms, what would be the length of the spring? They can answer it immediately, because they do the counterfactual processing in their mind, and then they put it into an equation, an algebraic equation, and they solve it, okay? But a robot cannot do that. How do you make a robot learn these relationships? Well, why would you learn? Suppose you tell him, can he do it? So before you go to learning, you have to ask yourself: suppose I give you all the information, okay? Can the robot perform the task that I ask him to perform? Can he reason and say, no, it wasn't the aspirin, it was the good news you received on the phone? Right, because, well, unless the robot had a model, a causal model of the world. Right, right. I'm sorry I have to linger on this. But now we have to linger and we have to say, how do we do it? How do we build it? Yes. How do we build a causal model without a team of human experts running around?
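The spring example lends itself to a minimal counterfactual computation: infer the unobserved spring constant from what was actually observed, hypothetically change the weight, and recompute. This abduction-action-prediction pattern is the standard way such queries are evaluated; the specific numbers below are just the ones mentioned in the conversation.

```python
G = 9.8  # gravitational acceleration, m/s^2

def extension_m(mass_kg, k):
    """Hooke's law: extension is proportional to the applied force."""
    return mass_kg * G / k

# Observed fact: a 1 kg weight stretches this particular spring by 1 metre.
observed_mass_kg, observed_extension_m = 1.0, 1.0

# Abduction: recover the unobserved spring constant from the observation.
k = observed_mass_kg * G / observed_extension_m   # 9.8 N/m

# Action and prediction: swap the weight for 3 kg on the *same* spring and recompute.
print("Had the weight been 3 kg, the spring would have stretched %.1f m" % extension_m(3.0, k))
```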
Why don't you go to learning right away? You're too much involved with learning. Because I like babies. Babies learn fast. I'm trying to figure out how they do it. Good. So that's another question. How do the babies come out with a counterfactual model of the world? And babies do that. They know how to play in the crib. They know which balls hit another one. And they learn it by playful manipulation of the world. Yes. The simple world involves only toys and balls and chimes, but if you think about it, it's a complex world. We take for granted how complicated it is. And kids do it by playful manipulation plus parents' guidance, peer wisdom, and hearsay. They meet each other and they say, you shouldn't have taken my toy. Right. And these multiple sources of information, they're able to integrate. So the challenge is about how to integrate, how to form these causal relationships from different sources of data. Correct. So how much information does it take, how much causal information is required, to be able to play in the crib with different objects? I don't know. I haven't experimented with the crib. Okay, not a crib. Picking up, manipulating physical objects, opening the pages of a book, all the tasks, the physical manipulation tasks. Do you have a sense? Because my sense is the world is extremely complicated. It's extremely complicated. I agree, and I don't know how to organize it, because I've been spoiled by easy problems such as cancer and death, okay? First we have to start trying to. No, but those are easy, in the sense that you have only 20 variables, and they are just variables and not mechanics. Okay, it's easy. You just put them on the graph and they speak to you. Yeah, and you're providing a methodology for letting them speak. Yeah. I'm working only in the abstract. The abstract is knowledge in, knowledge out, data in between. Now, can we take a leap to trying to learn, when it's not 20 variables but 20 million variables, trying to learn causation in this world? Not learn, but somehow construct models. I mean, it seems like you would have to be able to learn, because constructing it manually would be too difficult. Do you have ideas? I think it's a matter of combining simple models from many, many sources, from many, many disciplines, and many metaphors. Metaphors are the basis of human intelligence. Yeah, so how do you think about a metaphor in terms of its use in human intelligence? A metaphor is an expert system. It's mapping from a problem with which you are not familiar to a problem with which you are familiar. Like, I'll give you a good example. The Greeks believed that the sky is an opaque shell. It's not really infinite space. It's an opaque shell, and the stars are holes poked in the shell through which you see the eternal light. That was a metaphor. Why? Because they understood how you poke holes in a shell. They were not familiar with infinite space. And we are walking on the shell of a turtle, and if you get too close to the edge, you're gonna fall down to Hades or wherever. That's a metaphor. It's not true. But this kind of metaphor enabled Eratosthenes to measure the radius of the Earth, because he said, come on, if we are walking on a turtle shell, then a ray of light coming to this place will come at a different angle than at that place. I know the distance, I'll measure the two angles, and then I have the radius of the shell of the turtle.
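For what it is worth, the geometry is simple enough to redo in a few lines. The numbers below are the traditional textbook values attributed to Eratosthenes (a difference of about 7.2 degrees in the sun's angle between two cities roughly 800 kilometres apart), not figures from the conversation.

```python
import math

angle_difference_deg = 7.2   # difference in the sun's angle between the two sites
distance_km = 800.0          # distance between the two sites

# The angle difference is the fraction of a full circle that the two sites span.
circumference_km = distance_km * 360.0 / angle_difference_deg
radius_km = circumference_km / (2 * math.pi)

print("circumference ~ %.0f km, radius ~ %.0f km" % (circumference_km, radius_km))
# roughly 40,000 km around and 6,400 km in radius
```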
And he did, and he found his measurement very close to the measurements we have today, the, what, 6,700 kilometer radius of the Earth. That's something that would not occur to a Babylonian astronomer, even though the Babylonians were the machine learning people of the time. They fit curves, and they could predict the eclipse of the moon much more accurately than the Greeks, because they fit curves. That's a different metaphor. Something that you're familiar with, a game, a turtle shell, okay? What does it mean to be familiar? Familiar means that answers to certain questions are explicit. You don't have to derive them. And they were made explicit because somewhere in the past you've constructed a model of that. Yeah, you're familiar with it, so the child is familiar with billiard balls. So the child can predict that if you let loose of one ball, the other one will bounce off. You obtain that by familiarity. Familiarity is answering questions, and you store the answer explicitly. You don't have to derive them. So this is the idea of a metaphor. All our life, all our intelligence is built around metaphors, mapping from the unfamiliar to the familiar. But the marriage between the two is a tough thing, which we haven't yet been able to algorithmatize. So you think of that process of using metaphor to leap from one place to another, we can call it reasoning? Is it a kind of reasoning? It is reasoning by metaphor, metaphorical reasoning. Do you think of that as learning? So learning is a popular terminology today, in a narrow sense. It is, it is definitely a form of learning. Okay, right. It's one of the most important kinds of learning: taking something which theoretically is derivable and storing it in an accessible format. I'll give you an example, chess, okay? Finding the winning starting move in chess is hard. It is hard, but there is an answer. Either there is a winning move for white or there isn't, or there is a draw, okay? So the answer to that is available through the rules of the game, but we don't know the answer. So what does a chess master have that we don't have? He has stored explicitly an evaluation of certain complex patterns of the board. We don't have it. Ordinary people like me, I don't know about you, I'm not a chess master. So for me, I have to derive things that for him are explicit. He has seen it before, or he has seen the pattern before, or a similar pattern, you see, metaphor, yeah? And he generalizes and says, don't make that move, it's a dangerous move. It's just that, not in the game of chess, but in the game of billiard balls, we humans are able to initially derive things very effectively and then reason by metaphor very effectively, and make it look so easy that it makes one wonder how hard it is to build it in a machine. So in your sense, how far away are we from being able to construct that? I don't know, I'm not a futurist. All I can tell you is that we are making tremendous progress in the causal reasoning domain. Something that I even dare to call a revolution, the causal revolution, because what we have achieved in the past three decades is something that dwarfs everything that was derived in the entire history. So there's an excitement about current machine learning methodologies, and there's really important good work you're doing in causal inference. Where does the future, where do these worlds collide and what does that look like? First, they're gonna work without collision. It's gonna work in harmony. Harmony, it's not collision.
The human is going to jumpstart the exercise by providing qualitative, noncommitting models of how the universe works, how in reality the domain of discourse works. The machine is gonna take over from that point and derive whatever the calculus says can be derived, namely, quantitative answers to our questions. Now, these are complex questions. I'll give you some examples of complex questions that will boggle your mind if you think about them. You take results of studies in diverse populations under diverse conditions, and you infer the cause-effect relationship in a new population which doesn't even resemble any of the ones studied, and you do that by do calculus. You do that by generalizing from one study to another. See what's common to both, what is different. Let's ignore the differences and pull out the commonality, and you do it over maybe 100 hospitals around the world. From that, you can get real mileage from big data. It's not only that you have many samples, you have many sources of data. So that's a really powerful thing, I think, especially for medical applications. I mean, cure cancer, right? That's how from data you can cure cancer. So we're talking about causation, which involves the temporal relationships between things. Not only temporal, it's both structural and temporal. Temporal is not enough; temporal precedence by itself cannot replace causation. Is temporal precedence the arrow of time in physics? It's important, necessary. It's important. It's efficient, yes. Is it? Yes, I never seen cause propagate backward. But if we use the word cause, there are relationships that are timeless. I suppose that's still forward in the arrow of time. But are there relationships, logical relationships, that fit into the structure? Sure, the whole do calculus is a logical relationship. That doesn't require the temporal. It has just the condition that you're not traveling back in time. Yes, correct. So it's really a generalization of, a powerful generalization of what? Of Boolean logic. Yeah, Boolean logic. Yes. That is sort of simply put, and allows us to reason about the order of events, the source, the. No, we're not deriving the order of events. We are given cause-effect relationships, okay? They ought to obey the temporal precedence relationship. We are given it. And now we ask questions about other cause-effect relationships that could be derived from the initial ones, but were not given to us explicitly. Like the case of the firing squad I gave you in the first chapter. And I ask, what if rifleman A declined to shoot? Would the prisoner still be dead? To decline to shoot means that he disobeyed orders, and the rules of the game were that he is an obedient marksman, okay? That's how you start. That's the initial order. But now you ask a question about breaking the rules. What if he decided not to pull the trigger? He just became a pacifist. And you and I can answer that. The other rifleman would have killed him, okay? I want the machine to do that. Is it so hard to ask a machine to do that? It's such a simple task. You have to have a calculus for that. Yes, yeah. But the curiosity, the natural curiosity for me, is that, yes, you're absolutely correct, and it's important. And it's hard to believe that we haven't done this seriously and extensively already, a long time ago. So this is really important work. But I also wanna know, maybe you can philosophize about how hard it is to learn. Okay, let's assume we're learning. We wanna learn it, okay? We wanna learn. So what do we do?
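A minimal sketch of the rifleman counterfactual just posed, assuming the simple firing-squad model from The Book of Why: a court order makes the captain signal, the captain's signal makes riflemen A and B shoot, and either shot kills the prisoner. Given the observed fact that the prisoner is dead, the query is answered by holding the inferred court order fixed and surgically forcing A not to shoot.

```python
def firing_squad(court_order, force_a=None, force_b=None):
    """Deterministic structural model; force_a/force_b override a rifleman's equation (surgery)."""
    captain_signals = court_order
    a_shoots = captain_signals if force_a is None else force_a
    b_shoots = captain_signals if force_b is None else force_b
    prisoner_dead = a_shoots or b_shoots
    return prisoner_dead

# Observed fact: the prisoner is dead. Abduction in this deterministic model
# implies the court order was given.
court_order = True
assert firing_squad(court_order)

# Counterfactual: had rifleman A refrained, with the inferred order held fixed,
# would the prisoner still be dead?
print(firing_squad(court_order, force_a=False))   # True: rifleman B still fires
```

The facts alone (one command, two shots, one death) could never license that answer; it is the structure, who listens to whom, that does the work.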
We put a learning machine that watches execution trials in many countries and many locations, okay? All the machine can learn is to see shot or not shot, dead or not dead, a court issued an order or didn't, okay? Just the facts. From the facts you don't know who listens to whom. You don't know that the condemned person listened to the bullets, that the bullets are listening to the captain, okay? All we hear is one command, two shots, dead, okay? A triple of variables: yes, no, yes, no. Okay, from that, can you learn who listens to whom and answer the question? No. Definitively no. But don't you think you can start proposing ideas for humans to review? You want the machine to learn, you want a robot. So the robot is watching trials like that, 200 trials, and then he has to answer the question, what if rifleman A refrained from shooting? Yeah. How do I do that? That's exactly my point. Looking at the facts doesn't give you the strings behind the facts. Absolutely, but do you think of machine learning as it's currently defined as only something that looks at the facts and tries to do? Right now, they only look at the facts, yeah. So is there a way to modify it, in your sense? Playful manipulation. Playful manipulation. Yes, once in a while. Doing the interventionist kind of thing, intervention. But it could be at random. For instance, the rifleman is sick that day, or he just vomits, or whatever. So the machine can observe this unexpected event which introduces noise. The noise still has to be random to be able to relate it to a randomized experiment. And then you have observational studies from which to infer the strings behind the facts. It's doable to a certain extent. But now that we are experts in what you can do once you have a model, we can reason back and say, what kind of data you need to build a model. Got it. So I know you're not a futurist, but are you excited? Have you, when you look back at your life, long dreamed of the idea of creating a human level intelligence system? Yeah, I'm driven by that. All my life, I'm driven just by one thing. But I go slowly. I go from what I know to the next step incrementally. So without imagining what the end goal looks like. Do you imagine what the end goal is? The end goal is gonna be a machine that can answer sophisticated questions, counterfactuals of regret, compassion, responsibility, and free will. So what is a good test? Is the Turing test a reasonable test? A test of free will doesn't exist yet. How would you test free will? So far, we know only one thing: if robots can communicate with reward and punishment among themselves, hitting each other on the wrist and saying, you shouldn't have done that, okay? Playing better soccer because they can do that. What do you mean, because they can do that? Because they can communicate among themselves. Because of the communication they can do? Because they communicate like us. Reward and punishment, yes. You didn't pass the ball at the right time, and therefore you're gonna sit on the bench for the next two. If they start communicating like that, the question is, will they play better soccer? As opposed to what? As opposed to what they do now? Without this ability to reason about reward and punishment. Responsibility. And? Artifactions. So far, I can only think about communication. Communication is, and not necessarily natural language, but just communication. Just communication. And that's important, to have a quick and effective means of communicating knowledge.
If the coach tells you you should have passed the ball, pink, he conveys so much knowledge to you as opposed to what? Go down and change your software. That's the alternative. But the coach doesn't know your software. So how can the coach tell you you should have passed the ball? But our language is very effective. You should have passed the ball. You know your software. You tweak the right module, and next time you don't do it. Now that's for playing soccer, the rules are well defined. No, no, no, no, they're not well defined. When you should pass the ball. Is not well defined. No, it's very soft, very noisy. Yes, you have to do it under pressure. It's art. But in terms of aligning values between computers and humans, do you think this cause and effect type of thinking is important to align the values, values, morals, ethics under which the machines make decisions, is the cause effect where the two can come together? Cause and effect is necessary component to build an ethical machine. Because the machine has to empathize to understand what's good for you, to build a model of you as a recipient, which should be very much, what is compassion? They imagine that you suffer pain as much as me. As much as me. I do have already a model of myself, right? So it's very easy for me to map you to mine. I don't have to rebuild the model. It's much easier to say, oh, you're like me. Okay, therefore I would not hate you. And the machine has to imagine, has to try to fake to be human essentially so you can imagine that you're like me, right? And moreover, who is me? That's the first, that's consciousness. They have a model of yourself. Where do you get this model? You look at yourself as if you are a part of the environment. If you build a model of yourself versus the environment, then you can say, I need to have a model of myself. I have abilities, I have desires and so forth, okay? I have a blueprint of myself though. Not the full detail because I cannot get the whole thing problem. But I have a blueprint. So on that level of a blueprint, I can modify things. I can look at myself in the mirror and say, hmm, if I change this model, tweak this model, I'm gonna perform differently. That is what we mean by free will. And consciousness. And consciousness. What do you think is consciousness? Is it simply self awareness? So including yourself into the model of the world? That's right. Some people tell me, no, this is only part of consciousness. And then they start telling me what they really mean by consciousness, and I lose them. For me, consciousness is having a blueprint of your software. Do you have concerns about the future of AI? All the different trajectories of all of our research? Yes. Where's your hope, where the movement has, where are your concerns? I'm concerned, because I know we are building a new species that has a capability of exceeding our, exceeding us, exceeding our capabilities, and can breed itself and take over the world. Absolutely. It's a new species that is uncontrolled. We don't know the degree to which we control it. We don't even understand what it means to be able to control this new species. So I'm concerned. I don't have anything to add to that, because it's such a gray area, it's unknown. It never happened in history. The only time it happened in history was evolution with human beings. It wasn't very successful, was it? Some people say it was a great success. For us it was, but a few people along the way, a few creatures along the way would not agree. 
So it's just because it's such a gray area, there's nothing else to say. We have a sample of one. Sample of one. It's us. But some people would look at you and say, yeah, but we were looking to you to help us make sure that sample two works out okay. We have more than a sample of one. We have theories, and that's good. We don't need to be statisticians. So a sample of one doesn't mean poverty of knowledge. It's not. A sample of one plus theory, conjectural theory, of what could happen. That we do have. But I really feel helpless in contributing to this argument, because I know so little, and my imagination is limited, and I know how much I don't know, and, but I'm concerned. You were born and raised in Israel. Born and raised in Israel, yes. And later served in the Israeli military, the defense forces. In the Israel Defense Forces, yeah. What did you learn from that experience? From this experience? There's a kibbutz in there as well. Yes, because I was in the Nahal, which is a combination of agricultural work and military service. I was really an idealist. I wanted to be a member of a kibbutz throughout my life, and to live a communal life, and so I prepared myself for that. Slowly, slowly, I wanted a greater challenge. So that's a far world away, both. What I learned from that, what I can add, is that it was a miracle. It was a miracle that I served in the 1950s. I don't know how we survived. The country was under austerity. It tripled its population from 600,000 to 1.8 million by the time I finished college. No one went hungry. And austerity, yes. When you wanted to make an omelet in a restaurant, you had to bring your own egg. And they imprisoned people for bringing food from the farms, from the villages, to the city. But no one went hungry. And I always add to it that higher education did not suffer any budget cut. They still invested in me, in my wife, in our generation, to get the best education that they could, okay? So I'm really grateful for the opportunity, and I'm trying to pay it back now, okay? It's a miracle that we survived the war of 1948. We were so close to a second genocide. It was all planned. But we survived it by a miracle, and then the second miracle, that not many people talk about, is the next phase: how no one went hungry, and the country managed to triple its population. You know what it means to triple? Imagine the United States going from, what, 350 million to a billion. Unbelievable. So it's a really tense part of the world. It's a complicated part of the world, Israel and all around. Religion is at the core of that complexity. One of the components. Religion is a strong motivating cause for many, many people in the Middle East, yes. In your view, looking back, is religion good for society? That's a good question for robotics, you know? There's echoes of that question. Equip a robot with religious beliefs. Suppose we find out, or we agree, that religion is good for you, to keep you in line, okay? Should we give the robot the metaphor of a god? As a matter of fact, the robot will get it without us as well. Why? The robot will reason by metaphor. And what is the most primitive metaphor a child grows up with? Mother's smile, father's teaching, father image and mother image, that's god. So, whether you want it or not, the robot will, well, assuming that the robot is gonna have a mother and a father. It may only have a programmer, which doesn't supply warmth and discipline. Well, discipline it does.
So the robot will have a model of the trainer, and everything that happens in the world, cosmology and so on, is going to be mapped onto the programmer. It's god, the thing that represents the origin of everything for that robot. It's the most primitive relationship. So it's gonna arrive there by metaphor. And so the question is whether overall that metaphor has served us well as humans. I really don't know. I think it did, but as long as you keep in mind it's only a metaphor. So, if you think we can, can we talk about your son? Yes, yes. Can you tell his story? His story? Daniel's? His story is known. He was abducted in Pakistan by an Al Qaeda driven sect, under various pretenses. I don't even pay attention to what the pretense was. Originally they wanted to have the United States deliver some promised airplanes. It was all made up, and all these demands were bogus. Bogus, I don't know really, but eventually he was executed in front of a camera. At the core of that is hate and intolerance. At the core, yes, absolutely, yes. We don't really appreciate the depth of the hate in which billions of people are educated. We don't understand it. I just listened recently to what they teach you in Mogadishu. Okay, okay: when the water stopped in the tap, we knew exactly who did it, the Jews. The Jews. We didn't know how, but we knew who did it. We don't appreciate what it means to us. The depth is unbelievable. Do you think all of us are capable of evil? And the education, the indoctrination, is really what creates evil? Absolutely we are capable of evil. If you're indoctrinated sufficiently long and in depth, you're capable of ISIS, you're capable of Nazism. Yes, we are. But the question is whether, after we have gone through some Western education and we learn that everything is really relative, that it is not an absolute God, it's only a belief in God, whether we are capable now of being transformed under certain circumstances to become brutal. Yeah. I'm worried about it, because some people say yes, given the right circumstances, given a bad economic crisis, you are capable of doing it too. That worries me. I want to believe that I'm not capable. So seven years after Daniel's death, you wrote an article in the Wall Street Journal titled Daniel Pearl and the Normalization of Evil. Yes. What was your message back then, and how did it change over the years? I lost. What was the message? The message was that we are not treating terrorism as a taboo. We are treating it as a bargaining device that is accepted. People have grievances and they go and bomb restaurants. It's normal. Look, you're not even surprised when I tell you that. 20 years ago you'd say, what? For a grievance you go and blow up a restaurant? Today it's becoming normalized. The banalization of evil. And we have done that to ourselves by normalizing it, by making it part of political life. It's a political debate. Every terrorist yesterday becomes a freedom fighter today, and tomorrow it becomes a terrorist again. It's switchable. Right, and so we should call out evil when there's evil, if we don't want to become part of it. Yeah, if we want to separate good from evil, that's one of the first things that, what was it, in the Garden of Eden, remember, the first thing that God told him was, hey, you want some knowledge, here's a tree of good and evil. Yeah, so this evil touched your life personally. Does your heart have anger, sadness, or is it hope? Look, I see some beautiful people coming from Pakistan. I see beautiful people everywhere.
But I see a horrible propagation of evil in this country too. It shows you how populistic slogans can catch the minds of the best intellectuals. Today is Father's Day. I didn't know that. Yeah, what's a fond memory you have of Daniel? What's a fond memory you have of Daniel? Oh, very good memories, immense. He was my mentor. He had a sense of balance that I didn't have. He saw the beauty in every person. He was not as emotional as I am, more looking at things in perspective. He really liked every person. He really grew up with the idea that a foreigner is a reason for curiosity, not for fear. One time we were in Berkeley, and a homeless man came out from some dark alley and said, hey, man, can you spare a dime? I retreated back, two feet back, and he just hugged him and said, here's a dime, enjoy yourself. Maybe you want some money to take a bus or whatever. Where did he get it? Not from me. Do you have advice for young minds today, dreaming about creating as you have dreamt, creating intelligent systems? What is the best way to arrive at new breakthrough ideas and carry them through the fire of criticism and past conventional ideas? Ask your questions freely. Your questions are never dumb. And solve them your own way. And don't take no for an answer. Look, if they are really dumb, you will find out quickly by trying them out and seeing that they're not leading any place. But follow them and try to understand things your way. That is my advice. I don't know if it's gonna help anyone. No, it's brilliantly put. There is a lot of inertia in science, in academia. It is slowing down science. Yeah, those two words, your way, that's a powerful thing. It's against inertia, potentially, against the flow. Against your professor. Against your professor. I wrote the Book of Why in order to democratize common sense, in order to instill a rebellious spirit in students, so they wouldn't wait until the professor gets things right. So you wrote the manifesto of the rebellion against the professor. Against the professor, yes. So looking back at your life of research, what ideas do you hope ripple through the next many decades? What do you hope your legacy will be? I already have a tombstone carved. Oh, boy. The fundamental law of counterfactuals. That's it, it's a simple equation: a counterfactual in terms of a model surgery. That's it, because everything follows from that. If you get that, all the rest, I can die in peace. And my students can derive all my knowledge by mathematical means. The rest follows. Yeah. Thank you so much for talking today. I really appreciate it. Thank you for being so attentive and instigating. We did it. We did it. The coffee helped. Thanks for listening to this conversation with Judea Pearl. And thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast, you'll get $10, and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter. And now, let me leave you with some words of wisdom from Judea Pearl. You cannot answer a question that you cannot ask, and you cannot ask a question that you have no words for. Thank you for listening, and hope to see you next time.
Judea Pearl: Causal Reasoning, Counterfactuals, and the Path to AGI | Lex Fridman Podcast #56
The following is a conversation with Rohit Prasad. He's the vice president and head scientist of Amazon Alexa and one of its original creators. The Alexa team embodies some of the most challenging, incredible, impactful, and inspiring work that is done in AI today. The team has to both solve problems at the cutting edge of natural language processing and provide a trustworthy, secure, and enjoyable experience to millions of people. This is where state of the art methods in computer science meet the challenges of real world engineering. In many ways, Alexa and the other voice assistants are the voices of artificial intelligence to millions of people and an introduction to AI for people who have only encountered it in science fiction. This is an important and exciting opportunity. So the work that Rohit and the Alexa team are doing is an inspiration to me and to many researchers and engineers in the AI community. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter, at Lex Friedman, spelled F R I D M A N. If you leave a review on Apple Podcasts especially, but also cast box or comment on YouTube, consider mentioning topics, people, ideas, questions, quotes in science, tech, or philosophy that you find interesting, and I'll read them on this podcast. I won't call out names, but I love comments with kindness and thoughtfulness in them, so I thought I'd share them. Someone on YouTube highlighted a quote from the conversation with Ray Dalio, where he said that you have to appreciate all the different ways that people can be A players. This connected me to, on teams of engineers, it's easy to think that raw productivity is the measure of excellence, but there are others. I've worked with people who brought a smile to my face every time I got to work in the morning. Their contribution to the team is immeasurable. I recently started doing podcast ads at the end of the introduction. I'll do one or two minutes after introducing the episode, and never any ads in the middle that break the flow of the conversation. I hope that works for you. It doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called First, best known for their FIRST Robotics and Lego competitions. They educate and inspire hundreds of thousands of students in over 110 countries, and have a perfect rating on Charity Navigator, which means that donated money is used to maximum effectiveness. When you get Cash App from the App Store, Google Play, and use code LexPodcast, you'll get $10, and Cash App will also donate $10 to FIRST, which again, is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. This podcast is also supported by ZipRecruiter. Hiring great people is hard, and to me, is one of the most important elements of a successful mission driven team. I've been fortunate to be a part of, and lead several great engineering teams. 
The hiring I've done in the past was mostly through tools we built ourselves, but reinventing the wheel was painful. ZipRecruiter is a tool that's already available for you. It seeks to make hiring simple, fast, and smart. For example, Codable cofounder Gretchen Huebner used ZipRecruiter to find a new game artist to join her education tech company. By using ZipRecruiter's screening questions to filter candidates, Gretchen found it easier to focus on the best candidates and finally hired the perfect person for the role, in less than two weeks, from start to finish. ZipRecruiter, the smartest way to hire. See why ZipRecruiter is effective for businesses of all sizes by signing up, as I did, for free, at ziprecruiter.com slash lexpod. That's ziprecruiter.com slash lexpod. And now, here's my conversation with Rohit Prasad. In the movie Her, I'm not sure if you've ever seen it, a human falls in love with the voice of an AI system. Let's start at the highest philosophical level, before we get to deep learning and some of the fun things. Do you think this, what the movie Her shows, is within our reach? I think, not specifically about Her, but what we are seeing is a massive increase in adoption of AI assistants, or AI, in all parts of our social fabric. And what I do believe is that the utility these AIs provide, some of the functionalities that are shown, are absolutely within reach. So some of the functionality in terms of the interactive elements, but in terms of the deep connection that's purely voice based, do you think such a close connection is possible with voice alone? It's been a while since I saw Her, but I would say, in terms of interactions which are both human like in these AI systems, you have to value what is also superhuman. We as humans can be in only one place. AI assistants can be in multiple places at the same time: one with you on your mobile device, one at your home, one at work. So you have to respect these superhuman capabilities too. Plus, as humans, we have certain attributes we are very good at, very good at reasoning. AI assistants are not yet there. But in the realm of AI assistants, what they're great at is computation and memory; it's infinite and pure. These are the attributes you have to start respecting. So I think the comparison of human like versus the other aspect, which is also superhuman, has to be taken into consideration. So I think we need to elevate the discussion to not just human like. So there's certainly elements, as we just mentioned: Alexa is everywhere, computationally speaking. So this is a much bigger infrastructure than just the thing that sits there in the room with you. But it certainly feels to us mere humans that there's just another little creature there when you're interacting with it. You're not interacting with the entirety of the infrastructure, you're interacting with the device. The feeling is, okay, sure, we anthropomorphize things, but that feeling is still there. So what do you think we as humans, in the purity of the interaction with a smart device, the interaction with a smart assistant, what do you think we look for in that interaction? I think in certain interactions it will be very much like it does feel like a human, because it has a persona of its own, and in certain ones it wouldn't be.
So I think a simple example to think of it is if you're walking through the house and you just wanna turn on your lights on and off and you're issuing a command, that's not very much like a human like interaction and that's where the AI shouldn't come back and have a conversation with you, just it should simply complete that command. So those, I think the blend of, we have to think about this is not human, human alone. It is a human machine interaction and certain aspects of humans are needed and certain aspects are in situations demand it to be like a machine. So I told you, it's gonna be philosophical in parts. What's the difference between human and machine in that interaction? When we interact to humans, especially those are friends and loved ones versus you and a machine that you also are close with. I think the, you have to think about the roles the AI plays, right? So, and it differs from different customer to customer, different situation to situation, especially I can speak from Alexa's perspective. It is a companion, a friend at times, an assistant, an advisor down the line. So I think most AIs will have this kind of attributes and it will be very situational in nature. So where is the boundary? I think the boundary depends on exact context in which you're interacting with the AI. So the depth and the richness of natural language conversation is been by Alan Turing been used to try to define what it means to be intelligent. There's a lot of criticism of that kind of test, but what do you think is a good test of intelligence in your view, in the context of the Turing test and Alexa or the Alexa prize, this whole realm, do you think about this human intelligence, what it means to define it, what it means to reach that level? I do think the ability to converse is a sign of an ultimate intelligence. I think that there's no question about it. So if you think about all aspects of humans, there are sensors we have, and those are basically a data collection mechanism. And based on that, we make some decisions with our sensory brains, right? And from that perspective, I think there are elements we have to talk about how we sense the world and then how we act based on what we sense. Those elements clearly machines have, but then there's the other aspects of computation that is way better. I also mentioned about memory again, in terms of being near infinite, depending on the storage capacity you have, and the retrieval can be extremely fast and pure in terms of like, there's no ambiguity of who did I see when, right? I mean, machines can remember that quite well. So again, on a philosophical level, I do subscribe to the fact that to be able to converse and as part of that, to be able to reason based on the world knowledge you've acquired and the sensory knowledge that is there is definitely very much the essence of intelligence. But intelligence can go beyond human level intelligence based on what machines are getting capable of. So what do you think maybe stepping outside of Alexa broadly as an AI field, what do you think is a good test of intelligence? Put it another way outside of Alexa, because so much of Alexa is a product, is an experience for the customer. On the research side, what would impress the heck out of you if you saw, you know, what is the test where you said, wow, this thing is now starting to encroach into the realm of what we loosely think of as human intelligence? So, well, we think of it as AGI and human intelligence altogether, right? 
So in some sense, and I think we are quite far from that, I think an unbiased view I have is that Alexa's intelligence capability is a great test. I think of it as, there are many other proof points like self driving cars, or game playing like Go or chess. Let's take those two as an example: they clearly require a lot of data driven learning and intelligence, but it's not as hard a problem as conversing, as an AI, with humans to accomplish certain tasks, or open domain chat, as you mentioned, the Alexa Prize. In those settings, the key difference is that the end goal is not defined, unlike game playing. You also do not know exactly what state you are in in a particular goal completion scenario. In a certain sense, sometimes you can, if it's a simple goal. But take even certain examples like planning a weekend: you can imagine how many things change along the way. You look for something, then you may change your mind and change the destination, or you want to catch a particular event and then you decide, no, there's this other event I want to go to. So these dimensions of how many different steps are possible when you're conversing as a human with a machine make it an extremely daunting problem. And I think it is the ultimate test for intelligence. And don't you think that natural language is enough to prove that, conversation, just pure conversation? From a scientific standpoint, natural language is a great test, but I would go beyond. I don't want to limit it to natural language as simply understanding an intent or parsing for entities and so forth. We are really talking about dialogue. Dialogue, yeah. So I would say human machine dialogue is definitely one of the best tests of intelligence. So can you briefly speak to the Alexa Prize for people who are not familiar with it, and also just maybe where things stand and what have you learned and what's surprising? What have you seen that's surprising from this incredible competition? Absolutely, it's a very exciting competition. The Alexa Prize is essentially a grand challenge in conversational artificial intelligence, where we threw down the gauntlet to the universities who do active research in the field, to say, can you build what we call a social bot that can converse with you coherently and engagingly for 20 minutes? That is an extremely hard challenge: talking to someone who you're meeting for the first time, or even someone you've met quite often, and speaking for 20 minutes on any topic, with an evolving nature of topics, is super hard. We have completed two successful years of the competition. The first was won by the University of Washington, the second by the University of California. We are in our third instance. We have an extremely strong cohort of 10 teams, and the third instance of the Alexa Prize is underway now. And we are seeing a constant evolution. The first year was definitely a learning experience. There were a lot of things to be put together. We had to build a lot of infrastructure to enable these universities to be able to build magical experiences and do high quality research. Just a few quick questions, sorry for the interruption. What does failure look like in the 20 minute session? So what does it mean to fail, not to reach the 20 minute mark? Oh, awesome question. So, first of all, I forgot to mention one more detail. It's not just the 20 minutes, but the quality of the conversation that matters too.
And the beauty of this competition before I answer that question on what failure means is first that you actually converse with millions and millions of customers as the social bots. So during the judging phases, there are multiple phases, before we get to the finals, which is a very controlled judging in a situation where we bring in judges and we have interactors who interact with these social bots, that is a much more controlled setting. But till the point we get to the finals, all the judging is essentially by the customers of Alexa. And there you basically rate on a simple question, how good your experience was. So that's where we are not testing for a 20 minute boundary being crossed, because you do want it to be very much like a clear cut, winner, be chosen, and it's an absolute bar. So did you really break that 20 minute barrier is why we have to test it in a more controlled setting with actors, essentially interactors. And see how the conversation goes. So this is why it's a subtle difference between how it's being tested in the field with real customers versus in the lab to award the prize. So on the latter one, what it means is that essentially there are three judges and two of them have to say this conversation has stalled, essentially. Got it. And the judges are human experts. Judges are human experts. Okay, great. So this is in the third year. So what's been the evolution? How far, so the DARPA challenge in the first year, the autonomous vehicles, nobody finished. In the second year, a few more finished in the desert. So how far along in this, I would say much harder challenge are we? This challenge has come a long way to the extent that we're definitely not close to the 20 minute barrier being with coherence and engaging conversation. I think we are still five to 10 years away in that horizon to complete that. But the progress is immense. Like what you're finding is the accuracy and what kind of responses these social bots generate is getting better and better. What's even amazing to see that now there's humor coming in. The bots are quite... Awesome. You know, you're talking about ultimate science of intelligence. I think humor is a very high bar in terms of what it takes to create humor. And I don't mean just being goofy. I really mean good sense of humor is also a sign of intelligence in my mind and something very hard to do. So these social bots are now exploring not only what we think of natural language abilities, but also personality attributes and aspects of when to inject an appropriate joke, when you don't know the domain, how you come back with something more intelligible so that you can continue the conversation. If you and I are talking about AI and we are domain experts, we can speak to it. But if you suddenly switch a topic to that I don't know of, how do I change the conversation? So you're starting to notice these elements as well. And that's coming from partly by the nature of the 20 minute challenge that people are getting quite clever on how to really converse and essentially mask some of the understanding defects if they exist. So some of this, this is not Alexa, the product. This is somewhat for fun, for research, for innovation and so on. I have a question sort of in this modern era, there's a lot of, if you look at Twitter and Facebook and so on, there's discourse, public discourse going on and some things that are a little bit too edgy, people get blocked and so on. I'm just out of curiosity, are people in this context pushing the limits? 
Is anyone using the F word? Is anyone sort of pushing back, sort of arguing, I guess I should say, as part of the dialogue to really draw people in? First of all, let me just back up a bit in terms of why we are doing this, right? So you said it's fun. I think fun is more the engaging part for customers. It is one of the most used skills as well in our skill store. But that apart, the real goal was essentially this: with a lot of AI research moving to industry, we felt that academia runs the risk of not having the same resources at its disposal that we have, which is lots of data, massive computing power, and clear ways to test these AI advances with real customer benefits. So we brought all three of these together in the Alexa Prize. That's why it's one of my favorite projects at Amazon. And with that, the secondary effect is, yes, it has become engaging for our customers as well. We're not there in terms of where we want it to be, right? But it's huge progress. But coming back to your question on how the conversations evolve: yes, there are some natural attributes of what you said in terms of argument and some amount of swearing. The way we take care of that is that there is a sensitive filter we have built that looks for keywords. It's more than keywords, actually; of course there's a keyword based part too, but these words can be very contextual, as you can see, and also the topic can be something where you don't want a conversation to happen, because this is a communal device as well. A lot of people use these devices. So we have put a lot of guardrails for the conversation to be more useful for advancing AI, and not so much these other issues you attributed to what's happening in the field as well. Right, so this is actually a serious opportunity. I didn't use the right word, fun. I think it's an open opportunity to do some of the best innovation in conversational agents in the world. Absolutely. Why just universities? Why just universities? Because as I said, I really felt... Young minds. Young minds, and also, if you think about the other aspect of where the whole industry is moving with AI, there's a dearth of talent given the demands. So you do want universities to have a clear place where they can invent and research and not fall behind such that they can't motivate students. Imagine if all grad students left to industry like us, or faculty members, which has happened too. So if you're passionate about a field where you feel industry and academia need to work well together, this is a great example and a great way for universities to participate. So what do you think it takes to build a system that wins the Alexa Prize? I think you have to start focusing on aspects of reasoning. Right now there are still more lookups of what intents customers are asking for, and responses to those, rather than really reasoning about the elements of the conversation. For instance, if the conversation is about games and it's about a recent sports event, there's so much context involved, and you have to understand the entities that are being mentioned so that the conversation is coherent, rather than suddenly just switching to knowing some fact about a sports entity and relaying that, rather than understanding the true context of the game.
Like if you just said, I learned this fun fact about Tom Brady, rather than really saying how he played the game the previous night, then the conversation is not really that intelligent. So you have to go to more reasoning elements of understanding the context of the dialogue and giving more appropriate responses, which tells you that we are still quite far, because a lot of times it's more facts being looked up and something that's close enough as an answer, but not really the answer. So that is where the research needs to go more: into actual true understanding and reasoning. And that's why I feel it's a great way to do it, because you have an engaged set of users working to help these AI advances happen in this case. You mentioned customers quite a bit, and there's a skill. What is the experience for the user that's helping? So just to clarify, as far as I understand, this skill is a standalone for the Alexa Prize. I mean, it's focused on the Alexa Prize. It's not you ordering certain things on Amazon, like checking the weather or playing Spotify, right? This is a separate skill. And so you're focused on helping that. I don't know, how do customers think of it? Are they having fun? Are they helping teach the system? What's the experience like? I think it's both, actually. And let me tell you how you invoke this skill. All you have to say is, Alexa, let's chat. And then the first time you say, Alexa, let's chat, it comes back with a clear message that you're interacting with one of those university social bots. So you know exactly how you're interacting, right? And that is why it's very transparent. You are being asked to help, right? And we have a lot of mechanisms where, when we are in the first phase, the feedback phase, we send a lot of emails to our customers, and then they know that the teams need a lot of interactions to improve the accuracy of the systems. So we know we have a lot of customers who really want to help these university bots, and they're conversing with them. And some are just having fun with just saying, Alexa, let's chat. And there's also some adversarial behavior to see how much you understand as a social bot. So I think we have a good, healthy mix of all three situations. So if we talk about solving the Alexa challenge, the Alexa Prize, what does the data set of really engaging, pleasant conversations look like? Because if we think of this as a supervised learning problem, I don't know if it has to be, but if it does, maybe you can comment on that. Do you think there needs to be a data set of what it means to be an engaging, successful, fulfilling conversation? I think that's part of the research question here. I think we at least got the first part right, which is to have a way for universities to build and test in a real world setting. Now you're asking about the next phase of questions, which we are also still asking, by the way: what does success look like from an optimization function perspective? You're asking, in terms of, we as researchers are used to having a great corpus of annotated data and then sort of tuning our algorithms on that, right? And fortunately or unfortunately, in this world of the Alexa Prize, that is not the way we are going after it. So you have to focus more on learning based on live feedback.
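To make that concrete, here is a minimal sketch, not Amazon's implementation, of how live signals such as post-conversation ratings and early exits could stand in for an annotated corpus: a simple bandit-style selector that learns which response strategy customers rate highest. The strategy names, the reward scaling, and the early-exit penalty are all illustrative assumptions.

```python
import random
from collections import defaultdict
from typing import Optional

# Hypothetical response strategies a social bot might choose between.
RESPONSE_STRATEGIES = ["retrieval", "template", "neural_generator"]

# Running reward statistics per strategy, learned only from live signals.
reward_sum = defaultdict(float)
reward_count = defaultdict(int)

def choose_strategy(epsilon: float = 0.1) -> str:
    """Epsilon-greedy pick among response strategies, based only on live feedback so far."""
    if random.random() < epsilon or not reward_count:
        return random.choice(RESPONSE_STRATEGIES)
    return max(RESPONSE_STRATEGIES,
               key=lambda s: reward_sum[s] / reward_count[s] if reward_count[s] else 0.0)

def record_feedback(strategy: str, rating: Optional[int], user_quit_early: bool) -> None:
    """Fold a post-conversation 1-5 rating and an early-exit signal into one scalar reward."""
    reward = 0.0
    if rating is not None:
        reward += (rating - 1) / 4.0   # normalize 1-5 onto 0-1
    if user_quit_early:
        reward -= 0.5                  # quitting mid-conversation counts against the strategy
    reward_sum[strategy] += reward
    reward_count[strategy] += 1

# One simulated conversation: pick a strategy, then log the live feedback for it.
strategy = choose_strategy()
record_feedback(strategy, rating=4, user_quit_early=False)
```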
That is another element that's unique, where just not to, I started with giving you how you ingress and experience this capability as a customer. What happens when you're done? So they ask you a simple question on a scale of one to five, how likely are you to interact with this social bot again? That is a good feedback and customers can also leave more open ended feedback. And I think partly that to me is one part of the question you're asking, which I'm saying is a mental model shift that as researchers also, you have to change your mindset that this is not a DARPA evaluation or NSF funded study and you have a nice corpus. This is where it's real world. You have real data. The scale is amazing and that's a beautiful thing. And then the customer, the user can quit the conversation at any time. Exactly, the user can, that is also a signal for how good you were at that point. So, and then on a scale one to five, one to three, do they say how likely are you or is it just a binary? One to five. One to five. Wow, okay, that's such a beautifully constructed challenge. Okay. You said the only way to make a smart assistant really smart is to give it eyes and let it explore the world. I'm not sure it might've been taken out of context, but can you comment on that? Can you elaborate on that idea? Is that I personally also find that idea super exciting from a social robotics, personal robotics perspective. Yeah, a lot of things do get taken out of context. This particular one was just as philosophical discussion we were having on terms of what does intelligence look like? And the context was in terms of learning, I think just we said we as humans are empowered with many different sensory abilities. I do believe that eyes are an important aspect of it in terms of if you think about how we as humans learn, it is quite complex and it's also not unimodal that you are fed a ton of text or audio and you just learn that way. No, you learn by experience, you learn by seeing, you're taught by humans and we are very efficient in how we learn. Machines on the contrary are very inefficient on how they learn, especially these AIs. I think the next wave of research is going to be with less data, not just less human, not just with less labeled data, but also with a lot of weak supervision and where you can increase the learning rate. I don't mean less data in terms of not having a lot of data to learn from that we are generating so much data, but it is more about from a aspect of how fast can you learn? So improving the quality of the data, the quality of data and the learning process. I think more on the learning process. I think we have to, we as humans learn with a lot of noisy data, right? And I think that's the part that I don't think should change. What should change is how we learn, right? So if you look at, you mentioned supervised learning, we have making transformative shifts from moving to more unsupervised, more weak supervision. Those are the key aspects of how to learn. And I think in that setting, I hope you agree with me that having other senses is very crucial in terms of how you learn. So absolutely. And from a machine learning perspective, which I hope we get a chance to talk to a few aspects that are fascinating there, but to stick on the point of sort of a body, an embodiment. So Alexa has a body. It has a very minimalistic, beautiful interface where there's a ring and so on. I mean, I'm not sure of all the flavors of the devices that Alexa lives on, but there's a minimalistic basic interface. 
And nevertheless, we humans, so I have a Roomba, I have all kinds of robots all over everywhere. So what do you think the Alexa of the future looks like if it begins to shift what his body looks like? Maybe beyond the Alexa, what do you think are the different devices in the home as they start to embody their intelligence more and more? What do you think that looks like? Philosophically, a future, what do you think that looks like? I think let's look at what's happening today. You mentioned, I think our devices as an Amazon devices, but I also wanted to point out Alexa is already integrated a lot of third party devices, which also come in lots of forms and shapes, some in robots, some in microwaves, some in appliances that you use in everyday life. So I think it's not just the shape Alexa takes in terms of form factors, but it's also where all it's available. And it's getting in cars, it's getting in different appliances in homes, even toothbrushes, right? So I think you have to think about it as not a physical assistant. It will be in some embodiment, as you said, we already have these nice devices, but I think it's also important to think of it, it is a virtual assistant. It is superhuman in the sense that it is in multiple places at the same time. So I think the actual embodiment in some sense, to me doesn't matter. I think you have to think of it as not as human like and more of what its capabilities are that derive a lot of benefit for customers and how there are different ways to delight it and delight customers and different experiences. And I think I'm a big fan of it not being just human like, it should be human like in certain situations. Alexa price social bot in terms of conversation is a great way to look at it, but there are other scenarios where human like, I think is underselling the abilities of this AI. So if I could trivialize what we're talking about. So if you look at the way Steve Jobs thought about the interaction with the device that Apple produced, there was a extreme focus on controlling the experience by making sure there's only this Apple produced devices. You see the voice of Alexa being taking all kinds of forms depending on what the customers want. And that means it could be anywhere from the microwave to a vacuum cleaner to the home and so on the voice is the essential element of the interaction. I think voice is an essence, it's not all, but it's a key aspect. I think to your question in terms of, you should be able to recognize Alexa and that's a huge problem. I think in terms of a huge scientific problem, I should say like, what are the traits? What makes it look like Alexa, especially in different settings and especially if it's primarily voice, what it is, but Alexa is not just voice either, right? I mean, we have devices with a screen. Now you're seeing just other behaviors of Alexa. So I think we're in very early stages of what that means and this will be an important topic for the following years. But I do believe that being able to recognize and tell when it's Alexa versus it's not is going to be important from an Alexa perspective. I'm not speaking for the entire AI community, but I think attribution and as we go into more of understanding who did what, that identity of the AI is crucial in the coming world. I think from the broad AI community perspective, that's also a fascinating problem. So basically if I close my eyes and listen to the voice, what would it take for me to recognize that this is Alexa? Exactly. 
Or at least the Alexa that I've come to know from my personal experience in my home through my interactions that come through. Yeah, and the Alexa here in the US is very different than Alexa in UK and the Alexa in India, even though they are all speaking English or the Australian version. So again, so now think about when you go into a different culture, a different community, but you travel there, what do you recognize Alexa? I think these are super hard questions actually. So there's a team that works on personality. So if we talk about those different flavors of what it means culturally speaking, India, UK, US, what does it mean to add? So the problem that we just stated, it's just fascinating, how do we make it purely recognizable that it's Alexa, assuming that the qualities of the voice are not sufficient? It's also the content of what is being said. How do we do that? How does the personality come into play? What's that research gonna look like? I mean, it's such a fascinating subject. We have some very fascinating folks who from both the UX background and human factors are looking at these aspects and these exact questions. But I'll definitely say it's not just how it sounds, the choice of words, the tone, not just, I mean, the voice identity of it, but the tone matters, the speed matters, how you speak, how you enunciate words, what choice of words are you using, how terse are you, or how lengthy in your explanations you are. All of these are factors. And you also, you mentioned something crucial that you may have personalized it, Alexa, to some extent in your homes or in the devices you are interacting with. So you, as your individual, how you prefer Alexa sounds can be different than how I prefer. And the amount of customizability you want to give is also a key debate we always have. But I do want to point out it's more than the voice actor that recorded and it sounds like that actor. It is more about the choices of words, the attributes of tonality, the volume in terms of how you raise your pitch and so forth. All of that matters. This is such a fascinating problem from a product perspective. I could see those debates just happening inside of the Alexa team of how much personalization do you do for the specific customer? Because you're taking a risk if you over personalize. Because you don't, if you create a personality for a million people, you can test that better. You can create a rich, fulfilling experience that will do well. But the more you personalize it, the less you can test it, the less you can know that it's a great experience. So how much personalization, what's the right balance? I think the right balance depends on the customer. Give them the control. So I'll say, I think the more control you give customers, the better it is for everyone. And I'll give you some key personalization features. I think we have a feature called Remember This, which is where you can tell Alexa to remember something. There you have an explicit sort of control in customer's hand because they have to say, Alexa, remember X, Y, Z. What kind of things would that be used for? So you can like you, I have stored my tire specs for my car because it's so hard to go and find and see what it is, right? When you're having some issues. I store my mileage plan numbers for all the frequent flyer ones where I'm sometimes just looking at it and it's not handy. So those are my own personal choices I've made for Alexa to remember something on my behalf, right? 
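As a rough illustration of the explicit, customer-controlled memory being described, here is a minimal sketch: facts are stored only when the customer asks, recalled by a naive keyword match, and deletable on request. The class, the parsing, and the storage are hypothetical and deliberately simplified.

```python
from typing import Dict, List

class ExplicitMemory:
    def __init__(self) -> None:
        # One list of remembered facts per customer; nothing is stored implicitly.
        self._facts: Dict[str, List[str]] = {}

    def remember(self, customer_id: str, fact: str) -> None:
        """Store a fact only when the customer explicitly asks for it."""
        self._facts.setdefault(customer_id, []).append(fact)

    def recall(self, customer_id: str, query: str) -> List[str]:
        """Naive keyword recall over the customer's own facts."""
        words = set(query.lower().split())
        return [f for f in self._facts.get(customer_id, [])
                if words & set(f.lower().split())]

    def forget_all(self, customer_id: str) -> None:
        """The customer stays in control: everything can be deleted on request."""
        self._facts.pop(customer_id, None)

memory = ExplicitMemory()
memory.remember("lex", "my tire spec is 245/40R18")
print(memory.recall("lex", "what is my tire spec"))  # -> ['my tire spec is 245/40R18']
```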
So again, I think the choice was to be explicit about how you provide that to a customer as a control. So I think these are the aspects of what you do. Think about where we can use speaker recognition capabilities: if you taught Alexa that you are Lex and this person in your household is person two, then you can personalize the experiences. Again, the CX, the customer experience patterns, are very clear and transparent about when a personalization action is happening. And then you have other ways, like explicit control right now through your app over your multiple service providers, let's say for music, which one is your preferred one. So when you say play Sting, depending on whether you prefer Spotify or Amazon Music or Apple Music, the decision is made where to play it from. So what's Alexa's backstory from her perspective? I remember just asking, as probably a lot of us have, just the basic questions about love and so on of Alexa, just to see what the answer would be. It feels like there's a little bit of a personality, but not too much. Does Alexa have a metaphysical presence in this human universe we live in, or is it something more ambiguous? Is there a past? Is there a birth? Is there a family, kind of an idea, even for joking purposes and so on? I think, well, it does tell you, I should double check this, but if you said, when were you born, I think we do respond. I need to double check that, but I'm pretty positive about it. I think you do, actually, because I think I've tested that. But that's like how I was born in your brand of champagne and whatever the year kind of thing, yeah. So in terms of the metaphysical, I think it's early. Does it have the historic knowledge about herself to be able to do that? Have we crossed that boundary? Not yet, right? We have thought about it quite a bit, but I wouldn't say that we have come to a clear decision in terms of what it should look like. But you can imagine, though, and I bring this back to the Alexa Prize social bot, that there you will start seeing some of that. These bots have their own identity, and in terms of that, you may find this is such a great research topic that some academic team may think of these problems and start solving them too. So let me ask a question. It's kind of difficult, I think, but it's fascinating to me because I'm fascinated with psychology. It feels that the more personality you have, the more dangerous it is from a customer perspective, if you want to create a product that's useful. By dangerous, I mean creating an experience that upsets me. And so how do you get that right? Because if you look at relationships, maybe I'm just a screwed up Russian, but if you look at human to human relationships, some of our deepest relationships have fights, have tension, have push and pull, have a little flavor in them. Do you want to have such flavor in an interaction with Alexa? How do you think about that? So there's one other common thing that you didn't say, but that we think of as paramount for any deep relationship. That's trust. Trust, yeah. So I think, if you have trust, every attribute you said, a fight, some tension, is all healthy. But what is sort of non-negotiable in this instance is trust. And I think the bar to earn customer trust for AI is very high, in some sense higher than for a human. It's not just about personal information or your data.
It's also about your actions on a daily basis. How trustworthy are you in terms of consistency, in terms of how accurate are you in understanding me? Like if you're talking to a person on the phone, if you have a problem with your, let's say your internet or something, if the person's not understanding, you lose trust right away. You don't want to talk to that person. That whole example gets amplified by a factor of 10, because when you're a human interacting with an AI, you have a certain expectation. Either you expect it to be very intelligent and then you get upset, why is it behaving this way? Or you expect it to be not so intelligent and when it surprises you, you're like, really, you're trying to be too smart? So I think we grapple with these hard questions as well. But I think the key is actions need to be trustworthy. From these AIs, not just about data protection, your personal information protection, but also from how accurately it accomplishes all commands or all interactions. Well, it's tough to hear because trust, you're absolutely right, but trust is such a high bar with AI systems because people, and I see this because I work with autonomous vehicles. I mean, the bar that's placed on AI system is unreasonably high. Yeah, that is going to be, I agree with you. And I think of it as it's a challenge and it's also keeps my job, right? So from that perspective, I totally, I think of it at both sides as a customer and as a researcher. I think as a researcher, yes, occasionally it will frustrate me that why is the bar so high for these AIs? And as a customer, then I say, absolutely, it has to be that high, right? So I think that's the trade off we have to balance, but it doesn't change the fundamentals. That trust has to be earned and the question then becomes is are we holding the AIs to a different bar in accuracy and mistakes than we hold humans? That's going to be a great societal questions for years to come, I think for us. Well, one of the questions that we grapple as a society now that I think about a lot, I think a lot of people in the AI think about a lot and Alexis taking on head on is privacy. The reality is us giving over data to any AI system can be used to enrich our lives in profound ways. So if basically any product that does anything awesome for you, the more data it has, the more awesome things it can do. And yet on the other side, people imagine the worst case possible scenario of what can you possibly do with that data? People, it's goes down to trust, as you said before. There's a fundamental distrust of, in certain groups of governments and so on. And depending on the government, depending on who's in power, depending on all these kinds of factors. And so here's Alexa in the middle of all of it in the home, trying to do good things for the customers. So how do you think about privacy in this context, the smart assistance in the home? How do you maintain, how do you earn trust? Absolutely. So as you said, trust is the key here. So you start with trust and then privacy is a key aspect of it. It has to be designed from very beginning about that. And we believe in two fundamental principles. One is transparency and second is control. 
So by transparency, I mean, when we built what is now called a smart speaker, the first Echo, we were quite judicious about making the right trade offs on customers' behalf, so that it is pretty clear when the audio is being sent to the cloud: the light ring comes on when it has heard you say the wake word, and then the streaming happens, right? So the light ring comes up, and we also put a physical mute button on it, just so if you didn't want it to be listening, even for the wake word, you turn the mute button on, and that disables the microphones. That's just the first decision, on essentially transparency and control. Then, even when we launched, we gave control into the hands of the customers: you can go and look at any of your individual utterances that are recorded and delete them anytime. And we've stayed true to that promise, right? And that is, again, a great instance of showing how you have the control. Then we made it even easier. You can say, like I said, delete what I said today. So that is putting even more control in your hands, using what's most convenient about this technology, which is voice. You delete it with your voice now. So these are the types of decisions we continually make. We just recently launched a feature for, you could say, if you wanted humans not to review your data, because you've mentioned supervised learning, right? In supervised learning, humans have to give some annotation. And that is now also a feature where, if you've selected that flag, your data will not be reviewed by a human. So these are the types of controls that we have to constantly offer to customers. So why do you think it bothers people so much? Everything you just said is really powerful: the control, the ability to delete. Because we have studies running here at MIT that collect huge amounts of data, and people consent and so on. The ability to delete that data is really empowering, and almost nobody ever asks to delete it, but the ability to have that control is really powerful. But still, there's this popular anecdotal evidence that people like to tell, that they and a friend were talking about something, I don't know, sweaters for cats, and all of a sudden they'll have advertisements for cat sweaters on Amazon. That's a popular anecdote, as if something is always listening. Can you explain that anecdote, that experience that people have? What's the psychology of that? What's that experience? And can you, you've answered it, but let me just ask: is Alexa listening? No, Alexa listens only for the wake word on the device. And the wake word is? Words like Alexa, Amazon, Echo, but you only choose one at a time. So you choose one, and it listens only for that on our devices. So that's first. From a listening perspective, we have to be very clear that it's just the wake word. So you said, why is there this anxiety, if you may? Yeah, exactly. It's because there's a lot of confusion about what it really listens to, right? And I think it's partly on us to keep educating our customers and the general media more in terms of what really happens. And we've done a lot of that, and our information pages are clear, but still, there's always a hunger for more information and clarity. And we'll constantly look at how best to communicate. If you go back and read everything, yes, it states exactly that.
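A minimal sketch of the gating just described, under the assumption that a local wake-word detector and a physical mute switch sit in front of any cloud streaming; the detector below is a stub and the frame format is invented for illustration.

```python
from typing import Iterable, List

def wake_word_detected(frame: bytes) -> bool:
    """Stub for the on-device wake-word detector; a real one is a trained acoustic model."""
    return frame == b"ALEXA"   # placeholder condition, for illustration only

def process_audio(frames: Iterable[bytes], muted: bool) -> List[bytes]:
    """Return only the audio that would ever leave the device."""
    streamed: List[bytes] = []
    if muted:
        return streamed        # physical mute switch: nothing is examined at all
    streaming = False
    for frame in frames:
        if not streaming and wake_word_detected(frame):
            streaming = True   # this is where the light ring would come on
            continue
        if streaming:
            streamed.append(frame)   # only post-wake-word audio is streamed to the cloud
    return streamed

# Ambient speech before the wake word never leaves the device.
print(process_audio([b"chatter", b"ALEXA", b"play jazz"], muted=False))  # [b'play jazz']
print(process_audio([b"chatter", b"ALEXA", b"play jazz"], muted=True))   # []
```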
And then people could still question it. And I think that's absolutely okay to question. What we have to make sure is that, because our fundamental philosophy is customer first, customer obsession is our leadership principle, as researchers we put ourselves in the shoes of the customer, and all decisions in Amazon are made with that. And trust has to be earned, and we have to keep earning the trust of our customers in this setting. And to your other point, on whether there is something showing up based on your conversations: no. I think the answer is that a lot of times when those experiences happen, you have to also know that, okay, it may be winter season and people are looking for sweaters, right? And it shows up on your amazon.com because it is popular. So there are many of these. You mentioned personality or personalization; it turns out we are not that unique either, right? So we as humans start thinking, oh, it must be because something was heard, and that's why this other thing showed up. The answer is no, probably it is just the season for sweaters. I'm not gonna ask you this question because people have so much paranoia. But let me just say from my perspective, I hope there's a day when a customer can ask Alexa to listen all the time, to improve the experience, because I personally don't see the negative. If you have the control and if you have the trust, there's no reason why it shouldn't be listening all the time to the conversations to learn more about you. Because ultimately, as long as you have control and trust, every piece of data you provide to the device that the device wants is going to be useful. And so to me, as a machine learning person, it worries me how sensitive people are about their data relative to how empowering it could be for the devices around them, how enriching it could be for their own life, to improve the product. So it's something I think about a lot, and obviously the Alexa team thinks about it a lot as well. I don't know if you wanna comment on that. Okay, let me ask it in the form of a question. Have you seen an evolution in the way people think about their private data over the previous several years, as we as a society get more and more comfortable with the benefits we get by sharing more data? First, let me answer that part, and then I want to go back to the other aspect you were mentioning. So in general, as a society, we are getting more comfortable. That doesn't mean that everyone is, and I think we have to respect that. I don't think one size fits all is always gonna be the answer, by definition. So I think that's something to keep in mind. Going back to your point on what more magical experiences can be launched in these kinds of AI settings: I think again, if you give the control, it's possible for certain parts of it. So we have a feature called Follow Up Mode where, if you turn it on, Alexa, after you've spoken to it, will open the mics again, thinking you will ask something again. Like if you're adding items to your shopping list or to do list, you're not done, you want to keep going. So in that setting, it's awesome that it opens the mic for you to say eggs and milk and then bread, right? So these are the kinds of things which you can empower. And then another feature we have is called Alexa Guard.
I said it only listens for the wake word, right? But let's say you leave your home and you want Alexa to listen for a couple of sound events, like a smoke alarm going off or someone breaking your glass, right? Just to keep your peace of mind. So you can say, Alexa, I'm on guard, or, I'm away, and then it can be listening for these sound events. And when you're home, you come out of that mode, right? So this is another one where we again gave controls into the hands of the user or the customer, to enable some experiences that are high utility and maybe even more delightful in certain settings, like Follow Up Mode and so forth. And again, the general principle is the same: control in the hands of the customer. So I know we kind of started with a lot of philosophy and a lot of interesting topics and we're just jumping all over the place, but really some of the fascinating things that the Alexa team and Amazon are doing are on the algorithm side, the data side, the technology, the deep learning, machine learning and so on. So can you give a brief history of Alexa from the perspective of just innovation, the algorithms, the data, of how it was born, how it came to be, how it's grown, where it is today? Yeah. In Amazon, everything starts with the customer, and we have a process called working backwards. For Alexa, and more specifically the product Echo, there was a working backwards document, essentially, that reflected what it would be. It started with a very simple vision statement, for instance, that morphed into a full fledged document along the way, changing into what all it could do, right? But the inspiration was the Star Trek computer. So when you think of it that way, everything is possible, but when you launch a product, you have to start somewhere. And when I joined, the product was already in conception, and we started working on the far field speech recognition, because that was the first thing to solve. By that we mean that you should be able to speak to the device from a distance. In those days, that wasn't common practice, and even in the previous research world I was in, it was considered an unsolvable problem then, in terms of whether you can converse from a distance. And here I'm still talking about the first part of the problem, where you get the attention of the device by saying what we call the wake word, which means the word Alexa has to be detected with very high accuracy, because it is a very common word. It has sound units that map to words like I like you, or Alec, Alex, right? So it's an undoubtedly hard problem to detect the right mentions of Alexa addressed to the device versus I like Alexa. So you have to pick up that signal when there's a lot of noise. Not only noise but a lot of conversation in the house, right? Remember, on the device, you're simply listening for the wake word, Alexa, and there are a lot of words being spoken in the house. How do you know it's Alexa and directed at Alexa? Because I could say, I love my Alexa, I hate my Alexa, I want Alexa to do this, and in all these three sentences I said Alexa, but I didn't want it to wake up. Can I just pause on that for a second? What would be your advice, which I should probably give to people in the introduction of this conversation, in terms of them turning off their Alexa device if they're listening to this podcast conversation out loud? Like, what's the probability that an Alexa device will go off, because we mentioned Alexa like a million times?
So it will, we have done a lot of different things where we can figure out that there is the device, the speech is coming from a human versus over the air. Also, I mean, in terms of like, also it is think about ads or so we have also launched a technology for watermarking kind of approaches in terms of filtering it out. But yes, if this kind of a podcast is happening, it's possible your device will wake up a few times. It's an unsolved problem, but it is definitely something we care very much about. But the idea is you wanna detect Alexa. Meant for the device. First of all, just even hearing Alexa versus I like something. I mean, that's a fascinating part. So that was the first relief. That's the first. The world's best detector of Alexa. Yeah, the world's best wake word detector in a far field setting, not like something where the phone is sitting on the table. This is like people have devices 40 feet away like in my house or 20 feet away and you still get an answer. So that was the first part. The next is, okay, you're speaking to the device. Of course, you're gonna issue many different requests. Some may be simple, some may be extremely hard, but it's a large vocabulary speech recognition problem essentially, where the audio is now not coming onto your phone or a handheld mic like this or a close talking mic, but it's from 20 feet away where if you're in a busy household, your son may be listening to music, your daughter may be running around with something and asking your mom something and so forth, right? So this is like a common household setting where the words you're speaking to Alexa need to be recognized with very high accuracy, right? Now we are still just in the recognition problem. We haven't yet come to the understanding one, right? And if I pause them, sorry, once again, what year was this? Is this before neural networks began to start to seriously prove themselves in the audio space? Yeah, this is around, so I joined in 2013 in April, right? So the early research and neural networks coming back and showing some promising results in speech recognition space had started happening, but it was very early. But we just now build on that on the very first thing we did when I joined with the team. And remember, it was a very much of a startup environment, which is great about Amazon. And we doubled down on deep learning right away. And we knew we'll have to improve accuracy fast. And because of that, we worked on, and the scale of data, once you have a device like this, if it is successful, will improve big time. Like you'll suddenly have large volumes of data to learn from to make the customer experience better. So how do you scale deep learning? So we did one of the first works in training with distributed GPUs and where the training time was linear in terms of the amount of data. So that was quite important work where it was algorithmic improvements as well as a lot of engineering improvements to be able to train on thousands and thousands of speech. And that was an important factor. So if you ask me like back in 2013 and 2014, when we launched Echo, the combination of large scale data, deep learning progress, near infinite GPUs we had available on AWS even then, was all came together for us to be able to solve the far field speech recognition to the extent it could be useful to the customers. It's still not solved. Like, I mean, it's not that we are perfect at recognizing speech, but we are great at it in terms of the settings that are in homes, right? 
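To make the wake-word spotting idea concrete, here is a minimal sketch, not the production detector: a tiny classifier slides over windows of acoustic features and fires only when its score clears a high threshold, since false wakes are costly. The window size, weights, and feature dimensions are placeholder assumptions; the real system is a neural model trained on far-field data.

```python
import math
from typing import List, Sequence

WINDOW = 3            # feature frames scored together (illustrative)
THRESHOLD = 0.8       # high threshold, since false wakes are costly
WEIGHTS = [0.9, 1.1, 0.8, -0.4]   # placeholder parameters: one per feature, plus a bias

def score_window(frames: Sequence[Sequence[float]]) -> float:
    """Average-pool the window's features, then return a logistic score for 'wake word present'."""
    pooled = [sum(col) / len(frames) for col in zip(*frames)]
    z = sum(w * x for w, x in zip(WEIGHTS[:-1], pooled)) + WEIGHTS[-1]
    return 1.0 / (1.0 + math.exp(-z))

def detect_wake_word(feature_stream: List[List[float]]) -> List[int]:
    """Return indices where a sliding window's score clears the wake-word threshold."""
    hits = []
    for i in range(len(feature_stream) - WINDOW + 1):
        if score_window(feature_stream[i:i + WINDOW]) >= THRESHOLD:
            hits.append(i)
    return hits

# Three made-up feature dimensions standing in for real acoustic features.
stream = [[0.1, 0.2, 0.1], [0.9, 1.0, 0.8], [1.0, 0.9, 0.9], [0.8, 1.0, 1.0], [0.1, 0.1, 0.2]]
print(detect_wake_word(stream))
```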
So, and that was important even in the early stages. So first of all, I'm trying to look back at that time. If I remember correctly, it seems like the task would be pretty daunting. We kind of take it for granted that it works now. Yes, you're right. So first of all, you mentioned startup. I wasn't familiar with how big the team was, because I know there are a lot of really smart people working on it, so now it's a very, very large team. How big was the team? How likely were you to fail in the eyes of everyone else? And ourselves? And yourself? I'll give you a very interesting anecdote on that. When I joined, the speech recognition team was six people. By my first meeting, we had hired a few more people, and it was 10 people. Nine out of 10 people thought it can't be done. Who was the one? The one was me. Actually, I should say one was semi optimistic, and eight were trying to convince me, let's go to the management and say, let's not work on this problem, let's work on some other problem, like telephony speech for customer service calls and so forth. But this was the kind of belief you must have. And I had experience with far field speech recognition, and my eyes lit up when I saw a problem like that, saying, okay, we have been in speech recognition always looking for that killer app, and this was a killer use case to bring something delightful into the hands of customers. So you mentioned the way you think of a product in the future: have a press release and an FAQ, and you think backwards. Did the team have the Echo in mind? So this far field speech recognition, actually putting a thing in the home that works, that you're able to interact with, was that the press release? What was it? Quite close, I would say. In terms of, as I said, the vision, or the inspiration, was the Star Trek computer. And from there, I can't divulge all the exact specifications, but one of the first things that was magical on Alexa was music. It brought me back to music, because my taste was still from when I was an undergrad. So I still listened to those songs, and it was too hard for me to be a music fan with a phone, right? And I hate things in my ears. So from that perspective, it was quite hard, and music was part of, at least, the documents I have seen, right? So from that perspective, I think, yes, in terms of how far we are from the original vision, I can't reveal that. But that's why I have a ton of fun at work, because every day we go in thinking, these are the new set of challenges to solve. Yeah, that's a great way to do great engineering, to think of the press release. I like that idea, actually. Maybe we'll talk about it a bit later, but it's just a super nice way to have a focus. I'll tell you this, you're a scientist, and a lot of my scientists have adopted that. They love it as a process now, because as scientists, you're trained to write great papers, but those come after you've done the research or you've proven it, and your PhD dissertation proposal is something that comes closest, or a DARPA proposal or an NSF proposal is the closest that comes to a press release. But that process is now ingrained in our scientists, which is delightful for me to see. You write the paper first and then make it happen. That's right. In fact, it's not about state of the art results.
You leave the results section open, where you have a thesis about here's what I expect, right? And here's what it will change, right? So I think it is a great thing. It works for researchers as well. Yeah. So, far field recognition. Yeah. What was the big leap? What were the breakthroughs, and what was that journey like to today? Yeah. As you said, first there was a lot of skepticism on whether far field speech recognition would ever work well enough, right? And what we first did was get a lot of training data in a far field setting, and that was extremely hard to get because none of it existed. So how do you collect data in a far field setup, right? With no customer base at this time. With no customer base, right? So that was the first innovation. And once we had that, the next thing was, okay, if you have the data, first of all, we haven't talked about what magical would mean in this kind of setting. What is good enough for customers, right? Since you've never done this before, what would be magical? So it wasn't just a research problem. You had to put some stakes in the ground, in terms of accuracy and customer experience features, saying, here's where I think it should get to. So you established a bar, and then, how do you measure progress towards it, given you have no customers right now? So from that perspective, first was the data without customers. Second was doubling down on deep learning as a way to learn. And I can just tell you that the combination of the two brought our error rates down by a factor of five, from where we were when I started, within six months of having that data. At that point, I got the conviction that this would work, right? Because that was magical in terms of when it started working. And that reached the magical bar? That came close to the magical bar, to the bar, right, that we felt would be where people would use it. That was critical, because you really have one chance at this. November 2014 is when we launched, and if it was below the bar, I don't think this category exists, if you don't meet the bar. Yeah, and just having looked at voice based interactions, like in the car or earlier systems, it's a source of huge frustration for people. In fact, we use voice based interaction for collecting data on subjects to measure frustration, as a training set for computer vision, for face data, so we can get a data set of frustrated people. The best way to get frustrated people is having them interact with a voice based system in the car. So that bar, I imagine, is pretty high. It was very high. And we talked about how errors are also perceived differently from AIs versus errors by humans. But we are not done with the problems we ended up having to solve to get it to launch. So, do you want the next one? Yeah, the next one. So, the next one was what I think of as multi domain natural language understanding. I wouldn't say it's easy, but in those days, solving understanding in one domain, a narrow domain, was doable. But for these multiple domains, like music, like information, other kinds of household productivity, alarms, timers, even though it wasn't as big as it is now in terms of the number of skills Alexa has, and the confusion space has grown by three orders of magnitude, it was still daunting even in those days. And again, no customer base yet. Again, no customer base. So, now you're looking at meaning understanding and intent understanding and taking actions on behalf of customers.
Based on their requests. And that is the next hard problem. Even if you have gotten the words recognized, how do you make sense of them? In those days, there was still a lot of emphasis on rule based systems for writing grammar patterns to understand the intent. But we had a statistical first approach even then, where for our language understanding we had, and even those starting days, an entity recognizer and an intent classifier, which was all trained statistically. In fact, we had to build the deterministic matching as a follow up to fix bugs that statistical models have. So, it was just a different mindset where we focused on data driven statistical understanding. It wins in the end if you have a huge data set. Yes, it is contingent on that. And that's why it came back to how do you get the data. Before customers, the fact that this is why data becomes crucial to get to the point that you have the understanding system built up. And notice that for you, we were talking about human machine dialogue, and even those early days, even it was very much transactional, do one thing, one shot utterances in great way. There was a lot of debate on how much should Alexa talk back in terms of if you misunderstood it. If you misunderstood you or you said play songs by the stones, and let's say it doesn't know early days, knowledge can be sparse, who are the stones? It's the Rolling Stones. And you don't want the match to be Stone Temple Pilots or Rolling Stones. So, you don't know which one it is. So, these kind of other signals, now there we had great assets from Amazon in terms of... UX, like what is it, what kind of... Yeah, how do you solve that problem? In terms of what we think of it as an entity resolution problem, right? So, because which one is it, right? I mean, even if you figured out the stones as an entity, you have to resolve it to whether it's the stones or the Stone Temple Pilots or some other stones. Maybe I misunderstood, is the resolution the job of the algorithm or is the job of UX communicating with the human to help the resolution? Well, there is both, right? It is, you want 90% or high 90s to be done without any further questioning or UX, right? So, but it's absolutely okay, just like as humans, we ask the question, I didn't understand you, Lex. It's fine for Alexa to occasionally say, I did not understand you, right? And that's an important way to learn. And I'll talk about where we have come with more self learning with these kind of feedback signals. But in those days, just solving the ability of understanding the intent and resolving to an action where action could be play a particular artist or a particular song was super hard. Again, the bar was high as we were talking about, right? So, while we launched it in sort of 13 big domains, I would say in terms of, we think of it as 13, the big skills we had, like music is a massive one when we launched it. And now we have 90,000 plus skills on Alexa. So, what are the big skills? Can you just go over them? Because the only thing I use it for is music, weather and shopping. So, we think of it as music information, right? So, weather is a part of information, right? So, when we launched, we didn't have smart home, but within, by smart home I mean, you connect your smart devices, you control them with voice. If you haven't done it, it's worth, it will change your life. Like turning on the lights and so on. Turning on your light to anything that's connected and has a, it's just that. What's your favorite smart device for you? 
My light. Light. And now you have the smart plug, and we also have this Echo Plug, which is... Oh yeah, you can plug in anything. You can plug in anything, and now you can turn that one on and off. I'll use this conversation as motivation to get one. Garage door, you can check the status of the garage door and things like that. And we have gone on to make Alexa more and more proactive, where it even has hunches now. Hunches? Like, it looks like you left your light on. Let's say you've gone to bed and you left the garage light on. So it will help you out in these settings, right? So that's smart devices. Information, smart devices. You said music. Yeah, so I don't remember everything we had, but alarms and timers were the big ones. The timers were very popular right away. Music also, you could play by song, artist, album, everything, and so that was a clear win in terms of the customer experience. So that's, again, language understanding. Now things have evolved, right? We want Alexa definitely to be more accurate, competent, trustworthy, based on how well it does these core things, but we have evolved in many different dimensions. First is what I think of as being more conversational for high utility, not just for chat, right? And there, at re:MARS this year, which is our AI conference, we launched what is called Alexa Conversations. That is providing the ability for developers to author multi turn experiences on Alexa with no code, essentially, in terms of the dialogue code. Initially it was like all these IVR systems: you have to fully author, if the customer says this, do that, right? So the whole dialogue flow is hand authored. With Alexa Conversations, the way it works is that you just provide sample interaction data with your service or your API. Let's say you're Atom Tickets, which provides a service for buying movie tickets. You provide a few examples of how your customers will interact with your APIs, and then the dialogue flow is automatically constructed using a recurrent neural network trained on that data. So that simplifies the developer experience. We just launched our preview for developers to try this capability out. And then the second part of it, which shows even more utility for customers, is that when you and I, or any customer, interact with Alexa, coming back to the initial part of our conversation, the goal is often unclear or unknown to the AI. If I say, Alexa, what movies are playing nearby, am I trying to just buy movie tickets? Or am I looking for movies just out of curiosity, whether the Avengers is still in theaters, or when it is playing? Maybe it's gone and I missed it, so I may watch it on Prime, right? Which happened to me. So from that perspective, now you're looking into what my goal is. And let's say I now complete the movie ticket purchase. Maybe I would like to get dinner nearby. So what is really the goal here? Is it a night out, or is it movies, as in just go watch a movie? The answer is, we don't know. So can Alexa now have the intelligence to figure out that this meta goal is really a night out, or at least, when you've completed the purchase of movie tickets from Atom Tickets or Fandango, or pick anyone, say to the customer: the next thing is, do you want to get an Uber to the theater, right? Or do you want to book a restaurant next to it?
And then not ask the same information over and over again: what time, how many people in your party, right? So this is where you shift the cognitive burden from the customer to the AI, where it anticipates your goal and takes the next best action to complete it. Now, that's the machine learning problem. Essentially, the way we solve this first instance, and we have a long way to go to make it scale to everything possible in the world, but at least for this situation, is that at every instance, Alexa is making the determination of whether it should stick with the experience with Atom Tickets, or, based on what you say, whether you have completed the interaction or said, no, get me an Uber now, shift context into another experience or skill or another service. So that's dynamic decision making. That's making Alexa, you can say, more conversational for the benefit of the customer, rather than simply completing transactions which are well thought through, where you as a customer have fully specified what you want to be accomplished, and it's accomplishing that. So it's kind of like what we do with pedestrians: intent modeling is predicting what your possible goals are and what the most likely goal is, and switching that depending on the things you say. So my question is, maybe it's a dumb question, but it would help a lot if Alexa remembered me, what I said previously. Right. Is it trying to use some memory of the customer? Yeah, it is using a lot of memory within that. Right now, not so much in terms of, okay, which restaurant do you prefer, right? That is more long term memory. But within the short term memory, within the session, it is remembering how many people are in your party. So if you said buy four tickets, it has made an implicit assumption that you need at least four seats at a restaurant, right? So these are the kinds of context it's preserving between these skills, but within that session. But you're asking the right question: for it to be more and more useful, it has to have more long term memory, and that's also an open question. And again, these are still early days. So for me, I mean, everybody's different, but I'm definitely not representative of the general population in the sense that I do the same thing every day. I eat the same things, I do everything the same, wear the same thing, clearly, this or the black shirt. So it's frustrating when Alexa doesn't get what I'm saying, because I have to correct her every time in the exact same way. This has to do with certain songs; she doesn't know certain weird songs I like. I've complained to Spotify about this, talked to the head of R&D at Spotify. Stairway to Heaven. I have to correct it every time. It doesn't play Led Zeppelin correctly; it plays a cover of Stairway to Heaven. So I'm... Next time it fails, feel free to send it to me, we'll take care of it. Okay, well. Because Led Zeppelin is one of my favorite bands, and it works for me, so I'm shocked it doesn't work for you. This is an official bug report. I'll make it public, I'll make everybody retweet it. We're gonna fix the Stairway to Heaven problem. Anyway, the point is, I'm pretty boring and do the same things, but I'm sure most people do the same set of things.
Do you see Alexa utilizing that in the future for improving the experience? Yes, and not only utilizing it, it's already doing some of it. We call it Alexa becoming more self learning. Alexa is now auto correcting millions and millions of utterances in the US without any human supervision involved. The way it does it is, let's take the example of a particular song that didn't work for you. What do you do next? Either it played the wrong song and you said, Alexa, no, that's not the song I want, or you said, Alexa, play that, and tried again. And that is a signal to Alexa that she may have done something wrong. And from that perspective, we can learn from that failure pattern, that action of song A being played when song B was requested. It's very common with station names, because with play NPR, you can have an N confused as an M. And for a certain accent like mine, people confuse my N and M all the time; because I have an Indian accent, they're confusable to humans, and they are for Alexa too. But it starts auto correcting, and we correct a lot of these automatically without a human looking at the failures. So one of the things that for me is missing in Alexa, and I don't know if I'm a representative customer, but every time I correct it, it would be nice to know that that made a difference. Yes. You know what I mean? Sort of an, I heard you. Some acknowledgement of that. We work a lot with Tesla, we study Autopilot and so on, and a large portion of the customers that use Tesla Autopilot feel like they're always teaching the system. They're almost excited by the possibility that they're teaching. I don't know if Alexa customers generally think of it as them teaching the system to improve it. And that's a really powerful thing. Again, I would say it's a spectrum. Some customers do think that way, and some would be annoyed by Alexa acknowledging that. So again, while there are certain patterns, not everyone is the same in this way. But we believe that customers helping Alexa is a tenet for us in terms of improving it. And some of the self learning is, again, fully unsupervised, right? There is no human in the loop and no labeling happening, and based on your actions as a customer, Alexa becomes smarter. Again, it's early days, but I think this whole area of teachable AI is gonna get bigger and bigger in the whole space, especially in the AI assistant space. So that's the second part, after the more conversational one I mentioned; this is more self learning. The third is more natural. And the way I think of more natural is, we talked about how Alexa sounds, and we have made a lot of advances in our text to speech by using, again, neural network technology for it to sound very humanlike. From the individual texture of the sound to the timing, the tonality, the tone, everything, the whole thing. I would think of it in terms of, there are a lot of controls in each of those places: the speed of the voice, the prosodic patterns, the actual smoothness of how it sounds. All of those are factored in, and we do a ton of listening tests to make sure. But naturalness, how it sounds, should be very natural. How it understands requests is also very important. We have 95,000 skills, and imagine that for many of these skills, you have to remember the skill name and say, Alexa, ask the Tide skill to tell me X.
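Here is a rough sketch of the feedback-driven correction described a moment ago, purely as an illustration: barge-in-and-retry pairs are counted across customers, and a rewrite is promoted once the same pattern shows up often enough, with no human labeling involved. The threshold and the example utterances are invented.

```python
from collections import Counter
from typing import Dict

rewrite_counts: Counter = Counter()
PROMOTION_THRESHOLD = 3   # assumed: how many independent observations before trusting a rewrite

def observe_retry(misheard: str, retried: str) -> None:
    """Record one barge-in/retry pair: the failed request followed by the corrected request."""
    rewrite_counts[(misheard.lower(), retried)] += 1

def promoted_rewrites() -> Dict[str, str]:
    """Rewrites seen often enough to apply automatically, with no human in the loop."""
    return {src: dst for (src, dst), n in rewrite_counts.items() if n >= PROMOTION_THRESHOLD}

def apply_rewrites(utterance: str) -> str:
    """Silently correct a request if a promoted rewrite exists for it."""
    return promoted_rewrites().get(utterance.lower(), utterance)

# Invented example of a recurring misrecognition being corrected by retries.
for _ in range(3):
    observe_retry("play empire", "play NPR")

print(apply_rewrites("Play Empire"))   # -> play NPR, once the pattern is frequent enough
```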
Now, if you have to remember the skill name, that means the discovery and the interaction is unnatural. And we are trying to solve that by what we think of as, again, you don't have to have the app metaphor here. These are not individual apps, right? Even though they're, so you're not sort of opening one at a time and interacting. So it should be seamless because it's voice. And when it's voice, you have to be able to understand these requests independent of the specificity, like a skill name. And to do that, what we have done is again, built a deep learning based capability where we shortlist a bunch of skills when you say, Alexa, get me a car. And then we figure it out, okay, it's meant for an Uber skill versus a Lyft skill, or based on your preferences. And then you can rank the responses from the skill and then choose the best response for the customer. So that's on the more natural side. Other examples of more natural are like, we were talking about lists, for instance, and you don't wanna say, Alexa, add milk, Alexa, add eggs, Alexa, add cookies. No, Alexa, add cookies, milk, and eggs, and do that in one shot, right? So that works, that helps with the naturalness. We talked about memory, like if you said, you can say, Alexa, remember I have to go to mom's house, or you may have entered a calendar event through your calendar that's linked to Alexa. You don't wanna remember whether it's in my calendar or did I tell you to remember something or some other reminder, right? So you have to now, independent of how customers create these events, it should just say, Alexa, when do I have to go to mom's house? And it tells you when you have to go to mom's house. Now that's a fascinating problem. Who's that problem on? So there's people who create skills. Who's tasked with integrating all of that knowledge together so the skills become seamless? Is it the creators of the skills, or is it a problem for the infrastructure that Alexa provides? It's both. I think the large problem in terms of making sure your skill quality is high, that has to be done by our tools, because it's just, so these skills, just to put the context, they are built through Alexa Skills Kit, which is a self serve way of building an experience on Alexa. This is like any developer in the world could go to Alexa Skills Kit and build an experience on Alexa. Like if you're a Domino's, you can build a Domino's skill, for instance, that does pizza ordering. When you have authored that, you do want to now, if people say, Alexa, open Domino's or Alexa, ask Domino's to get a particular type of pizza, that will work, but the discovery is hard. You can't just say, Alexa, get me a pizza. And then Alexa figures out what to do. That latter part is definitely our responsibility in terms of when the request is not fully specific, how do you figure out what's the best skill or a service that can fulfill the customer's request? And it can keep evolving. Imagine going to the situation I said, which was the night out planning, that the goal could be more than that individual request that came up. A pizza ordering could mean a night in, where you're having an event with your kids in the house, and you're, so this is, welcome to the world of conversational AI. This is super exciting because it's not the academic problem of NLP, of natural language processing, understanding, dialogue. This is like real world. And the stakes are high in the sense that customers get frustrated quickly, people get frustrated quickly.
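The shortlist-and-rank idea described a bit earlier lends itself to a small sketch. This is not Amazon's actual system: the skill catalog, the toy word-hashing "encoder", and the preference-based ranker below are all invented, only the two-stage shape (retrieve candidate skills for an unnamed request, then rank and pick one) follows the description above.

```python
# Illustrative two-stage skill router: shortlist by similarity, then rank.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real sentence encoder; hashes words into a small vector.
    Collisions and run-to-run variation don't matter for this illustration."""
    v = np.zeros(64)
    for w in text.lower().split():
        v[hash(w) % 64] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

# Hypothetical skill catalog: name -> short description used for shortlisting.
SKILLS = {
    "ride_hail_a": "book a car ride taxi pickup",
    "ride_hail_b": "request a car ride driver",
    "car_shopping": "buy a new car dealership prices",
    "weather": "weather forecast temperature rain",
}

def shortlist(utterance: str, k: int = 2):
    """Stage 1: retrieve the k skills whose descriptions best match the request."""
    u = embed(utterance)
    scored = [(float(u @ embed(desc)), name) for name, desc in SKILLS.items()]
    return [name for _, name in sorted(scored, reverse=True)[:k]]

def rank(candidates, user_prefs):
    """Stage 2: score each shortlisted skill's candidate response (stubbed here)
    using the user's past preference, and pick the best one."""
    best = max(candidates, key=lambda s: user_prefs.get(s, 0.0))
    return best, f"Okay, asking {best} to send a car."

cands = shortlist("get me a car")
skill, reply = rank(cands, user_prefs={"ride_hail_a": 0.9, "ride_hail_b": 0.4})
print(cands, "->", skill, "|", reply)
```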
So you have to get it right, you have to get that interaction right. So it's, I love it. But so from that perspective, what are the challenges today? What are the problems that really need to be solved in the next few years? What's the focus? First and foremost, as I mentioned, that getting the basics right is still true. Basically, even the one shot requests, which we think of as transactional requests, need to work magically, no question about that. If it doesn't turn your light on and off, you'll be super frustrated. Even if I can complete the night out for you and not do that, that is unacceptable as a customer, right? So that you have to get the foundational understanding going very well. The second aspect, when I said more conversational, is, as you imagine, more about reasoning. It is really about figuring out what the latent goal is of the customer based on the information I have now and the history, what's the next best thing to do. So that's a complete reasoning and decision making problem. Just like your self driving car, but there the goal is still more finite. Here it evolves. The environment is super hard in self driving and the cost of a mistake is huge there, but there are certain similarities. But if you think about how many decisions Alexa is making or evaluating at any given time, it's a huge hypothesis space. And we've only talked so far about what I think of as reactive decisions, in terms of you asked for something and Alexa is reacting to it. If you bring in the proactive part, which is Alexa having hunches, then at any given instance it's really a decision at any given point based on the information. Alexa has to determine what's the best thing it needs to do. So this is the ultimate AI problem, about decisions based on the information you have. Do you think, just from my perspective, I work a lot with sensing of the human face. Do you think, and we touched on this topic a little bit earlier, but do you think it'll be a day soon when Alexa can also look at you to help improve the quality of the hunch it has, or at least detect frustration or detect, improve the quality of its perception of what you're trying to do? I mean, let me again bring back to what it already does. We talked about how, based on you barging in over Alexa, clearly it's a very high probability it must have done something wrong. That's why you barged in. The next extension of whether frustration is a signal or not, of course, is a natural thought in terms of how that should be a signal to it. You can get that from voice. You can get it from voice, but it's very hard. Like, I mean, frustration as a signal historically, if you think about emotions of different kinds, there's a whole field of affective computing, something that MIT has also done a lot of research in, is super hard. And you are now talking about a far field device, as in you're talking at a distance in a noisy environment. And in that environment, it needs to have a good sense for your emotions. This is a very, very hard problem. Very hard problem, but you haven't shied away from hard problems. So, deep learning has been at the core of a lot of this technology. Are you optimistic about the current deep learning approaches to solving the hardest aspects of what we're talking about? Or do you think there will come a time where new ideas need to further, if we look at reasoning, so OpenAI, DeepMind, a lot of folks are now starting to work in reasoning, trying to see how we can make neural networks reason.
Do you see that new approaches need to be invented to take the next big leap? Absolutely, I think there has to be a lot more investment. And I think in many different ways, and there are these, I would say, nuggets of research forming in a good way, like learning with less data or like zero shot learning, one shot learning. And the active learning stuff you've talked about is incredible stuff. So, transfer learning is also super critical, especially when you're thinking about applying knowledge from one task to another, or one language to another, right? It's really ripe. So, these are great pieces. Deep learning has been useful too. And now we are sort of marrying deep learning with transfer learning and active learning. Of course, that's more straightforward in terms of applying deep learning in an active learning setup. But I do think in terms of now looking into more reasoning based approaches is going to be key for our next wave of the technology. But there is good news. The good news is that I think for keeping on to delight customers, that a lot of it can be done by prediction tasks. So, we haven't exhausted that. So, we don't need to give up on the deep learning approaches for that. So, that's just, I wanted to sort of point that out. Creating a rich, fulfilling, amazing experience that makes Amazon a lot of money and everybody a lot of money because it does awesome things, deep learning is enough, is the point. I don't think, I wouldn't say deep learning is enough. I think for the purposes of Alexa accomplishing the task for customers, I'm saying there are still a lot of things we can do with prediction based approaches that do not reason. I'm not saying that and we haven't exhausted those. But for the kind of high utility experiences that I'm personally passionate about of what Alexa needs to do, reasoning has to be solved to the same extent as you can think of natural language understanding and speech recognition, to the extent of understanding intents, how accurate it has become. But reasoning, we are in very, very early days. Let me ask it another way. How hard of a problem do you think that is? Hardest of them. I would say hardest of them because again, the hypothesis space is really, really large. And when you go back in time, like you were saying, I wanna, I want Alexa to remember more things, that once you go beyond a session of interaction, which is, by session, I mean a time span, which is today, versus remembering which restaurant I like. And then when I'm planning a night out to say, do you wanna go to the same restaurant? Now you've upped the stakes big time. And this is where the reasoning dimension also goes way, way bigger. So you think the space, we'll be elaborating that a little bit, just philosophically speaking, do you think when you reason about trying to model what the goal of a person is in the context of interacting with Alexa, you think that space is huge? It's huge, absolutely huge. Do you think, so like another sort of devil's advocate would be that we human beings are really simple and we all want like just a small set of things. And so do you think it's possible? Cause we're not talking about a fulfilling general conversation. Perhaps actually the Alexa Prize is a little bit after that. Creating a customer, like there's so many of the interactions, it feels like are clustered in groups that don't require general reasoning.
I think you're right in terms of the head of the distribution of all the possible things customers may wanna accomplish. But the tail is long and it's diverse, right? So from that. There's many, many long tails. So from that perspective, I think you have to solve that problem otherwise, and everyone's very different. Like, I mean, we see this already in terms of the skills, right? I mean, if you're an average surfer, which I am not, right? But somebody is asking Alexa about surfing conditions, right? And there's a skill that is there for them to get to, right? That tells you that the tail is massive. Like in terms of like what kind of skills people have created, it's humongous in terms of it. And which means there are these diverse needs. And when you start looking at the combinations of these, right? Even if you had pairs of skills and 90,000 choose two, it's still a big set of combinations. So I'm saying there's a huge to do here now. And I think customers are, you know, wonderfully frustrated with things. And they have to keep getting to do better things for them. So. And they're not known to be super patient. So you have to. Do it fast. You have to do it fast. So you've mentioned the idea of a press release, the research and development, Amazon Alexa and Amazon general, you kind of think of what the future product will look like. And you kind of make it happen. You work backwards. So can you draft for me, you probably already have one, but can you make up one for 10, 20, 30, 40 years out that you see the Alexa team putting out just in broad strokes, something that you dream about? I think let's start with the five years first, right? So, and I'll get to the 40 years too. Cause I'm pretty sure you have a real five year one. That's why I didn't want to, but yeah, in broad strokes, let's start with five years. I think the five year is where, I mean, I think of in these spaces, it's hard, especially if you're in the thick of things to think beyond the five year space, because a lot of things change, right? I mean, if you ask me five years back, will Alexa will be here? I wouldn't have, I think it has surpassed my imagination of that time, right? So I think from the next five years perspective, from a AI perspective, what we're gonna see is that notion, which you said goal oriented dialogues and open domain like Alexa prize. I think that bridge is gonna get closed. They won't be different. And I'll give you why that's the case. You mentioned shopping. How do you shop? Do you shop in one shot? Sure, your double A batteries, paper towels. Yes, how long does it take for you to buy a camera? You do ton of research, then you make a decision. So is that a goal oriented dialogue when somebody says, Alexa, find me a camera? Is it simply inquisitiveness, right? So even in the something that you think of it as shopping, which you said you yourself use a lot of, if you go beyond where it's reorders or items where you sort of are not brand conscious and so forth. So that was just in shopping. Just to comment quickly, I've never bought anything through Alexa that I haven't bought before on Amazon on the desktop after I clicked in a bunch of read a bunch of reviews, that kind of stuff. So it's repurchase. 
So now you think in, even for something that you felt like is a finite goal, I think the space is huge because even products, the attributes are many, and you wanna look at reviews, some on Amazon, some outside, some you wanna look at what CNET is saying or another consumer forum is saying about even a product for instance, right? So that's just shopping where you could argue the ultimate goal is sort of known. And we haven't talked about Alexa, what's the weather in Cape Cod this weekend, right? So why am I asking that weather question, right? So I think of it as how do you complete goals with minimum steps for our customers, right? And when you think of it that way, the distinction between goal oriented and conversations for open domain say goes away. I may wanna know what happened in the presidential debate, right? And is it I'm seeking just information or I'm looking at who's winning the debates, right? So these are all quite hard problems. So even the five year horizon problem, I'm like, I sure hope we'll solve these. And you're optimistic because that's a hard problem. Which part? The reasoning enough to be able to help explore complex goals that are beyond something simplistic. That feels like it could be, well, five years is a nice. Is a nice bar for it, right? I think you will, it's a nice ambition and do we have press releases for that? Absolutely, can I tell you what specifically the roadmap will be? No, right? And what, and will we solve all of it in the five year space? No, this is, we'll work on this forever actually. This is the hardest of the AI problems and I don't see that being solved even in a 40 year horizon because even if you limit to the human intelligence, we know we are quite far from that. In fact, every aspects of our sensing to neural processing, to how brain stores information and how it processes it, we don't yet know how to represent knowledge, right? So we are still in those early stages. So I wanted to start, that's why at the five year, because the five year success would look like that in solving these complex goals. And the 40 year would be where it's just natural to talk to these in terms of more of these complex goals. Right now, we've already come to the point where these transactions you mentioned of asking for weather or reordering something or listening to your favorite tune, it's natural for you to ask Alexa. It's now unnatural to pick up your phone, right? And that I think is the first five year transformation. The next five year transformation would be, okay, I can plan my weekend with Alexa or I can plan my next meal with Alexa or my next night out with seamless effort. So just to pause and look back at the big picture of it all. It's a, you're a part of a large team that's creating a system that's in the home that's not human, that gets to interact with human beings. So we human beings, we these descendants of apes have created an artificial intelligence system that's able to have conversations. I mean, that to me, the two most transformative robots of this century, I think will be autonomous vehicles, but they're a little bit transformative in a more boring way. It's like a tool. I think conversational agents in the home is like an experience. How does that make you feel? That you're at the center of creating that? Do you sit back in awe sometimes? What is your feeling about the whole mess of it? Can you even believe that we're able to create something like this? I think it's a privilege. I'm so fortunate like where I ended up, right? 
And it's been a long journey. Like I've been in this space for a long time in Cambridge, right, and it's so heartwarming to see the kind of adoption conversational agents are having now. Five years back, it was almost like, should I move out of this because we are unable to find this killer application that customers would love, that would not simply be a good to have thing in research labs. And it's so fulfilling to see it make a difference to millions and billions of people worldwide. The good thing is that it's still very early. So I have another 20 years of job security doing what I love. Like, so I think from that perspective, I tell every researcher that joins or every member of my team, that this is a unique privilege. Like I think, and we have, and I would say not just launching Alexa in 2014, which was first of its kind. Along the way we have, when we launched Alexa Skills Kit, it became democratizing AI. Before that, there was no good evidence of an SDK for speech and language. Now we are coming to this where you and I are having this conversation where I'm not saying, oh, Lex, planning a night out with an AI agent, impossible. I'm saying it's in the realm of possibility, and not only possibility, we'll be launching this, right? So some elements of that, it will keep getting better. We know that is a universal truth. Once you have these kinds of agents out there being used, they get better for your customers. And I think that's where, I think the amount of research topics we are throwing out at our budding researchers is just gonna be exponentially hard. And the great thing is you can now get immense satisfaction by having customers use it, not just a paper in NeurIPS or another conference. I think everyone, myself included, is deeply excited about that future. So I don't think there's a better place to end, Rohit. Thank you so much for talking to us. Thank you so much. This was fun. Thank you, same here. Thanks for listening to this conversation with Rohit Prasad. And thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast, you'll get $10 and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or connect with me on Twitter. And now let me leave you with some words of wisdom from the great Alan Turing. Sometimes it is the people no one can imagine anything of who do the things no one can imagine. Thank you for listening and hope to see you next time.
Rohit Prasad: Amazon Alexa and Conversational AI | Lex Fridman Podcast #57
The following is a conversation with Michael Stevens, the creator of Vsauce, one of the most popular educational YouTube channels in the world with over 15 million subscribers and over 1.7 billion views. His videos often ask and answer questions that are both profound and entertaining, spanning topics from physics to psychology. Popular questions include, what if everyone jumped at once? Or what if the sun disappeared? Or why are things creepy? Or what if the earth stopped spinning? As part of his channel, he created three seasons of Mind Field, a series that explored human behavior. His curiosity and passion are contagious and inspiring to millions of people. And so as an educator, his impact and contribution to the world is truly immeasurable. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter, at Lex Fridman, spelled F R I D M A N. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode and never any ads in the middle that break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and Lego competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you get Cash App from the App Store, Google Play, and use code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now here's my conversation with Michael Stevens. One of your deeper interests is psychology, understanding human behavior. You've pointed out how messy studying human behavior is and that it's far from the scientific rigor of something like physics, for example. How do you think we can take psychology from where it's been in the 20th century to something more like what the physicists, theoretical physicists are doing, something precise, something rigorous? Well, we could do it by finding the physical foundations of psychology, right? If all of our emotions and moods and feelings and behaviors are the result of mechanical behaviors of atoms and molecules in our brains, then can we find correlations? Perhaps like chaos makes that really difficult and the uncertainty principle and all these things. That we can't know the position and velocity of every single quantum state in a brain, probably. But I think that if we can get to that point with psychology, then we can start to think about consciousness in a physical and mathematical way. When we ask questions like, well, what is self reference? How can you think about yourself thinking? What are some mathematical structures that could bring that about?
There's ideas of, in terms of consciousness and breaking it down into physics, there's ideas of panpsychism where people believe that whatever consciousness is, is a fundamental part of reality. It's almost like a physics law. Do you think, what are your views on consciousness? Do you think it is this deep part of reality or is it something that's deeply human and constructed by us humans? Starting nice and light and easy. Nothing I ask you today has an actually proven answer. So we're just hypothesizing. So yeah, I mean, I should clarify, this is all speculation and I'm not an expert in any of these topics and I'm not God, but I think that consciousness is probably something that can be fully explained within the laws of physics. I think that our bodies and brains and the universe, at the quantum level, are so rich and complex. I'd be surprised if we couldn't find room for consciousness there. And why should we be conscious? Why are we aware of ourselves? That is a very strange and interesting and important question. And I think for the next few thousand years, we're going to have to believe in answers purely on faith. But my guess is that we will find that, within the configuration space of possible arrangements of the universe, there are some that contain memories of others. Literally, Julian Barbour calls them time capsule states where you're like, yeah, not only do I have a scratch on my arm, but this state of the universe also contains a memory in my head of being scratched by my cat three days ago. And for some reason, those kinds of states of the universe are more plentiful or more likely. When you say those states, the ones that contain memories of its past, or ones that contain memories of its past and have degrees of consciousness? Just the first part, because I think the consciousness then emerges from the fact that a state of the universe that contains fragments or memories of other states is one where you're going to feel like there's time. You're going to feel like, yeah, things happened in the past. And I don't know what'll happen in the future because these states don't contain information about the future. For some reason, those kinds of states are either more common, more plentiful, or you could use the anthropic principle and just say, well, they're extremely rare, but until you are in one, or if you are in one, then you can ask questions, like you're asking me on this podcast. Why questions? Yeah, it's like, why are we conscious? Well, because if we weren't, we wouldn't be asking why we were. You've kind of implied that you have a sense, again, hypothesis, theorizing that the universe is deterministic. What are your thoughts about free will? Do you think of the universe as deterministic in a sense that it's unrolling a particular, like there's a, it's operating under a specific set of physical laws. And when you set the initial conditions, it will unroll in the exact same way in our particular line of the universe every time. That is a very useful way to think about the universe. It's done us well. It's brought us to the moon. It's brought us to where we are today, right? I would not say that I believe in determinism in that kind of an absolute form, or actually I just don't care. Maybe it's true, but I'm not gonna live my life like it is. What in your sense, cause you've studied kind of how we humans think of the world. What in your view is the difference between our perception, like how we think the world is and reality.
Do you think there's a huge gap there? Like we delude ourselves that the whole thing is an illusion. Just everything about human psychology, the way we see things and how things actually are. All the things you've studied, what's your sense? How big is the gap between reality and perception? Well, again, purely speculative. I think that we will never know the answer. We cannot know the answer. There is no experiment to find an answer to that question. Everything we experience is an event in our brain. When I look at a cat, I'm not even, I can't prove that there's a cat there. All I am experiencing is the perception of a cat inside my own brain. I am only a witness to the events of my mind. I think it is very useful to infer that if I witness the event of cat in my head, it's because I'm looking at a cat that is literally there and it has its own feelings and motivations and should be pet and given food and water and love. I think that's the way you should live your life. But whether or not we live in a simulation, I'm a brain in a vat, I don't know. Do you care? I don't really. Well, I care because it's a fascinating question. And it's a fantastic way to get people excited about all kinds of topics, physics, psychology, consciousness, philosophy. But at the end of the day, what would the difference be? If you... The cat needs to be fed at the end of the day, otherwise it'll be a dead cat. Right, but if it's not even a real cat, then it's just like a video game cat. And right, so what's the difference between killing a digital cat in a video game because of neglect versus a real cat? It seems very different to us psychologically. Like I don't really feel bad about, oh my gosh, I forgot to feed my Tamagotchi, right? But I would feel terrible if I forgot to feed my actual cats. So can you just touch on the topic of simulation? Do you find this thought experiment that we're living in a simulation useful, inspiring or constructive in any kind of way? Do you think it's ridiculous? Do you think it could be true? Or is it just a useful thought experiment? I think it is extremely useful as a thought experiment because it makes sense to everyone, especially as we see virtual reality and computer games getting more and more complex. You're not talking to an audience in like Newton's time where you're like, imagine a clock that it has mechanics in it that are so complex that it can create love. And everyone's like, no. But today you really start to feel, man, at what point is this little robot friend of mine gonna be like someone I don't want to cancel plans with? And so it's a great, the thought experiment of do we live in a simulation? Am I a brain in a vat that is just being given electrical impulses from some nefarious other beings so that I believe that I live on earth and that I have a body and all of this? And the fact that you can't prove it either way is a fantastic way to introduce people to some of the deepest questions. So you mentioned a little buddy that you would want to cancel an appointment with. So that's a lot of our conversations. That's what my research is, is artificial intelligence. And I apologize, but you're such a fun person to ask these big questions with. Well, I hope I can give some answers that are interesting. Well, because of you've sharpened your brain's ability to explore some of the most, some of the questions that many scientists are actually afraid of even touching, which is fascinating. 
I think you're in that sense ultimately a great scientist through this process of sharpening your brain. Well, I don't know if I am a scientist. I think science is a way of knowing and there are a lot of questions I investigate that are not scientific questions. On like mind field, we have definitely done scientific experiments and studies that had hypotheses and all of that, but not to be too like precious about what does the word science mean? But I think I would just describe myself as curious and I hope that that curiosity is contagious. So to you, the scientific method is deeply connected to science because your curiosity took you to asking questions. To me, asking a good question, even if you feel, society feels that it's not a question within the reach of science currently. To me, asking the question is the biggest step of the scientific process. The scientific method is the second part and that may be what traditionally is called science, but to me, asking the questions, being brave enough to ask the questions, being curious and not constrained by what you're supposed to think is just true, what it means to be a scientist to me. It's certainly a huge part of what it means to be a human. If I were to say, you know what? I don't believe in forces. I think that when I push on a massive object, a ghost leaves my body and enters the object I'm pushing and these ghosts happen to just get really lazy when they're around massive things and that's why F equals MA. Oh, and by the way, the laziness of the ghost is in proportion to the mass of the object. So boom, prove me wrong. Every experiment, well, you can never find the ghost. And so none of that theory is scientific, but once I start saying, can I see the ghost? Why should there be a ghost? And if there aren't ghosts, what might I expect? And I start to do different tests to see, is this falsifiable? Are there things that should happen if there are ghosts or are there things that shouldn't happen? And do they, you know, what do I observe? Now I'm thinking scientifically. I don't think of science as, wow, a picture of a black hole. That's just a photograph. That's an image. That's data. That's a sensory and perception experience. Science is how we got that and how we understand it and how we believe in it and how we reduce our uncertainty around what it means. But I would say I'm deeply within the scientific community and I'm sometimes disheartened by the elitism of the thinking, sort of not allowing yourself to think outside the box. So allowing the possibility of going against the conventions of science, I think is a beautiful part of some of the greatest scientists in history. I don't know, I'm impressed by scientists every day and revolutions in our knowledge of the world occur only under very special circumstances. It is very scary to challenge conventional thinking and risky because let's go back to elitism and ego, right? If you just say, you know what? I believe in the spirits of my body and all forces are actually created by invisible creatures that transfer themselves between objects. If you ridicule every other theory and say that you're correct, then ego gets involved and you just don't go anywhere. But fundamentally the question of well, what is a force is incredibly important. We need to have that conversation, but it needs to be done in this very political way of like, let's be respectful of everyone and let's realize that we're all learning together and not shutting out other people. 
And so when you look at a lot of revolutionary ideas, they were not accepted right away. And, you know, Galileo had a couple of problems with the authorities and later thinkers, Descartes, was like, all right, look, I kind of agree with Galileo, but I'm gonna have to not say that. I'll have to create and invent and write different things that keep me from being in trouble, but we still slowly made progress. Revolutions are difficult in all forms and certainly in science. Before we get to AI, on topic of revolutionary ideas, let me ask on a Reddit AMA, you said that is the earth flat is one of the favorite questions you've ever answered, speaking of revolutionary ideas. So your video on that, people should definitely watch, is really fascinating. Can you elaborate why you enjoyed answering this question so much? Yeah, well, it's a long story. I remember a long time ago, I was living in New York at the time, so it had to have been like 2009 or something. I visited the Flat Earth forums and this was before the Flat Earth theories became as sort of mainstream as they are. Sorry to ask the dumb question, forums, online forums. Yeah, the Flat Earth Society, I don't know if it's.com or.org, but I went there and I was reading their ideas and how they responded to typical criticisms of, well, the earth isn't flat because what about this? And I could not tell, and I mentioned this in my video, I couldn't tell how many of these community members actually believe the earth was flat or we're just trolling. And I realized that the fascinating thing is, how do we know anything? And what makes for a good belief versus a maybe not so tenable or good belief? And so that's really what my video about earth being flat is about. It's about, look, there are a lot of reasons that the earth is probably not flat, but a Flat Earth believer can respond to every single one of them, but it's all in an ad hoc way. And all of these, all of their rebuttals aren't necessarily gonna form a cohesive noncontradictory whole. And I believe that's the episode where I talk about Occam's razor and Newton's flaming laser sword. And then I say, well, you know what, wait a second. We know that space contracts as you move. And so to a particle moving near the speed of light towards earth, earth would be flattened in the direction of that particles travel. So to them, earth is flat. Like we need to be really generous to even wild ideas because they're all thinking, they're all the communication of ideas. And what else can it mean to be a human? Yeah, and I think I'm a huge fan of the Flat Earth theory, quote unquote, in the sense that to me it feels harmless to explore some of the questions of what it means to believe something, what it means to explore the edge of science and so on. Cause it's a harm, it's a, to me, nobody gets hurt whether the earth is flat or round, not literally, but I mean intellectually when we're just having a conversation. That said, again, to elitism, I find that scientists roll their eyes way too fast on the Flat Earth. The kind of dismissal that I see to this even notion, they haven't like sat down and say, what are the arguments that are being proposed? And this is why these arguments are incorrect. So that should be something that scientists should always do, even to the most sort of ideas that seem ridiculous. So I like this as almost, it's almost my test when I ask people what they think about Flat Earth theory, to see how quickly they roll their eyes. 
Well, yeah, I mean, let me go on record and say that the earth is not flat. It is a three dimensional spheroid. However, I don't know that and it has not been proven. Science doesn't prove anything. It just reduces uncertainty. Could the earth actually be flat? Extremely unlikely, extremely unlikely. And so it is a ridiculous notion if we care about how probable and certain our ideas might be. But I think it's incredibly important to talk about science in that way and to not resort to, well, it's true. It's true in the same way that a mathematical theorem is true. And I think we're kind of like being pretty pedantic about defining this stuff. But like, sure, I could take a rocket ship out and I could orbit earth and look at it and it would look like a ball, right? But I still can't prove that I'm not living in a simulation, that I'm not a brain in a vat, that this isn't all an elaborate ruse created by some technologically advanced extraterrestrial civilization. So there's always some doubt and that's fine. That's exciting. And I think that kind of doubt, practically speaking, is useful when you start talking about quantum mechanics or string theory, sort of, it helps. To me, that kind of adds a little spice into the thinking process of scientists. So, I mean, just as a thought experiment, your video kind of, okay, say the earth is flat. What would the forces when you walk about this flat earth feel like to the human? That's a really nice thought experiment to think about. Right, because what's really nice about it is that it's a funny thought experiment, but you actually wind up accidentally learning a whole lot about gravity and about relativity and geometry. And I think that's really the goal of what I'm doing. I'm not trying to like convince people that the earth is round. I feel like you either believe that it is or you don't and like, that's, you know, how can I change that? What I can do is change how you think and how you are introduced to important concepts. Like, well, how does gravity operate? Oh, it's all about the center of mass of an object. So right, on a sphere, we're all pulled towards the middle, essentially the centroid geometrically, but on a disc, ooh, you're gonna be pulled at a weird angle if you're out near the edge. And that stuff's fascinating. Yeah, and to me, that was, that particular video opened my eyes even more to what gravity is. It's just a really nice visualization tool of, because you always imagine gravity with spheres, with masses that are spheres. Yeah. And imagining gravity on masses that are not spherical, some other shape, but in here, a plate, a flat object, is really interesting. It makes you really kind of visualize in a three dimensional way the force of gravity. Yeah, even if a disc the size of Earth would be impossible, I think anything larger than like the moon basically needs to be a sphere because gravity will round it out. So you can't have a teacup the size of Jupiter, right? There's a great book about the teacup in the universe that I highly recommend. I don't remember the author. I forget her name, but it's a wonderful book. So look it up. I think it's called Teacup in the Universe. Just to link on this point briefly, your videos are generally super, people love them, right? If you look at the sort of number of likes versus dislikes is this measure of YouTube, right, is incredible. And as do I. But this particular flat Earth video has more dislikes than usual. 
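The pulled-at-a-weird-angle point about a flat disc made above lends itself to a quick numerical check. Below is a rough, illustrative Python sketch, not anything from the conversation itself: it brute-force sums Newtonian gravity over a thin, uniform, roughly Earth-mass, Earth-sized disc (the mass, radius, observation height, and grid resolution are made-up round numbers) and prints how far the net pull tilts away from vertical as you move from the center toward the edge; only the qualitative behavior matters.

```python
# Gravity above a hypothetical uniform thin disc, by direct summation over cells.
import numpy as np

G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
R = 6.4e6                   # disc radius, m (hypothetical Earth-sized disc)
M = 6.0e24                  # total mass, kg (hypothetical Earth-like mass)
sigma = M / (np.pi * R**2)  # surface mass density of the idealized thin disc

def gravity_at(x_obs, height=1.0e5, n=1200):
    """Net gravitational acceleration at a point `height` m above the disc,
    a distance `x_obs` from its center. The height is kept larger than a grid
    cell so the coarse summation stays well behaved."""
    rs = (np.arange(n) + 0.5) * R / n              # radial cell centers
    phis = (np.arange(n) + 0.5) * 2 * np.pi / n    # angular cell centers
    dr, dphi = R / n, 2 * np.pi / n
    g = np.zeros(3)
    for r in rs:
        dm = sigma * r * dr * dphi                 # mass of one cell on this ring
        xs, ys = r * np.cos(phis), r * np.sin(phis)
        d = np.stack([xs - x_obs, ys, np.full_like(xs, -height)], axis=1)
        dist = np.linalg.norm(d, axis=1)
        g += (G * dm * d / dist[:, None] ** 3).sum(axis=0)
    return g

for frac in (0.0, 0.5, 0.95):
    g = gravity_at(frac * R)
    tilt = np.degrees(np.arctan2(abs(g[0]), abs(g[2])))  # angle away from straight down
    print(f"at {frac:.2f} R: |g| ~ {np.linalg.norm(g):.1f} m/s^2, tilt toward center ~ {tilt:.1f} deg")
```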
What do you, on that topic in general, what's your sense, how big is the community, not just who believes in flat Earth, but sort of the anti scientific community that naturally distrust scientists in a way that's not an open minded way, like really just distrust scientists like they're bought by some kind of mechanism of some kind of bigger system that's trying to manipulate human beings. What's your sense of the size of that community? You're one of the sort of great educators in the world that educates people on the exciting power of science. So you're kind of up against this community. What's your sense of it? I really have no idea. I haven't looked at the likes and dislikes on the flat Earth video. And so I would wonder if it has a greater percentage of dislikes than usual, is that because of people disliking it because they think that it's a video about Earth being flat and they find that ridiculous and they dislike it without even really watching much? Do they wish that I was more like dismissive of flat Earth theories? Yeah. That's possible too. I know there are a lot of response videos that kind of go through the episode and are pro flat Earth, but I don't know if there's a larger community of unorthodox thinkers today than there have been in the past. And I just wanna not lose them. I want them to keep listening and thinking and by calling them all idiots or something, that does no good because how idiotic are they really? I mean, the Earth isn't a sphere at all. We know that it's an oblate spheroid and that in and of itself is really interesting. And I investigated that in which way is down where I'm like, really down does not point towards the center of the Earth. It points in different direction, depending on what's underneath you and what's above you and what's around you. The whole universe is tugging on me. And then you also show that gravity is non uniform across the globe. Like if you, there's this I guess thought experiment if you build a bridge all the way across the Earth and then just knock out its pillars, what would happen? And you describe how it would be like a very chaotic, unstable thing that's happening because gravity is non uniform throughout the Earth. Yeah, in small spaces, like the ones we work in, we can essentially assume that gravity is uniform, but it's not. It is weaker the further you are from the Earth. And it also is going to be, it's radially pointed towards the middle of the Earth. So a really large object will feel tidal forces because of that non uniformness. And we can take advantage of that with satellites, right? Gravitational induced torque. It's a great way to align your satellite without having to use fuel or any kind of engine. So let's jump back to it, artificial intelligence. What's your thought of the state of where we are at currently with artificial intelligence and what do you think it takes to build human level or superhuman level intelligence? I don't know what intelligence means. That's my biggest question at the moment. And I think it's because my instinct is always to go, well, what are the foundations here of our discussion? What does it mean to be intelligent? How do we measure the intelligence of an artificial machine or a program or something? Can we say that humans are intelligent? Because there's also a fascinating field of how do you measure human intelligence. Of course. But if we just take that for granted, saying that whatever this fuzzy intelligence thing we're talking about, humans kind of have it. 
What would be a good test for you? So Turing developed a test that's natural language conversation, would that impress you? A chat bot that you'd want to hang out and have a beer with for a bunch of hours or have dinner plans with. Is that a good test, natural language conversation? Is there something else that would impress you? Or is that also too difficult to think about? Oh yeah, I'm pretty much impressed by everything. I think that if there was a chat bot that was like incredibly, I don't know, really had a personality. And if it beat the Turing test, right? Like if I'm unable to tell that it's not another person, but then I was shown a bunch of wires and mechanical components, and it was like, that's actually what you're talking to, I don't know if I would feel that guilty destroying it. I would feel guilty because clearly it's well made and it's a really cool thing. It's like destroying a really cool car or something, but I would not feel like I was a murderer. So yeah, at what point would I start to feel that way? And this is such a subjective psychological question. If you give it movement or if you have it act as though, or perhaps really feel, pain as I destroy it and scream and resist, then I'd feel bad. Yeah, it's beautifully put. And let's just say it acts like it's in pain. So if you just have a robot that, not screams, just like moans in pain if you kick it, that immediately just puts it in a class that we humans, it becomes, we anthropomorphize it. It almost immediately becomes human. So that's a psychology question as opposed to sort of a physics question. Right, I think that's a really good instinct to have. If the robot. Screams. Screams and moans, even if you don't believe that it has the mental experience, the qualia of pain and suffering, I think it's still a good instinct to say, you know what, I'd rather not hurt it. The problem is that instinct can get us in trouble because then robots can manipulate that. And there's different kinds of robots. There's robots like the Facebook and the YouTube algorithm that recommends the video, and they can manipulate in the same kind of way. Well, let me ask you just to stick on artificial intelligence for a second. Do you have worries about existential threats from AI or existential threats from other technologies like nuclear weapons that could potentially destroy life on earth or damage it to a very significant degree? Yeah, of course I do. Especially the weapons that we create. There's all kinds of famous ways to think about this. And one is that, wow, what if we don't see advanced alien civilizations because of the danger of technology? What if we reach a point, and I think there's a channel, Thoughty2, geez, I wish I remembered the name of the channel, but he delves into this kind of limit of maybe once you discover radioactivity and its power, you've reached this important hurdle. And the reason that the skies are so empty is that no one's ever managed to survive as a civilization once they have that destructive power. And when it comes to AI, I'm not really very worried because I think that there are plenty of other people that are already worried enough. And oftentimes these worries are just, they just get in the way of progress. And they're questions that we should address later. And I think I talk about this in my interview with the self driving autonomous vehicle guy, as I think it was a bonus scene from the trolley problem episode.
And I'm like, wow, what should a car do if this really weird contrived scenario happens where it has to swerve and save the driver, but kill a kid? And he's like, well, what would a human do? And if we resist technological progress because we're worried about all of these little issues, then it gets in the way. And we shouldn't avoid those problems, but we shouldn't allow them to be stumbling blocks to advancement. So the folks like Sam Harris or Elon Musk are saying that we're not worried enough. So the worry should not paralyze technological progress, but we're sort of marching, technology is marching forward without the key scientists, the developing of technology, worrying about the overnight having some effects that would be very detrimental to society. So to push back on your thought of the idea that there's enough people worrying about it, Elon Musk says, there's not enough people worrying about it. That's the kind of balance is, it's like folks who are really focused on nuclear deterrence are saying there's not enough people worried about nuclear deterrence, right? So it's an interesting question of what is a good threshold of people to worry about these? And if it's too many people that are worried, you're right. It'll be like the press would over report on it and there'll be technological, halt technological progress. If not enough, then we can march straight ahead into that abyss that human beings might be destined for with the progress of technology. Yeah, I don't know what the right balance is of how many people should be worried and how worried should they be, but we're always worried about new technology. We know that Plato was worried about the written word. He was like, we shouldn't teach people to write because then they won't use their minds to remember things. There have been concerns over technology and its advancement since the beginning of recorded history. And so, I think, however, these conversations are really important to have because again, we learn a lot about ourselves. If we're really scared of some kind of AI like coming into being that is conscious or whatever and can self replicate, we already do that every day. It's called humans being born. They're not artificial, they're humans, but they're intelligent and I don't wanna live in a world where we're worried about babies being born because what if they become evil? Right. What if they become mean people? What if they're thieves? Maybe we should just like, what, not have babies born? Like maybe we shouldn't create AI. It's like, we will want to have safeguards in place in the same way that we know, look, a kid could be born that becomes some kind of evil person, but we have laws, right? And it's possible that with advanced genetics in general, be able to, it's a scary thought to say that, this, my child, if born would have an 83% chance of being a psychopath, right? Like being able to, if it's something genetic, if there's some sort of, and what to use that information, what to do with that information is a difficult ethical thought. Yeah, and I'd like to find an answer that isn't, well, let's not have them live. You know, I'd like to find an answer that is, well, all human life is worthy. And if you have an 83% chance of becoming a psychopath, well, you still deserve dignity. And you still deserve to be treated well. You still have rights. At least at this part of the world, at least in America, there's a respect for individual life in that way. 
That's, well, to me, but again, I'm in this bubble, is a beautiful thing. But there's other cultures where individual human life is not that important, where a society, so I was born in the Soviet Union, where the strength of nation and society together is more important than any one particular individual. So it's an interesting also notion, the stories we tell ourselves. I like the one where individuals matter, but it's unclear that that was what the future holds. Well, yeah, and I mean, let me even throw this out. Like, what is artificial intelligence? How can it be artificial? I really think that we get pretty obsessed and stuck on the idea that there is some thing that is a wild human, a pure human organism without technology. But I don't think that's a real thing. I think that humans and human technology are one organism. Look at my glasses, okay? If an alien came down and saw me, would they necessarily know that this is an invention, that I don't grow these organically from my body? They wouldn't know that right away. And the written word, and spoons, and cups, these are all pieces of technology. We are not alone as an organism. And so the technology we create, whether it be video games or artificial intelligence that can self replicate and hate us, it's actually all the same organism. When you're in a car, where do you end in the car begin? It seems like a really easy question to answer, but the more you think about it, the more you realize, wow, we are in this symbiotic relationship with our inventions. And there are plenty of people who are worried about it. And there should be, but it's inevitable. And I think that even just us think of ourselves as individual intelligences may be silly notion because it's much better to think of the entirety of human civilization. All living organisms on earth is a single living organism. As a single intelligent creature, because you're right, everything's intertwined. Everything is deeply connected. So we mentioned, you know, Musk, so you're a curious lover of science. What do you think of the efforts that Elon Musk is doing with space exploration, with electric vehicles, with autopilot, sort of getting into the space of autonomous vehicles, with boring under LA and a Neuralink trying to communicate brain machine interfaces, communicate between machines and human brains? Well, it's really inspiring. I mean, look at the fandom that he's amassed. It's not common for someone like that to have such a following. And so it's... Engineering nerd. Yeah, so it's really exciting. But I also think that a lot of responsibility comes with that kind of power. So like if I met him, I would love to hear how he feels about the responsibility he has. When there are people who are such a fan of your ideas and your dreams and share them so closely with you, you have a lot of power. And he didn't always have that, you know? He wasn't born as Elon Musk. Well, he was, but well, he was named that later. But the point is that I wanna know the psychology of becoming a figure like him. Well, I don't even know how to phrase the question right, but it's a question about what do you do when you're following, your fans become so large that it's almost bigger than you. And how do you responsibly manage that? And maybe it doesn't worry him at all. And that's fine too. But I'd be really curious. And I think there are a lot of people that go through this when they realize, whoa, there are a lot of eyes on me. 
There are a lot of people who really take what I say very earnestly and take it to heart and will defend me. And whew, that's, that's, that can be dangerous. And you have to be responsible with it. Both in terms of impact on society and psychologically for the individual, just the burden psychologically on Elon? Yeah, yeah, how does he think about that? Part of his persona. Well, let me throw that right back at you because in some ways you're just a funny guy that's gotten a humongous following, a funny guy with a curiosity. You've got a huge following. How do you psychologically deal with the responsibility? In many ways you have a reach bigger than Elon Musk. What is your, what is the burden that you feel in educating, being one of the biggest educators in the world, where everybody's listening to you and actually everybody, like most of the world that uses YouTube for educational material, trusts you as a source of good, strong scientific thinking? It's a burden and I try to approach it with a lot of humility and sharing. Like I'm not out there doing a lot of scientific experiments. I am sharing the work of real scientists and I'm celebrating their work and the way that they think and the power of curiosity. But I wanna make it clear at all times that like, look, we don't know all the answers and I don't think we're ever going to reach a point where we're like, wow, and there you go. That's the universe. It's this equation, you plug in some conditions or whatever and you do the math and you know what's gonna happen tomorrow. I don't think we're ever gonna reach that point, but I think that there is a tendency to sometimes believe in science and become elitist and become, I don't know, hard when in reality it should humble you and make you feel smaller. I think there's something very beautiful about feeling very, very small and very weak and to feel that you need other people. So I try to keep that in mind and say, look, thanks for watching. Vsauce is not, I'm not Vsauce, you are. When I start the episodes, I say, hey, Vsauce, Michael here. Vsauce and Michael are actually a different thing in my mind. I don't know if that's always clear, but yeah, I have to approach it that way because it's not about me. Yeah, so it's not even, you're not feeling the responsibility. You're just sort of plugging into this big thing that is scientific exploration of our reality and you're a voice that represents a bunch, but you're just plugging into this big Vsauce ball that others, millions of others are plugged into. Yeah, and I'm just hoping to encourage curiosity and responsible thinking and an embracement of doubt and being okay with that. So I'm next week talking to Cristos Goodrow. I'm not sure if you're familiar with who he is, but he's the VP of engineering, head of the quote unquote YouTube algorithm, or the search and discovery. So let me ask, first high level, do you have a question for him that, if you can get an honest answer, you would ask? But more generally, how do you think about the YouTube algorithm that drives some of the motivation behind some of the design decisions you make as you ask and answer some of the questions you do? How would you improve this algorithm in your mind in general? So just what would you ask him? And outside of that, how would you like to see the algorithm improve? Well, I think of the algorithm as a mirror. It reflects what people put in and we don't always like what we see in that mirror.
From the mirror of the individual to the mirror of the society? Both, in the aggregate, it's reflecting back what people on average want to watch. And when you see things being recommended to you, it's reflecting back what it thinks you want to see. And specifically, I would guess that it's not just what you want to see, but what you will click on and what you will watch some of and stay on YouTube because of. I don't think that, this is all me guessing, but I don't think that YouTube cares if you only watch like a second of a video, as long as the next thing you do is open another video. If you close the app or close the site, that's a problem for them because they're not a subscription platform. They're not like, look, you're giving us 20 bucks a month no matter what, so who cares? They need you to watch and spend time there and see ads. So one of the things I'm curious about is whether they do consider your longer term development as a human being, which I think ultimately will make you feel better about using YouTube in the long term and allow you to stick with it for longer. Because even if you feed the dopamine rush in the short term and you keep clicking on cat videos, eventually you sort of wake up like from a drug and say, I need to quit this. So I wonder how much you're trying to optimize for the long term, because when I look at your videos, they aren't exactly, sort of, no offense, the most clickable. But I feel I watch the entire thing and I feel like a better human after I've watched it, right? So they're not just optimizing for clickability, I hope. So my thought is, how do you think of it? And does it affect your own content? Like how deep you go, how profound you explore the directions and so on. I've been really lucky in that I don't worry too much about the algorithm. I mean, look at my thumbnails. I don't really go too wild with them. And with Mind Field, where I'm in partnership with YouTube on the thumbnails, I'm often like, let's pull this back. Let's be mysterious. But usually I'm just trying to do what everyone else is not doing. So if everyone's doing crazy Photoshop kind of thumbnails, I'm like, what if the thumbnail's just a line? And what if the title is just a word? And I kind of feel like all of the Vsauce channels have cultivated an audience that expects that. And so they would rather Jake make a video that's just called stains than one called, I explored stains, shocking. But there are other audiences out there that want that. And I think most people kind of want what you see the algorithm favoring, which is mainstream traditional celebrity and news kind of information. I mean, that's what makes YouTube really different than other streaming platforms. No one's like, what's going on in the world? I'll open up Netflix to find out. But you do open up Twitter to find that out. You open up Facebook and you can open up YouTube because you'll see that the trending videos are like what happened amongst the traditional mainstream people in different industries. And that's what's being shown. And it's not necessarily YouTube saying, we want that to be what you see. It's that that's what people click on. When they see that Ariana Grande, you know, reads a love letter from, like, her high school sweetheart, they're like, I wanna see that. And when they see a video from me that's got some lines and math and it's called Laws & Causes, they're like, well, I mean, I'm just on the bus. 
Like I don't have time to dive into a whole lesson. So, you know, before you get super mad at YouTube, you should say, really, they're just reflecting back human behavior. Is there something you would improve about the algorithm, knowing, of course, that as far as we're concerned it's a black box, so we don't know how it works? Right, and I don't think that even anyone at YouTube really knows what it's doing. They know what they've tweaked, but then it learns. I think that it learns and it decides how to behave. And sometimes the YouTube employees are left going, I don't know. Maybe we should like change the value of how much it, you know, worries about watch time. And maybe it should worry more about something else. I don't know. But I mean, I would like to see, I don't know what they're doing and not doing. Well, is there a conversation that you think they should be having just internally, whether they're having it or not? Is there something, should they be thinking about the long term future? Should they be thinking about educational content, and whether that's educating about what just happened in the world today, news, or educational content like what you're providing, which is asking big sort of timeless questions about the way the world works? Well, it's interesting. What should they think about? Because it's called YouTube, not our tube. And that's why I think they have so many phenomenal educational creators. You don't have shows like 3Blue1Brown or Physics Girl or Looking Glass Universe or Up and Atom or Brain Scoop or, I mean, I could go on and on. They aren't on Amazon Prime and Netflix and they don't have commissioned shows from those platforms. It's all organically happening because there are people out there that want to share their passion for learning, that wanna share their curiosity. And YouTube could promote those kinds of shows more, but first of all, they probably wouldn't get as many clicks and YouTube needs to make sure that the average user is always clicking and staying on the site. They could still promote it more for the good of society, but then we're making some really weird claims about what's good for society, because I think that cat videos are also an incredibly important part of what it means to be a human. I mentioned this quote before from Unamuno about, look, I've seen a cat, like, estimate distances and calculate a jump more often than I've seen a cat cry. And so things that play with our emotions and make us feel things can be cheesy and can feel cheap, but like, man, that's very human. And so even the dumbest vlog is still so important that I don't think I have a better claim to take its spot than it has to have that spot. It puts a mirror to us, the beautiful parts, the ugly parts, the shallow parts, the deep parts. You're right. What I would like to see is, I miss the days when engaging with content on YouTube helped push it into my subscribers' timelines. It used to be that when I liked a video, say from Veritasium, it would show up in the feed on the front page of the app or the website of my subscribers. And I knew that if I liked a video, I could send it 100,000 views or more. That no longer is true, but I think that was a good user experience. When I subscribe to someone, when I'm following them, I want to see more of what they like. I want them to also curate the feed for me. And I think that Twitter and Facebook are doing that, also in some ways that are kind of annoying, but I would like that to happen more. 
And I think we would see communities being stronger on YouTube if it was that way, instead of YouTube going, well, technically Michael liked this Veritasium video, but people are way more likely to click on Carpool Karaoke. So I don't even care who they are, just give them that. Not saying anything against Carpool Karaoke, that is an extremely important part of our society, what it means to be a human on earth, you know, but. I'll say it sucks, but. Yeah, but a lot of people would disagree with you and they should be able to see as much of that as they want. And I think even people who don't think they like it should still be really aware of it because it's such an important thing. It's such an influential thing. But yeah, I just wish that, like, new channels I discover and that I subscribe to, I wish that my subscribers found out about that, because especially in the education community, a rising tide floats all boats. If you watch a video from Numberphile, you're just more likely to want to watch an episode from me, whether it be on Vsauce1 or D!NG. It's not competitive in the way that traditional TV was, where it's like, well, if you tune into that show, it means you're not watching mine because they both air at the same time. So helping each other out through collaborations takes a lot of work, but just through engaging, commenting on their videos, liking their videos, subscribing to them, whatever, that I would love to see become easier and more powerful. So a quick and impossibly deep question, last question, about mortality. You've spoken about death as an interesting topic. Do you think about your own mortality? Yeah, every day, it's really scary. So what do you think is the meaning of life that mortality makes very explicit? So why are you here on earth, Michael? What's the point of this whole thing? What does mortality in the context of the whole universe make you realize about yourself? Just you, Michael Stevens. Well, it makes me realize that I am destined to become a notion. I'm destined to become a memory, and we can extend life. I think there's really exciting things being done to extend life, but we still don't know how to protect you from some accident that could happen, some unforeseen thing. Maybe we could save my connectome and recreate my consciousness digitally, but even that could be lost if it's stored on a physical medium or something. So basically, I just think that embracing and realizing how cool it is, that someday I will just be an idea. And there won't be a Michael anymore that can be like, no, that's not what I meant. It'll just be what people, they have to guess what I meant. And they'll remember me, and how I live on as that memory will maybe not even be who I want it to be. But there's something powerful about that. And there's something powerful about letting future people run the show themselves. I think I'm glad to get out of their way at some point and say, all right, it's your world now. So you, the physical entity, Michael, have ripple effects in the space of ideas that far outlive you in ways that you can't control, but it's nevertheless fascinating to think, I mean, especially with you, you can imagine an alien species, when they finally arrive and destroy all of us, would watch your videos to try to figure out what were the questions that these people asked. But even if they didn't, I still think that there will be ripples. 
Like when I say memory, I don't specifically mean people remember my name and my birth date and like there's a photo of me on Wikipedia, like all that can be lost, but I still would hope that people ask questions and teach concepts in some of the ways that I have found useful and satisfying. Even if they don't know that I was the one who tried to popularize it, that's fine. But if Earth was completely destroyed, like burnt to a crisp, everything on it today, what would, the universe wouldn't care. Like Jupiter's not gonna go, oh no, and that could happen. So we do however have the power to launch things into space to try to extend how long our memory exists. And what I mean by that is, we are recording things about the world and we're learning things and writing stories and all of this, and preserving that is truly what I think is the essence of being a human. We are autobiographers of the universe and we're really good at it. We're better than fossils. We're better than light spectrum. We're better than any of that. We collect much more detailed memories of what's happening, much better data. And so that should be our legacy. And I hope that that's kind of mine too, in terms of people remembering something or having some kind of effect. But even if I don't, you can't not have an effect. This is not me feeling like, I hope that I have this powerful legacy. It's like, no matter who you are, you will. But you also have to embrace the fact that that impact might look really small, and that's okay. One of my favorite quotes is from Tess of the d'Urbervilles. And it's along the lines of, the measure of your life depends not on your external displacement but on your subjective experience. If I am happy and those that I love are happy, can that be enough? Because if so, excellent. I think there's no better place to end it, Michael. Thank you so much. It was an honor to meet you. Thanks for talking to me. Thank you, it was a pleasure. Thanks for listening to this conversation with Michael Stevens. And thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast, you'll get $10 and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to learn, to dream of engineering our future. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or connect with me on Twitter. And now, let me leave you with some words of wisdom from Albert Einstein. The important thing is not to stop questioning. Curiosity has its own reason for existence. One cannot help but be in awe when he contemplates the mysteries of eternity, of life, the marvelous structure of reality. It is enough if one tries merely to comprehend a little of this mystery every day. Thank you for listening and hope to see you next time.
Michael Stevens: Vsauce | Lex Fridman Podcast #58
The following is a conversation with Sebastian Thrun. He's one of the greatest roboticists, computer scientists, and educators of our time. He led the development of the autonomous vehicles at Stanford that won the 2005 DARPA Grand Challenge and placed second in the 2007 DARPA Urban Challenge. He then led the Google self driving car program, which launched the self driving car revolution. He taught the popular Stanford course on artificial intelligence in 2011, which was one of the first massive open online courses, or MOOCs as they're commonly called. That experience led him to co found Udacity, an online education platform. If you haven't taken courses on it yet, I highly recommend it. Their self driving car program, for example, is excellent. He's also the CEO of Kitty Hawk, a company working on building flying cars, or more technically, EVTOLs, which stands for electric vertical takeoff and landing aircraft. He has launched several revolutions and inspired millions of people. But also, as many know, he's just a really nice guy. It was an honor and a pleasure to talk with him. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, follow it on Spotify, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. If you leave a review on Apple Podcast or YouTube or Twitter, consider mentioning ideas, people, topics you find interesting. It helps guide the future of this podcast. But in general, I just love comments with kindness and thoughtfulness in them. This podcast is a side project for me, as many people know, but I still put a lot of effort into it. So the positive words of support from an amazing community, from you, really help. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. I provide timestamps for the start of the conversation that you can skip to, but it helps if you listen to the ad and support this podcast by trying out the product or service being advertised. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square, and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LexPodcast, you'll get $10, and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Sebastian Thrun. You mentioned that The Matrix may be your favorite movie. So let's start with a crazy philosophical question. Do you think we're living in a simulation? And in general, do you find the thought experiment interesting? 
Define simulation, I would say. Maybe we are, maybe we are not, but it's completely irrelevant to the way we should act. Putting aside, for a moment, the fact that it might not have any impact on how we should act as human beings, for people studying theoretical physics, these kinds of questions might be kind of interesting, looking at the universe as an information processing system. The universe is an information processing system. It's a huge physical, biological, chemical computer, there's no question. But I live here and now. I care about people, I care about us. What do you think it's trying to compute? I don't think there's an intention. I think the world evolves the way it evolves. And it's beautiful, it's unpredictable. And I'm really, really grateful to be alive. Spoken like a true human. Which last time I checked, I was. Or that, in fact, this whole conversation is just a Turing test to see if indeed you are. You've also said that one of the first programs, or the first few programs you've written, was for a, wait for it, TI57 calculator. Yeah. Maybe that's early 80s. We don't want to date calculators or anything. That's early 80s, correct. Yeah. So if you were to place yourself back into that time, into the mindset you were in, could you have predicted the evolution of computing, AI, the internet technology in the decades that followed? I was super fascinated by Silicon Valley, which I'd seen on television once and thought, my god, this is so cool. They build like DRAMs there and CPUs. How cool is that? And as a college student a few years later, I decided to really study intelligence and study human beings. And found that even back then in the 80s and 90s, artificial intelligence is what fascinated me the most. What was missing was that back in the day, the computers were really small. The brains we could build were not anywhere bigger than a cockroach. And cockroaches aren't very smart. So we weren't at the scale yet where we are today. Did you dream at that time to achieve the kind of scale we have today? Or did that seem possible? I always wanted to make robots smart. And I felt it was super cool to build an artificial human. And the best way to build an artificial human was to build a robot, because that's kind of the closest we could do. Unfortunately, we aren't there yet. The robots today are still very brittle. But it's fascinating to study intelligence from a constructive perspective when you build something. To understand, you build. What do you think it takes to build an intelligent system, an intelligent robot? I think the biggest innovation that we've seen is machine learning. And it's the idea that the computers can basically teach themselves. Let's give an example. I'd say everybody pretty much knows how to walk. And we learn how to walk in the first year or two of our lives. But no scientist has ever been able to write down the rules of human gait. We don't understand it. We have it in our brains somehow. We can practice it. We understand it. But we can't articulate it. We can't pass it on by language. And that, to me, is kind of the deficiency of today's computer programming. When you program a computer, they're so insanely dumb that you have to give them rules for every contingency. Very unlike the way people learn from data and experience, computers are being instructed. And because it's so hard to get this instruction set right, we pay software engineers $200,000 a year. 
Now, the most recent innovation, which has been in the making for 30, 40 years, is an idea that computers can find their own rules. So they can learn from falling down and getting up the same way children can learn from falling down and getting up. And that revolution has led to a capability that's completely unmatched. Today's computers can watch experts do their jobs, whether you're a doctor or a lawyer, pick up the regularities, learn those rules, and then become as good as the best experts. So the dream in the 80s of expert systems, for example, had at its core the idea that humans could boil down their expertise on a sheet of paper, so as to sort of be able to explain to machines how to do something explicitly. So do you think, what's the use of human expertise in this whole picture? Do you think most of the intelligence will come from machines learning from experience without human expertise input? So the question for me is much more how do you express expertise? You can express expertise by writing a book. You can express expertise by showing someone what you're doing. You can express expertise by applying it, in many different ways. And I think the expert systems was our best attempt in AI to capture expertise in rules. But someone sat down and said, here are the rules of human gait. Here's when you put your big toe forward and your heel backwards and you always stop stumbling. And as we now know, the set of rules, the set of language that we can command is incredibly limited. The majority of the human brain doesn't deal with language. It deals with subconscious, numerical, perceptual things that we're not even self aware of. Now, when an AI system watches an expert do their job and practice their job, it can pick up things that people can't even put into writing, into books or rules. And that's where the real power is. We now have AI systems that, for example, look over the shoulders of highly paid human doctors like dermatologists or radiologists, and they can somehow pick up those skills that no one can express in words. So you were a key person in launching three revolutions, online education, autonomous vehicles, and flying cars or VTOLs. So high level, and I apologize for all the philosophical questions. There's no apology necessary. How do you choose what problems to try and solve? What drives you to make those solutions a reality? I have two desires in life. I want to literally make the lives of others better. Or as we often say, maybe jokingly, make the world a better place. I actually believe in this. It's as funny as it sounds. And second, I want to learn. I want to get new skills. I don't want to be in a job I'm good at, because if I'm in a job that I'm good at, the chances for me to learn something interesting are actually minimized. So I want to be in a job I'm bad at. That's really important to me. So eVTOLs, for example, what people often call flying cars, these are electric vertical takeoff and landing vehicles. I'm just no expert in any of this. And it's so much fun to learn on the job what it actually means to build something like this. Now, I'd say the stuff that I've done lately, after I finished my professorship at Stanford, has really focused on what has the maximum impact on society. Transportation is something that has transformed the 21st or 20th century more than any other invention, in my opinion, even more than communication. And cities are different. Workers are different. Women's rights are different because of transportation. 
And yet, we still have a very suboptimal transportation solution where we kill 1.2 or so million people every year in traffic. It's like the leading cause of death for young people in many countries, where we are extremely inefficient resource wise. Just go to your average neighborhood city and look at the number of parked cars. That's a travesty, in my opinion. Or where we spend endless hours in traffic jams. And very, very simple innovations, like a self driving car or what people call a flying car, could completely change this. And it's there. I mean, the technology is basically there. You have to close your eyes not to see it. So lingering on autonomous vehicles, a fascinating space, some incredible work you've done throughout your career there. So let's start with DARPA, I think, the DARPA challenge, through the desert and then urban, to the streets. I think that inspired an entire generation of roboticists and obviously sprung this whole excitement about this particular kind of four wheeled robots we call autonomous cars, self driving cars. So you led the development of Stanley, the autonomous car that won the race through the desert, the DARPA Grand Challenge, in 2005. And Junior, the car that finished second in the DARPA Urban Challenge, also did incredibly well in 2007, I think. What are some painful, inspiring, or enlightening experiences from that time that stand out to you? Oh my god. Painful were all these incredibly complicated, stupid bugs that had to be found. We had a phase where Stanley, our car that eventually won the DARPA Grand Challenge, would every 30 miles just commit suicide. And we didn't know why. And it turned out to be that in the syncing of two computer clocks, occasionally a clock went backwards, and that negative elapsed time screwed up the entire internal logic. But it took ages to find this. There were bugs like that. I'd say enlightening was that the Stanford team immediately focused on machine learning and on software, whereas everybody else seemed to focus on building better hardware. Our analysis had been that a human being with an existing rental car can perfectly drive the course, so why do I have to build a better rental car? I should just replace the human being. And the human being, to me, was a conjunction of three steps. We had sensors, eyes and ears, mostly eyes. We had brains in the middle. And then we had actuators, our hands and our feet. Now, the actuators are easy to build. The sensors are actually also easy to build. What was missing was the brain. So we had to build a human brain. And nothing was clearer to me than that the human brain is a learning machine. So why not just train our robot? So we would build massive machine learning into our machine. And with that, we were able to not just learn from human drivers. The entire speed control of the vehicle was copied from human driving. But we also had the robot learn from experience, where it made a mistake, recover from it and learn from it. You mentioned the pain point of software and clocks. Synchronization seems to be a problem that continues with robotics. It's a tricky one with drones and so on. What does it take to build a thing, a system with so many constraints? You have a deadline, no time. You're unsure about anything really. It's the first time that people are really even exploring it. It's not even sure that anybody can finish; when we're talking about the race through the desert, the year before, nobody finished. What does it take to scramble and finish a product that actually, a system that actually works? 
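As a concrete aside on the clock anecdote above: the failure mode described, a wall clock that occasionally steps backwards and produces a negative elapsed time, is exactly what a monotonic clock guards against. Below is a minimal illustrative sketch in Python; it is not Stanley's actual code, just the general pattern.

```python
import time

def elapsed_seconds(t_prev, t_now):
    # Guard against clocks that step backwards (e.g., after a re-sync):
    # a negative dt would corrupt anything that integrates over time,
    # such as velocity or odometry estimates.
    dt = t_now - t_prev
    return max(dt, 0.0)

# Prefer a monotonic clock for internal timing; unlike the wall clock,
# it is guaranteed never to go backwards.
t0 = time.monotonic()
# ... control loop does its work here ...
dt = elapsed_seconds(t0, time.monotonic())
```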
We were very lucky. We were a really small team. The core of the team were four people. It was four because five couldn't comfortably sit inside a car, but four could. And I, as a team leader, my job was to get pizza for everybody and wash the car and stuff like this and repair the radiator when it broke and debug the system. And we were very open minded. We had no egos involved. We just wanted to see how far we can get. What we did really, really well was time management. We were done with everything a month before the race. And we froze the entire software a month before the race. And it turned out, looking at other teams, every other team complained that if they had just one more week, they would have won. And we decided we're not going to fall into that mistake. We're going to be early. And we had an entire month to shake out the system. And we actually found two or three minor bugs in the last month that we had to fix. And we were completely prepared when the race occurred. Okay, so first of all, that's such an incredibly rare achievement in terms of being able to be done on time or ahead of time. What do you, how do you do that in your future work? What advice do you have in general? Because it seems to be so rare, especially in highly innovative projects like this. People work till the last second. Well, the nice thing about the DARPA Grand Challenge is that the problem was incredibly well defined. We were able for a while to drive the old DARPA Grand Challenge course, which had been used the year before. And then for some reason we were kicked out of the region. So we had to go to a different desert, the Sonoran Desert, and we were able to drive desert trails just of the same type. So there was never any debate about, like, what is actually the problem? We didn't sit down and say, hey, should we build a car or a plane? We had to build a car. That made it very, very easy. Then I studied my own life and the lives of others. And we realized that the typical mistake that people make is that there's this kind of crazy bug left that they haven't found yet. And it's just, they regret it. And that bug would have been trivial to fix. They just haven't fixed it yet. We didn't want to fall into that trap. So I built a testing team. We had a testing team that built a testing booklet of 160 pages of tests we had to go through just to make sure we shake out the system appropriately. And the testing team was with us all the time and dictated to us: today, we do railroad crossings; tomorrow, we practice the start of the event. And in all of these, we thought, oh my God, this is long solved, trivial. And then we tested it out. Oh my God, it doesn't do a railroad crossing. Why not? Oh my God, it mistakes the rails for metal barriers. We have to fix this. So it was really a continuous focus on improving the weakest part of the system. And as long as you focus on improving the weakest part of the system, you eventually build a really great system. Let me just pause on that, to me as an engineer, it's just super exciting that you were thinking like that, especially at that stage, it's brilliant that testing was such a core part of it. Maybe to linger on the point of leadership. I think it's one of the first times you were really a leader and you've led many very successful teams since then. What does it take to be a good leader? I would say, most of all, I just take credit for the work of others, right? That's very convenient, it turns out, because I can't do all these things myself. I'm an engineer at heart. 
So I care about engineering. So I don't know what the chicken and the egg is, but as a kid, I loved computers because you could tell them to do something and they actually did it. It was very cool. And you could, like, in the middle of the night, wake up at one in the morning and switch on your computer. And what you told it to do yesterday, it would still do. That was really cool. Unfortunately, that didn't quite work with people. So you go to people and tell them what to do and they don't do it. And they hate you for it, or they do it today and then you come a day later and they've stopped doing it. So you have to... So then the question really became, how can you put yourself in the brain of people, as opposed to computers? And in terms of computers, they're super dumb. They're so dumb. If people were as dumb as computers, I wouldn't want to work with them. But people are smart and people are emotional and people have pride and people have aspirations. So how can I connect to that? And that's where most of our leadership just fails, because many, many engineers turned managers believe they can treat their team just the same way they can treat a computer. And it just doesn't work this way. It's just really bad. So how can I connect to people? And it turns out as a college professor, the wonderful thing you do all the time is to empower other people. Like your job is to make your students look great. That's all you do. You're the best coach. And it turns out if you do a fantastic job with making your students look great, they actually love you and their parents love you. And they give you all the credit for stuff you don't deserve. All my students were smarter than me. All the great stuff invented at Stanford was their stuff, not my stuff. And they give me credit and say, oh, Sebastian, when I was just making them feel good about themselves. So the question really is, can you take a team of people, and what does it take to make them connect to what they actually want in life and turn this into productive action? It turns out every human being that I know has incredibly good intentions. I've really rarely met a person with bad intentions. I believe every person wants to contribute. I think every person I've met wants to help others. It's amazing how much of an urge we have not to just help ourselves, but to help others. So how can we empower people and give them the right framework so that they can accomplish this? In moments when it works, it's magical. Because you'd see the confluence of people being able to make the world a better place and deriving enormous confidence and pride out of this. And that's when my environment works the best. These are moments where I can disappear for a month and come back and things still work. It's very hard to accomplish. But when it works, it's amazing. So I agree with you very much. It's not often heard that most people in the world have good intentions. At the core, their intentions are good and they're good people. That's a beautiful message, it's not often heard. We make this mistake, and this is a friend of mine, Alex Werder, talking to us: we judge ourselves by our intentions and others by their actions. And I think that the biggest skill, I mean, here in Silicon Valley, we're full of engineers who have very little empathy and are kind of befuddled by why it doesn't work for them. The biggest skill, I think, that people should acquire is to put themselves into the position of the other and listen, and listen to what the other has to say. 
And they'd be shocked how similar they are to themselves. And they might even be shocked how their own actions don't reflect their intentions. I often have conversations with engineers where I say, look, hey, I love you, you're doing a great job. And by the way, what you just did has the following effect. Are you aware of that? And then people would say, oh my God, no, I wasn't, because that wasn't my intention. And I say, yeah, I trust your intention. You're a good human being. But just to help you in the future, if you keep expressing it that way, then people will just hate you. And I've had many instances where people say, oh my God, thank you for telling me this, because it wasn't my intention to look like an idiot. It was my intention to help other people. I just didn't know how to do it. Very simple, by the way. There's a book, Dale Carnegie, 1936, How to Win Friends and Influence People. It has the entire Bible; you just read it and you're done, and you apply it every day. And I wish I was good enough to apply it every day. But it's just simple things, right? Like be positive, remember people's names, smile, and eventually have empathy. Really think that the person that you hate and you think is an idiot is actually just like yourself. It's a person who's struggling, who means well, and who might need help, and guess what, you need help. I've recently spoken with Stephen Schwarzman. I'm not sure if you know who that is, but. I do. So, and he said. It's on my list. On the list. But he said, sort of to expand on what you're saying, that one of the biggest things you can do is hear people when they tell you what their problem is and then help them with that problem. He says, it's surprising how few people actually listen to what troubles others. And because it's right there in front of you, you can benefit the world the most, and in fact, yourself and everybody around you, by just hearing the problems and solving them. I mean, that's my little history of engineering. That is, while I was engineering with computers, I didn't care at all what the computer's problems were. I just told them what to do and they did it. And it just doesn't work this way with people. It doesn't work with me. If you come to me and say, do A, I do the opposite. But let's return to the comfortable world of engineering. And can you tell me in broad strokes how you see it, because you were at the core of starting it, the core of driving it: the technical evolution of autonomous vehicles, from the first DARPA Grand Challenge to the incredible success we see with the program you started with the Google self driving car and Waymo, and the entire industry that sprung up of different kinds of approaches, debates and so on. Well, the idea of the self driving car goes back to the 80s. There was a team in Germany and another team at Carnegie Mellon that did some very pioneering work. But back in the day, I'd say the computers were so deficient that even the best professors and engineers in the world basically stood no chance. It then folded into a phase where the US government spent at least half a billion dollars that I could count on research projects. But the way the procurement worked, a stack of paper describing lots of stuff that no one's ever gonna read was a successful product of a research project. So we trained our researchers to produce lots of paper. That all changed with the DARPA Grand Challenge. 
And I really gotta credit the ingenious people at DARPA and the US government and Congress that took a completely new funding model where they said, let's not fund effort, let's fund outcomes. And it sounds very trivial, but there was no tax code that allowed the use of congressional tax money for a prize. It was all effort based. So if you put a hundred hours in, you could charge a hundred hours. If you put a thousand hours in, you could bill a thousand hours. By changing the focus and making it a prize, we don't pay you for development, we pay for the accomplishment. They automatically drew out all these contractors who were used to the drug of getting money per hour. And they drew in a whole bunch of new people. And these people are mostly crazy people. They were people who had a car and a computer and they wanted to make a million bucks. The million bucks was the initial prize money; it was then doubled. And they felt if I put my computer in my car and program it, I can be rich. And that was so awesome. Like half the teams, there was a team that was surfer dudes and they had like two surfboards on their vehicle and brought like these fashion girls, super cute girls, like twin sisters. And you could tell these guys were not your common beltway bandit who gets all these big multimillion and billion dollar contracts from the US government. And there was a great reset. Universities moved in. I was very fortunate at Stanford that I had just received tenure, so I couldn't get fired no matter what I did, otherwise I wouldn't have done it. And I had enough money to finance this thing and I was able to attract a lot of money from third parties. And even car companies moved in. They kind of moved in very quietly because they were super scared to be embarrassed that their car would flip over. But Ford was there and Volkswagen was there and a few others and GM was there. So it kind of reset the entire landscape of people. And if you look at who's a big name in self driving cars today, these were mostly people who participated in those challenges. Okay, that's incredible. Can you just comment quickly on your sense of lessons learned from that kind of funding model and the research that's going on in academia, in terms of producing papers: is there something to be learned and scaled up bigger, having these kinds of grand challenges that could improve outcomes? So I'm a big believer in focusing on kind of an end to end system. I'm a really big believer in systems building. I've always built systems in my academic career, even though I do a lot of math and abstract stuff, but it's all derived from the idea of let's solve a real problem. And it's very hard for me to be an academic and say, let me solve a component of a problem. Like, for example, there are fields like non-monotonic logic or AI planning systems where people believe that a certain style of problem solving is the ultimate end objective. And I would always turn it around and say, hey, what problem would my grandmother, who doesn't understand computer technology and doesn't wanna understand, care about? And how could I make her love what I do? Because only then do I have an impact on the world. I can easily impress my colleagues. That is much easier, but impressing my grandmother is very, very hard. So I always thought, if I can build a self driving car and my grandmother can use it even after she loses her driving privileges, or children can use it, or we save maybe a million lives a year, that would be very impressive. 
And then there's so many problems like these, like the problem of curing cancer, or whatever it is, living twice as long. Once a problem is defined, of course I can't solve it in its entirety. Like it takes sometimes tens of thousands of people to find a solution. There's no way you can fund an army of 10,000 at Stanford. So you gotta build a prototype. Let's build a meaningful prototype. And the DARPA Grand Challenge was beautiful because it told me what this prototype had to do. I didn't have to think about what it had to do, I just had to read the rules. And that was really beautiful. And is that the most beautiful thing you think academia could aspire to, to build a prototype at the systems level that solves, or gives you an inkling that this problem could be solved with this prototype? First of all, I wanna emphasize what academia really is. And I think people misunderstand it. First and foremost, academia is a way to educate young people. First and foremost, a professor is an educator. No matter whether you are at a small suburban college or whether you are a Harvard or Stanford professor. That's not the way most people think of themselves in academia, because we have this kind of competition going on for citations and publication. That's a measurable thing, but that is secondary to the primary purpose of educating people to think. Now, in terms of research, most of the great science, the great research, comes out of universities. You can trace almost everything back, including Google, to universities. So there's nothing really fundamentally broken here. It's a good system. And I think America has the finest university system on the planet. We can talk about reach and how to reach people outside the system. It's a different topic, but the system itself is a good system. If I had one wish, I would say it'd be really great if there was more debate about what the great big problems are in society, and focus on those. And most of them are interdisciplinary. Unfortunately, it's very easy to fall into an intradisciplinary viewpoint, where your problem is dictated by what your closest colleagues believe the problem is. It's very hard to break out and say, well, there's an entire new field of problems. So to give an example, prior to me working on self driving cars, I was a roboticist and a machine learning expert. And I wrote books on robotics, something called Probabilistic Robotics. It's a very methods driven kind of viewpoint of the world. I built robots that acted in museums as tour guides, that led children around. It is something that at the time was moderately challenging. When I started working on cars, several colleagues told me, Sebastian, you're destroying your career, because in our field of robotics, cars are looked at as a gimmick and they're not expressive enough. They can only push the throttle and the brakes. There's no dexterity. There's no complexity. It's just too simple. And no one came to me and said, wow, if you solve that problem, you can save a million lives, right? Among all robotic problems that I've seen in my life, I would say the self driving car, transportation, is the one that has the most hope for society. So how come the robotics community wasn't all over this? And it was because we focused on methods and solutions and not on problems. Like if you go around today and ask your grandmother, what bugs you? What really makes you upset? I challenge any academic to do this and then realize how far your research is probably away from that today. 
At the very least, that's a good thing for academics to deliberate on. The other thing that's really nice in Silicon Valley is, Silicon Valley is full of smart people outside academia. So there's the Larry Pages and Mark Zuckerbergs in the world who are easily as smart as, or smarter than, the best academics I've met in my life. And what they do is they are at a different level. They build the systems, they build the customer facing systems, they build things that people can use without technical education. And they are inspired by research. They're inspired by scientists. They hire the best PhDs from the best universities for a reason. So I think this kind of vertical integration between the real product, the real impact and the real thought, the real ideas, that's actually working surprisingly well in Silicon Valley. It did not work as well in other places in this nation. So when I worked at Carnegie Mellon, we had the world's finest computer science university, but there weren't those people in Pittsburgh that would be able to take these very fine computer science ideas and turn them into massive, impactful products. That symbiosis seemed to exist pretty much only in Silicon Valley and maybe a bit in Boston and Austin. Yeah, with Stanford, that's really interesting. So if we look a little bit further on from the DARPA Grand Challenge and the launch of the Google self driving car, what do you see as the state, the challenges of autonomous vehicles as they are now, in actually achieving that huge scale and having a huge impact on society? I'm extremely proud of what has been accomplished. And again, I'm taking a lot of credit for the work of others. And I'm actually very optimistic. And people have been kind of worrying, is it too fast? Is it too slow? Why is it not there yet? And so on. It is actually quite an interesting, hard problem. And in that, for a self driving car, to build one that manages 90% of the problems encountered in everyday driving is easy. We can literally do this over a weekend. To do 99% might take a month. Then there's 1% left. So 1% would mean that you still have a fatal accident every week, very unacceptable. So now you work on this 1%, and the 99% of that, of the remaining 1%, is actually still relatively easy, but now you're down to like a hundredth of 1%. And it's still completely unacceptable in terms of safety. So the variety of things you encounter are just enormous. And that gives me enormous respect for human beings, that we're able to deal with the couch on the highway, or the deer in the headlights, or the blown tire that we've never been trained for. And all of a sudden have to handle it in an emergency situation and often do very, very successfully. It's amazing from that perspective how safe driving actually is, given how many millions of miles we drive every year in this country. We are now at a point where I believe the technology is there and I've seen it. I've seen it in Waymo, I've seen it in Aptiv, I've seen it in Cruise and in a number of companies and in Voyage, where vehicles are now driving around and basically flawlessly are able to drive people around in limited scenarios. In fact, you can go to Vegas today and order and summon a Lyft. And if you get the right setting of your app, you'll be picked up by a driverless car. Now there's still safety drivers in there, but that's a fantastic way to kind of learn what the limits are of technology today. And there's still some glitches, but the glitches have become very, very rare. 
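The jump from "manages 99%" to "still a fatal accident every week" is easier to feel with a back-of-envelope calculation. The numbers below are purely illustrative assumptions about fleet size and how often a tricky situation comes up; they are not figures from the conversation, and real incidents depend on how severe each mishandled situation is.

```python
# Back-of-envelope: how per-situation reliability scales into weekly mishandled cases.
# Every input here is an assumption chosen only to illustrate the shape of the problem.
fleet_size = 1_000                 # assumed number of vehicles
miles_per_vehicle_per_week = 500   # assumed weekly mileage per vehicle
situations_per_mile = 0.1          # assume one non-trivial situation every 10 miles

situations_per_week = fleet_size * miles_per_vehicle_per_week * situations_per_mile

for handled in (0.90, 0.99, 0.9999, 0.999999):
    mishandled = situations_per_week * (1 - handled)
    print(f"{handled:.4%} handled -> roughly {mishandled:,.2f} mishandled situations per week")
```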
I think the next step is gonna be to down cost it, to harden it. The equipment, the sensors, are not quite at automotive grade standard yet. And then to really build the business models, to really kind of go somewhere and make the business case. And the business case is hard work. It's not just, oh my God, we have this capability, people are just gonna buy it. You have to make it affordable. You have to find the social acceptance of people. None of the teams yet has been able to, or gutsy enough to, drive around without a person inside the car. And that's the next magical hurdle. To be able to send these vehicles around completely empty in traffic. And I think, I mean, I wait every day, wait for the news that Waymo has just done this. So, interesting you mentioned gutsy. Let me ask some maybe unanswerable, maybe edgy questions. But in terms of how much risk is required, some guts in terms of leadership style, it would be good to contrast approaches. And I don't think anyone knows what's right. But if we compare Tesla and Waymo, for example, Elon Musk and the Waymo team, there's slight differences in approach. So on the Elon side, there's more, I don't know what the right word to use is, but aggression in terms of innovation. And on the Waymo side, there's more sort of cautious, safety focused approach to the problem. What do you think it takes? What leadership at which moment is right? Which approach is right? Look, I don't sit in either of those teams. So I'm unable to even verify whether what somebody says is correct. At the end of the day, every innovator in that space will face a fundamental dilemma. And I would say you could put aerospace titans into the same bucket, which is you have to balance public safety with your drive to innovate. And this country in particular, the States, has a hundred plus year history of doing this very successfully. Air travel is, what, a hundred times as safe per mile as ground travel, as cars. And there's a reason for it, because people have found ways to be very methodical about ensuring public safety while still being able to make progress on important aspects, for example, like air and noise and fuel consumption. So I think that those practices are proven and they actually work. We live in a world safer than ever before. And yes, there will always be the provision that something goes wrong. There's always the possibility that someone makes a mistake or there's an unexpected failure. We can never guarantee to a hundred percent absolute safety other than just not doing it. But I think I'm very proud of the history of the United States. I mean, we've dealt with much more dangerous technology like nuclear energy and kept that safe too. We have nuclear weapons and we keep those safe. So we have methods and procedures that really balance these two things very, very successfully. You've mentioned a lot of great autonomous vehicle companies that are taking sort of the level four, level five approach, they jump into full autonomy with a safety driver and take that kind of approach, and also through simulation and so on. There's also the approach that Tesla Autopilot is doing, which is kind of incrementally taking a level two vehicle and using machine learning and learning from the driving of human beings and trying to creep up, trying to incrementally improve the system until it's able to achieve level four autonomy. So perfect autonomy in certain kind of geographical regions. What are your thoughts on these contrasting approaches? 
Well, so first of all, I'm a very proud Tesla owner and I literally use the Autopilot every day and it literally has kept me safe. It is a beautiful technology specifically for highway driving when I'm slightly tired, because then it turns me into a much safer driver. And I'm 100% confident that's the case. In terms of the right approach, I think the biggest change I've seen since I was on the Waymo team is this thing called deep learning. I think deep learning was not a hot topic when I started Waymo, or Google self driving cars. It was there, in fact, we started Google Brain at the same time in Google X. So I invested in deep learning, but people didn't talk about it, it wasn't a hot topic. And now it is, there's a shift of emphasis from a more geometric perspective, where you use geometric sensors that give you a full 3D view and you do geometric reasoning about, oh, this box over here might be a car, towards a more human like, machine learning perspective: oh, let's just learn about it, this looks like the thing I've seen 10,000 times before, so maybe it's the same thing. And that has really put, I think, all these approaches on steroids. At Udacity, we teach a course in self driving cars. In fact, I think we've graduated over 20,000 or so people on self driving car skills. So every self driving car team in the world now uses our engineers. And in this course, the very first homework assignment is to do lane finding on images. And lane finding in images, for the layman, what this means is you put a camera into your car, or you open your eyes, and you would know where the lane is. So you can stay inside the lane with your car. Humans can do this super easily. You just look and you know where the lane is, just intuitively. For machines, for a long time, it was super hard, because people would write these kinds of crazy rules. If there are, like, white lane markers, and here's what white really means, and this is not quite white enough, so, oh, it's not white. Or maybe the sun is shining, so when the sun shines, this is white, and this is a straight line, I mean, it's not quite a straight line because the road is curved. And do we know that there's really six feet between lane markings or not, or 12 feet, whatever it is. And now what the students are doing, they would take machine learning. So instead of, like, writing these crazy rules for the lane marker, they'll say, hey, let's take an hour of driving and label it by hand and tell the vehicle, this is actually the lane. And then these are the examples, and we have the machine find its own rules for what lane markings are. And within 24 hours, now every student that's never done any programming before in this space can write a perfect lane finder, as good as the best commercial lane finders. And that's completely amazing to me. We've seen progress using machine learning that completely dwarfs anything that I saw 10 years ago. Yeah, and just as a side note, the self driving car nanodegree, the fact that you launched that many years ago now, maybe four years ago, three years ago, is incredible. That's a great example of system level thinking, sort of just taking an entire course that teaches you how to solve the entire problem. I definitely recommend it to people. It's become super popular and it's become actually incredibly high quality, really, with Mercedes and various other companies in that space. And we find that engineers from Tesla and Waymo are taking it today. 
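The lane finding exercise described above, hand-label some driving footage and let the model infer its own rules for what a lane marking looks like, is in spirit a small supervised segmentation problem. Here is a minimal sketch of that idea in Python with PyTorch; the tiny network, tensor shapes, and random stand-in data are illustrative assumptions only, not the actual Udacity assignment.

```python
import torch
import torch.nn as nn

class TinyLaneNet(nn.Module):
    """Toy per-pixel classifier: is this pixel part of a lane marking?"""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one logit per pixel
        )

    def forward(self, x):
        return self.net(x)

model = TinyLaneNet()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Camera frames and hand-labeled lane masks; random tensors stand in for a real labeled hour of driving.
images = torch.rand(8, 3, 64, 64)                  # (N, channels, H, W)
masks = (torch.rand(8, 1, 64, 64) > 0.9).float()   # 1 where a pixel was labeled as lane

for step in range(10):  # a few steps, just to show the training loop
    logits = model(images)
    loss = loss_fn(logits, masks)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the sketch is the workflow, not the architecture: the "rules" for what counts as a lane marking live in the learned weights rather than in hand-written if-statements about whiteness and line geometry.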
The insight was two things: one is existing universities will be very slow to move because they're departmentalized and there's no department for self driving cars. So between MechE and double E and computer science, getting those folks together into one room is really, really hard. And every professor listening here will know, they'll probably agree with that. And secondly, even if all the great universities just did this, which none so far has, and developed a curriculum in this field, it is just a few thousand students that can partake, because all the great universities are super selective. So how about people in India? How about people in China or in the Middle East or Indonesia or Africa? Why should they be excluded from the skill of building self driving cars? Are they any dumber than we are? Are we any less privileged? And the answer is we should just give everybody the skill to build a self driving car. Because if we do this, then we have like a thousand self driving car startups. And if 10% succeed, that's like a hundred, that means a hundred countries now will have self driving cars and be safer. It's kind of interesting to imagine, impossible to quantify, but, you know, over a period of several decades, the impact that a single course like that has, like a ripple effect on society. I just recently talked to Andrew, who was a creator of the Cosmos show. It's interesting to think about how many scientists that show launched. And so it's really, in terms of impact, I can't imagine a better course than the self driving car course. That's, you know, there's other more specific disciplines like deep learning and so on that Udacity is also teaching, but self driving cars, it's a really, really interesting course. And then it came at the right moment. It came at a time when there were a bunch of acqui-hires. An acqui-hire is an acquisition of a company, not for its technology or its products or business, but for its people. So an acqui-hire means maybe that there's a company of 70 people, they have no product yet, but they're super smart people, and someone pays a certain amount of money for them. So I took acqui-hires like GM Cruise and Uber and others, and did the math and said, hey, how many people are there and how much money was paid? And as a lower bound, I estimated the value of a self driving car engineer in these acquisitions to be at least $10 million, right? So think about this, you get yourself a skill and you team up and build a company and your worth now is $10 million. I mean, that's kind of cool. I mean, what other thing could you do in life to be worth $10 million within a year? Yeah, amazing. But to come back for a moment onto deep learning and its application in autonomous vehicles, what are your thoughts on Elon Musk's statement, provocative statement perhaps, that lidar is a crutch? So this geometric way of thinking about the world may be holding us back, and what we should instead be doing in this robotics space, in this particular space of autonomous vehicles, is using the camera as a primary sensor and using computer vision and machine learning as the primary way to... Look, I have two comments. I think first of all, we all know that people can drive cars without lidars in their heads, because we only have eyes and we mostly just use eyes for driving. Maybe we use some other perception about our bodies, accelerations, occasionally our ears, certainly not our noses. So the existence proof is there, that eyes must be sufficient. 
In fact, we could even drive a car if someone put a camera out and gave us the camera image with no latency, we would be able to drive a car that way the same way. So a camera is also sufficient. Secondly, I really love the idea that in the Western world, we have many, many different people trying different hypotheses. It's almost like an anthill. If an anthill tries to forage for food, you can sit there as two ants and agree on what the perfect path is, and then every single ant marches to where the most likely location of food is, or you can just spread out. And I promise you the spread out solution will be better, because if the discussing, philosophical, intellectual ants get it wrong and they're all moving in the wrong direction, they're going to waste a day and then they're going to discuss again for another week. Whereas if all these ants go in a random direction, someone's going to succeed, and they're going to come back and claim victory and get the Nobel prize, or whatever the ant equivalent is, and then they all march in the same direction. And that's what's great about society. That's what's great about Western society. We're not plan based, we're not centrally based. We don't have a Soviet Union style central government that tells us where to forage. We just forage. We start a C corp, we get investor money, we go out and try it out. And who knows who's going to win. I like it. When you look at the long term vision of autonomous vehicles, do you see machine learning as fundamentally being able to solve most of the problems? So learning from experience. I'd say we should be very clear about what machine learning is and is not, and I think there's a lot of confusion. What it is today is a technology that can go through large databases of repetitive patterns and find those patterns. As an example, we did a study at Stanford two years ago where we applied machine learning to detecting skin cancer in images. And we harvested, or built, a data set of 129,000 skin photos that had all been biopsied for what the actual situation was. And those included melanomas and carcinomas, and also included rashes and other skin conditions, lesions. And then we had a network find those patterns, and it was by and large able to then detect skin cancer with an iPhone as accurately as the best board certified Stanford level dermatologists. We proved that. Now, this thing was great at this one thing, finding skin cancer, but it couldn't drive a car. So the difference to human intelligence is we do all these many, many things and we can often learn from a very small data set of experiences, whereas machines still need very large data sets of things that are very repetitive. Now, that's still super impactful, because almost everything we do is repetitive. So that's gonna really transform human labor, but it's not this almighty general intelligence. We're really far away from a system that will exhibit general intelligence. To that end, I actually take issue with the naming a little bit, because artificial intelligence, if you believe Hollywood, is immediately mixed into the idea of human suppression and machine superiority. I don't think that we're gonna see this in my lifetime. I don't think human suppression is a good idea. I don't see it coming. I don't see the technology being there. What I see instead is a very pointed, focused pattern recognition technology that's able to extract patterns from large data sets, and in doing so, it can be super impactful. Super impactful.
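The skin cancer result described above came from fine tuning a deep convolutional network on biopsy labeled photos. Below is a rough sketch of that kind of transfer learning workflow, not the actual Stanford pipeline; the dataset path, model choice, and hyperparameters are stand-in assumptions.

```python
# A rough sketch of transfer learning for skin-lesion classification, in the spirit
# of the study described above. Not the actual Stanford code; dataset path, model
# choice, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Biopsy-labeled photos arranged as skin_lesion_photos/<class_name>/<image>.jpg
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("skin_lesion_photos/", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a network pretrained on everyday images, then retrain the final layer
# to output lesion classes (melanoma, carcinoma, benign nevus, ...).
# (Older torchvision versions use pretrained=True instead of the weights argument.)
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The published study used an Inception style network and roughly 129,000 labeled images, but the recipe is the same: start from a network pretrained on everyday photos and retrain it on labeled examples of the narrow task.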
Let's take the impact of artificial intelligence on human work. We all know that it takes something like 10,000 hours to become an expert. If you're gonna be a doctor or a lawyer, or even a really good driver, it takes a certain amount of time to become an expert. Machines now are able, and have been shown to be able, to observe people become experts, observe experts, and then extract those rules from experts in some interesting way. They could go from law to sales to driving cars to diagnosing cancer. And then give that capability to people who are completely new in their job. We can now do that, and it's been done commercially in many, many instantiations. So that means we can use machine learning to make people experts on the very first day of their work. Like think about the impact. If your doctor is still in their first 10,000 hours, you have a doctor who is not quite an expert yet. Who would not want a doctor who is the world's best expert? And now we can leverage machines to really eradicate the error in decision making, the error from lack of expertise, for human doctors. That could save your life. If we can linger on that for a little bit, in which way do you hope machines in the medical field could help assist doctors? You mentioned this sort of accelerating the learning curve, or people, if they start a job, or are in their first 10,000 hours, can be assisted by machines. How do you envision that assistance looking? So we built this app for an iPhone that can detect and classify and diagnose skin cancer, and we proved two years ago that it does pretty much as well as or better than the best human doctors. So let me tell you a story. There's a friend of mine, let's call him Ben. Ben is a very famous venture capitalist. He goes to his doctor and the doctor looks at a mole and says, hey, that mole is probably harmless. And for some very funny reason, he pulls out that phone with our app. He's a collaborator in our study. And the app says, no, no, no, no, this is a melanoma. And for background, skin cancer is the most common cancer in this country. Melanomas can go from stage zero to stage four within less than a year. Stage zero means you can basically cut it out yourself with a kitchen knife and be safe, and stage four means your chances of living five more years are less than 20%. So it's a very serious, serious, serious condition. So this doctor who took out the iPhone looked at the iPhone and was a little bit puzzled, and he said, I mean, just to be safe, let's cut it out and biopsy it. That's the technical term for let's get an in depth diagnostic that is more than just looking at it. And it came back as cancerous, as a melanoma, and it was then removed. And my friend Ben, I was hiking with him and we were talking about AI, and I told him I do this work on skin cancer, and he said, oh, funny, my doctor just had an iPhone that found my cancer. So I was completely intrigued. I didn't even know about this. So here's a person, I mean, this is a real human life, right? Like who doesn't know somebody who has been affected by cancer? Cancer is the number two cause of death. Cancer is the kind of disease that is mean in the following way. Most cancers can actually be cured relatively easily if we catch them early. And the reason why we don't tend to catch them early is because they have no symptoms. Like your very first symptom of a gallbladder cancer or a pancreas cancer might be a headache.
And when you finally go to your doctor because of these headaches or your back pain and you're being imaged, it's usually stage four plus. And that's the time when the curing chances might have dropped to a single digit percentage. So if we could leverage AI to inspect your body on a regular basis without even a doctor in the room, maybe when you take a shower or what have you, I know this sounds creepy, but then we might be able to save millions and millions of lives. You've mentioned there's a concern that people have about near term impacts of AI in terms of job loss. So you've mentioned being able to assist doctors, being able to assist people in their jobs. Do you have a worry of people losing their jobs or the economy being affected by the improvements in AI? Yeah, anybody concerned about job losses, please come to Udacity.com. We teach contemporary tech skills and we have a kind of implicit job promise. When we measure, we place way over 50% of our graduates in new jobs, and they're very satisfied about it. And it costs almost nothing, costs like $1,500 max or something like that. And so there's a cool new program you agreed on with the U.S. government, where you will help give scholarships that educate people in this kind of situation. Yeah, we're working with the U.S. government on the idea of basically rebuilding the American dream. So Udacity has just dedicated 100,000 scholarships for citizens of America, for various levels of courses that eventually will get you a job. And those courses are all somewhat related to the tech sector, because the tech sector is kind of the hottest sector right now. And they range from entry level digital marketing to very advanced self driving car engineering. And we're doing this with the White House because we think it's bipartisan. It's an issue that, if you wanna really make America great, being able to be a part of the solution and live the American dream requires us to be proactive about our education and our skillset. It's just the way it is today, and it's always been this way. And we always had this American dream to send our kids to college, and now the American dream has to be to send ourselves to college. We can do this very, very efficiently. We can squeeze it in in the evenings and do it online. So at all ages. All ages. Our learners go from age 11 to age 80. I just traveled in Germany and the guy in the train compartment next to me was one of my students. It's like, wow, that's amazing. Think about impact. We've become the educator of choice for, I believe, officially five or six countries now, mostly in the Middle East, like Saudi Arabia, and in Egypt. In Egypt, we just had a cohort graduate where we had 1,100 high school students that went through programming skills, proficient at the level of a computer science undergrad. And we had a 95% graduation rate, even though everything's online. It's kind of tough, but we're trying to figure out how to make this effective. The vision is very, very simple. The vision is education ought to be a basic human right. It cannot be locked up behind ivory tower walls, only for the rich people, for the parents who might bribe their way into the system, and only for young people, and only for people from the right demographics and the right geography, and possibly even the right race. It has to be opened up to everybody.
If we are truthful to the human mission, if we are truthful to our values, we're gonna open up education to everybody in the world. So Udacity's pledge of 100,000 scholarships, I think is the biggest pledge of scholarships ever in terms of numbers. And we're working, as I said, with the White House and with very accomplished CEOs like Tim Cook from Apple and others to really bring education to everywhere in the world. Not to ask you to pick the favorite of your children, but at this point. Oh, that's Jasper. I only have one that I know of. Okay, good. In this particular moment, what nano degree, what set of courses are you most excited about at Udacity or is that too impossible to pick? I've been super excited about something we haven't launched yet in the building, which is when we talk to our partner companies, we have now a very strong footing in the enterprise world. And also to our students, we've kind of always focused on these hard skills, like the programming skills or math skills or building skills or design skills. And a very common ask is soft skills. Like how do you behave in your work? How do you develop empathy? How do you work on a team? What are the very basics of management? How do you do time management? How do you advance your career in the context of a broader community? And that's something that we haven't done very well at Udacity and I would say most universities are doing very poorly as well because we are so obsessed with individual test scores and pays a little attention to teamwork in education. So that's something I see us moving into as a company because I'm excited about this. And I think, look, we can teach people tech skills and they're gonna be great. But if you teach people empathy, that's gonna have the same impact. Maybe harder than self driving cars, but. I don't think so. I think the rules are really simple. You just have to, you have to want to engage. It's, we literally went in school and in K through 12, we teach kids like get the highest math score. And if you are a rational human being, you might evolve from this education say, having the best math score and the best English scores make me the best leader. And it turns out not to be that case. It's actually really wrong because making the, first of all, in terms of math scores, I think it's perfectly fine to hire somebody with great math skills. You don't have to do it yourself. You can hire someone with good empathy for you. That's much harder, but you can always hire someone with great math skills. But we live in an affluent world where we constantly deal with other people. And that's a beauty. It's not a nuisance. It's a beauty. So if we somehow develop that muscle that we can do that well and empower others in the workplace, I think we're gonna be super successful. And I know many fellow robot assistant computer scientists that I will insist to take this course. Not to be named here. Not to be named. Many, many years ago, 1903, the Wright brothers flew in Kitty Hawk for the first time. And you've launched a company of the same name, Kitty Hawk, with the dream of building flying cars, eVTOLs. So at the big picture, what are the big challenges of making this thing that actually have inspired generations of people about what the future looks like? What does it take? What are the biggest challenges? So flying cars has always been a dream. Every boy, every girl wants to fly. Let's be honest. Yes. And let's go back in our history of your dreaming of flying. 
I think honestly, my single most remembered childhood dream has been a dream where I was sitting on a pillow and I could fly. I was like five years old. I remember like maybe three dreams of my childhood, but that's the one I remember most vividly. And then Peter Thiel famously said, they promised us flying cars and they gave us 140 characters, pointing at Twitter, which at the time limited message size to 140 characters. So we're coming back now to really go for this super impactful stuff like flying cars. And to be precise, they're not really cars. They don't have wheels. They're actually much closer to a helicopter than anything else. They take off vertically and they fly horizontally, but they have important differences. One difference is that they are much quieter. We just released a vehicle called Project Heaviside that can fly over you as low as a helicopter and you basically can't hear it. It's like 38 decibels. If you were inside a library, you might be able to hear it, but anywhere outdoors, your ambient noise is higher. Secondly, they're much more affordable. They're much more affordable than helicopters. And the reason is helicopters are expensive for many reasons. There are lots of single points of failure in a helicopter. There's a bolt between the blades that's called the Jesus bolt, and the reason why it's called the Jesus bolt is that if this bolt breaks, you will die. There is no second chance in helicopter flight. Whereas we have this distributed mechanism. When you go from gasoline to electric, you can now have many, many, many small motors as opposed to one big motor. And that means if you lose one of those motors, it's not a big deal. Heaviside has eight of those. If it loses one of those eight motors, so there are seven left, it can take off just like before and land just like before. We are now also moving into a technology that doesn't require a commercial pilot, because at some level, flight is actually easier than ground transportation. In self driving cars, the world is full of children and bicycles and other cars and mailboxes and curbs and shrubs and what have you. All these things you have to avoid. When you go above the buildings and tree lines, there's nothing there. I mean, you can do the test right now, look outside and count the number of things you see flying. I'd be shocked if you could see more than two things. It's probably just zero. In the Bay Area, the most I've ever seen was six. And maybe it's 15 or 20, but not 10,000. So the sky is very ample and very empty and very free. So the vision is, can we build a socially acceptable mass transit solution for daily transportation that is affordable? And we have an existence proof. Heaviside can fly 100 miles in range with still 30% electric reserves. It can fly up to like 180 miles an hour. We know that that solution at scale would make your ground transportation 10 times as fast as a car, based on U.S. Census data, which means you would take your 300 hours of yearly commute down to 30 hours and get 270 hours back. I mean, who doesn't hate traffic? Give me the person that doesn't hate traffic. I hate traffic. Every time I'm in traffic, I hate it. And if we could free the world from traffic, we have the technology. We can free the world from traffic. We have the technology. It's there. We have an existence proof. It's not a technological problem anymore.
Do you think there is a future where tens of thousands, maybe hundreds of thousands, of both delivery drones and flying cars of this kind, eVTOLs, fill the sky? I absolutely believe this. And obviously societal acceptance is a major question, and of course safety is too. I believe in safety we're gonna exceed ground transportation safety, as has already happened for aviation, commercial aviation. And in terms of acceptance, I think one of the key things is noise. That's why we are focusing relentlessly on noise, and we've built perhaps the quietest electric vehicle ever built. The nice thing about the sky is it's three dimensional. Any mathematician will immediately recognize the difference between the 1D of a regular highway and the 3D of the sky. But to make it clear for the layman, say you wanna make 100 vertical lanes of Highway 101 in San Francisco, because you believe building 100 vertical lanes is the right solution. Imagine how much it would cost to stack 100 vertical lanes physically onto 101. That would be prohibitive. That would consume the world's GDP for an entire year, just for one highway. It's amazingly expensive. In the sky, it would just be a recompilation of a piece of software, because all these lanes are virtual. That means any vehicle that is in conflict with another vehicle would just go to a different altitude, and then the conflict is gone. And if you don't believe this, that's exactly how commercial aviation works. When you fly from New York to San Francisco and another plane flies from San Francisco to New York, they are at different altitudes, so they don't hit each other. It's a solved problem for the jet space, and it will be a solved problem for the urban space. There are companies like Google Wing and Amazon working on very innovative solutions for how we do airspace management. They use exactly the same principles as we use today to route today's jets. There's nothing hard about this. Do you envision autonomy being a key part of it, so that the flying vehicles are either semi autonomous or fully autonomous? 100% autonomous. You don't want idiots like me flying in the sky, I promise you. And if you have 10,000, watch the movie The Fifth Element to get a feel for what would happen if it's not autonomous. And a centralized, that's a really interesting idea, of a centralized sort of management system for lanes and so on. So actually just being able to have something similar to what we have in current commercial aviation, but scaled up to many, many more vehicles. That's a really interesting optimization problem. It is mathematically very, very straightforward. Like the gap we leave between jets is gargantuan, and part of the reason is there aren't that many jets, so it just feels like a good solution. Today, when you get vectored by air traffic control, someone talks to you, right? So any ATC controller might have up to maybe 20 planes on the same frequency, and then they talk to you, and you have to talk back. And it feels right, because there aren't more than 20 planes around anyhow, so you can talk to everybody. But if there are 20,000 things around, you can't talk to everybody anymore. So we have to do something that's digital, like text messaging. We do have solutions. We have what, four or five billion smartphones in the world now, right? And they're all connected. And somehow we solved the scale problem for smartphones. We know where they all are. They can talk to somebody, and they're very reliable. They're amazingly reliable.
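As a toy illustration of the virtual lanes idea above, here is a sketch in which altitude layers are assigned from a vehicle's heading, so that crossing traffic is vertically separated by construction, loosely analogous to the hemispheric rule that already separates eastbound and westbound jets. The altitudes, spacing, and layer count are made up for illustration and are not how Kitty Hawk, Wing, or air traffic control actually allocate airspace.

```python
# Toy sketch of altitude-based deconfliction: assign each vehicle a virtual
# "lane" (altitude layer) from its heading, so crossing traffic is vertically
# separated by construction. All numbers are illustrative, not real airspace rules.
def assign_altitude_layer(heading_deg: float,
                          base_altitude_ft: float = 500.0,
                          layer_spacing_ft: float = 100.0,
                          num_layers: int = 8) -> float:
    """Map a heading (0-360 degrees) to one of num_layers altitude layers."""
    layer = int(heading_deg % 360.0 // (360.0 / num_layers))
    return base_altitude_ft + layer * layer_spacing_ft

# Two vehicles on crossing paths end up in different layers automatically.
print(assign_altitude_layer(90.0))   # eastbound -> 700.0 ft
print(assign_altitude_layer(270.0))  # westbound -> 1100.0 ft
```

Because the lanes exist only in software, changing the separation scheme really is "a recompilation of a piece of software" rather than new physical infrastructure.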
We could use the same system, the same scale, for air traffic control. So instead of me as a pilot talking to a human being and in the middle of the conversation receiving a new frequency, like how ancient is that, we could digitize this stuff and digitally transmit the right flight coordinates. And that solution will automatically scale to 10,000 vehicles. We talked about empathy a little bit. Do you think we will one day build an AI system that a human being can love and that loves that human back, like in the movie Her? Look, I'm a pragmatist. For me, AI is a tool. It's like a shovel. And the ethics of using the shovel are always with us, the people. And it has to be this way. In terms of emotions, I would hate to come into my kitchen and see that my refrigerator spoiled all my food, then have it explained to me that it fell in love with the dishwasher, and because I wasn't as nice as the dishwasher, it neglected me. That would just be a bad experience and it would be a bad product. I would probably not recommend this refrigerator to my friends. And that's where I draw the line. To me, technology has to be reliable and has to be predictable. I want my car to work. I don't want to fall in love with my car. I just want it to work. I want it to complement me, not to replace me. I have very unique human properties, and I want the machines to make me, to turn me into a superhuman. Like I'm already a superhuman today, thanks to the machines that surround me. And I'll give you examples. I can run across the Atlantic at near the speed of sound at 36,000 feet today. That's kind of amazing. My voice now carries me all the way to Australia, using a smartphone, today. And it's not the speed of sound, which would take hours. It's the speed of light. My voice travels at the speed of light. How cool is that? That makes me superhuman. I would even argue my flushing toilet makes me superhuman. Just think of the time before flushing toilets. And maybe you have a very old person in your family that you can ask about this, or take a trip to rural India to experience it. It makes me superhuman. So to me, what technology does is it complements me. It makes me stronger. Therefore, words like love and compassion, I have very little interest in them for machines. I have interest in people. You don't think, first of all, beautifully put, beautifully argued, but do you think love has use in our tools? Compassion? I think love is a beautiful human concept. And if you think of what love really is, love is a means to convey safety, to convey trust. I think trust has a huge need in technology as well, not just people. We want to trust our technology the same way, in a similar way that we trust people. In human interaction, standards have emerged, and feelings, emotions have emerged, maybe genetically, maybe biologically, that are able to convey a sense of trust, a sense of safety, a sense of passion, of love, of dedication, that makes up the human fabric. And I'm a big sucker for love. I want to be loved. I want to be trusted. I want to be admired. All these wonderful things. And because all of us have this beautiful system, I wouldn't just blindly copy it to the machines. Here's why. When you look at, say, transportation, you could have observed that up to the end of the 19th century, almost all transportation used any number of legs, from one leg to two legs to a thousand legs. And you could have concluded that is the right way to move about the environment.
Maybe with the exception of birds, who use flapping wings. In fact, there were many people in early aviation who strapped wings to their arms and jumped off cliffs. Most of them didn't survive. The interesting thing is that the technology solutions are very different. Like in technology, it's really easy to build a wheel. In biology, it's super hard to build a wheel. There are very few perpetually rotating things in biology, and they're usually down at the level of cells and such. In engineering, we can build wheels, and those wheels gave rise to cars. Similar wheels gave rise to aviation. There's nothing that flies that doesn't have something that rotates, like a jet engine or helicopter blades. So the solutions have used very different physical laws than nature, and that's great. So for me, being too focused on, oh, this is how nature does it, let's just replicate it: if you really believed that the solution to the agricultural revolution was a humanoid robot, you would still be waiting today. Again, beautifully put. You said that you don't take yourself too seriously. Did I say that? You want me to say that? Maybe. You're not taking me seriously. I'm not, that's right. Good, you're right, I don't wanna. I just made that up. But you have a humor and a lightness about life that I think is beautiful and inspiring to a lot of people. Where does that come from? The smile, the humor, the lightness amidst all the chaos of the hard work that you're in, where does that come from? I just love my life. I love the people around me. I'm just so glad to be alive. Like I'm, what, 52, hard to believe. People say 52 is the new 51, so now I feel better. But looking around the world, just go back 200, 300 years. Humanity is, what, 300,000 years old? But for the first 300,000 years minus the last 100, our life expectancy would have been plus or minus 30 years roughly, give or take. So I would be long dead now. That makes me just enjoy every single day of my life, because I don't deserve this. Why am I born today, when so many of my ancestors died horrible deaths, like famines, like the massive wars that ravaged Europe for the last 1,000 years and then mysteriously disappeared after World War II, when the Americans and the Allies did something amazing to my country that didn't deserve it, the country of Germany. This is so amazing. And when you're alive and feel this every day, then it's just so amazing what we can accomplish, what we can do. We live in a world that is so incredibly, vastly changing every day. Almost everything that we cherish, from your smartphone to your flushing toilet, to all these basic inventions, the new clothes you're wearing, your watch, your plane, penicillin, anesthesia for surgery, all have been invented in the last 150 years. So in the last 150 years, something magical happened. And I would trace it back to Gutenberg and the printing press, which was able to disseminate information more efficiently than ever before, so that all of a sudden we were able to invent modern agriculture and nitrogen fertilization, which made agriculture so much more potent that we didn't have to work on the farms anymore, and we could start reading and writing, and we could become all these wonderful things we are today, from airline pilot to massage therapist to software engineer. It's just amazing. Living in this time is such a blessing. We should sometimes really think about this, right?
Steven Pinker, who is a very famous author and philosopher whom I really adore, wrote a great book called Enlightenment Now. And that's maybe the one book I would recommend. And he asks the question, if there was only a single article written about the 20th century, only one article, what would it be? What's the most important innovation, the most important thing that happened? And he would say this article would credit a guy named Karl Bosch. And I challenge anybody, have you ever heard of the name Karl Bosch? I hadn't, okay. There's a Bosch Corporation in Germany, but it's not associated with Karl Bosch. So I looked it up. Karl Bosch invented nitrogen fertilization, and in doing so, together with the older invention of irrigation, was able to increase the yield per unit of agricultural land by a factor of 26. So a 2,500% increase in the fertility of land. And that, so Steven Pinker argues, has saved over 2 billion lives today. 2 billion people who would be dead if this man hadn't done what he had done, okay? Think about that impact and what that means to society. That's the way I look at the world. I mean, it's so amazing to be alive and to be part of this. And I'm so glad I lived after Karl Bosch and not before. I don't think there's a better way to end this, Sebastian. It's an honor to talk to you, to have had the chance to learn from you. Thank you so much for talking to me. Thanks for coming out. It's been a real pleasure. Thank you for listening to this conversation with Sebastian Thrun. And thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast, you'll get $10, and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or connect with me on Twitter. And now, let me leave you with some words of wisdom from Sebastian Thrun. It's important to celebrate your failures as much as your successes. If you celebrate your failures really well, if you say, wow, I failed, I tried, I was wrong, but I learned something, then you realize you have no fear. And when your fear goes away, you can move the world. Thank you for listening and hope to see you next time.
Sebastian Thrun: Flying Cars, Autonomous Vehicles, and Education | Lex Fridman Podcast #59
The following is a conversation with S. James Gates, Jr. He's a theoretical physicist and professor at Brown University, working on supersymmetry, supergravity, and superstring theory. He served on former President Obama's Council of Advisors on Science and Technology, and he's now the coauthor of a new book titled Proving Einstein Right, about the scientists who set out to prove Einstein's theory of relativity. You may have noticed that I've been speaking with not just computer scientists, but philosophers, mathematicians, physicists, economists, and soon, much more. To me, AI is much bigger than deep learning, bigger than computing. It is our civilization's journey into understanding the human mind and creating echoes of it in the machine. That journey includes, of course, the world of theoretical physics and its practice of first principles mathematical thinking and exploring the fundamental nature of our reality. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, follow on Spotify, support on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F.R.I.D.M.A.N. If you leave a review on Apple Podcasts or YouTube or Twitter, consider mentioning ideas, people, topics you find interesting. It helps guide the future of this podcast. But in general, I just love comments that are full of kindness and thoughtfulness in them. This podcast is a side project for me, but I still put a lot of effort into it. So the positive words of support from an amazing community, from you, really help. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. I provide timestamps for the start of the conversations you may have noticed that you can skip to, but it helps if you listen to the ad and support this podcast by trying out the product or service being advertised. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Broker services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called First, best known for their first robotics and Lego competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to First, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with S. James Gates Jr. You tell a story when you were eight. You had a profound realization that the stars in the sky are actually places that we could travel to one day. Do you think human beings will ever venture outside our solar system? Wow, the question of whether humanity gets outside of the solar system. 
It's going to be a challenge, and as long as the laws of physics that we have today are accurate and valid, it's going to be extraordinarily difficult. I'm a science fiction fan, as you probably know, so I love to dream of starships and traveling to other solar systems, but the barriers are just formidable. If we just kind of venture a little bit into science fiction, do you think the spaceships, if we are successful, that take us outside the solar system, will look like the ones we have today, or are fundamental breakthroughs necessary? In order to have genuine starships, probably some really radical views about the way the universe works are going to have to take place in our science. We could, with our current technology, think about constructing multigenerational starships where the people who get on them are not the people who get off at the other end. But even if we do that, the formidable problem is actually our bodies, which doesn't seem to be conscious for a lot of people. Even getting to Mars is going to present this challenge, because we live in this wonderful home, has a protective magnetic magnetosphere around it, and so we're shielded from cosmic radiation. Once you leave this shield, there are some estimates that, for example, if you sent someone to Mars, with our technology, probably about two years out there without the shield, they're going to be bombarded. That means radiation, probably means cancer. So that's one of the most formidable challenges, even if we could get over the technology. Do you think, so Mars is a harsh place. Elon Musk, SpaceX and other folks, NASA are really pushing to put a human being on Mars. Do you think, again, let's forgive me for lingering in science fiction land for a little bit. Do you think one day we may be able to colonize Mars? First, do you think we'll put a human on Mars, and then do you think we'll put many humans on Mars? So first of all, I am extraordinarily convinced we will not put a human on Mars by 2030, which is a date that you often hear in the public debate. What's the challenge there? What do you think? So there are a couple of ways that I could slice this, but the one that I think is simplest for people to understand involves money. So you look at how we got to the moon in the 1960s. It was about 10 year duration between the challenge that President Kennedy laid out and our successfully landing a moon. I was actually here at MIT when that first moon landing occurred, so I remember watching it on TV. But how did we get there? Well, we had this extraordinarily technical agency of the United States government, NASA. It consumed about 5% of the country's economic output. And so you say 5% of the economic output over about a 10 year period gets us 250,000 miles in space. Mars is about a hundred times farther. So you have at least a hundred times the challenge and we're spending about one tenth of the funds that we spent then as a government. So my claim is that it's at least a thousand times harder for me to imagine us getting to Mars by 2030. And he had that part that you mentioned in the speech that I just have to throw in there of JFK, of we do these things not because they're easy, but because they're hard. That's such a beautiful line that I would love to hear a modern president say about a scientific endeavor. Well, one day we live and hope that such a president will arise for our nation. 
But even if, like I said, even if you fix the technical problems, it's the biological engineering that I worry most about. However, I'm gonna go out on a limb here. I think that by 2090 or so, or 2100, or should I say 2120, I suspect we're gonna have a human on Mars. Wow, so you think that many years out. First, a few tangents. You said bioengineering is a challenge. What's the challenge there? So as I said, the real problem with interstellar travel, aside from the technology challenges, the real problem is radiation. And how do you engineer either an environment or a body, because we see rapid advances going on in bioengineering, how do you engineer either a ship or a body so that something, some person that's recognizably human, will survive the rigors of interplanetary space travel? It's much more difficult than most people seem to take into account. So if we could linger on the 2090, 2100, 2120 kind of thinking, and let's linger on money. Okay. So Elon Musk and Jeff Bezos are pushing the cost, trying to push the cost down. So do you have hope, as a brilliant big picture scientist, do you think a business entrepreneur can take science and make it cheaper and get it out there faster? So bending the cost curve, you'll notice, has been an anchor here. This is the simplest way for me to discuss with people what the challenge is. So yes, bending the cost curve is certainly critical if we're going to be successful. Now, you asked about the endeavors that are out there now, sponsored by two very prominent American citizens, Jeff Bezos and Elon Musk. I'm disappointed, actually, in what I see in terms of the routes that are being pursued. So let me give you one example there, and this one is going to be a little bit more technical. If you look at the kinds of rockets that both these organizations are creating, yes, it's wonderful, reusable technology. To see a rocket go up and land on its fins just like it did in science fiction movies when I was a kid, that's astounding. But the real problem is those rockets, the technology that we're using now, is not really that different from what was used to go to the moon. And there are alternatives, it turns out. A traditional rocket engine, if you look at it, looks like a bell, right? And then the flame comes out of the bottom. But there is a kind of engine called a flare engine, which, when you look at it, looks like an exhaust pipe on a fancy car, long and elongated. And it's a type of rocket engine where there's been preliminary testing, and we know it works. And it's actually much more economical, because what it does is allow you to vary the amount of thrust as you go up, in a way that you cannot do with one of these bell shaped engines. So you would think that an entrepreneur might try to have the breakthrough to use flare nozzles, as they're called, as a way to bend the cost curve. Because, as we keep coming back to, that's gonna be a big factor. But that's not happening. In fact, what we see is what I think of as incremental change in terms of our technology. So I'm not really very encouraged by what I personally see. So incremental change won't bend the cost curve. I don't see it. Just to linger on the sci fi for one more question. Sure. Do you think we're alone in the universe? Are we the only intelligent form of life? So there is a quote by Carl Sagan, which I really love when I hear this question.
And I recall the quote, and it goes something like, if we're the only conscious life in the universe, it's a terrible waste of space, because the universe is an incredibly big place. And when Carl made that statement, we didn't know about the profusion of planets that are out there. In the last decade, we've discovered over a thousand planets, and a substantial number of those planets are Earth like in terms of being in the Goldilocks zone, as it's called. So in my mind, it's practically inconceivable that we're the only conscious form of life in the universe. But that doesn't mean they've come to visit us. Do you think they would look, do you think we'll recognize alien life if we saw it? Do you think it'd look anything like the carbon based, the biological system we have on Earth today? It would depend on the native environment in which that life arose. If that environment was sufficiently like our environment, there's a principle in biology and nature called convergence, which is that even if you have two biological systems that are totally separated from each other, if they face similar conditions, nature tends to converge on solutions. And so there might be similarities if this alien life form was born in a place that's kind of like this place. Physics appears to be quite similar, the laws of physics, across the entirety of the universe. Do you think weirder things than we see on Earth can spring up out of the same kinds of laws of physics? From the laws of physics, I would say yes. First of all, if you look at carbon based life, why are we carbon based? Well, it turns out it's because of the way that carbon interacts with other elements, which in fact is a reflection of the electronic structure of the carbon atom. So you can look down the table of elements and say, well, gee, do we see similar elements? The answer is yes. And one that one often hears about in science fiction is silicon. So maybe there's a silicon based life form out there, if the conditions are right. But I think it's presumptuous of us to think that we are the template by which all life has to appear. Before we dive into beautiful details, let me ask a big question. What to you is the most beautiful idea, maybe the most surprising, mysterious idea in physics? The most surprising idea to me is that we can actually do physics. The universe did not have to be constructed in such a way, and we did not have to be put together in such a way, that with our limited intellectual capacity we can, with our mind's eye, delve incredibly deeply into the structure of the universe. That to me is pretty close to a miracle. So there are equations, relatively simple ones, that can describe the fundamental functioning of things, that can describe everything about our reality. Can you imagine universes where everything is a lot more complicated? Do you think there's something inherent about universes having simple laws? Well, first of all, this is a question that I encounter in a number of guises. A lot of people will raise the question of whether mathematics is the language of the universe. And my response is mathematics is the language that we humans are capable of using in describing the universe. It may have little to do with the universe, but in terms of our capacity, it's the microscope, it's the telescope, it's the lens through which we are able to view the universe with a precision that no other human language allows. So could there be other universes?
Well, I don't even know if this one looks like I think it does. But the beautiful surprising thing is that physics, there are laws of physics, very few laws of physics that can effectively compress down the functioning of the universe. Yes, that's extraordinarily surprising. I like to use the analogy with computers and information technology. If you worry about transmitting large bundles of data, one of the things that computer scientists do for us is they allow for processes that are called compression, where you take big packets of data and you press them down into much smaller packets, and then you transmit those and then unpack them at the other end. And so it looks a little bit to me like the universe has kind of done us a favor. It's constructed our minds in such a way that we have this thing called mathematics, which then as we look at the universe, teaches us how to carry out the compression process. A quick question about compression. Do you think the human mind can be compressed? The biology can be compressed? We talked about space travel. To be able to compress the information that captures some large percent of what it means to be me or you, and then be able to send that at the speed of light. Wow, that's a big question. And let me try to take it apart, unpack it into several pieces. I don't believe that wetware biology such as we are has an exclusive patent on intellectual consciousness. I suspect that other structures in the universe are perfectly capable of producing the data streams that we use to process, first of all, our observations of the universe and an awareness of ourself. I can imagine other structures can do that also. So that's part of what you were talking about, which I would have some disagreement with. Consciousness. What's the most interesting part of us humans? Is consciousness the thing? I think that's the most interesting thing about humans. And then you're saying that there's other entities throughout the universe. I can well imagine that the architecture that supports our consciousness, again, has no patent on consciousness. Just in case you have an interesting thought here, there's folks perhaps in philosophy called panpsychists that believe consciousness underlies everything. It is one of the fundamental laws of the universe. Do you have a sense that that could possibly fit into... I don't know the answer to that question. One part of that belief system is giya, which is that there's a kind of conscious life force about our planet. And I've encountered these things before. I don't quite know what to make of them. My own life experience, and I'll be 69 in about two months, and I have spent all my adulthood thinking about the way that mathematics interacts with nature and with us to try to understand nature. And all I can tell you from all of my integrated experience is that there is something extraordinarily mysterious to me about our universe. This is something that Einstein said from his life experience as a scientist. And this mysteriousness almost feels like the universe is our parent. It's a very strange thing perhaps to hear scientists say, but there are just so many strange coincidences that you just get a sense that something is going on. Well, I interrupted you. In terms of compressing what we're down to, we can send it at the speed of light. Yes. 
So the first thing is I would argue that it's probably very likely that artificial intelligence ultimately will develop something like consciousness, something that for us will probably be indistinguishable from consciousness. So that's what I meant by our biological processing equipment that we carry up here probably does not hold a patent on consciousness, because it's really about the data streams. As far as I can tell, that's what we are. We are self actuating, self learning data streams. That to me is most accurate way I can tell you what I've seen in my lifetime about what humans are at the level of consciousness. So if that's the case, then you just need to have an architecture that supports that information processing. So let's assume that that's true, that in fact what we call consciousness is really about a very peculiar kind of data stream. If that's the case, then if you can export that to a piece of hardware, something metal, electronic, what have you, then you certainly will, ultimately that kind of consciousness could get to Mars very quickly, it doesn't have our problems. You can engineer the body, as I said, it's a ship or a body, you engineer one or both. Send it at a speed of light, well, that one is a more difficult one because that now goes beyond just a matter of having a data stream. It's now the preservation of the information in the data stream. And so unless you can build something that's like a super, super, super version of the way the internet works because most people aren't aware that the internet itself is actually a miracle, it's based on a technology called message packaging. So if you could exponentiate message packaging in some way to preserve the information that's in the data stream, then maybe your dream becomes true. You mentioned with artificial intelligence, sort of us human beings not having a monopoly on consciousness. Does the idea of artificial intelligence systems, computational systems, being able to basically replacing us humans scare you, excite you? What do you think about that? So I'm gonna tell you about a conversation I once had with Eric Schmidt. I was sitting at a meeting with him and he was a few feet away and he turned to me and he said something like, you know, Jim, in maybe a decade or so, we're gonna have computers that do what you do. And my response was not unless they can dream because there's something about, the way that we humans actually generate creativity. It's somehow, I get this sense of my lived experience in watching creative people that it's somehow connected to the irrational parts of what goes on in our head and dreaming is part of that irrational. So unless you can build a piece of artificial intelligence that dreams, I have a strong suspicion that you will not get something that will fully be conscious by a definition that I would accept, for example. So you mentioned dreaming. You've played around with some out there fascinating ideas. How do you think, and we'll start diving into the world of the very small ideas of super symmetry and all that in terms of visualization, in terms of how do you think about it? How do you dream of it? How do you come up with ideas in that fascinating, mysterious space? So in my workspace, which is basically where I am charged with coming up on a mathematical palette with new ideas that will help me understand the structure of nature and hopefully help all of us understand the structure of nature. I've observed several different ways in which my creativity expresses itself. 
There's one mode which looks pretty normal, which I sort of think of as the Chinese water torture method. Drop, drop, drop, you get more and more information, and suddenly it all congeals and you get a clear picture. And so that's kind of a standard way of working, and I think that's how most people think about the way technical people solve problems. That is, you accumulate this body of information, and at a certain point you synthesize it, and then boom, there's something new. But I've also observed in myself and other scientists that there are other ways that we are creative. And these other ways, to me, are actually far more powerful. I first personally experienced this when I was a freshman at MIT, over in Baker House right across the campus. I was in a calculus course, 18.01, as it's called at MIT. And calculus comes in two different flavors. One of them is called differential calculus, the other is called integral calculus. Differential calculus is the calculus that Newton invented to describe motion. It turns out integral calculus was probably invented about 1,700 years earlier by Archimedes, but we didn't know that when I was a freshman. So that's what you study as a student. And the differential calculus part of the course was, to me, how do I say this, something that by the drip, drip, drip method you could sort of figure out. Now, the integral part of calculus, I could memorize the formulae. That was not the problem. The problem was why, in my own mind, why do these formulae work? And because of that, when I was in the part of the calculus course where we had to do multiple substitutions to solve integrals, I had a lot of difficulty. I was emotionally involved in my education, because this is where I think passion and emotion come in, and it caused an emotional crisis that I was having these difficulties understanding the integral part of calculus. The why. The why, that's right, the why of it. Not the rote memorization of facts, but the why of it. Why does this work? And so one night I was over in my dormitory room in Baker House, trying to do a calculus problem set, and I was getting nowhere. I got a terrific headache, I went to sleep, and I had this very strange dream. And when I awakened, I could do three and four substitutions in integrals with relative ease. Now, this to me was an astounding experience, because I had never before in my life understood that one's subconscious is actually capable of being harnessed to do mathematics. I experienced this, and I've experienced it more than once. This was just the first time, which is why I remember it so. So that's why, when it comes to really wickedly tough problems, I think that the kind of creativity that you need to solve them is probably this second variety, which comes somehow from dreaming. Do you think, again, I told you I'm Russian, so we romanticize suffering, but do you think part of that equation is the suffering leading up to that dreaming? I am convinced that for this kind of creativity, this second mode of creativity as I like to call it, suffering is a kind of crucible that triggers it. Because the mind, I think, is struggling to get out of this, and the only way out is to actually solve the problem. And even though you're not consciously solving the problem, something is going on. And I've talked about this with a few other people, and I've heard other similar stories.
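For readers who have not seen the kind of problem being described, here is a small worked example, not one from that problem set, of an integral that takes two substitutions in a row:

```latex
\int \frac{dx}{x\sqrt{1+\ln x}}
\;\xrightarrow{\;u=\ln x,\ du=\frac{dx}{x}\;}\;
\int \frac{du}{\sqrt{1+u}}
\;\xrightarrow{\;v=1+u,\ dv=du\;}\;
\int v^{-1/2}\,dv
= 2\sqrt{v}+C
= 2\sqrt{1+\ln x}+C .
```

The "three and four substitution" problems being described simply chain more steps of this kind.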
And so I guess what I think about it is it's a little bit by like the way that thermonuclear weapons work. I don't know if you know how they work. But a thermonuclear weapon is actually two bombs. It's an atomic bomb which sort of does a compression. And then you have a fusion bomb that goes off. And somehow that emotional pressure I think acts like the first stage of a thermonuclear weapon. That's when we get really big thoughts. The analogy between thermonuclear weapons and the subconscious, the connection there is, at least visually, is kind of interesting. There may be, Freud would have a few things to say. Well, part of it is probably based on my own trajectory through life. My father was in the US Army for 27 years. And so I started my life out on a military basis. And so a lot of probably the things that wander around in my subconscious are connected to that experience. I apologize for all the tangents, but. Well, you're doing it. You're doing it. But you're encouraging by answering the stupid questions. No, they're not stupid. You know, your father was in the Army. What do you think about, Neil deGrasse Tyson recently wrote a book on interlinking the progress of science to sort of the aspirations of our military endeavors and DARPA funding and so on. What do you think about war in general? Do you think we'll always have war? Do you think we'll always have conflict in the world? I'm not sure that we're going to be able to afford to have war always, because if. Strictly financially speaking? No, not in terms of finance, but in terms of consequences. So if you look at technology today, you can have non state actors acquire technology, for example, bioterrorism, whose impact is roughly speaking equivalent to what it used to take nations to impart on a population. I think the cost of war is ultimately, it's going to be a little, I think it's going to work a little bit like the Cold War. You know, we survived 50, 60 years as a species with these weapons that are so terrible that they could have actually ended our form of life on this planet, but it didn't. Why didn't it? Well, it's a very bizarre and interesting thing, but it was called mutually assured destruction. And so the cost was so great that people eventually figured out that you can't really use these things, which is kind of interesting, because if you read the history about the development of nuclear weapons, physicists actually realized this pretty quickly. I think it was maybe Schrodinger who said that these things are not really weapons. They're political implements. They're not weapons, because the cost is so high. And if you take that example and spread it out to the kind of technological development we're seeing now outside of nuclear physics, but I picked the example of biology, I could well imagine that there would be material science sorts of equivalents across a broad front of technology. You take that experience from nuclear weapons, and the picture that I see is that it would be possible to develop technologies that are so terrible that you couldn't use them, because the costs are too high. And that might cure us. And many people have argued that actually it prevented, nuclear weapons have prevented more military conflict than. It certainly froze the conflict domain. It's interesting that nowadays it was with the removal of the threat of mutually assured destruction that other forces took over in our geopolitics. 
Do you have worries of existential threats of nuclear weapons or other technologies like artificial intelligence? Do you think we humans will tend to figure out how to not blow ourselves up? I don't know, quite frankly. This is something I've thought about. And I'm not, I mean, so I'm a spectator in the sense that as a scientist, I collect and collate data. So I've been doing that all my life and looking at my species. And it's not clear to me that we are going to avoid a catastrophic, self induced ending. Are you optimistic? Not as a scientist, but as a single element speaker? I would say I wouldn't bet against us. Beautifully put. Let's dive into the world of the very small, if we could for a bit. What are the basic particles, either experimentally observed or hypothesized by physicists? So as we physicists look at the universe, you can, first of all, there are two big buckets of particles. That is the smallest objects that we are able to currently mathematically conceive and then experimentally verify that these ideas have a sense of accuracy to them. So one of those buckets we call matter. These are things like electrons, things that are like quarks, which are particles that exist inside of protons. And there's a whole family of these things. There are, in fact, 18 quarks and apparently six electron like objects that we call leptons. So that's one bucket. The other bucket that we see both in our mathematics as well as in our experimental equipment are a set of particles that you can call force carriers. The most familiar force carrier is the photon, the particle of light that allows you to see me. In fact, it's the same object that carries electric repulsion between like charges. From science fiction, we have the object called the graviton, which is talked about a lot in science fiction and Star Trek. But the graviton is also a mathematical object that we physicists have known about essentially since Einstein wrote his theory of general relativity. There are four forces in nature, the fundamental forces. There is the gravitational force. Its carrier is the graviton. There are three other forces in nature, the electromagnetic force, the strong nuclear force, and the weak nuclear force. And each one of these forces has one or more carriers. The photon is the carrier of the electromagnetic force. The strong nuclear force actually has eight carriers. They're called gluons. And then the weak nuclear force has three carriers. They're called the W plus, W minus, and Z bosons. So those are the things that both in mathematics and in experiments, by the way, the most precise experiments we're ever as a species able to conduct is about measuring the accuracy of these ideas. And we know that at least to one part in a billion, these ideas are right. So first of all, you've made it sound both elegant and simple. But is it crazy to you that there is force carriers? Like, is that supposed to be a trivial idea to think about? If we think about photons, gluons, that there's four fundamental forces of physics, and then those forces are expressed. There's carriers of those forces. Like, is that a kind of trivial thing? It's not a trivial thing at all. In fact, it was a puzzle for Sir Isaac Newton, because he's the first person to give us basically physics. Before Isaac Newton, physics didn't exist. What did exist was called natural philosophy, so discussions about using the methods of classical philosophy to understand nature, natural philosophy. 
So the Greeks, we call them scientists, but they were natural philosophers. Physics doesn't get born until Newton writes the Principia. One of the things that puzzled him was how gravity works, because if you read very carefully what he writes, he basically says, and I'm paraphrasing badly, but he basically says that someone who thinks deeply about this subject would find it inconceivable that an object in one place or location can magically reach out and affect another object with nothing intervening. And so it puzzled him. Is that a puzzle to you, action at a distance? I mean, now, as a physicist. It would be, it would be, except that I am a physicist, and we have long ago resolved this issue, and the resolution came about through a second great physicist. Most people have heard of Newton. Most people have heard of Einstein. But between the two of them, there was another extraordinarily great physicist, a man named James Clerk Maxwell. And Maxwell, between these two other giants, taught us about electric and magnetic forces, and it's from his equations that one can figure out that there's a carrier called the photon. So this was resolved for physicists around 1860 or so. So what are bosons and fermions and hadrons, elementary and composite? Sure, so earlier I said, two buckets. You have got two buckets if you wanna try to build the universe. You gotta start off with things in these two buckets. So you gotta have things, that's the matter, and then you have to have other objects that act on them to cause those things to cohere into fixed finite patterns, because you need those fixed finite patterns as building blocks. So that's the way our universe looks to people like me. Now, the building blocks do different things. So let's go back to these two buckets again. Let me start with the bucket containing the particle of light. Imagine I'm in a dusty room with two flashlights, and I have one flashlight, which I point directly in front of me, and then I have you stand over to, say, my left, and then we both take our flashlights and turn them on and make sure the beams go right through each other. And the beams do just that. They go right through each other. They don't bounce off of each other. The reason the room has to be dusty is because we wanna see the light. If the dust wasn't there, we wouldn't actually see the light until it got to the other wall, right? So you see the beam because of the dust in the air. But the two beams actually pass right through each other. They literally pass right through. They don't affect each other at all. One acts like the other's not there. The particle of light is the simplest example that shows that behavior. That's a boson. Now let's imagine that we're in the same dusty room and this time you have a bucket of balls and I have a bucket of balls. And we try to throw them so that we get something like a beam, throwing them fast, right? If they collide, they don't just pass through each other. They bounce off of each other. Now that's mostly because they have electric charge, and like charges repel. But mathematically, I know how to turn off the electric charge. And if you do that, you'll find these still repel. And it's because they are these things we call fermions. So this is how you distinguish the things that are in the two buckets. They are either bosons or fermions. Which of them, and maybe you can mention the most popular of the bosons, the most recently discovered? 
It's like when I was in high school and there was a really popular majorette. Her name is the Higgs particle these days. Can you describe which of the bosons and the fermions have been discovered, hypothesized, which have been experimentally validated, what's still out there? Right, so the two buckets that I've actually described to you have all been first hypothesized and then verified by observation. With the Higgs boson being the most recent one of these things. We haven't actually verified the graviton interestingly enough. Mathematically, we have an expectation that gravitons exist. But we've not performed an experiment to show that this is an accurate idea that nature uses. So something has to be a carrier. For the force of gravity, exactly. Can it be something way more mysterious than we, so when you say the graviton, is it, would it be like the other particles, force carriers, or can it be something much more mysterious? In some ways, yes, but in other ways, no. It turns out that the graviton is also, if you look at Einstein's theory, he taught us about this thing he calls space time, which is, if you try to imagine it, you can sort of think of it as kind of a rubber surface. That's one popular depiction of space time. It's not an accurate depiction because the only accuracy is actually in the calculus that he uses, but that's close enough. So if you have a sheet of rubber, you can wave it. You can actually form a wave on it. Space time is enough like that so that when space time oscillates, you create these waves. These waves carry energy. We expect them to carry energy in quanta. That's what a graviton is. It's a wave in space time. And so the fact that we have seen the waves with LIGO over the course of the last three years, and we've recently used gravitational wave observatories to watch colliding black holes and neutron stars and all sorts of really cool stuff out there. So we know the waves exist, but in order to know that gravitons exist, you have to prove that these waves carry energy in energy packets. And that's what we don't have the technology to do yet. And perhaps briefly jumping to a philosophical question, does it make sense to you that gravity is so much weaker than the other forces? No. You see, now you've touched on a very deep mystery about physics. There are a lot of such questions in physics about why things are as they are. And as someone who believes that there are some things that certainly are coincidences, like you could ask the same question about, well, why are the planets at the orbits that they are around the sun? The answer turns out there is no good reason. It's just an accident. So there are things in nature that have that character. And perhaps the strength of the various forces is like that. On the other hand, we don't know that that's the case. And there may be some deep reasons about why the forces are ordered as they are, where the weakest force is gravity, the next weakest force is the weak interaction, the weak nuclear force, then there's electromagnetism, there's a strong force. We don't really have a good understanding of why this is the ordering of the forces. So some of the fascinating work you've done is in the space of supersymmetry, symmetry in general. Can you describe, first of all, what is supersymmetry? Yes, so you remember the two buckets I told you about perhaps earlier? So there are two buckets in our universe. So now I want you to think about drawing a pie that has four quadrants. 
So I want you to cut the piece of pie in fourths. So in one quadrant, I'm gonna put all the buckets that we talked about that are like the electron and quarks. In a different quadrant, I'm going to put all the force carriers. The other two quadrants are empty. Now, if I showed you a picture of that, you'd see a circle. There would be a bunch of stuff in one upper quadrant and stuff in another. And then I would ask you a question. Does that look symmetrical to you? No. No. And that's exactly right because we humans actually have a very deeply programmed sense of symmetry. It's something that is part of that mystery of the universe. So how would you make it symmetrical? Well, one way you could is by saying those two empty quadrants had things in them also. And if you do that, that's supersymmetry. So that's what I understood when I was a graduate student here at MIT in 1975 when the mathematics of this was first being born. Supersymmetry was actually born in the Ukraine in the late 60s, but we had this thing called the Iron Curtain. So we Westerners didn't know about it. But by the early 70s, independently, there were scientists in the West who had rediscovered supersymmetry. Bruno Zumino and Julius Wess were their names. So this was around 71 or 72 when this happened. I started graduate school in 73. So around 74, 75, I was trying to figure out how to write a thesis so that I could become a physicist the rest of my life. I had a great advisor, Professor James Young, who had taught me a number of things about electrons and weak forces and those sorts of things. But I decided that if I was really going to have an opportunity to maximize my chances of being successful, I should strike out in a direction that other people were not studying. And so as a consequence, I surveyed ideas that were being developed. And I came across the idea of supersymmetry. And the mathematics was so remarkable that it just bowled me over. I actually have two undergraduate degrees. My first undergraduate degree is actually mathematics. And my second is physics, even though I always wanted to be a physicist. Plan A, which involved getting good grades, was mathematics. I was a mathematics major thinking about graduate school, but my heart was in physics. If we could take a small digression, what's to you the most beautiful idea in mathematics that you've encountered in this interplay between math and physics? It's the idea of symmetry. The fact that our innate sense of symmetry winds up aligning with just incredible mathematics, to me is the most beautiful thing. It's very strange, but true that if symmetries were perfect, we would not exist. And so even though we have these very powerful ideas about balance in the universe in some sense, it's only when you break those balances that you get creatures like humans and objects like planets and stars. So although they are a scaffold for reality, they cannot be the entirety of reality. So I'm kind of naturally attracted to parts of science and technology where symmetry plays a dominant role. And not just, I guess, symmetry as you said, but the magic happens when you break the symmetry. The magic happens when you break the symmetry. Okay, so diving right back in, you mentioned four quadrants. Yes. Two are filled with stuff we know, the two buckets. And then there are crazy mathematical ideas filling the other two. What are those things? 
So earlier, the way I described these two buckets is I gave you a story that started out by putting us in a dusty room with two flashlights. And I said, turn on your flashlight, I'll turn on mine, the beams will go through each other. And the beams are composed of force carriers called photons. They carry the electromagnetic force and they pass right through each other. So imagine looking at the mathematics of such an object, which you don't have to imagine, people like me do that. So you take that mathematics and then you ask yourself a question. You see, mathematics is a palette. It's just like a musical composer is able to construct variations on a theme. Well, a piece of mathematics in the hand of a physicist is something that we can construct variations on. So even with the mathematics that Maxwell gave us about light, we know how to construct variations on that. And one of the variations you can construct is to say, suppose you have a force carrier for electromagnetism that behaves like an electron in that it would bounce off of another one. That's changing a mathematical term in an equation. So if you did that, you would have a force carrier. So you would say first it belongs in this force carrying bucket, but it's got this property of bouncing off like electrons. So you say, well, gee, wait, no, that's not the right bucket. So you're forced to actually put it in one of these empty quadrants. So those sorts of things, basically we give them... So the photon mathematically can be accompanied by a photino. It's the thing that carries a force but has the rule of bouncing off. In a similar manner, you could start with an electron and you say, okay, so write down the mathematical electron. I know how to do that. A physicist named Dirac first told us how to do that back in the late 20s, early 30s. So take that mathematics. And then you say, let me look at that mathematics and find out what in the mathematics causes two electrons to bounce off of each other, even if I turn off the electrical charge. So I could do that. And now let me change that mathematical term. So now I have something that carries electrical charge, but if you take two of them, I'm sorry, if you turn their charges off, they'll pass through each other. So that puts things in the other quadrant. And those things we tend to call, we put the S in front of their name. So in the lower quadrant here, we have electrons, and in this now newly filled quadrant, we have selectrons. And the quadrant over here, we had quarks. Over here, we have squarks. So now we've got this balanced pie. And that's basically what I understood as a graduate student in 1975 about this idea of supersymmetry, that it was going to fill up these two quadrants of the pie in a way that no one had ever thought about before. So I was amazed that no one else at MIT found this an interesting idea. So it led to my becoming the first person at MIT to really study supersymmetry. This is 1975, 76, 77. And in 77, I wrote the first PhD thesis in the physics department on this idea because I was drawn to the balance. Drawn to the symmetry. So what does that, first of all, is this fundamentally a mathematical idea? So how much experimental, and we'll have this theme. It's a really interesting one. When you explore the world of the small, and in your new book, Proving Einstein Right, that we'll also talk about, there's this theme of kind of starting out exploring crazy ideas first in the mathematics and then seeking ways to experimentally validate them. 
Where do you put supersymmetry in that? It's closer than string theory. It has not yet been validated. In some sense, you mentioned Einstein, so let's go there for a moment. In our book, Proving Einstein Right, we actually do talk about the fact that Albert Einstein in 1915 wrote a set of equations which were very different from Newton's equations in describing gravity. These equations made some predictions that were different from Newton's predictions. It actually made three different predictions. One of them was not actually a prediction, but a postdiction, because it was known that Mercury was not orbiting the sun in the way that Newton would have told you. And so Einstein's theory actually describes Mercury orbiting in a way that was observed as opposed to what Newton would have told you. So that was one prediction. The second prediction that came out of the theory of general relativity, which Einstein wrote in 1915, was that if you, so let me describe an experiment and come back to it. Suppose I had a glass of water, and I filled the glass up, and then I moved the glass slowly back and forth between our two faces. It would appear to me like your face was moving, even though you weren't moving. And what's causing it is that the light gets bent through the glass as it passes from your face to my eye. So Einstein in his 1915 theory of general relativity found out that gravity has the same effect on light as that glass of water. It would cause beams of light to bend. Now, Newton also knew this, but Einstein's prediction was that light would bend twice as much. And so here's a mathematical idea. Now, how do you actually prove it? Well, you've got to watch. Just a quick pause on that, just the language you're using. He found out. I can say he did a calculation. It's a really interesting notion that one of the beautiful things about this universe is you can do a calculation, combined with some of that magical intuition that physicists have, and actually predict what's possible to experimentally validate. That's correct. So he found out in the sense that there seems to be something here and mathematically it should bend, gravity should bend light this amount. And so therefore that's something that could potentially be tested, and then you come up with an experiment by which it could be validated. Right. And that's the way that actually modern physics, deeply fundamental modern physics, this is how it works. Earlier we spoke about the Higgs boson. So why did we go looking for it? The answer is that back in the late 60s and early 70s, some people wrote some equations and the equations predicted this. So then we went looking for it. So on supersymmetry for a second, there's these things called adinkra symbols, these strange little graphs. Yes. You refer to them as revealing something like binary code underlying reality. First of all, can you describe these graphs? Describe these graphs, what are they? What are these beautiful little strange graphs? Well, first of all, adinkras are an invention of mine, together with a colleague named Michael Faux. In 2005, we were looking at equations. Well, the story's a little bit more complicated and it'll take too long to explain all the details, but the Reader's Digest version is that we were looking at these equations and we figured out that all the data in a certain class of equations could be put in pictures. And the pictures, what do they look like? Well, they're just little balls. You have black balls and white balls. 
Those stand for those two buckets, by the way, that we talked about in reality. The white balls are things that are like particles of light. The black balls are like electrons. And then you can draw lines connecting these balls. And these lines are deeply mathematical objects; I have no physical model for telling you what the lines are. But if you were a mathematician, I would use a technical phrase, saying this is the orbit of the representation under the action of the symmetry generators. Only mathematicians would understand that. Nobody else in their right mind would, so let's not go there. So, but we figured out that the data that was in the equations was in these funny pictures that we could draw. And so that was stunning, but it also was encouraging because there are problems with the equations, which I had first learned about in 1979 when I was down at Harvard and I went out to Caltech for the first time, working with a great scientist by the name of John Schwarz. There are problems in the equations we don't know how to solve. And so one of the things about solving problems that you don't know how to solve is that beating your head against a brick wall is probably not a good philosophy about how to solve it. So what do you need to do? You need to change your sense of reference, your frame of reference, your perspective. So when I saw these funny pictures, I thought, gee, that might be a way to solve these problems with equations that we don't know how to do. So for me, one of the first attractions was that I now had an alternative language to try to attack a set of mathematical problems. But I quickly realized that, A, this mathematical language was not known by mathematicians, which makes it pretty interesting because now you have to actually teach mathematicians about a piece of mathematics because that's how they make their living. And the great thing about working with mathematicians, of course, is the rigor with which they examine ideas. So they make your ideas better than they start out. So I start working with a group of mathematicians and it was in that collaboration that we figured out that these funny pictures had error correcting codes buried in them. Can you talk about what are error correcting codes? Ah, sure. So the simplest way to talk about error correcting codes is first of all, to talk about digital information. Digital information is basically strings of ones and zeros. They're called bits. So now let's imagine that I want to send you some bits. Well, maybe I could show you pictures, but maybe it's a rainy day or maybe the windows in your house are foggy. So sometimes when I show you a zero, you might interpret it as a one. Or other times when I show you a one, you might interpret it as a zero. So if that's the case, that means when I try to send you this data, it comes to you in corrupted form. And so the challenge is how do you get it to be uncorrupted? In the 1940s, a computer scientist named Hamming addressed the problem of how do you reliably transmit digital information? And what he came up with was a brilliant idea. Now, the way that you solve it is that you take the data that you want to send, your string of ones and zeros, your favorite string, and then you dump more ones and zeros in, but you dump them in in a particular pattern. And this particular pattern is what a Hamming code is all about. 
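(A reader's aid, not part of the conversation itself: the "particular pattern" being described can be made concrete with the classic Hamming(7,4) construction, sketched below in Python. This is my own illustration of the standard textbook scheme, not code from Gates's work; the function names and the example bits are assumptions of the sketch.)

```python
# A minimal sketch of the Hamming(7,4) code: four data bits get three parity
# bits "dumped in" according to a fixed pattern, so the receiver can locate
# and repair any single flipped bit.

def hamming74_encode(d1, d2, d3, d4):
    """Return a 7-bit codeword (positions 1..7) for four data bits."""
    p1 = d1 ^ d2 ^ d4          # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Locate a single corrupted bit in a received 7-bit word and flip it back."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3   # 0 means no error; otherwise the 1-based position
    if pos:
        c[pos - 1] ^= 1
    return c

word = hamming74_encode(1, 0, 1, 1)
word[4] ^= 1                      # the "foggy window": one bit flips in transit
assert hamming74_correct(word) == hamming74_encode(1, 0, 1, 1)
```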
So it's an error correcting code because if the person at the other end knows what the pattern's supposed to be, they can figure out when one's got changed to zeros, zero's got changed to one. So it turned out that our strange little objects that came from looking at the equations that we couldn't solve, it turns out that when you look at them deeply enough, you find out that they have ones and zeros buried in them. But even more astoundingly, the ones and zeros are not there randomly. They are in the pattern of error correcting codes. So this was an astounding thing that when we first got this result and tried to publish it, it took us three years to convince other physicists that we weren't crazy. Eventually we were able to publish it, I and this collaboration of mathematicians and other physicists. And so ever since then, I have actually been looking at the mathematics of these objects, trying to still understand properties of the equations. And I want to understand the properties of equations because I want to be able to try things like electrons. So as you can see, it's just like a two step removed process of trying to get back to reality. So what would you say is the most beautiful property of these Adinkra graphs, objects? What do you think, by the way, the word symbols, what do you think of them, these simple graphs? Are they objects or? How should we think about that? For people who work with mathematics like me, our mathematical concepts are, we often refer to them as objects because they feel like real things. Even though you can't see them or touch them, they're so much part of your interior life that it is as if you could. So we often refer to these things as objects, even though there's nothing objective about them. And what does a single graph represent in space? Okay, so the simplest of these graphs has to have one white ball and one black ball. That's that balance that we talked about earlier. Remember, we want to balance out the quadrants? Well, you can't do it unless you have a black ball and white ball. So the simplest of these objects looks like two little balls, one black, one white, connected by a single line. And what it's talking about is, as I said, a deep mathematical property related to symmetry. You've mentioned the error correcting codes, but is there a particular beautiful property that stands out to you about these objects that you just find? Yes, yes, there is. Early on in the development of it. Yes, there is. The craziest thing about these to me is that when you look at physics and try to write equations where information gets transmitted reliably, if you're in one of these super symmetrical systems with this extra symmetry, that doesn't happen unless there's an error correcting code present. So it's as if the universe says, you don't retransmit information unless there's something about an error correcting code. This to me is the craziest thing that I've ever personally encountered in my research. And it's actually got me to wondering how this could come about, because the only place in nature that we know about error correcting codes is genetics. And in genetics, we think it was evolution that causes error correcting codes to be in genomes. And so does that mean that there was some kind of form of evolution acting on the mathematical laws of the physics of our universe? This is a very bizarre and strange idea. And it's something I've wondered about from time to time since making these discoveries. 
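(Again as a reader's aid rather than anything said here: the pictures Gates describes can be thought of, very loosely, as bipartite graphs. The toy Python representation below is purely illustrative; the node names, and the idea of attaching a bit to each edge as a stand-in for the "ones and zeros buried in them," are my assumptions, and the real mathematical rules of adinkras are far richer than this.)

```python
# A toy, purely illustrative stand-in for an adinkra-style picture:
# white balls (boson-like), black balls (fermion-like), and connecting lines.
# The 0/1 attached to each edge is only a placeholder for the binary data
# Gates says is buried in these graphs.

simplest_adinkra = {
    "white_nodes": ["phi"],           # one boson-like node
    "black_nodes": ["psi"],           # one fermion-like node
    "edges": [("phi", "psi", 0)],     # a single connecting line, carrying a bit
}

def edge_bits(graph):
    """Read off the string of ones and zeros carried by the edges."""
    return [bit for (_white, _black, bit) in graph["edges"]]

print(edge_bits(simplest_adinkra))    # -> [0]
```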
Do you think such an idea could be fundamental, or is it emergent throughout all the different kinds of systems? I don't know whether it's fundamental. I probably will not live to find out. This is gonna be the work of probably some future either mathematician or physicist to figure out what these things actually mean. We have to talk a bit about the magical, the mysterious string theory, superstring theory. Sure. There's still, maybe, this aspect of it, which is, for me from an outsider's perspective, this fascinating heated debate on the status of string theory. Can you clarify this debate, perhaps articulating the various views and say where you land on it? So first of all, I doubt that I will be able to say anything to clarify the debate around string theory for a general audience. Part of the reason is because string theory has done something I've never seen theoretical physics do. It has broken out into the consciousness of the general public before we're finished. You see, string theory doesn't actually exist because when we use the word theory, we mean a particular set of attributes. In particular, it means that you have an overarching paradigm that explains what it is that you're doing. No such overarching paradigm exists for string theory. What string theory is currently is an enormously large mutually reinforcing collection of mathematical facts in which we can find no contradictions. We don't know why it's there, but we can certainly say that without challenge. Now, just because you find a piece of mathematics doesn't mean that this applies to nature. And in fact, there has been a very heated debate about whether string theory is some sort of hysteria among the community of theoretical physicists, or whether it has something fundamental to say about our universe. We don't yet know the answer to that question. What those of us who study string theory will tell you are things like, string theory has been extraordinarily productive in getting us to think more deeply, even about mathematics that's not string theory, but the kind of mathematics that we've used to describe elementary particles. There have been spin offs from string theory, and this has been going on now for two decades almost, that have allowed us, for example, to more accurately calculate the force between electrons in the presence of quantum mechanics. This is not something you hear about in the public. There are other similar things. That kind of property I just told you about is what's called weak strong duality, and it comes directly from string theory. There are other things such as a property called holography, which allows one to take equations and look at them on the boundary of a space, and then to know information about the inside of a space without actually doing calculations there. This has come directly from string theory. So there are a number of direct mathematical effects that we learn from string theory, but we take these ideas and look at math that we already know and we find suddenly we're more powerful. This is a pretty good indication there's something interesting going on with string theory itself. So it's the early days of a powerful mathematical framework. That's what we have right now. What are the big, well, first of all, most of the general public would, as you said, actually know what string theory is at the highest level, which is a fascinating fact. Well, string theory is what they do on The Big Bang Theory, right? 
One, can you maybe describe what is string theory, and two, what are the open challenges? So what is string theory? Well, the simplest explanation I can provide is to go back and ask what are particles, which is the question you first asked me. What's the smallest thing? Yeah, what's the smallest thing? So particles, one way I try to describe particles to people is to start: I want you to imagine a little ball and I want you to let the size of that ball shrink until it has no extent whatsoever, but it still has the mass of the ball. That's actually what Newton was working with when he first invented physics. He's the real inventor of the massive particle, which is this idea that underlies all of physics. So that's where we start. It's a mathematical construct that you get by taking a limit of things that you know. So what's a string? Well, in the same analogy, I would say, now I want you to start with a piece of spaghetti. So we all know what that looks like. And now I want you to let the thickness of the spaghetti shrink until it has no thickness. I mean, in words, this makes no sense, but mathematically, this actually works and you get this mathematical object out. It has properties that are like spaghetti. It can wiggle and jiggle, but it can also move collectively like a piece of spaghetti. It's the mathematics of those sorts of objects that constitutes string theory. And does the multidimensional, 11 dimensional, however many dimensional, more than four dimensions, is that a crazy idea to you? Is that the stranger aspect of string theory to you? Not really, and also partly because of my own research. So earlier we talked about these strange symbols that we've discovered inside the equations. It turns out that to a very large extent, adinkras don't really care about the number of dimensions. They kind of have an internal mathematical consistency that allows them to be manifested in many different dimensions. Since supersymmetry is a part of string theory, you would expect the same property to be inherited by string theory. However, another little known fact, which is not in the public debate, is that there are actually strings that are only four dimensional. This is something that was discovered at the end of the 80s by three different groups of physicists working independently. I and my friend Warren Siegel, who were at the University of Maryland at the time, were able to prove that there's mathematics that looks totally four dimensional, and yet it's a string. There was a group in Germany that used slightly different mathematics, but they found the same result. And then there was a group at Cornell who, using yet a third piece of mathematics, found the same result. So the fact that extra dimensions is so widely talked about in the public is partly a function of how the public has come to understand string theory and how the story has been told to them. But there are alternatives you don't know about. If we could talk about maybe experimental validation, and you're the coauthor of a recently published book, Proving Einstein Right, the human story of it too, the daring expeditions that changed how we look at the universe. Do you see echoes of the early days of general relativity in the 1910s in the more stretched out story of string theory? I do, I do. And that's one reason why I was happy to focus on the story of how Einstein became a global superstar. 
Earlier in our discussion, we went over his history where in 1915, he came up with this piece of mathematics, used it to do some calculations and then made a prediction. Yes. But making a prediction is not enough. Someone's got to go out and measure. And so string theory is in that in between zone. Now for Einstein, it was from 1915 to 1919. 1915 he makes the correct prediction. By the way, he made an incorrect prediction about the same thing in 1911, but he corrected himself in 1915. And by 1919, the first pieces of experimental observational data became available to say, yes, he's not wrong. And by 1922, the argument based on observation was overwhelming that he was not wrong. Can you describe what special and general relativity are, just briefly? Sure. And what prediction Einstein made, and maybe a memorable moment from the human journey of trying to prove this thing right, which is incredible. Right. So I'm very fortunate to have worked with a talented novelist who wanted to write a book that coincided with a book I wanted to write about how science kind of feels if you're a person, because it's actually people who do science, even though that may not be obvious to everyone. So for me, I wanted to write this book for a couple of reasons. I wanted young people to understand that the seemingly alien giants that lived before them were just as human as they are. They get married, they get divorced. They do terrible things. They do great things. They're people. They're just people like you. And so that part of telling the story allowed me to get that out there for both young people interested in the sciences as well as the public. But the other part of the story is I wanted to open up sort of what it was like. Now I'm a scientist. And so I will not pretend to be a great writer. I understand a lot about mathematics and I've even created my own mathematics, which is kind of a weird thing to be able to do. But in order to tell the story, you really have to have an incredible master of the narrative. And that was my coauthor, Cathie Pelletier, who is a novelist. So we formed this conjoined brain, I used to call us. She used to call us Professor Higgins and Eliza Doolittle. My expression for us is that we were a conjoined brain to tell this story. And it allowed, so what are some magical moments? To me, the first magical moment in telling the story was looking at Albert Einstein and his struggle because although we regard him as a genius, as I said, in 1911, he actually made an incorrect prediction about bending starlight. And that's actually what set the astronomers off. In 1914, there was an eclipse. And by various accidents of war and weather and all sorts of things that we talk about in the book, no one was able to make the measurement. If they had made the measurement, it would have disagreed with his 1911 prediction because nature only has one answer. And so then you see how fortunate he was that wars and bad weather and accidents in transporting equipment stopped any measurements from being made. So he corrects himself in 1915, but the astronomers are already out there trying to make the measurement. So now he gives them a different number. And it turns out that's the number that nature agrees with. So it gives you a sense of this is a person struggling with something deeply. And although his deep insight led him to this, it is the circumstance of time, place, and accident through which we view him. 
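(A back-of-the-envelope aside, using standard textbook values rather than numbers from the book: the Newtonian-style calculation behind the 1911 prediction gives a deflection for starlight grazing the Sun of 2GM/(c²R), about 0.87 arcseconds, while the full 1915 general relativity result is twice that, 4GM/(c²R), about 1.75 arcseconds. The constants below are assumed standard values; this is my illustration, not a calculation from the conversation.)

```python
# Rough comparison of the 1911-style (Newtonian) and 1915 (general relativity)
# predictions for starlight grazing the Sun's limb.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
C = 2.998e8          # speed of light, m/s
R_SUN = 6.96e8       # solar radius, m (closest approach of the starlight)

RAD_TO_ARCSEC = 180 / math.pi * 3600

newtonian_1911 = 2 * G * M_SUN / (C**2 * R_SUN) * RAD_TO_ARCSEC
einstein_1915 = 4 * G * M_SUN / (C**2 * R_SUN) * RAD_TO_ARCSEC

print(f"1911-style prediction: {newtonian_1911:.2f} arcsec")   # ~0.87
print(f"1915 prediction:       {einstein_1915:.2f} arcsec")    # ~1.75
```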
And the story could have turned out very differently where first he makes a prediction, the measurements are made in 1914, they disagree with his prediction. And so what would the world view him as? Well, he's this professor who made this prediction and didn't get it right, yes? So the fragility of human history is illustrated by that story. And it's one of my favorite things. You also learn things like in our book, how eclipses and watching eclipses was a driver of the development of science in our nation when it was very young. In fact, even before we were a nation, it turns out there were citizens of this would be country that were going out trying to measure eclipses. So some fortune, some misfortune affects the progress of science. Absolutely. Especially with ideas as, to me at least, if I put myself back in those days, as radical as general relativity is. First, can you describe, if it's OK briefly, what general relativity is? And yeah, could you just take a moment to put yourself in the shoes of the academic researchers, the scientists of that time, and what is this theory? What is it trying to describe about our world? It's trying to answer the thing that left Isaac Newton puzzled. Isaac Newton says gravity magically goes from one place to another. He doesn't believe it, by the way. He knows that's not right. But the mathematics is so good that you have to say, well, I'll throw my qualms away because I'll use it. That mathematics is all we used to get a man from the Earth to the moon. So I'm one of those scientists, and I've seen this. And if I thought deeply about it, maybe I know that Newton himself wasn't comfortable. And so the first thing I would hope that I would feel is, gee, there's this young kid out there who has an idea to fill in this hole that was left to us by Sir Isaac Newton. That, I hope, would be my reaction. I have a suspicion. I'm kind of a mathematical creature. I was four years old when I first decided that science was what I wanted to do with my life. And so if my personality back then was like it is now, I think it's probably likely I would want to have studied his mathematics. What was the piece of mathematics that he was using to make this prediction? Because he didn't actually create that mathematics. That mathematics was created roughly 50 years earlier. He's the person who harnessed it in order to make a prediction. In fact, he had to be taught this mathematics by a friend. So this is in our book. So putting myself in that time, I would want to, like I said, I think I would feel excitement. I would want to know what the mathematics is. And then I would want to do the calculations myself. Because one thing that physics is all about is that you don't have to take anybody's word for anything. You can do it yourself. It does seem that mathematics is a little bit more tolerant of radical ideas, or mathematicians, or people who find beauty in mathematics. All the why questions have no good answer. But let me ask, why do you think Einstein never got the Nobel Prize for general relativity? He got it for the photoelectric effect. That is correct. Well, first of all, that's something that is misunderstood about the Nobel Prize in physics. The Nobel Prize in physics is never given for purely proposing an idea. It is always given for proposing an idea that has observational support. 
So he could not get the Nobel Prize for either special relativity nor general relativity, because the provisions that Alfred Nobel left for the award prevent that. But after it's been validated, can he not get it then, or no? Yes, but remember the validation doesn't really come until the 1920s. But that's why they invented the second Nobel Prize. I mean, Marie Curie, you can get a second Nobel Prize for one of the greatest theories in physics. So let's be clear on this. The theory of general relativity had its critics even up until the 50s. So if the committee had wanted to give the prize for general relativity, there were vociferous critics of general relativity up until the 50s. Einstein died in 1955. What lessons do you draw from the story you tell in the book, from general relativity, from the radical nature of the theory, to looking at the future of string theory? Well, I think that the string theorists are probably going to retrace this path. But it's going to be far longer and more torturous, in my opinion. String theory is such a broad and deep development that, in my opinion, when it becomes acceptable, it's going to be because of a confluence of observations. It's not going to be a single observation. And I have to tell you that, so I gave a seminar here yesterday at MIT. And it's on an idea I have about how string theory can leave signatures in the cosmic microwave background, which is an astrophysical structure. And so if those kinds of observations are borne out, if perhaps other things related to the idea of supersymmetry are borne out, those are going to be the first powerful observationally based pieces of evidence that will begin to do what the Eddington expedition did in 1919. But that may take several decades. Do you think there will be Nobel prizes given for string theory? No, because I think it will exceed normal human lifetimes. But there are other prizes that are given. I mean, there is something called the Breakthrough Prize. There's a Russian immigrant, a Russian American immigrant named Yuri Milner, I believe his name, started this wonderful prize called the Breakthrough Prize. It's three times as much money as the Nobel Prize. And it gets awarded every year. And so something like one of those prizes is likely to be garnered at some point far earlier than a Nobel award. Jumping around a few topics. While you were at Caltech, you've gotten to interact, I believe, with Richard Feynman, I have to ask. Yes, Richard Feynman, indeed. Do you have any stories that stand out in your memory of that time? I have a fair number of stories, but I'm not prepared to tell them. They're not all politically correct. Let me see. Let me just say, I'll say the following. Richard Feynman, if you've ever read some of the books about him, in particular, there's a book called Surely You're Joking, Mr. Feynman. There's a series of books that starts with Surely You're Joking, Mr. Feynman. And I think the second one may be something like What Do You Care What They Say or something. I mean, the titles are all, there are three of them. When I read those books, I was amazed at how accurately those books portrayed the man that I interacted with. He was irreverent, he was fun, he was deeply intelligent, he was deeply human. And those books tell that story very effectively. Even just those moments, how did they affect you as a physicist? 
Well, one of the, well, it's funny because one of the things that, I didn't hear Feynman say this, but one of the things that is reported that he said is if you're in a bar stool as a physicist, and you can't explain to the guy on the bar stool next to you what you're doing, you don't understand what you're doing. And there's a lot of that that I think is correct, that when you truly understand something as complicated as string theory, when it's in its fully formed final development, it should be something you could tell to the person on the bar stool next to you. And that's something that affects the way I do science, quite frankly. It also affects the way I talk to the public about science. It's one of my mantras that I keep deeply, and try to keep deeply before me when I appear in public fora speaking about physics in particular and science in general. It's also something that Einstein said in a different way. He said he had these two different formulations. One of them is when the answer is simple, it's God speaking. And the other thing that he said was that what he did in his work was simply the distillation of common sense, that you distill down to something. And he also said you make things as simple as possible but no simpler. So all of those things, and certainly this attitude for me first seeing this was exemplified by being around Richard Feynman. So in all your work, you're always searching for the simplicity, for the simple, clear. I am, ultimately. Ultimately, I am. You served President Barack Obama's Council of Advisors in Science and Technology. For seven years, yes. For seven years with Eric Schmidt and several other brilliant people? Met Eric for the first time in 2009 when the council was called together. Yeah, I've seen pictures of you in that room. I mean, there's a bunch of brilliant people. It kind of looks amazing. What was that experience like, being called upon that kind of service? So let me go back to my father, first of all. I earlier mentioned that my father served 27 years in the US Army, starting in World War II. He went off in 1942, 43 to fight against the fascists. He was part of the supply corps that supplied General Patton as the tanks rolled across Western Europe, pushing back the forces of Nazism to meet up with our Russian comrades who were pushing the Nazis starting in Stalingrad. And the Second World War is actually a very interesting piece of history to know from both sides. Here in America, we typically don't. But I've actually studied history as an adult. So I actually know sort of the whole story. And on the Russian side, we don't know the Americans. We weren't taught the American side of the story. I know. I have many Russian friends, and we've had this conversation on many occasions. It's fascinating. But you know, like General Zhukov, for example, was something that you wouldn't know about, but you might not know about a Patton. But you're right. So Georgy Zhukov or Rokossovsky, I mean, there's a whole list of names that I've learned in the last 15 or 20 years looking at the Second World War. So your father was in the midst of that, probably one of the greatest wars in history. In the history of our species. And so the idea of service comes to me essentially from that example. So in 2009, when I first got a call from a Nobel laureate actually in biology, Harold Varmus, I was on my way to India, and I got this email message, and he said he needed to talk to me. And I said, OK, fine, we can talk. Got back to States I didn't hear from him. 
We went through several cycles of this, him sending me a message, I want to talk to you, and then never contacting me. Finally, I was on my way to give a physics presentation at the University of Florida in Gainesville, and I had just stepped off a plane, and my mobile phone went off, and it was Harold. And so I said, Harold, why do you keep sending me messages that you want to talk but you never call? And he said, well, I'm sorry, things have been hectic and da, da, da, da, da. And then he said, if you were offered the opportunity to serve on the US President's Council of Advisors on Science and Technology, what would be your answer? I was amused at the formulation of the question, because it's clear there's a purpose to why the question is asked that way. But then he made it clear to me he wasn't joking. And literally, one of the few times in my life, my knees went weak and I had to hold myself up against a wall so that I didn't fall over. I doubt if most of us who have been the beneficiaries of the benefits of this country, when given that kind of opportunity, could say no. And I know I certainly couldn't say no. I was frightened out of my wits because, although my career in terms of policy recommendations is actually quite long, it goes back to the 80s, I had never been called upon to serve as an advisor to a president of the United States. And it was very scary, but I did not feel that I could say no because I wouldn't be able to sleep with myself at night saying that I chickened out or whatever. And so I took the plunge and we had a pretty good run. There are things that I did in those seven years of which I'm extraordinarily proud. One of the ways I tell people is if you've ever seen that television cartoon called Schoolhouse Rock, there's this one story about how a bill becomes a law. And I've kind of lived that. There are things that I did that have now been codified in US law. Not everybody gets a chance to do things like that in life. In science and technology, especially in American politics, we haven't had a president who's an engineer or a scientist. What do you think is the role of a president like President Obama in understanding the latest ideas in science and tech? What was that experience like? Well, first of all, I've met other presidents besides President Obama. He is the most extraordinary president that I've ever encountered. Despite the fact that he went to Harvard. When I think about President Obama, he is a deep mystery to me. In the same way perhaps that the universe is a mystery. I don't really understand how that constellation of personality traits could come to fit within a single individual. But I saw them for seven years. So I'm convinced that I wasn't seeing fake news. I was seeing real data. He was just an extraordinary man. And one of the things that was completely clear was that he was not afraid and not intimidated to be in a room of really smart people. I mean, really smart people. That he was completely comfortable in asking some of the world's greatest experts, what do I do about this problem? And it wasn't that he was going to just take their answer, but he would listen to the advice. And that to me was extraordinary. As I said, I've been around other executives and I've never seen one quite like him. He's an extraordinary learner, is what I observed. And not just about science. 
He has a way of internalizing information in real time that I've never seen in a politician before. Even in extraordinarily complicated situations. Even scientific ideas. Scientific or non scientific. Complicated ideas don't have to be scientific ideas. But I have, like I said, seen him in real time process complicated ideas with a speed that was stunning. In fact, he shocked the entire council. I mean, we were all stunned at his capacity to be presented with complicated ideas and then to wrestle with them and internalize them. And then, more interestingly, come back with really good questions to ask. I've noticed this in an area that I understand more of, artificial intelligence. I've seen him integrate information about artificial intelligence and then come out with these kinds of Richard Feynman like insights. That's exactly right. And as I said, those of us who have been in that position, it is stunning to see it happen because you don't expect it. Yeah, he takes what, for a lot of graduate students, takes like four years in a particular topic and he just does it in a few minutes. He sees it very naturally. You've mentioned that you would love to see experimental validation of superstring theory before you shove off. Before I shuffle off this mortal coil. Which the poetry of that reference made me smile when I saw it. You know, people actually misunderstand it because it doesn't mean what we generally take it to mean colloquially. But it's such a beautiful expression. Yeah, it is. It's from Hamlet, the to be or not to be speech. Which I still don't understand what that's about. But so many interpretations. Anyway, what are the most exciting problems in physics that are just within our reach of understanding and might be solved in the next few decades, that you may be able to see? So in physics, you limited it to physics. Physics, mathematics, this kind of space of problems that fascinate you. Well, the one that looks on the immediate horizon like we're gonna get to is quantum computing. And that's gonna, if we actually get there, that's gonna be extraordinarily interesting. Do you think that's fundamentally a problem of theory or is it now in the space of engineering? It's in the space of engineering. I was out at Station Q, as you may know, Microsoft has this research facility in Santa Barbara. I was out there a couple of months ago in my capacity as a vice president of the American Physical Society. And I had some things that were like lectures and they were telling me what they were doing. And it sure sounded like they knew what they were doing and that they were close to major breakthroughs. Yeah, that's a really exciting possibility there. But back to Hamlet, do you ponder mortality, your own mortality? Nope, my mother died when I was 11 years old. And so I immediately knew what the end of the story was for all of us. As a consequence, I've never spent a lot of time thinking about death. It'll come in its own good time. And sort of to me, the job of every human is to make the best and the most of the time that's given to us, not for our own selfish gain, but to try to make this place a better place for someone else. And on the why of life, why do you think we are here? I have no idea and I never even worried about it. For me, I have an answer, a local answer. The apparent why for me was because I'm supposed to do physics. 
But it's funny because there are, quantum mechanically speaking, so many other possibilities in your life, such as being an astronaut, for example. So you know about that, I see. Well, like Einstein and the vicissitudes that prevented the 1914 measurement of starlight bending, the universe is constructed in such a way that I didn't become an astronaut, which would have meant, for me, facing the worst choice of my life: whether I would try to become an astronaut or whether I would try to do theoretical physics. Both of these dreams were born when I was four years old simultaneously. And so I can't imagine how difficult that decision would have been. The universe helped you out on that one. Not only in that one, but in many others. It helped me out by allowing me to pick the right dad. Is there a day in your life you could relive because it made you truly happy? What day would that be if you could just look back? Being a theoretical physicist is like having Christmas every day. I have lots of joy in my life. The moments of invention, the moments of ideas, revelation. Yes, the only things that exceed them are some family experiences like when my kids were born and that kind of stuff, but they're pretty high up there. Well, I don't see a better way to end it, Jim. Thank you so much. It was a huge honor talking to you today. This worked out better than I thought. I'm glad to hear it. And now, let me leave you with some words of wisdom from the great Albert Einstein for the rebels among us. Unthinking respect for authority is the greatest enemy of truth. Thank you for listening and hope to see you next time.
Jim Gates: Supersymmetry, String Theory and Proving Einstein Right | Lex Fridman Podcast #60
The following is a conversation with Melanie Mitchell. She's a professor of computer science at Portland State University and an external professor at Santa Fe Institute. She has worked on and written about artificial intelligence from fascinating perspectives, including adaptive complex systems, genetic algorithms, and the Copycat cognitive architecture, which places the process of analogy making at the core of human cognition. From her doctoral work with her advisors, Douglas Hofstadter and John Holland, to today, she has contributed a lot of important ideas to the field of AI, including her recent book, simply called Artificial Intelligence, A Guide for Thinking Humans. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. I provide timestamps for the start of the conversation, but it helps if you listen to the ad and support this podcast by trying out the product or service being advertised. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST. Best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means that donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LexPodcast, you'll get $10 and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now here's my conversation with Melanie Mitchell. The name of your new book is Artificial Intelligence, subtitle, A Guide for Thinking Humans. The name of this podcast is Artificial Intelligence. So let me take a step back and ask the old Shakespeare question about roses. And what do you think of the term artificial intelligence for our big and complicated and interesting field? I'm not crazy about the term. I think it has a few problems because it means so many different things to different people. And intelligence is one of those words that isn't very clearly defined either. There's so many different kinds of intelligence, degrees of intelligence, approaches to intelligence. John McCarthy was the one who came up with the term artificial intelligence. And from what I read, he called it that to differentiate it from cybernetics, which was another related movement at the time. And he later regretted calling it artificial intelligence. Herbert Simon was pushing for calling it complex information processing, which got nixed, but probably is equally vague, I guess.
Is it the intelligence or the artificial in terms of words that is most problematic, would you say? Yeah, I think it's a little of both. But it has some good sides because I personally was attracted to the field because I was interested in phenomenon of intelligence. And if it was called complex information processing, maybe I'd be doing something wholly different now. What do you think of, I've heard the term used, cognitive systems, for example, so using cognitive. Yeah, I mean, cognitive has certain associations with it. And people like to separate things like cognition and perception, which I don't actually think are separate. But often people talk about cognition as being different from sort of other aspects of intelligence. It's sort of higher level. So to you, cognition is this broad, beautiful mess of things that encompasses the whole thing. Memory, perception. Yeah, I think it's hard to draw lines like that. When I was coming out of grad school in 1990, which is when I graduated, that was during one of the AI winters. And I was advised to not put AI, artificial intelligence on my CV, but instead call it intelligence systems. So that was kind of a euphemism, I guess. What about to stick briefly on terms and words, the idea of artificial general intelligence, or like Yann LeCun prefers human level intelligence, sort of starting to talk about ideas that achieve higher and higher levels of intelligence and somehow artificial intelligence seems to be a term used more for the narrow, very specific applications of AI and sort of what set of terms appeal to you to describe the thing that perhaps we strive to create? People have been struggling with this for the whole history of the field and defining exactly what it is that we're talking about. You know, John Searle had this distinction between strong AI and weak AI. And weak AI could be general AI, but his idea was strong AI was the view that a machine is actually thinking, that as opposed to simulating thinking or carrying out processes that we would call intelligent. At a high level, if you look at the founding of the field of McCarthy and Searle and so on, are we closer to having a better sense of that line between narrow, weak AI and strong AI? Yes, I think we're closer to having a better idea of what that line is. Early on, for example, a lot of people thought that playing chess would be, you couldn't play chess if you didn't have sort of general human level intelligence. And of course, once computers were able to play chess better than humans, that revised that view. And people said, okay, well, maybe now we have to revise what we think of intelligence as. And so that's kind of been a theme throughout the history of the field is that once a machine can do some task, we then have to look back and say, oh, well, that changes my understanding of what intelligence is because I don't think that machine is intelligent, at least that's not what I wanna call intelligence. So do you think that line moves forever or will we eventually really feel as a civilization like we've crossed the line if it's possible? It's hard to predict, but I don't see any reason why we couldn't in principle create something that we would consider intelligent. I don't know how we will know for sure. Maybe our own view of what intelligence is will be refined more and more until we finally figure out what we mean when we talk about it. But I think eventually we will create machines in a sense that have intelligence. They may not be the kinds of machines we have now. 
And one of the things that that's going to produce is making us sort of understand our own machine like qualities that we in a sense are mechanical in the sense that like cells, cells are kind of mechanical. They have algorithms, they process information by and somehow out of this mass of cells, we get this emergent property that we call intelligence. But underlying it is really just cellular processing and lots and lots and lots of it. Do you think we'll be able to, do you think it's possible to create intelligence without understanding our own mind? You said sort of in that process we'll understand more and more, but do you think it's possible to sort of create without really fully understanding from a mechanistic perspective, sort of from a functional perspective how our mysterious mind works? If I had to bet on it, I would say, no, we do have to understand our own minds at least to some significant extent. But I think that's a really big open question. I've been very surprised at how far kind of brute force approaches based on say big data and huge networks can take us. I wouldn't have expected that. And they have nothing to do with the way our minds work. So that's been surprising to me, so it could be wrong. To explore the psychological and the philosophical, do you think we're okay as a species with something that's more intelligent than us? Do you think perhaps the reason we're pushing that line further and further is we're afraid of acknowledging that there's something stronger, better, smarter than us humans? Well, I'm not sure we can define intelligence that way because smarter than is with respect to what, computers are already smarter than us in some areas. They can multiply much better than we can. They can figure out driving routes to take much faster and better than we can. They have a lot more information to draw on. They know about traffic conditions and all that stuff. So for any given particular task, sometimes computers are much better than we are and we're totally happy with that, right? I'm totally happy with that. It doesn't bother me at all. I guess the question is which things about our intelligence would we feel very sad or upset that machines had been able to recreate? So in the book, I talk about my former PhD advisor, Douglas Hofstadter, who encountered a music generation program. And that was really the line for him, that if a machine could create beautiful music, that would be terrifying for him because that is something he feels is really at the core of what it is to be human, creating beautiful music, art, literature. He doesn't like the fact that machines can recognize spoken language really well. He personally doesn't like using speech recognition, but I don't think it bothers him to his core because it's like, okay, that's not at the core of humanity. But it may be different for every person what really they feel would usurp their humanity. And I think maybe it's a generational thing also. Maybe our children or our children's children will be adapted, they'll adapt to these new devices that can do all these tasks and say, yes, this thing is smarter than me in all these areas, but that's great because it helps me. Looking at the broad history of our species, why do you think so many humans have dreamed of creating artificial life and artificial intelligence throughout the history of our civilization? So not just this century or the 20th century, but really throughout many centuries that preceded it? 
That's a really good question, and I have wondered about that. Because I myself was driven by curiosity about my own thought processes and thought it would be fantastic to be able to get a computer to mimic some of my thought processes. I'm not sure why we're so driven. I think we want to understand ourselves better and we also want machines to do things for us. But I don't know, there's something more to it because it's so deep in the kind of mythology or the ethos of our species. And I don't think other species have this drive. So I don't know. If you were to sort of psychoanalyze yourself in your own interest in AI, are you, what excites you about creating intelligence? You said understanding our own selves? Yeah, I think that's what drives me particularly. I'm really interested in human intelligence, but I'm also interested in the sort of the phenomenon of intelligence more generally. And I don't think humans are the only thing with intelligence, or even animals. But I think intelligence is a concept that encompasses a lot of complex systems. And if you think of things like insect colonies or cellular processes or the immune system or all kinds of different biological or even societal processes have as an emergent property some aspects of what we would call intelligence. They have memory, they process information, they have goals, they accomplish their goals, et cetera. And to me, the question of what is this thing we're talking about here was really fascinating to me. And exploring it using computers seem to be a good way to approach the question. So do you think kind of of intelligence, do you think of our universe as a kind of hierarchy of complex systems? And then intelligence is just the property of any, you can look at any level and every level has some aspect of intelligence. So we're just like one little speck in that giant hierarchy of complex systems. I don't know if I would say any system like that has intelligence, but I guess what I wanna, I don't have a good enough definition of intelligence to say that. So let me do sort of a multiple choice, I guess. So you said ant colonies. So are ant colonies intelligent? Are the bacteria in our body intelligent? And then going to the physics world molecules and the behavior at the quantum level of electrons and so on, are those kinds of systems, do they possess intelligence? Like where's the line that feels compelling to you? I don't know. I mean, I think intelligence is a continuum. And I think that the ability to, in some sense, have intention, have a goal, have some kind of self awareness is part of it. So I'm not sure if, you know, it's hard to know where to draw that line. I think that's kind of a mystery. But I wouldn't say that the planets orbiting the sun is an intelligent system. I mean, I would find that maybe not the right term to describe that. And there's all this debate in the field of like what's the right way to define intelligence? What's the right way to model intelligence? Should we think about computation? Should we think about dynamics? And should we think about free energy and all of that stuff? And I think that it's a fantastic time to be in the field because there's so many questions and so much we don't understand. There's so much work to do. So are we the most special kind of intelligence in this kind of, you said there's a bunch of different elements and characteristics of intelligence systems and colonies. Is human intelligence the thing in our brain? 
Is that the most interesting kind of intelligence in this continuum? Well, it's interesting to us because it is us. I mean, interesting to me, yes. And because I'm part of, you know, human. But to understanding the fundamentals of intelligence, what I'm getting at, is studying the human, is sort of, if everything we've talked about, what you talk about in your book, what just the AI field, this notion, yes, it's hard to define, but it's usually talking about something that's very akin to human intelligence. Yeah, to me it is the most interesting because it's the most complex, I think. It's the most self aware. It's the only system, at least that I know of, that reflects on its own intelligence. And you talk about the history of AI and us, in terms of creating artificial intelligence, being terrible at predicting the future with AI, with tech in general. So why do you think we're so bad at predicting the future? Are we hopelessly bad? So no matter what, whether it's this decade or the next few decades, every time we make a prediction, there's just no way of doing it well, or as the field matures, we'll be better and better at it. I believe as the field matures, we will be better. And I think the reason that we've had so much trouble is that we have so little understanding of our own intelligence. So there's the famous story about Marvin Minsky assigning computer vision as a summer project to his undergrad students. And I believe that's actually a true story. Yeah, no, there's a write up on it. Everyone should read. It's like a, I think it's like a proposal that describes everything that should be done in that project. It's hilarious because it, I mean, you could explain it, but from my recollection, it describes basically all the fundamental problems of computer vision, many of which still haven't been solved. Yeah, and I don't know how far they really expect it to get. But I think that, and they're really, Marvin Minsky is a super smart guy and very sophisticated thinker. But I think that no one really understands or understood, still doesn't understand how complicated, how complex the things that we do are because they're so invisible to us. To us, vision, being able to look out at the world and describe what we see, that's just immediate. It feels like it's no work at all. So it didn't seem like it would be that hard, but there's so much going on unconsciously, sort of invisible to us that I think we overestimate how easy it will be to get computers to do it. And sort of for me to ask an unfair question, you've done research, you've thought about many different branches of AI through this book, widespread looking at where AI has been, where it is today. If you were to make a prediction, how many years from now would we as a society create something that you would say achieved human level intelligence or superhuman level intelligence? That is an unfair question. A prediction that will most likely be wrong. But it's just your notion because. Okay, I'll say more than 100 years. More than 100 years. And I quoted somebody in my book who said that human level intelligence is 100 Nobel Prizes away, which I like because it's a nice way to sort of, it's a nice unit for prediction. And it's like that many fantastic discoveries have to be made. And of course there's no Nobel Prize in AI, not yet at least. 
If we look at that 100 years, your sense is really the journey to intelligence has to go through something more complicated that's akin to our own cognitive systems, understanding them, being able to create them in the artificial systems, as opposed to sort of taking the machine learning approaches of today and really scaling them and scaling them and scaling them exponentially with both compute and hardware and data. That would be my guess. I think that in the sort of going along in the narrow AI that the current approaches will get better. I think there's some fundamental limits to how far they're gonna get. I might be wrong, but that's what I think. And there's some fundamental weaknesses that they have that I talk about in the book that just comes from this approach of supervised learning requiring sort of feed forward networks and so on. It's just, I don't think it's a sustainable approach to understanding the world. Yeah, I'm personally torn on it. Sort of everything you read about in the book and sort of what we're talking about now, I agree with you, but I'm more and more, depending on the day, first of all, I'm deeply surprised by the success of machine learning and deep learning in general. From the very beginning, when I was, it's really been my main focus of work. I'm just surprised how far it gets. And I'm also think we're really early on in these efforts of these narrow AI. So I think there'll be a lot of surprise of how far it gets. I think we'll be extremely impressed. Like my sense is everything I've seen so far, and we'll talk about autonomous driving and so on, I think we can get really far. But I also have a sense that we will discover, just like you said, is that even though we'll get really far in order to create something like our own intelligence, it's actually much farther than we realize. I think these methods are a lot more powerful than people give them credit for actually. So that of course there's the media hype, but I think there's a lot of researchers in the community, especially like not undergrads, right? But like people who've been in AI, they're skeptical about how far deep learning can get. And I'm more and more thinking that it can actually get farther than they'll realize. It's certainly possible. One thing that surprised me when I was writing the book is how far apart different people in the field are on their opinion of how far the field has come and what is accomplished and what's gonna happen next. What's your sense of the different, who are the different people, groups, mindsets, thoughts in the community about where AI is today? Yeah, they're all over the place. So there's kind of the singularity transhumanism group. I don't know exactly how to characterize that approach, which is sort of the sort of exponential, exponential progress where we're on the sort of almost at the hugely accelerating part of the exponential. And in the next 30 years, we're going to see super intelligent AI and all that, and we'll be able to upload our brains and that. So there's that kind of extreme view that most, I think most people who work in AI don't have. They disagree with that. But there are people who are, maybe aren't singularity people, but they do think that the current approach of deep learning is going to scale and is going to kind of go all the way basically and take us to true AI or human level AI or whatever you wanna call it. And there's quite a few of them. 
And a lot of them, like a lot of the people I've met who work at big tech companies in AI groups kind of have this view that we're really not that far. Just to linger on that point, sort of if I can take as an example, like Yann LeCun, I don't know if you know about his work and so his viewpoints on this. I do. He believes that there's a bunch of breakthroughs, like fundamental, like Nobel prizes that are needed still. But I think he thinks those breakthroughs will be built on top of deep learning. And then there's some people who think we need to kind of put deep learning to the side a little bit as just one module that's helpful in the bigger cognitive framework. Right, so from what I understand, Yann LeCun is rightly saying supervised learning is not sustainable. We have to figure out how to do unsupervised learning, that that's gonna be the key. And I think that's probably true. I think unsupervised learning is gonna be harder than people think. I mean, the way that we humans do it. Then there's the opposing view, there's the Gary Marcus kind of hybrid view where deep learning is one part, but we need to bring back kind of these symbolic approaches and combine them. Of course, no one knows how to do that very well. Which is the more important part to emphasize and how do they fit together? What's the foundation? What's the thing that's on top? What's the cake? What's the icing? Right. Then there's people pushing different things. There's the people, the causality people who say, deep learning as it's formulated today completely lacks any notion of causality. And that dooms it. And therefore we have to somehow give it some kind of notion of causality. There's a lot of push from the more cognitive science crowd saying, we have to look at developmental learning. We have to look at how babies learn. We have to look at intuitive physics, all these things we know about physics. And as somebody kind of quipped, we also have to teach machines intuitive metaphysics, which means like objects exist. Causality exists. These things that maybe we're born with. Machines don't have any of that. They look at a group of pixels and maybe they get 10 million examples, but they can't necessarily learn that there are objects in the world. So there's just a lot of pieces of the puzzle that people are promoting and with different opinions of like how important they are and how close we are to being able to put them all together to create general intelligence. Looking at this broad field, what do you take away from it? Who is the most impressive? Is it the cognitive folks, the Gary Marcus camp, the Yann LeCun camp of unsupervised, or rather self supervised, learning? There's the supervised learning folks, and then there's the engineers who are actually building systems. You have sort of Andrej Karpathy at Tesla building actual, it's not philosophy, it's real systems that operate in the real world. What do you take away from all this beautiful variety? I don't know if, these different views are not necessarily mutually exclusive. And I think people like Yann LeCun agree with the developmental psychology of causality, intuitive physics, et cetera. But he still thinks that it's learning, like end to end learning is the way to go. Will take us perhaps all the way. Yeah, and that we don't need, there's no sort of innate stuff that has to get built in. This is, it's because it's a hard problem.
I personally, I'm very sympathetic to the cognitive science side, 'cause that's kind of where I came into the field. I've become more and more sort of an embodiment adherent saying that without having a body, it's gonna be very hard to learn what we need to learn about the world. That's definitely something I'd love to talk about in a little bit. To step into the cognitive world, then if you don't mind, 'cause you've done so many interesting things. If you look at Copycat, taking a couple of decades step back, you, Douglas Hofstadter and others have created and developed Copycat more than 30 years ago. That's painful to hear. So what is it? What is Copycat? It's a program that makes analogies in an idealized domain, idealized world of letter strings. So as you say, 30 years ago, wow. So I started working on it when I started grad school in 1984. Wow, dates me. And it's based on Doug Hofstadter's ideas that analogy is really a core aspect of thinking. I remember he has a really nice quote in the book by himself and Emmanuel Sander called Surfaces and Essences. I don't know if you've seen that book, but it's about analogy and he says, without concepts, there can be no thought and without analogies, there can be no concepts. So the view is that analogy is not just this kind of reasoning technique where we go, shoe is to foot as glove is to what, these kinds of things that we have on IQ tests or whatever, but that it's much deeper, it's much more pervasive in everything we do, in our language, our thinking, our perception. So he had a view that was a very active perception idea. So the idea was that instead of having kind of a passive network in which you have input that's being processed through these feed forward layers and then there's an output at the end, that perception is really a dynamic process where like our eyes are moving around and they're getting information, and that information is feeding back and influences what we look at next and how we look at it. And so Copycat was trying to do that, kind of simulate that kind of idea where you have these agents, it's kind of an agent based system and you have these agents that are picking things to look at and deciding whether they were interesting or not and whether they should be looked at more and that would influence other agents. Now, how do they interact? So they interacted through this global kind of what we call the workspace. So it's actually inspired by the old blackboard systems where you would have agents that post information on a blackboard, a common blackboard. This is like very old fashioned AI. Is that, are we talking about like in physical space? Is this a computer program? It's a computer program. So agents posting concepts on a blackboard kind of thing? Yeah, we called it a workspace. And the workspace is a data structure. The agents are little pieces of code that you could think of them as little detectors or little filters that say, I'm gonna pick this place to look and I'm gonna look for a certain thing and is this the thing I think is important, is it there? So it's almost like, you know, a convolution in a way, except a little bit more general and saying, and then highlighting it in the workspace. Once it's in the workspace, how do the things that are highlighted relate to each other? Like what's, is this? So there's different kinds of agents that can build connections between different things.
So just to give you a concrete example, what CopyCat did was it made analogies between strings of letters. So here's an example. ABC changes to ABD. What does IJK change to? And the program had some prior knowledge about the alphabet, knew the sequence of the alphabet. It had a concept of letter, successor of letter. It had concepts of sameness. So it has some innate things programmed in. But then it could do things like say, discover that ABC is a group of letters in succession. And then an agent can mark that. So the idea that there could be a sequence of letters, is that a new concept that's formed or that's a concept that's innate? That's a concept that's innate. Sort of, can you form new concepts or are all concepts innate? No. So in this program, all the concepts of the program were innate. So, cause we weren't, I mean, obviously that limits it quite a bit. But what we were trying to do is say, suppose you have some innate concepts, how do you flexibly apply them to new situations? And how do you make analogies? Let's step back for a second. So I really liked that quote that you say, without concepts, there could be no thought and without analogies, there can be no concepts. In a Santa Fe presentation, you said that it should be one of the mantras of AI. Yes. And that you also yourself said, how to form and fluidly use concept is the most important open problem in AI. Yes. How to form and fluidly use concepts is the most important open problem in AI. So let's, what is a concept and what is an analogy? A concept is in some sense a fundamental unit of thought. So say we have a concept of a dog, okay? And a concept is embedded in a whole space of concepts so that there's certain concepts that are closer to it or farther away from it. Are these concepts, are they really like fundamental, like we mentioned innate, almost like axiomatic, like very basic and then there's other stuff built on top of it? Or does this include everything? Are they complicated? You can certainly form new concepts. Right, I guess that's the question I'm asking. Can you form new concepts that are complex combinations of other concepts? Yes, absolutely. And that's kind of what we do in learning. And then what's the role of analogies in that? So analogy is when you recognize that one situation is essentially the same as another situation. And essentially is kind of the key word there because it's not the same. So if I say, last week I did a podcast interview actually like three days ago in Washington, DC. And that situation was very similar to this situation, although it wasn't exactly the same. It was a different person sitting across from me. We had different kinds of microphones. The questions were different. The building was different. There's all kinds of different things, but really it was analogous. Or I can say, so doing a podcast interview, that's kind of a concept, it's a new concept. I never had that concept before this year essentially. I mean, and I can make an analogy with it like being interviewed for a news article in a newspaper. And I can say, well, you kind of play the same role that the newspaper reporter played. It's not exactly the same because maybe they actually emailed me some written questions rather than talking and the writing, the written questions are analogous to your spoken questions. And there's just all kinds of similarities. And this somehow probably connects to conversations you have over Thanksgiving dinner, just general conversations. 
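To make that letter-string example a bit more concrete, here is a minimal toy sketch in Python. It is emphatically not the Copycat architecture itself, which uses stochastic, competing agents (codelets) working over a shared workspace; this deterministic version only assumes the same innate concepts mentioned above (the alphabet, letter succession, and position in the string), and all of the function names are illustrative rather than taken from the original program.

```python
# A toy sketch (not the actual Copycat architecture) of the letter-string
# analogy domain: given "abc -> abd", answer "ijk -> ?".
# Innate concepts here: the alphabet, successorship, and letter position.

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def successor(letter):
    # Innate concept: the next letter in the alphabet.
    return ALPHABET[ALPHABET.index(letter) + 1]

def describe_change(source, target):
    # Describe the change in terms of innate concepts:
    # find the position where the letter was replaced by its successor.
    for i, (s, t) in enumerate(zip(source, target)):
        if s != t and t == successor(s):
            return ("replace", i, "successor")
    return None

def apply_change(rule, string):
    # Apply the same conceptual change to a new string.
    kind, index, relation = rule
    if kind == "replace" and relation == "successor":
        chars = list(string)
        chars[index] = successor(chars[index])
        return "".join(chars)
    return string

rule = describe_change("abc", "abd")   # ("replace", 2, "successor")
print(apply_change(rule, "ijk"))       # -> "ijl"
```

The real program arrives at answers like "ijl" by letting many small agents compete and cooperate rather than by a fixed rule, which is what lets it handle more slippery cases; this sketch only captures the flavor of applying an innate concept to a new situation.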
There's like a thread you can probably take that just stretches out in all aspects of life that connect to this podcast. I mean, conversations between humans. Sure, and if I go and tell a friend of mine about this podcast interview, my friend might say, oh, the same thing happened to me. Let's say, you ask me some really hard question and I have trouble answering it. My friend could say, the same thing happened to me, but it was like, it wasn't a podcast interview. It wasn't, it was a completely different situation. And yet my friend is seeing essentially the same thing. We say that very fluidly, the same thing happened to me. Essentially the same thing. But we don't even say that, right? We just say the same thing. You imply it, yes. Yeah, and the view that kind of went into say copycat, that whole thing is that that act of saying the same thing happened to me is making an analogy. And in some sense, that's what's underlies all of our concepts. Why do you think analogy making that you're describing is so fundamental to cognition? Like it seems like it's the main element action of what we think of as cognition. Yeah, so it can be argued that all of this generalization we do of concepts and recognizing concepts in different situations is done by analogy. That that's, every time I'm recognizing that say you're a person, that's by analogy because I have this concept of what person is and I'm applying it to you. And every time I recognize a new situation, like one of the things I talked about in the book was the concept of walking a dog, that that's actually making an analogy because all of the details are very different. So reasoning could be reduced down to essentially analogy making. So all the things we think of as like, yeah, like you said, perception. So what's perception is taking raw sensory input and it's somehow integrating into our understanding of the world, updating the understanding. And all of that has just this giant mess of analogies that are being made. I think so, yeah. If you just linger on it a little bit, like what do you think it takes to engineer a process like that for us in our artificial systems? We need to understand better, I think, how we do it, how humans do it. And it comes down to internal models, I think. People talk a lot about mental models, that concepts are mental models, that I can, in my head, I can do a simulation of a situation like walking a dog. And there's some work in psychology that promotes this idea that all of concepts are really mental simulations, that whenever you encounter a concept or situation in the world or you read about it or whatever, you do some kind of mental simulation that allows you to predict what's gonna happen, to develop expectations of what's gonna happen. So that's the kind of structure I think we need, is that kind of mental model that, and in our brains, somehow these mental models are very much interconnected. Again, so a lot of stuff we're talking about are essentially open problems, right? So if I ask a question, I don't mean that you would know the answer, only just hypothesizing. But how big do you think is the network graph, data structure of concepts that's in our head? Like if we're trying to build that ourselves, like it's, we take it, that's one of the things we take for granted. We think, I mean, that's why we take common sense for granted, we think common sense is trivial. But how big of a thing of concepts is that underlies what we think of as common sense, for example? Yeah, I don't know. 
And I'm not, I don't even know what units to measure it in. Can you say how big is it? That's beautifully put, right? But, you know, we have, you know, it's really hard to know. We have, what, a hundred billion neurons or something. I don't know. And they're connected via trillions of synapses. And there's all this chemical processing going on. There's just a lot of capacity for stuff. And the information's encoded in different ways in the brain. It's encoded in chemical interactions. It's encoded in electric, like firing and firing rates. And nobody really knows how it's encoded, but it just seems like there's a huge amount of capacity. So I think it's huge. It's just enormous. And it's amazing how much stuff we know. Yeah. And for, but we know, and not just know like facts, but it's all integrated into this thing that we can make analogies with. Yes. There's a dream of the Semantic Web, and there's a lot of dreams from expert systems of building giant knowledge bases. Do you see a hope for these kinds of approaches of building, of converting Wikipedia into something that could be used in analogy making? Sure. And I think people have made some progress along those lines. I mean, people have been working on this for a long time. But the problem is, and this I think is the problem of common sense. Like people have been trying to get these common sense networks. Here at MIT, there's this ConceptNet project, right? But the problem is that, as I said, most of the knowledge that we have is invisible to us. It's not in Wikipedia. It's very basic things about intuitive physics, intuitive psychology, intuitive metaphysics, all that stuff. If you were to create a website that described intuitive physics, intuitive psychology, would it be bigger or smaller than Wikipedia? What do you think? I guess described to whom? I'm sorry, but. No, that's really good. That's exactly right, yeah. That's a hard question, because how do you represent that knowledge is the question, right? I can certainly write down F equals ma and Newton's laws and a lot of physics can be deduced from that. But that's probably not the best representation of that knowledge for doing the kinds of reasoning we want a machine to do. So, I don't know, it's impossible to say now. And people, you know, the projects, like there's the famous Cyc project, right, that Douglas Lenat did that was trying. That thing's still going? I think it's still going. And the idea was to try and encode all of common sense knowledge, including all this invisible knowledge in some kind of logical representation. And it just never, I think, could do any of the things that he was hoping it could do, because that's just the wrong approach. Of course, that's what they always say, you know. And then the history books will say, well, the Cyc project finally found a breakthrough in 2058 or something. So much progress has been made in just a few decades that who knows what the next breakthroughs will be. It could be. It's certainly a compelling notion what the Cyc project stands for. I think Lenat was one of the earliest people to say common sense is what we need. That's what we need. All this like expert system stuff, that is not gonna get you to AI. You need common sense. And he basically gave up his whole academic career to go pursue that. And I totally admire that, but I think that the approach itself will not, in 2040 or wherever, be successful. What do you think is wrong with the approach? What kind of approach might be successful?
Well, if I knew that. Again, nobody knows the answer, right? If I knew that, you know, one of my talks, one of the people in the audience, this is a public lecture, one of the people in the audience said, what AI companies are you investing in? I'm like, well, I'm a college professor for one thing, so I don't have a lot of extra funds to invest, but also like no one knows what's gonna work in AI, right? That's the problem. Let me ask another impossible question in case you have a sense. In terms of data structures that will store this kind of information, do you think they've been invented yet, both in hardware and software? Or is it something else needs to be, are we totally, you know? I think something else has to be invented. That's my guess. The breakthroughs that are most promising, would those be in hardware or in software? Do you think we can get far with the current computers? Or do we need to do something that you see? I see what you're saying. I don't know if Turing computation is gonna be sufficient. Probably, I would guess it will. I don't see any reason why we need anything else. So in that sense, we have invented the hardware we need, but we just need to make it faster and bigger, and we need to figure out the right algorithms and the right sort of architecture. Turing, that's a very mathematical notion. When we actually try to build intelligence, it's now an engineering notion where you throw all that stuff. Well, I guess it is a question. People have brought up this question, and when you asked about, like, is our current hardware, will our current hardware work? Well, Turing computation says that our current hardware is, in principle, a Turing machine, right? So all we have to do is make it faster and bigger. But there have been people like Roger Penrose, if you might remember, that he said, Turing machines cannot produce intelligence because intelligence requires continuous valued numbers. I mean, that was sort of my reading of his argument. And quantum mechanics and what else, whatever. But I don't see any evidence for that, that we need new computation paradigms. But I don't know if we're, you know, I don't think we're gonna be able to scale up our current approaches to programming these computers. What is your hope for approaches like Copycat or other cognitive architectures? I've talked to the creator of SOAR, for example. I've used ACT-R myself. I don't know if you're familiar with it. Yeah, I am. What do you think is, what's your hope of approaches like that in helping develop systems of greater and greater intelligence in the coming decades? Well, that's what I'm working on now, is trying to take some of those ideas and extend them. So I think there are some really promising approaches that are going on now that have to do with more active generative models. So this is the idea of this simulation in your head, the concept, when you, if you wanna, when you're perceiving a new situation, you have some simulations in your head. Those are generative models. They're generating your expectations. They're generating predictions. So that's part of perception. You have a mental model that generates a prediction, then you compare it with what's coming in, and then there's the difference. And you also, that generative model is telling you where to look and what to look at and what to pay attention to. And it, I think it affects your perception. It's not just that you compare it with your perception. It becomes your perception in a way.
It's kind of a mixture of the bottom up information coming from the world and your top down model being imposed on the world is what becomes your perception. So your hope is something like that can improve perception systems and that they can understand things better. Yes. To understand things. Yes. What's the, what's the step, what's the analogy making step there? Well, there, the idea is that you have this pretty complicated conceptual space. You can talk about a semantic network or something like that with these different kinds of concept models in your brain that are connected. So, so let's, let's take the example of walking a dog. So we were talking about that. Okay. Let's say I see someone out in the street walking a cat. Some people walk their cats, I guess. Seems like a bad idea, but. Yeah. So my model, my, you know, there's connections between my model of a dog and model of a cat. And I can immediately see the analogy of that those are analogous situations, but I can also see the differences and that tells me what to expect. So also, you know, I have a new situation. So another example with the walking the dog thing is sometimes people, I see people riding their bikes with a leash, holding a leash and the dogs running alongside. Okay, so I know that the, I recognize that as kind of a dog walking situation, even though the person's not walking, right? And the dog's not walking. Because I have these models that say, okay, riding a bike is sort of similar to walking or it's connected, it's a means of transportation, but I, because they have their dog there, I assume they're not going to work, but they're going out for exercise. You know, these analogies help me to figure out kind of what's going on, what's likely. But sort of these analogies are very human interpretable. So that's that kind of space. And then you look at something like the current deep learning approaches, they kind of help you to take raw sensory information and to sort of automatically build up hierarchies of what you can even call them concepts. They're just not human interpretable concepts. What's your, what's the link here? Do you hope, sort of the hybrid system question, how do you think the two can start to meet each other? What's the value of learning in this systems of forming, of analogy making? The goal of, you know, the original goal of deep learning in at least visual perception was that you would get the system to learn to extract features that at these different levels of complexity. So maybe edge detection and that would lead into learning, you know, simple combinations of edges and then more complex shapes and then whole objects or faces. And this was based on the ideas of the neuroscientists, Hubel and Wiesel, who had seen, laid out this kind of structure in brain. And I think that's right to some extent. Of course, people have found that the whole story is a little more complex than that. And the brain of course always is and there's a lot of feedback. So I see that as absolutely a good brain inspired approach to some aspects of perception. But one thing that it's lacking, for example, is all of that feedback, which is extremely important. The interactive element that you mentioned. The expectation, right, the conceptual level. Going back and forth with the expectation, the perception and just going back and forth. So, right, so that is extremely important. And, you know, one thing about deep neural networks is that in a given situation, like, you know, they're trained, right? 
They get these weights and everything, but then now I give them a new image, let's say. They treat every part of the image in the same way. You know, they apply the same filters at each layer to all parts of the image. There's no feedback to say like, oh, this part of the image is irrelevant. I shouldn't care about this part of the image. Or this part of the image is the most important part. And that's kind of what we humans are able to do because we have these conceptual expectations. So there's, by the way, a little bit of work on that. There's certainly a lot more under what's called attention in natural language processing. And that's exceptionally powerful. And it's a very, just as you say, it's a really powerful idea. But again, in sort of machine learning, it all kind of operates in an automated way. That's not human interpretable. And it's also not, right, it's not dynamic. I mean, in the sense that as a perception of a new example is being processed, those attention weights don't change. Right, so I mean, there's a kind of notion that there's not a memory. So you're not aggregating the idea of like, this mental model. Yes. I mean, that seems to be a fundamental idea. There's not a really powerful, I mean, there's some stuff with memory, but there's not a powerful way to represent the world in some sort of way that's deeper than, I mean, it's so difficult because, you know, neural networks do represent the world. They do have a mental model, right? But it just seems to be shallow. It's hard to criticize them at the fundamental level, to me at least. It's easy to criticize them. Well, look, like exactly what you're saying, mental models sort of almost put a psychology hat on, say, look, these networks are clearly not able to achieve what we humans do with forming mental models, analogy making and so on. But that doesn't mean that they fundamentally cannot do that. Like it's very difficult to say that. I mean, at least to me, do you have a notion that the learning approaches really, I mean, not only are they limited today, but they will forever be limited in being able to construct such mental models. I think the idea of the dynamic perception is key here. The idea that moving your eyes around and getting feedback. And that's something that, you know, there's been some models like that. There's certainly recurrent neural networks that operate over several time steps. But the problem is that the actual, the recurrence is, you know, basically the feedback at the next time step is the entire hidden state of the network, which is, it turns out that that doesn't work very well. But see, the thing I'm saying is mathematically speaking, it has the information in that recurrence to capture everything, it just doesn't seem to work. So like, you know, it's like, it's the same Turing machine question, right? Yeah, maybe theoretically, computers, anything that's Turing, a universal Turing machine can be intelligent, but practically, the architecture might be very specific. Kind of architecture to be able to create it. So I guess it sort of asks almost the same question again: how big of a role do you think deep learning needs, will play or needs to play in this, in perception? I think that deep learning, as it currently exists, you know, that kind of thing will play some role. But I think that there's a lot more going on in perception.
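As a rough illustration of the dynamic, feedback-driven perception being contrasted here with a single feedforward pass, the sketch below (an assumed toy, not any particular published model) has a top-down generative model predict the input from an internal state, with the prediction error feeding back to update that state over several steps; the dimensions, step size, and variable names are all arbitrary choices for illustration.

```python
# Toy sketch of a top-down / bottom-up perception loop: a generative model
# predicts the input from an internal state, and the prediction error feeds
# back to refine that state, so "perceiving" is iterative, not one pass.

import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(16, 4))   # top-down generative weights (fixed here)
x = rng.normal(size=16)        # bottom-up sensory input
z = np.zeros(4)                # internal state / current expectation

for step in range(100):
    prediction = W @ z         # top-down prediction of the input
    error = x - prediction     # bottom-up prediction error
    z += 0.02 * (W.T @ error)  # feedback: the error updates the internal state

print("remaining prediction error:", np.linalg.norm(x - W @ z))
```

The point of the loop is only that the model's expectation and the incoming signal keep interacting; in a plain feedforward network the weights and the single pass over the input do all the work, with no such per-example refinement.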
But who knows, you know, the definition of deep learning, I mean, it's pretty broad. It's kind of an umbrella for a lot of different things. So what I mean is purely sort of neural networks. Yeah, and feed forward neural networks. Essentially, or there could be recurrence, but sometimes it feels like, for instance, I talked to Gary Marcus, it feels like the criticism of deep learning is kind of like us birds criticizing airplanes for not flying well, or that they're not really flying. Do you think deep learning, do you think it could go all the way? Like Yann LeCun thinks. Do you think that, yeah, the brute force learning approach can go all the way? I don't think so, no. I mean, I think it's an open question, but I tend to be on the innateness side that there's some things that we've been evolved to be able to learn, and that learning just can't happen without them. So one example, here's an example I had in the book that I think is useful to me, at least, in thinking about this. So this has to do with DeepMind's Atari game playing program, okay? And it learned to play these Atari video games just by getting input from the pixels of the screen, and it learned to play the game Breakout 1,000% better than humans, okay? That was one of their results, and it was great. And it learned this thing where it tunneled through the side of the bricks in the Breakout game, and the ball could bounce off the ceiling and then just wipe out bricks. Okay, so there was a group who did an experiment where they took the paddle that you move with the joystick and moved it up two pixels or something like that. And then they looked at a deep Q-learning system that had been trained on Breakout and said, could it now transfer its learning to this new version of the game? Of course, a human could, and it couldn't. Maybe that's not surprising, but I guess the point is it hadn't learned the concept of a paddle. It hadn't learned the concept of a ball or the concept of tunneling. It was learning something, you know, we, looking at it, kind of anthropomorphized it and said, oh, here's what it's doing in the way we describe it. But it actually didn't learn those concepts. And so because it didn't learn those concepts, it couldn't make this transfer. Yes, so that's a beautiful statement, but at the same time, by moving the paddle, we also anthropomorphize flaws to inject into the system that will then flip how impressed we are by it. What I mean by that is, to me, the Atari games were, to me, deeply impressive that that was possible at all. So like I have to first pause on that, and people should look at that, just like the game of Go, which is fundamentally different to me than what Deep Blue did. Even though there's still a tree search, it's just everything DeepMind has done in terms of learning, however limited it is, is still deeply surprising to me. Yeah, I'm not trying to say that what they did wasn't impressive. I think it was incredibly impressive. To me, it's interesting. Is moving the paddle just another thing that needs to be learned? So like we've been able to, maybe, maybe, been able to, through the current neural networks, learn very basic concepts that are not enough to do this general reasoning, and maybe with more data. I mean, the interesting thing about the examples that you talk about beautifully is it's often flaws of the data. Well, that's the question. I mean, I think that is the key question, whether it's a flaw of the data or not.
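The paddle-shifting experiment described here can be pictured with a small sketch like the one below. This is an assumed setup rather than the original study's code: `make_breakout_env` and `trained_policy` are hypothetical placeholders for whatever Atari environment and already-trained agent you have, the old gym-style `step` return signature is assumed, and the perturbation is just a crude pixel shift of the observation.

```python
# Sketch of a transfer test: evaluate a frozen, already-trained Breakout
# policy on observations where the screen is shifted a few pixels, with no
# retraining, and compare the score to the unperturbed baseline.

import numpy as np

def shift_observation(frame, pixels=2):
    # Shift the whole screen by a few pixels, roughly mimicking the
    # "paddle moved" perturbation; edge rows simply wrap around here.
    return np.roll(frame, -pixels, axis=0)

def evaluate(policy, env, perturb=False, episodes=10):
    total = 0.0
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            if perturb:
                obs = shift_observation(obs)
            action = policy(obs)                      # frozen network, no learning
            obs, reward, done, _ = env.step(action)   # assumed gym-style step
            total += reward
    return total / episodes

# baseline = evaluate(trained_policy, make_breakout_env())
# shifted  = evaluate(trained_policy, make_breakout_env(), perturb=True)
# A large drop from `baseline` to `shifted` is the failure to transfer that
# is being described: the network never learned "paddle" as a concept.
```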
Because the reason I brought up this example was because you were asking, do I think that learning from data could go all the way? And this was why I brought up the example, because I think, and this is not at all to take away from the impressive work that they did, but it's to say that when we look at what these systems learn, do they learn the things that we humans consider to be the relevant concepts? And in that example, it didn't. Sure, if you train it on moving, you know, the paddle being in different places, maybe it could deal with, maybe it would learn that concept. I'm not totally sure. But the question is, you know, scaling that up to more complicated worlds, to what extent could a machine that only gets this very raw data learn to divide up the world into relevant concepts? And I don't know the answer, but I would bet that without some innate notion that it can't do it. Yeah, 10 years ago, I would have 100% agreed with you, as would most experts in AI, but now I have a glimmer of hope. Okay, I mean, that's fair enough. And I think that's what deep learning did in the community is, no, no, if I had to bet all my money, it's 100% deep learning will not take us all the way. But there's still other, it's still, I was so personally sort of surprised by the Atari games, by Go, by the power of self play of just game playing against each other that I was like many other times just humbled of how little I know about what's possible in this approach. Yeah, I think fair enough. Self play is amazingly powerful. And that goes way back to Arthur Samuel, right, with his checker playing program, which was brilliant and surprising that it did so well. So just for fun, let me ask you on the topic of autonomous vehicles. It's the area that I work at least these days most closely on, and it's also an area that I think is a good example that you use as sort of an example of things we as humans don't always realize how hard they are to do. It's like the constant trend in AI, where there are problems that we think are easy when we first try them and then realize how hard they are. Okay, so you've talked about autonomous driving being a difficult problem, more difficult than we realize, than we humans give it credit for. Why is it so difficult? What are the most difficult parts in your view? I think it's difficult because the world is so open ended as to what kinds of things can happen. So you have sort of what normally happens, which is just you drive along and nothing surprising happens, and autonomous vehicles can do, the ones we have now evidently can do really well on most normal situations as long as the weather is reasonably good and everything. But then we have this notion of edge cases or things in the tail of the distribution, we call it the long tail problem, which says that there's so many possible things that can happen that were not in the training data of the machine that it won't be able to handle it because it doesn't have common sense. Right, it's the old, the paddle moved problem. Yeah, it's the paddle moved problem, right. And so my understanding, and you probably are more of an expert than I am on this, is that current self driving car vision systems have problems with obstacles, meaning that they don't know which obstacles, which quote unquote obstacles they should stop for and which ones they shouldn't stop for. And so a lot of times I read that they tend to slam on the brakes quite a bit.
And the most common accidents with self driving cars are people rear ending them because they were surprised. They weren't expecting the machine, the car to stop. Yeah, so there's a lot of interesting questions there. Whether, because you mentioned kind of two things. So one is the problem of perception, of understanding, of interpreting the objects that are detected correctly. And the other one is more like the policy, the action that you take, how you respond to it. So a lot of the car's braking is a kind of notion of, to clarify, there's a lot of different kind of things that are people calling autonomous vehicles. But the L4 vehicles with a safety driver are the ones like Waymo and Cruise and those companies, they tend to be very conservative and cautious. So they tend to be very, very afraid of hurting anything or anyone and getting in any kind of accidents. So their policy is very kind of, that results in being exceptionally responsive to anything that could possibly be an obstacle, right? Right, which the human drivers around it, it behaves unpredictably. Yeah, that's not a very human thing to do, caution. That's not the thing we're good at, especially in driving. We're in a hurry, often angry and et cetera, especially in Boston. And then there's sort of another, and a lot of times, machine learning is not a huge part of that. It's becoming more and more unclear to me how much sort of speaking to public information because a lot of companies say they're doing deep learning and machine learning just to attract good candidates. The reality is in many cases, it's still not a huge part of the perception. There's LiDAR and there's other sensors that are much more reliable for obstacle detection. And then there's Tesla approach, which is vision only. And there's, I think a few companies doing that, but Tesla most sort of famously pushing that forward. And that's because the LiDAR is too expensive, right? Well, I mean, yes, but I would say if you were to for free give to every Tesla vehicle, I mean, Elon Musk fundamentally believes that LiDAR is a crutch, right, famously said that. That if you want to solve the problem of machine learning, LiDAR should not be the primary sensor is the belief. The camera contains a lot more information. So if you want to learn, you want that information. But if you want to not to hit obstacles, you want LiDAR, right? Sort of it's this weird trade off because yeah, sort of what Tesla vehicles have a lot of, which is really the thing, the fallback, the primary fallback sensor is radar, which is a very crude version of LiDAR. It's a good detector of obstacles except when those things are standing, right? The stopped vehicle. Right, that's why it had problems with crashing into stop fire trucks. Stop fire trucks, right. So the hope there is that the vision sensor would somehow catch that. And for, there's a lot of problems with perception. They are doing actually some incredible stuff in the, almost like an active learning space where it's constantly taking edge cases and pulling back in. There's this data pipeline. Another aspect that is really important that people are studying now is called multitask learning, which is sort of breaking apart this problem, whatever the problem is, in this case driving, into dozens or hundreds of little problems that you can turn into learning problems. So this giant pipeline, it's kind of interesting. I've been skeptical from the very beginning, but become less and less skeptical over time how much of driving can be learned. 
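For the multitask learning idea mentioned a moment ago, here is a minimal PyTorch-style sketch of the general shape: one shared backbone over a camera frame and several small task heads, each treated as its own little learning problem. The architecture, layer sizes, and implied task names are assumptions for illustration, not any company's actual perception stack.

```python
# Minimal sketch of multitask perception: a shared image backbone feeding
# many small heads (e.g. "is there a stop line?", "traffic light state",
# "is the lead vehicle braking?"), each trained as a separate sub-problem.

import torch
import torch.nn as nn

class MultiTaskPerception(nn.Module):
    def __init__(self, num_tasks=3):
        super().__init__()
        # Shared backbone: a tiny stand-in for a real image encoder.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # One small head per sub-problem.
        self.heads = nn.ModuleList([nn.Linear(16, 2) for _ in range(num_tasks)])

    def forward(self, image):
        features = self.backbone(image)
        return [head(features) for head in self.heads]

model = MultiTaskPerception()
dummy_frame = torch.randn(1, 3, 64, 64)   # placeholder camera frame
outputs = model(dummy_frame)               # one prediction per sub-task
print([o.shape for o in outputs])
```

The appeal of this shape is that each head can get its own labels and its own slice of the data pipeline, which is roughly what breaking driving into dozens or hundreds of little learning problems looks like in practice.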
I still think it's much farther than the CEO of that particular company thinks it will be, but it's constantly surprising that through good engineering and data collection and active selection of data, how you can attack that long tail. And it's an interesting open question that you're absolutely right. There's a much longer tail and all these edge cases that we don't think about, but it's a fascinating question that applies to natural language and all spaces. How big is that long tail? And I mean, not to linger on the point, but what's your sense in driving in these practical problems of the human experience? Can it be learned? So the current, what are your thoughts of sort of Elon Musk thought, let's forget the thing that he says it'd be solved in a year, but can it be solved in a reasonable timeline or do fundamentally other methods need to be invented? So I don't, I think that ultimately driving, so it's a trade off in a way, being able to drive and deal with any situation that comes up does require kind of full human intelligence. And even in humans aren't intelligent enough to do it because humans, I mean, most human accidents are because the human wasn't paying attention or the humans drunk or whatever. And not because they weren't intelligent enough. And not because they weren't intelligent enough, right. Whereas the accidents with autonomous vehicles is because they weren't intelligent enough. They're always paying attention. Yeah, they're always paying attention. So it's a trade off, you know, and I think that it's a very fair thing to say that autonomous vehicles will be ultimately safer than humans because humans are very unsafe. It's kind of a low bar. But just like you said, I think humans got a better rap, right. Because we're really good at the common sense thing. Yeah, we're great at the common sense thing. We're bad at the paying attention thing. Paying attention thing, right. Especially when we're, you know, driving is kind of boring and we have these phones to play with and everything. But I think what's going to happen is that for many reasons, not just AI reasons, but also like legal and other reasons, that the definition of self driving is going to change or autonomous is going to change. It's not going to be just, I'm going to go to sleep in the back and you just drive me anywhere. It's going to be more certain areas are going to be instrumented to have the sensors and the mapping and all of the stuff you need for, that the autonomous cars won't have to have full common sense and they'll do just fine in those areas as long as pedestrians don't mess with them too much. That's another question. That's right. But I don't think we will have fully autonomous self driving in the way that like most, the average person thinks of it for a very long time. And just to reiterate, this is the interesting open question that I think I agree with you on, is to solve fully autonomous driving, you have to be able to engineer in common sense. Yes. I think it's an important thing to hear and think about. I hope that's wrong, but I currently agree with you that unfortunately you do have to have, to be more specific, sort of these deep understandings of physics and of the way this world works and also the human dynamics. Like you mentioned, pedestrians and cyclists, actually that's whatever that nonverbal communication as some people call it, there's that dynamic that is also part of this common sense. Right. And we humans are pretty good at predicting what other humans are going to do. 
And how our actions impact the behaviors of this weird game theoretic dance that we're good at somehow. And the funny thing is, because I've watched countless hours of pedestrian video and talked to people, we humans are also really bad at articulating the knowledge we have. Right. Which has been a huge challenge. Yes. So you've mentioned embodied intelligence. What do you think it takes to build a system of human level intelligence? Does it need to have a body? I'm not sure, but I'm coming around to that more and more. And what does it mean to be, I don't mean to keep bringing up Yann LeCun. He looms very large. Well, he certainly has a large personality. Yes. He thinks that the system needs to be grounded, meaning he needs to sort of be able to interact with reality, but doesn't think it necessarily needs to have a body. So when you think of... So what's the difference? I guess I want to ask, when you mean body, do you mean you have to be able to play with the world? Or do you also mean like there's a body that you have to preserve? Oh, that's a good question. I haven't really thought about that, but I think both, I would guess. Because I think intelligence, it's so hard to separate it from our desire for self preservation, our emotions, all that non rational stuff that kind of gets in the way of logical thinking. Because the way, if we're talking about human intelligence or human level intelligence, whatever that means, a huge part of it is social. We were evolved to be social and to deal with other people. And that's just so ingrained in us that it's hard to separate intelligence from that. I think AI for the last 70 years or however long it's been around, it has largely been separated. There's this idea that there's like, it's kind of very Cartesian. There's this thinking thing that we're trying to create, but we don't care about all this other stuff. And I think the other stuff is very fundamental. So there's idea that things like emotion can get in the way of intelligence. As opposed to being an integral part of it. Integral part of it. So, I mean, I'm Russian, so romanticize the notions of emotion and suffering and all that kind of fear of mortality, those kinds of things. So in AI, especially. By the way, did you see that? There was this recent thing going around the internet. Some, I think he's a Russian or some Slavic had written this thing, anti the idea of super intelligence. I forgot, maybe he's Polish. Anyway, so it all these arguments and one was the argument from Slavic pessimism. My favorite. Do you remember what the argument is? It's like nothing ever works. Everything sucks. So what do you think is the role? Like that's such a fascinating idea that what we perceive as sort of the limits of the human mind, which is emotion and fear and all those kinds of things are integral to intelligence. Could you elaborate on that? Like why is that important, do you think? For human level intelligence. At least for the way the humans work, it's a big part of how it affects how we perceive the world. It affects how we make decisions about the world. It affects how we interact with other people. It affects our understanding of other people. For me to understand what you're likely to do, I need to have kind of a theory of mind and that's very much a theory of emotion and motivations and goals. And to understand that, we have this whole system of mirror neurons. I sort of understand your motivations through sort of simulating it myself. 
So it's not something that I can prove that's necessary, but it seems very likely. So, okay. You've written the op ed in the New York Times titled We Shouldn't Be Scared by Superintelligent AI and it criticized a little bit Stuart Russell and Nick Bostrom. Can you try to summarize that article's key ideas? So it was spurred by an earlier New York Times op ed by Stuart Russell, which was summarizing his book called Human Compatible. And the article was saying if we have superintelligent AI, we need to have its values aligned with our values and it has to learn about what we really want. And he gave this example. What if we have a superintelligent AI and we give it the problem of solving climate change and it decides that the best way to lower the carbon in the atmosphere is to kill all the humans? Okay. So to me, that just made no sense at all because a superintelligent AI, first of all, trying to figure out what a superintelligence means and it seems that something that's superintelligent can't just be intelligent along this one dimension of, okay, I'm going to figure out all the steps, the best optimal path to solving climate change and not be intelligent enough to figure out that humans don't want to be killed, that you could get to one without having the other. And, you know, Bostrom, in his book, talks about the orthogonality hypothesis where he says he thinks that a system's, I can't remember exactly what it is, but like a system's goals and its values don't have to be aligned. There's some orthogonality there, which didn't make any sense to me. So you're saying in any system that's sufficiently not even superintelligent, but as opposed to greater and greater intelligence, there's a holistic nature that will sort of, a tension that will naturally emerge that prevents it from sort of any one dimension running away. Yeah, yeah, exactly. So, you know, Bostrom had this example of the superintelligent AI that makes, that turns the world into paper clips because its job is to make paper clips or something. And that just, as a thought experiment, didn't make any sense to me. Well, as a thought experiment or as a thing that could possibly be realized? Either. So I think that, you know, what my op ed was trying to do was say that intelligence is more complex than these people are presenting it. That it's not like, it's not so separable. The rationality, the values, the emotions, the, all of that, that it's, the view that you could separate all these dimensions and build a machine that has one of these dimensions and it's superintelligent in one dimension, but it doesn't have any of the other dimensions. That's what I was trying to criticize that I don't believe that. So can I read a few sentences from Yoshua Bengio who is always super eloquent? So he writes, I have the same impression as Melanie that our cognitive biases are linked with our ability to learn to solve many problems. They may also be a limiting factor for AI. However, this is a may in quotes. Things may also turn out differently and there's a lot of uncertainty about the capabilities of future machines. But more importantly for me, the value alignment problem is a problem well before we reach some hypothetical superintelligence. It is already posing a problem in the form of super powerful companies whose objective function may not be sufficiently aligned with humanity's general wellbeing, creating all kinds of harmful side effects. 
So he goes on to argue that the orthogonality and those kinds of things, the concerns of just aligning values with the capabilities of the system is something that might come long before we reach anything like superintelligence. So your criticism is kind of really nice to saying this idea of superintelligent systems seem to be dismissing fundamental parts of what intelligence would take. And then Yoshua kind of says, yes, but if we look at systems that are much less intelligent, there might be these same kinds of problems that emerge. Sure, but I guess the example that he gives there of these corporations, that's people, right? Those are people's values. I mean, we're talking about people, the corporations are, their values are the values of the people who run those corporations. But the idea is the algorithm, that's right. So the fundamental person, the fundamental element of what does the bad thing is a human being. Yeah. But the algorithm kind of controls the behavior of this mass of human beings. Which algorithm? For a company that's the, so for example, if it's an advertisement driven company that recommends certain things and encourages engagement, so it gets money by encouraging engagement and therefore the company more and more, it's like the cycle that builds an algorithm that enforces more engagement and may perhaps more division in the culture and so on, so on. I guess the question here is sort of who has the agency? So you might say, for instance, we don't want our algorithms to be racist. Right. And facial recognition, some people have criticized some facial recognition systems as being racist because they're not as good on darker skin than lighter skin. That's right. Okay. But the agency there, the actual facial recognition algorithm isn't what has the agency. It's not the racist thing, right? It's the, I don't know, the combination of the training data, the cameras being used, whatever. But my understanding of, and I agree with Bengio there that he, I think there are these value issues with our use of algorithms. But my understanding of what Russell's argument was is more that the machine itself has the agency now. It's the thing that's making the decisions and it's the thing that has what we would call values. Yes. So whether that's just a matter of degree, it's hard to say, right? But I would say that's sort of qualitatively different than a face recognition neural network. And to broadly linger on that point, if you look at Elon Musk or Stuart Russell or Bostrom, people who are worried about existential risks of AI, however far into the future, the argument goes is it eventually happens. We don't know how far, but it eventually happens. Do you share any of those concerns and what kind of concerns in general do you have about AI that approach anything like existential threat to humanity? So I would say, yes, it's possible, but I think there's a lot more closer in existential threats to humanity. As you said, like a hundred years for your time. It's more than a hundred years. More than a hundred years. Maybe even more than 500 years. I don't know. So the existential threats are so far out that the future is, I mean, there'll be a million different technologies that we can't even predict now that will fundamentally change the nature of our behavior, reality, society, and so on before then. Yeah, I think so. I think so. And we have so many other pressing existential threats going on right now. Nuclear weapons even. 
Nuclear weapons, climate problems, poverty, possible pandemics. You can go on and on. And I think worrying about existential threat from AI is not the best priority for what we should be worrying about. That's kind of my view, because we're so far away. But I'm not necessarily criticizing Russell or Bostrom or whoever for worrying about that. And I think some people should be worried about it. It's certainly fine. But I was more getting at their view of what intelligence is. So I was more focusing on their view of superintelligence than just the fact of them worrying. And the title of the article was written by the New York Times editors. I wouldn't have called it that. We shouldn't be scared by superintelligence. No. If you wrote it, it'd be like we should redefine what you mean by superintelligence. I actually said something like superintelligence is not a sort of coherent idea. But that's not something the New York Times would put in. And the follow up argument that Yoshua makes also, not argument, but a statement, and I've heard him say it before. And I think I agree. He kind of has a very friendly way of phrasing it. It's good for a lot of people to believe different things. He's such a nice guy. Yeah. But it's also practically speaking like we shouldn't be like, while your article stands, like Stuart Russell does amazing work. Bostrom does amazing work. You do amazing work. And even when you disagree about the definition of superintelligence or the usefulness of even the term, it's still useful to have people that like use that term, right? And then argue. Sure. I absolutely agree with Bengio there. And I think it's great that, you know, and it's great that New York Times will publish all this stuff. That's right. It's an exciting time to be here. What do you think is a good test of intelligence? Is natural language ultimately a test that you find the most compelling, like the original or the higher levels of the Turing test kind of? Yeah, I still think the original idea of the Turing test is a good test for intelligence. I mean, I can't think of anything better. You know, the Turing test, the way that it's been carried out so far has been very impoverished, if you will. But I think a real Turing test that really goes into depth, like the one that I mentioned, I talk about in the book, I talk about Ray Kurzweil and Mitchell Kapor have this bet, right? That in 2029, I think is the date there, a machine will pass the Turing test and they have a very specific, like how many hours, expert judges and all of that. And, you know, Kurzweil says yes, Kapor says no. We only have like nine more years to go to see. But I, you know, if something, a machine could pass that, I would be willing to call it intelligent. Of course, nobody will. They will say that's just a language model, if it does. So you would be comfortable, so language, a long conversation that, well, yeah, you're, I mean, you're right, because I think probably to carry out that long conversation, you would literally need to have deep common sense understanding of the world. I think so. And the conversation is enough to reveal that. I think so. So another super fun topic is complexity, which you have worked on and written about. Let me ask the basic question. What is complexity? So complexity is another one of those terms like intelligence. It's perhaps overused.
But my book about complexity was about this wide area of complex systems, studying different systems in nature, in technology, in society in which you have emergence, kind of like I was talking about with intelligence. You know, we have the brain, which has billions of neurons. And each neuron individually could be said to be not very complex compared to the system as a whole. But the system, the interactions of those neurons and the dynamics, creates these phenomena that we call intelligence or consciousness, you know, that we consider to be very complex. So the field of complexity is trying to find general principles that underlie all these systems that have these kinds of emergent properties. And the emergence occurs from like underlying the complex system is usually simple, fundamental interactions. Yes. And the emergence happens when there's just a lot of these things interacting. Yes. Sort of, and then, most of science to date, can you talk about what is reductionism? Well, reductionism is when you try and take a system and divide it up into its elements, whether those be cells or atoms or subatomic particles, whatever your field is, and then try and understand those elements. And then try and build up an understanding of the whole system by looking at sort of the sum of all the elements. So what's your sense? Whether we're talking about intelligence or these kinds of interesting complex systems, is it possible to understand them in a reductionist way, which is probably the approach of most of science today, right? I don't think it's always possible to understand the things we want to understand the most. So I don't think it's possible to look at single neurons and understand what we call intelligence, to look at sort of summing up, and sort of the summing up is the issue here. One example is the human genome, right, so there was a lot of work and excitement about sequencing the human genome because the idea would be that we'd be able to find genes that underlie diseases. But it turns out that, and it was a very reductionist idea, you know, we figure out what all the parts are, and then we would be able to figure out which parts cause which things. But it turns out that the parts don't cause the things that we're interested in. It's like the interactions, it's the networks of these parts. And so that kind of reductionist approach didn't yield the explanation that we wanted. What do you think is the most beautiful complex system that you've encountered? The most beautiful. That you've been captivated by. Is it sort of, I mean, for me, the simplest would be cellular automata. Oh, yeah. So I was very captivated by cellular automata and worked on cellular automata for several years. Do you find it amazing or is it surprising that such simple systems, such simple rules in cellular automata can create sort of seemingly unlimited complexity? Yeah, that was very surprising to me. How do you make sense of it? How does that make you feel? Is it just ultimately humbling or is there a hope to somehow leverage this into a deeper understanding and even be able to engineer things like intelligence? It's definitely humbling. Humbling, but in that also kind of awe inspiring, that it's that awe inspiring part of mathematics, that these incredibly simple rules can produce this very beautiful, complex, hard to understand behavior. And that's, it's mysterious, you know, and surprising still.
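As a small taste of how little machinery this takes, here is a minimal sketch of an elementary one dimensional cellular automaton in Python; the choice of rule 110, the wrap around boundary, and the grid size are just illustrative assumptions.

# Each cell is 0 or 1; its next value depends only on itself and its two
# neighbors, looked up in a fixed eight entry rule table.
def step(row, rule=110):
    bits = [(rule >> i) & 1 for i in range(8)]  # output for neighborhood pattern i
    n = len(row)
    return [bits[(row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n]]
            for i in range(n)]

width, generations = 64, 32
row = [0] * width
row[width // 2] = 1  # start from a single live cell
for _ in range(generations):
    print("".join("#" if c else "." for c in row))
    row = step(row)

A few dozen printed generations from that single seed cell are already enough to show structure that nothing in the three line update rule would lead you to expect.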
But exciting because it does give you kind of the hope that you might be able to engineer complexity just from simple rules. Can you briefly say what is the Santa Fe Institute, its history, its culture, its ideas, its future? So I've never, as I mentioned to you, I've never been, but it's always been this, in my mind, this mystical place where brilliant people study the edge of chaos. Yeah, exactly. So the Santa Fe Institute was started in 1984 and it was created by a group of scientists, a lot of them from Los Alamos National Lab, which is about a 40 minute drive from the Santa Fe Institute. They were mostly physicists and chemists, but they were frustrated in their field because they felt that their field wasn't approaching kind of big interdisciplinary questions like the kinds we've been talking about. And they wanted to have a place where people from different disciplines could work on these big questions without sort of being siloed into physics, chemistry, biology, whatever. So they started this institute and this was people like George Cowan, who was a chemist in the Manhattan Project, and Nicholas Metropolis, a mathematician, physicist, Murray Gell-Mann, physicist. So some really big names here. Ken Arrow, Nobel Prize winning economist, and they started having these workshops. And this whole enterprise kind of grew into this research institute that itself has been kind of on the edge of chaos its whole life because it doesn't have a significant endowment. And it's just been kind of living on whatever funding it can raise through donations and grants and however it can, you know, business associates and so on. But it's a great place. It's a really fun place to go think about ideas that you wouldn't normally encounter. I saw Sean Carroll, a physicist. Yeah, he's on the external faculty. And you mentioned that there's, so there's some external faculty and there's people that are... A very small group of resident faculty, maybe about 10 who are there for five year terms that can sometimes get renewed. And then they have some postdocs and then they have this much larger on the order of 100 external faculty or people like me who come and visit for various periods of time. So what do you think is the future of the Santa Fe Institute? And if people are interested, like what's there in terms of the public interaction or students or so on that could be a possible interaction with the Santa Fe Institute or its ideas? Yeah, so there's a few different things they do. They have a complex systems summer school for graduate students and postdocs and sometimes faculty attend too. And that's a four week, very intensive residential program where you go and you listen to lectures and you do projects and people really like that. I mean, it's a lot of fun. They also have some specialty summer schools. There's one on computational social science. There's one on climate and sustainability, I think it's called. There's a few and then they have short courses that are just a few days on different topics. They also have an online education platform that offers a lot of different courses and tutorials from SFI faculty. Including an introduction to complexity course that I taught. Awesome. And there's a bunch of talks too online from the guest speakers and so on. They host a lot of... Yeah, they have sort of technical seminars and colloquia and they have a community lecture series like public lectures and they put everything on their YouTube channel so you can see it all. Watch it.
Douglas Hofstadter, author of Gödel, Escher, Bach, was your PhD advisor. You've mentioned him a couple of times as a collaborator. Do you have any favorite lessons or memories from your time working with him that continue to this day? Just even looking back throughout your time working with him. One of the things he taught me was that when you're looking at a complex problem, to idealize it as much as possible to try and figure out what is the essence of this problem. And this is how the copycat program came into being was by taking analogy making and saying, how can we make this as idealized as possible but still retain really the important things we want to study? And that's really been a core theme of my research, I think. And I continue to try and do that. And it's really very much kind of physics inspired. Hofstadter got his PhD in physics. That was his background. It's like first principles kind of thing. You reduce it to the most fundamental aspect of the problem so that you can focus on solving that fundamental aspect. Yeah. And in AI, people used to work in these micro worlds, right? Like the blocks world was a very early, important area in AI. And then that got criticized because they said, oh, you can't scale that to the real world. And so people started working on much more real world like problems. But now there's been kind of a return even to the blocks world itself. We've seen a lot of people who are trying to work on more of these very idealized problems for things like natural language and common sense. So that's an interesting evolution of those ideas. So perhaps the blocks world represents the fundamental challenges of the problem of intelligence more than people realize. It might. Yeah. When you look back at your body of work and your life, you've worked in so many different fields. Is there something that you're just really proud of in terms of ideas that you've gotten a chance to explore, create yourself? So I am really proud of my work on the copycat project. I think it's really different from what almost everyone has done in AI. I think there's a lot of ideas there to be explored. And I guess one of the happiest days of my life, you know, aside from like the births of my children, was the birth of copycat when it actually started to be able to make really interesting analogies. And I remember that very clearly. It was a very exciting time. Well, you kind of gave life to an artificial system. That's right. In terms of something people can interact with, I saw there's like a, I think it's called MetaCat. MetaCat. MetaCat. And there's a Python 3 implementation. If people actually wanted to play around with it and actually get into it and study it and maybe integrate it with, whether it's deep learning or any other kind of work they're doing. What would you suggest they do to learn more about it and to take it forward in different kinds of directions? Yeah, so there's Douglas Hofstadter's book called Fluid Concepts and Creative Analogies, which talks in great detail about copycat. I have a book called Analogy Making as Perception, which is a version of my PhD thesis on it. There's also code available that you can get to run. I have some links on my webpage to where people can get the code for it. And I think that that would really be the best way to get into it. Just dive in and play with it. Well, Melanie, it was an honor talking to you. I really enjoyed it. Thank you so much for your time today. Thanks. It's been really great.
Thanks for listening to this conversation with Melanie Mitchell. And thank you to our presenting sponsor, Cash App. Download it. Use code LexPodcast. You will get $10 and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon or connect with me on Twitter. And now let me leave you with some words of wisdom from Douglas Hofstadter and Melanie Mitchell. Without concepts, there can be no thought. Without analogies, there can be no concepts. And Melanie adds, how to form and fluidly use concepts is the most important open problem in AI. Thank you for listening and hope to see you next time.
Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI | Lex Fridman Podcast #61
The following is a conversation with Donald Knuth, one of the greatest and most impactful computer scientists and mathematicians ever. He's the recipient of the 1974 Turing Award, considered the Nobel Prize of Computing. He's the author of the multi volume work, the Magnum Opus, The Art of Computer Programming. He made several key contributions to the rigorous analysis of computational complexity of algorithms, including the popularization of asymptotic notation, that we all affectionately know as the big O notation. He also created the TeX typesetting system, which most computer scientists, physicists, mathematicians, and scientists and engineers in general use to write technical papers and make them look beautiful. I can imagine no better guest to end 2019 with than Don, one of the kindest, most brilliant people in our field. This podcast was recorded many months ago. It's one I avoided because perhaps counterintuitively, the conversation meant so much to me. If you can believe it, I knew even less about recording back then, so the camera angle is a bit off. I hope that's OK with you. The office space was a bit cramped for filming, but it was a magical space where Don does most of his work. It meant a lot to me that he would welcome me into his home. It was quite a journey to get there. As many people know, he doesn't check email, so I had to get creative. The effort was worth it. I've been doing this podcast on the side for just over a year. Sometimes I had to sacrifice a bit of sleep, but always happy to do it and to be part of an amazing community of curious minds. Thank you for your kind words of support and for the interesting discussions, and I look forward to many more of those in 2020. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, follow on Spotify, support on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode and never any ads in the middle that break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. I provide timestamps for the start of the conversation that you can skip to, but it helps if you listen to the ad and support this podcast by trying out the product or service being advertised. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Broker services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means that donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LexPodcast, you'll get $10 and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now here's my conversation with Donald Knuth.
In 1957 at Case Tech, you were once allowed to spend several evenings with a IBM 650 computer as you've talked about in the past and you fell in love with computing then. Can you take me back to that moment with the IBM 650? What was it that grabbed you about that computer? So the IBM 650 was this machine that, well, it didn't fill a room, but it was big and noisy. But when I first saw it, it was through a window and there were just a lot of lights flashing on it. And I was a freshman, I had a job with the statistics group and I was supposed to punch cards for data and then sort them on another machine, but then they got this new computer, came in and it had interesting lights, okay. So, well, but I had a key to the building so I could get in and look at it and got a manual for it. And my first experience was based on the fact that I could punch cards, basically, which is a big thing for the, but the IBM 650 was big in size, but incredibly small in power. In resources. In memory, it had 2,000 words of memory and a word of memory was 10 decimal digits plus a sign. And it would do, to add two numbers together, you could probably expect that would take, I'll say three milliseconds. So. It took pretty fast, the memory is the constraint, the memory is the problem. That was why it took three milliseconds, because it took five milliseconds for the drum to go around and you had to wait, I don't know, five cycle times. If you have an instruction, one position on the drum, then it would be ready to read the data for the instruction and go three notches. The drum is 50 cycles around and you go three cycles and you can get the data and then you can go another three cycles and get to your next instruction if the instruction is there, otherwise you spin until you get to the right place. And we had no random access memory whatsoever until my senior year. Senior year, we got 50 words of random access memory which were priceless and we would move stuff up to the random access memory in 60 word chunks and then we would start again. So subroutine wanted to go up there. Could you have predicted the future 60 years later of computing from then? You know, in fact, the hardest question I was ever asked was what could I have predicted? In other words, the interviewer asked me, she said, you know, what about computing has surprised you? And immediately I ran, I rattled off a couple dozen things and then she said, okay, so what didn't surprise? And I tried for five minutes to think of something that I would have predicted and I couldn't. But let me say that this machine, I didn't know, well, there wasn't much else in the world at that time. The 650 was the first machine that there were more than a thousand of ever. Before that there were, you know, each machine there might be a half a dozen examples, maybe a couple dozen. The first mass market, mass produced. The first one, yeah, done in quantity. And IBM didn't sell them, they rented them, but they rented them to universities that had a great deal. And so that's why a lot of students learned about computers at that time. So you refer to people, including yourself, who gravitate toward a kind of computational thinking as geeks, at least I've heard you use that terminology. It's true that I think there's something that happened to me as I was growing up that made my brain structure in a certain way that resonates with computers. So there's this space of people, 2% of the population, you empirically estimate. That's been fairly constant over most of my career. 
However, it might be different now because kids have different experiences when they're young. Obviously. So what does the world look like to a geek? What is this aspect of thinking that is unique to, that makes a geek? This is a hugely important question. In the 50s, IBM noticed that there were geeks and nongeeks, and so they tried to hire geeks. And they put out ads in the papers saying, if you play chess, come to Madison Avenue for an interview or something like this. They were trying for some things. So what is it that I find easy and other people tend to find harder? And I think there's two main things. One is this, is the ability to jump levels of abstraction. So you see something in the large and you see something in the small and you pass between those unconsciously. So you know that in order to solve some big problem, what you need to do is add one to a certain register and that gets you to another step. And below that, I don't go down to the electron level, but I knew what those milliseconds were, what the drum was like on the 650. I knew how I was gonna factor a number or find a root of an equation or something because of what it was doing. And as I'm debugging, I'm going through, did I make a keypunch error? Did I write the wrong instruction? Do I have the wrong thing in a register? And each level is different. And this idea of being able to see something at lots of levels and fluently go between them seems to me to be much more pronounced in the people that resonate with computers like I do. So in my books, I also don't stick just to the high level, but I mix low level stuff with high level and this means that some people think that I should write better books and it's probably true. But other people say, well, but that's, if you think like that, then that's the way to train yourself, keep mixing the levels and learn more and more how to jump between. So that's the one thing. The other thing is that it's more of a talent to be able to deal with non uniformity where there's case one, case two, case three, instead of having one or two rules that govern everything. So it doesn't bother me if I need, like an algorithm has 10 steps to it, each step does something else, that doesn't bother me, but a lot of pure mathematics is based on one or two rules which are universal and so this means that people like me sometimes work with systems that are more complicated than necessary because it doesn't bother us that we didn't figure out the simple rule. And you mentioned that while Jacobi, Boole, Abel, all the mathematicians in the 19th century may have had symptoms of geek, the first 100% legit geek was Turing, Alan Turing. I think he had, yeah, a lot more of this quality than anybody, just from reading the kind of stuff he did. So how does Turing, what influence has Turing had on you in your way of thinking? I didn't know that aspect of him until some years after I graduated. As an undergraduate we had a class that talked about computability theory and Turing machines and it was all, it sounded like a very specific kind of purely theoretical approach to stuff. So I don't know how old I was when I learned that he had designed a machine and that he wrote a wonderful manual for the Manchester machines and he invented all kinds of subroutines and he was a real hacker, that he had his hands dirty.
I thought for many years that he had only done purely formal work, but as I started reading his own publications, I could feel this kinship and of course he had a lot of peculiarities, like he wrote numbers backwards because I mean, left to right instead of right to left because that's, it was easier for computers to process them that way. What do you mean left to right? He would write pi as 9, 5, 1, 4.3, I mean, okay. Right, got it. 4, 1.3, on the blackboard. I mean, he had trained himself to do that because the computers he was working with worked that way inside. Trained himself to think like a computer. There you go, that's geek thinking. You've practiced some of the most elegant formalism in computer science and yet you're the creator of a concept like literate programming which seems to move closer to natural language type of description of programming. Yeah, absolutely. How do you see those two as conflicting as the formalism of theory and the idea of literate programming? So there we are in a nonuniform system where I don't think one size fits all and I don't think all truth lies in one kind of expertise. And so somehow, in a way you'd say my life is a convex combination of English and mathematics. And you're okay with that. And not only that, I think. Thrive in it. I wish, you know, I want my kids to be that way, I want, et cetera, you know, use left brain, right brain at the same time. You got a lot more done. That was part of the bargain. And I've heard that you didn't really read for pleasure until into your 30s, you know, literature. That's true. You know more about me than I do but I'll try to be consistent with what you read. Yeah, no, just believe me. I just go with whatever story I tell you. It'll be easier that way. The conversation works. Right, yeah, no, that's true. So I've heard mention of Philip Roth's American Pastoral, which I love as a book. I don't know if, it was mentioned as something, I think, that was meaningful to you as well. In either case, what literary books had a lasting impact on you? What literature, what poetry? Yeah, okay, good question. So I met Roth. Oh, really? Well, we both got doctorates from Harvard on the same day, so we had lunch together and stuff like that. But he knew that, you know, computer books would never sell. Well, all right, so you say you were a teenager when you left Russia, so I have to say that Tolstoy was one of the big influences on me, especially like Anna Karenina, not particularly because of the plot of the story, but because there's this character who, you know, the philosophical discussions, the whole way of life is worked out there among the characters, and so that I thought was especially beautiful. On the other hand, Dostoevsky, I didn't like at all because I felt that his genius was mostly because he kept forgetting what he had started out to do, and he was just sloppy. I didn't think that he polished his stuff at all, and I tend to admire somebody who dots the I's and crosses the T's. So the music of the prose is what you admire more than... I certainly do admire the music of the language, which I couldn't appreciate in the Russian original, but I can in Victor Hugo, because French is closer. But Tolstoy I like for the same reason. I like Herman Wouk as a novelist. I think like his book, Marjorie Morningstar, has a similar character to the one in Hugo who developed his own personal philosophy, and it goes in the... What's consistent? Yeah, right, and it's worth pondering. So, yeah. So you don't like Nietzsche, and... Like what?
You don't like Friedrich Nietzsche, or... Nietzsche, yeah, no, no, yeah, this has... I keep seeing quotations from Nietzsche, and he never tempt me to read any further. Well, he's full of contradictions, and you will certainly not appreciate him. But Schiller, I'm trying to get across what I appreciate in literature, and part of it is, as you say, the music of the language, the way it flows, and take Raymond Chandler versus Dashiell Hammett. Dashiell Hammett's sentences are awful, and Raymond Chandler's are beautiful, they just flow. So I don't read literature because it's supposed to be good for me, or because somebody said it's great, but I find things that I like. I mean, you mentioned you were dressed like James Bond, so I love Ian Fleming. I think he had a really great gift for... If he has a golf game, or a game of bridge, or something and this comes into his story, it'll be the most exciting golf game, or the absolute best possible hands of bridge that exists, and he exploits it and tells it beautifully. So in connecting some things here, looking at literate programming and being able to convey in code algorithms to a computer in a way that mimics how humans speak, what do you think about natural language in general and the messiness of our human world, about trying to express difficult things? So the idea of literate programming is really to try to understand something better by seeing it from at least two perspectives, the formal and the informal. If we're trying to understand a complicated thing, if we can look at it in different ways. And so this is in fact the key to technical writing, a good technical writer trying not to be obvious about it, but says everything twice, formally and informally, or maybe three times, but you try to give the reader a way to put the concept into his own brain or her own brain. Is that better for the writer or the reader or both? Well, the writer just tries to understand the reader. That's the goal of a writer, is to have a good mental image of the reader and to say what the reader expects next and to impress the reader with what has impressed the writer why something is interesting. So when you have a computer program, we try to, instead of looking at it as something that we're just trying to give instruction to the computer, what we really wanna be is giving insight to the person who's gonna be maintaining this program or to the programmer himself when he's debugging it as to why this stuff is being done. And so all the techniques of exposition that a teacher uses or a book writer uses make you a better programmer if your program is gonna be not just a one shot deal. So how difficult is that? Do you see hope for the combination of informal and formal for the programming task? Yeah, I'm the wrong person to ask, I guess, because I'm a geek, but I think for a geek it's easy. Some people have difficulty writing and that might be because there's something in their brain structure that makes it hard for them to write or it might be something just that they haven't had enough practice. I'm not the right one to judge, but I don't think you can teach any person any particular skill. I do think that writing is half of my life and so I put it together in a literate program. Even when I'm writing a one shot program, I write it in a literate way because I get it right faster that way. Now does it get compiled automatically? Or? 
So I guess on the technical side, my question was how difficult is it to design a system where much of the programming is done informally? Informally? Yeah, informally. I think whatever works to make it understandable is good, but then you have to also understand how informal is. You have to know the limitations. So by putting the formal and informal together, this is where it gets locked into your brain. You can say informally, well, I'm working on a problem right now, so. Let's go there. Can you give me an example of connecting the informal and the formal? Well, it's a little too complicated an example. There's a puzzle that's self referential. It's called a Japanese arrow puzzle. And you're given a bunch of boxes. Each one points north, east, south, or west. And at the end, you're supposed to fill in each box with the number of distinct numbers that it points to. So if I put a three in a box, that means that, and it's pointing to five other boxes, that means that there's gonna be three different numbers in those five boxes. And those boxes are pointing, one of them might be pointing to me, one of them might be pointing the other way. But anyway, I'm supposed to find a set of numbers that obeys this complicated condition that each number counts how many distinct numbers it points to. And so a guy sent me his solution to this problem where he presents formal statements that say either this is true or this is true or this is true. And so I try to render that formal statement informally and I try to say, I contain a three and the guys I'm pointing to contain the numbers one, two, and six. So by putting it informally and also I convert it into a dialogue statement, that helps me understand the logical statement that he's written down as a string of numbers in terms of some abstract variables that he had. That's really interesting. So maybe an extension of that, there has been a resurgence in computer science and machine learning and neural networks. So using data to construct algorithms. So it's another way to construct algorithms, really. Yes, exactly. If you can think of it that way. So as opposed to natural language to construct algorithms, use data to construct algorithms. So what's your view of this branch of computer science where data is almost more important than the mechanism of the algorithm? It seems to be suited to a certain kind of non geek, which is probably why it's taken off. It has its own community that really resonates with that. But it's hard to trust something like that because nobody, even the people who work with it, they have no idea what has been learned. That's a really interesting thought that it makes algorithms more accessible to a different community, a different type of brain. Yep. And that's really interesting because just like literate programming perhaps could make programming more accessible to a certain kind of brain. There are people who think it's just a matter of education and anybody can learn to be a great programmer. Anybody can learn to be a great skier. I wish that were true, but I know that there's a lot of things that I've tried to do and I was well motivated and I kept trying to build myself up and I never got past a certain level. I can't view, for example, I can't view three dimensional objects in my head. I have to make a model and look at it and study it from all points of view and then I start to get some idea. But other people are good at four dimensions. Physicists. Yeah. So let's go to the art of computer programming. 
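As a concrete illustration of the arrow puzzle Knuth described earlier, here is a brute force sketch in Python; the tiny one dimensional instance with left and right arrows is an invented assumption, since real instances are larger two dimensional grids with arrows in four directions.

from itertools import product

# Each box points at every box to its right ("R") or to its left ("L"), and it
# must contain the count of DISTINCT values among the boxes it points to.
arrows = ["R", "R", "L", "L"]  # a made up four box instance, purely illustrative

def targets(i):
    return range(i + 1, len(arrows)) if arrows[i] == "R" else range(0, i)

def consistent(values):
    return all(values[i] == len({values[j] for j in targets(i)})
               for i in range(len(values)))

for values in product(range(len(arrows)), repeat=len(arrows)):
    if consistent(values):
        print(values)  # prints every self consistent filling, e.g. (1, 1, 1, 1)

The point is only to show how the informal reading, I contain a three and the boxes I point to contain three distinct numbers, lines up with the formal constraint; real instances blow up combinatorially, which is exactly the flavor of problem the later volumes are about.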
In 1962, you set the table of contents for this magnum opus, right? Yep. It was supposed to be a single book with 12 chapters. Now today, what is it, 57 years later, you're in the middle of volume four of seven? In the middle of volume four B it is. Four B. More precisely. Can I ask you for an impossible task, which is try to summarize the book so far maybe by giving a few little examples. So from the sorting and the search and the combinatorial algorithms, if you were to give a summary, a quick elevator summary. Elevator, that's great. Yeah, right. But depending on how many floors there are in the building. Yeah. The first volume called Fundamental Algorithms talks about something that you can't, the stuff you can't do without. You have to know the basic concepts of what is a program, what is an algorithm. And it also talks about a low level machine so you can have some kind of an idea what's going on. And it has basic concepts of input and output and subroutines. Induction. Induction, right. Mathematical preliminaries. So the thing that makes my book different from a lot of others is that I try to not only present the algorithms, but I try to analyze them, which means quantitatively I say, not only does it work, but it works this fast. Okay, and so I need math for that. And then there's the standard way to structure data inside and represent information in the computer. So that's all volume one. Volume two, it's called Seminumerical Algorithms. And here we're writing programs, but we're also dealing with numbers. Algorithms deal with any kinds of objects, but specifically when the objects are numbers, well then we have certain special paradigms that apply to things that involve numbers. And so there's arithmetic on numbers and there's matrices full of numbers, there's random numbers, and there's power series full of numbers. There's different algebraic concepts that have numbers in structured ways. And arithmetic in the way a computer would think about arithmetic, so floating point. Floating point arithmetic, high precision arithmetic, not only addition, subtraction, multiplication, but also comparison of numbers. So then volume three talks about. I like that one, sorting and search. Sorting and search. I love sorting. Right, so here we're not dealing necessarily with numbers because you sort letters and other objects and searching we're doing all the time with Google nowadays, but I mean, we have to find stuff. So again, algorithms that underlie all kinds of applications. None of these volumes is about a particular application, but the applications are examples of why people want to know about sorting, why people want to know about random numbers. So then volume four goes into combinatorial algorithms. This is where we have zillions of things to deal with and here we keep finding cases where one good idea can make something go more than a million times faster. And we're dealing with problems that are probably never gonna be solved efficiently, but that doesn't mean we give up on them and we have this chance to have good ideas and go much, much faster on them. So that's combinatorial algorithms and those are the ones that are, I mean, you said sorting is most fun for you. It's true, it's fun, but combinatorial algorithms are the ones that I always enjoyed the most because that's when my skill at programming had the most payoff.
The difference between an obvious algorithm that you think up first thing and an interesting, subtle algorithm that's not so obvious, but runs circles around the other one, that's where computer science really comes in. And a lot of these combinatorial methods were found first in applications to artificial intelligence or cryptography. And in my case, I just liked them and it was associated more with puzzles. Do you like them most in the domain of graphs and graph theory? Graphs are great because they're terrific models of so many things in the real world and you throw numbers on a graph, you got a network and so there you have many more things. But combinatorics in general is any arrangement of objects that has some kind of higher structure, nonrandom structure and is it possible to put something together satisfying all these conditions? Like I mentioned arrows a minute ago, is there a way to put these numbers on a bunch of boxes that are pointing to each other? Is that gonna be possible at all? That's volume four. That's volume four. What does the future hold? Volume four A was part one. And what happened was in 1962, when I started writing down a table of contents, it wasn't gonna be a book about computer programming in general, it was gonna be a book about how to write compilers. And I was asked to write a book explaining how to write a compiler. And at that time, there were only a few dozen people in the world who had written compilers and I happened to be one of them. And I also had some experience writing for like the campus newspaper and things like that. So I said, okay, great. I'm the only person I know who's written a compiler but hasn't invented any new techniques for writing compilers. And all the other people I knew had super ideas but I couldn't see that they would be able to write a book that would describe anybody else's ideas besides their own. So I could be the journalist and I could explain what all these cool ideas about compiler writing were. And then I started putting down, well, yeah, you need to have a chapter about data structures. You need to have some introductory material. I wanna talk about searching because a compiler writer has to look up the variables in a symbol table and find out which, when you write the name of a variable in one place, it's supposed to be the same as the one you put somewhere else. So you need all these basic techniques and I kinda know some arithmetic and stuff. So I threw in these chapters and I threw in a chapter on combinatorics because that was what I really enjoyed programming the most but there weren't many algorithms known about combinatorial methods in 1962. So that was a kind of a short chapter but it was sort of thrown in just for fun. And chapter 12 was gonna be actual compilers, applying all the stuff in chapters one to 11 to make compilers. Well, okay, so that was my table of contents from 1962. And during the 70s, the whole field of combinatorics went through a huge explosion. People talk about a combinatorial explosion and they usually mean by that that the number of cases goes up, you know, you change n to n plus one and all of a sudden your problem has gotten more than 10 times harder. But there was an explosion of ideas about combinatorics in the 70s to the point that, like, take 1975, I bet you more than half of all the journals of computer science were about combinatorial methods. What kind of problems were occupying people's minds? What kind of problems in combinatorics? Was it satisfiability, graph theory?
Yeah, graph theory was quite dominant. I mean, but all of the NP hard problems that you have like Hamiltonian path. Traveling salesman. Going beyond graphs, you had operations research, where there was a small class of problems that had efficient solutions and they were usually associated with matroid theory, a special mathematical construction. But once we went to things that involve three things at a time instead of two, all of a sudden things got harder. So we had satisfiability problems where, if you have clauses and every clause has two logical elements in it, then we can satisfy it in linear time. We can test for satisfiability in linear time, but if you allow yourself three variables in the clause, then nobody knows how to do it. So these articles were about trying to find better ways to solve cryptography problems and graph theory problems. We have lots of data, but we didn't know how to find the best subsets of the data. Like with sorting, we could get the answer. Didn't take long. So how did it continue to change from the 70s to today? Yeah, so now there may be half a dozen conferences whose topic is combinatorics, a different kind, but fortunately I don't have to rewrite my book every month like I had to in the 70s. But still there's huge amount of work being done and people getting better ideas on these problems that don't seem to have really efficient solutions, but we still do a lot more with them. And so this book that I'm finishing now is, I've got a whole bunch of brand new methods that as far as I know, there's no other book that covers this particular approach. And so I'm trying to do my best of exploring the tip of the iceberg and I try out lots of things and keep rewriting as I find better methods. So what's your writing process like? What's your thinking and writing process like every day? What's your routine even? Yeah, I guess it's actually the best question because I spend seven days a week doing it. You're the most prepared to answer it. Yeah, but okay. So the chair I'm sitting in is where I do... It's where the magic happens. Well, reading and writing, the chair is usually sitting over there where I have other books, some reference books, but I found this chair which was designed by a Swedish guy anyway. It turns out this is the only chair I can really sit in for hours and hours and not know that I'm in a chair. But then I have the standup desk right next to us and so after I write something with pencil and eraser, I get up and I type it and revise and rewrite. I'm standing up. The kernel of the idea is first put on paper. Yeah. Right. And I'll write maybe five programs a week, of course, literate programming. And these are, before I describe something in my book, I always program it to see how it's working and I try it a lot. So for example, I learned at the end of January, I learned of a breakthrough by four Japanese people who had extended one of my methods in a new direction. And so I spent the next five days writing a program to implement what they did. And then they had only generalized part of what I had done so then I had to see if I could generalize more parts of it. And then I had to take their approach and I had to try it out on a couple of dozen of the other problems I had already worked out with my old methods. And so that took another couple of weeks. And then I started to see the light and I started writing the final draft and then I would type it up, and it involved some new mathematical questions.
And so I wrote to my friends who might be good at solving those problems and they solved some of them. So I put that in as exercises. And so a month later, I had absorbed one new idea that I learned and I'm glad I heard about it in time. Otherwise, I would have put my book out before I'd heard about the idea. On the other hand, this book was supposed to come in at 300 pages and I'm up to 350 now. That added 10 pages to the book. But if I learn about another one, my publisher is gonna shoot me. Well, so in that process, in that one month process, are some days harder than others? Are some days harder than others? Well, yeah, my work is fun, but I also work hard and every big job has parts that are a lot more fun than others. And so many days I'll say, why do I have to have such high standards? Why couldn't I just be sloppy and not try this out and just report the answer? But I know that people are calling me to do this and so, okay, so, okay, Don, I'll grit my teeth and do it. And then the joy comes out when I see that actually, I'm getting good results and I get even more when I see that somebody has actually read and understood what I wrote and told me how to make it even better. I did wanna mention something about the method. So I got this tablet here, where I do the first, the first writing of concepts, okay, so. And what language is that in? Right, so take a look at it, but you know, here, random say, explain how to draw such skewed pixel diagrams, okay, so. I got this paper about 40 years ago when I was visiting my sister in Canada and they make tablets of paper with this nice large size and just the right. A very small space between lines. Small spaces, yeah, yeah, take a look. Maybe also just show it. Yeah. Yeah. Wow. You know, I've got these manuscripts going back to the 60s. And those are where I'm getting my ideas on paper, okay. But I'm a good typist. In fact, I went to typing school when I was in high school and so I can type faster than I think. So then when I do the editing, I stand up and type, then I revise this and it comes out a lot different, you know, for style and rhythm and things like that come out at the typing stage. And you type in TeX. And I type in TeX. And can you think in TeX? No. So. To a certain extent, I have only a small number of idioms that I use. Like, you know, I'm beginning with theorem, I do something for displayed equation, I do something and so on. But I have to see it and. In the way that it's on paper here. Yeah, right. So for example, Turing wrote, what, The Other Direction. You don't write macros, you don't think in macros. Not particularly, but when I need a macro, I'll go ahead and do it. But the thing is, I also write to fit. I mean, I'll change something if I can save a line. You know, it's like haiku. I'll figure out a way to rewrite the sentence so that it'll look better on the page. And I shouldn't be wasting my time on that, but I can't resist because I know it's only another 3% of the time or something like that. And it could also be argued that that is what life is about. Ah, yes, in fact, that's true. Like, I work in the garden one day a week and that's kind of a description of my life is getting rid of weeds, you know, removing bugs from programs. So, you know, a lot of writers talk about, you know, basically suffering, the writing process being, you know, extremely difficult. And I think programming, especially, or the technical writing that you're doing, can be like that.
Do you find yourself, methodologically, how do you sit down every day to do the work? Is it a challenge? You kind of say it's, you know, it's fun. But it'd be interesting to hear if there are non fun parts that you really struggle with. Yeah, so the fun comes when I'm able to put together ideas of two people who didn't know about each other. And so I might be the first person that saw both of their ideas. And so then, you know, then I get to make the synthesis and that gives me a chance to be creative. But the drudge work is where I've got to chase everything down to its root. This leads me into really interesting stuff. I mean, I learn about Sanskrit and I try to give credit to all the authors. And so I write to people who know the authors if they're dead or I communicate this way. And I got to get the math right. And I got to tackle all my programs, try to find holes in them. And I rewrite the programs after I get a better idea. Are there ever dead ends? Oh yeah, I throw stuff out, yeah. One of the things, I spent a lot of time preparing a major example based on the game of baseball. And I know a lot of people for whom baseball is the most important thing in the world. But I also know a lot of people for whom cricket is the most important thing in the world or soccer or something. You know, and I realized that if I had a big example, I mean, it was gonna have a fold out illustration and everything. And I was saying, well, what am I really teaching about algorithms here where I had this baseball example? And if I was a person who knew only cricket, wouldn't they, what would they think about this? And so I've ripped the whole thing out. But I had something that would have really appealed to people who grew up with baseball as a major theme in their life. Which is a lot of people, but still a minority. Small minority, I took out bowling too. Even a smaller minority. What's the art in the art of programming? Why is there, of the few words in the title, why is art one of them? Yeah, well, that's what I wrote my Turing lecture about. And so when people talk about art, it really, I mean, what the word means is something that's not in nature. So when you have artificial intelligence, art comes from the same root, saying that this is something that was created by human beings. And then it's gotten a further meaning often of fine art, which adds this beauty to the mix. And so we have things that are artistically done, and this means not only done by humans, but also done in a way that's elegant and brings joy. And has, I guess, Tolstoy versus Dostoevsky going back. But anyway, it's that part that says that it's done well, as well as not only different from nature. In general, then, art is what human beings are specifically good at. And when they say artificial intelligence, well, they're trying to mimic human beings. But there's an element of fine art and beauty. You are one. That's what I try to also say, that you can write a program and make a work of art. So now, in terms of surprising, what ideas, in writing about everything from search to the combinatorial algorithms, what ideas have you come across that were particularly surprising to you, that changed the way you see a space of problems? I get a surprise every time I have a bug in my program, obviously. But that isn't really what you're getting at. More transformational than surprising. For example, in volume 4A, I was especially surprised when I learned about the data structure called the BDD, the binary decision diagram.
Because I sort of had the feeling that as an old timer, and I've been programming since the 50s, and BDDs weren't invented until 1986. And here comes a brand new idea that revolutionizes the way to represent a Boolean function. And Boolean functions are so basic to all kinds of things in, I mean, logic is, underlies it. Everything we can describe, all of what we know in terms of logic somehow, and propositional logic, I thought that was cut and dried and everything was known. But here comes Randy Bryant and discovers that BDDs are incredibly powerful. Then, so that means I have a whole new section to the book that I never would have thought of until 1986, not even until 1990s, when people started to use it for a billion dollar of applications. And it was the standard way to design computers for a long time, until SAT solvers came along in the year 2000. So that's another great big surprise. So a lot of these things have totally changed the structure of my book. And the middle third of volume 4B is about SAT solvers, and that's 300 plus pages, which is all about material, mostly about material that was discovered in this century. And I had to start from scratch and meet all the people in the field and write 15 different SAT solvers that I wrote while preparing that. Seven of them are described in the book. Others were from my own experience. So newly invented data structures or ways to represent? A whole new class of algorithm. Whole new class of algorithm. Yeah, and the interesting thing about the BDDs was that the theoreticians started looking at it and started to describe all the things you couldn't do with BDDs. And so they were getting a bad name because, okay, they were useful, but they didn't solve every problem. I'm sure that the theoreticians are, in the next 10 years, are gonna show why machine learning doesn't solve everything. But I'm not only worried about the worst case, I get a huge delight when I can actually solve a problem that I couldn't solve before. Even though I can't solve the problem that it suggests is a further problem, I know that I'm way better than I was before. And so I found out that BDDs could do all kinds of miraculous things. And so I had to spend quite a few years learning about that territory. So in general, what brings you more pleasure? Proving or showing a worst case analysis of an algorithm or showing a good average case or just showing a good case? That something good, pragmatically can be done with this algorithm. Yeah, I like a good case that is maybe only a million times faster than I was able to do before. But, and not worry about the fact that it's still gonna take too long if I double the size of the problem. So that said, you popularized the asymptotic notation for describing running time, obviously in the analysis of algorithms. Worst case is such an important part. Do you see any aspects of that kind of analysis as lacking and notation too? Well, the main purpose should have notations that help us for the problems we wanna solve. And so they match our intuitions. And people who worked in number theory had used asymptotic notation in a certain way, but it was only known to a small group of people. And I realized that, in fact, it was very useful to be able to have a notation for something that we don't know exactly what it is, but we only know partial about it. And so instead, so for example, instead of big O notation, let's just take a much simpler notation where I'd say zero or one, or zero, one or two. 
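As a rough illustration of the kind of structure being described, and emphatically a toy sketch of my own rather than Bryant's algorithms or anything from Volume 4A, a reduced ordered BDD represents a Boolean function by splitting on the variables in a fixed order, sharing identical subgraphs through a unique table and dropping any test whose two branches coincide.

```python
# Toy sketch of a reduced ordered binary decision diagram (BDD).
# Terminals are 0 (false) and 1 (true). Internal nodes are triples
# (variable index, low child, high child), shared via a unique table.

unique = {}   # (var, low, high) -> node id

def mk(var, low, high):
    if low == high:                      # redundant test: skip this node entirely
        return low
    key = (var, low, high)
    if key not in unique:
        unique[key] = len(unique) + 2    # ids 0 and 1 are reserved for the constants
    return unique[key]

def build(f, n, i=0, args=()):
    """BDD of f(x0, ..., x_{n-1}), testing variables in index order."""
    if i == n:
        return 1 if f(*args) else 0
    low = build(f, n, i + 1, args + (False,))    # branch where x_i = 0
    high = build(f, n, i + 1, args + (True,))    # branch where x_i = 1
    return mk(i, low, high)

# Majority of three inputs: true when at least two of a, b, c are true.
root = build(lambda a, b, c: (a and b) or (a and c) or (b and c), 3)
print(root, len(unique))   # a handful of shared nodes instead of an 8-row truth table
```

Building the diagram by brute force like this is exponential, of course; the interesting algorithms operate directly on the shared graphs, which is what made the representation so powerful in practice.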
And suppose that when I had been in high school, we would be allowed to put in the middle of our formula, X plus zero, one or two equals Y, okay? And then we would learn how to multiply two such expressions together and deal with them. Well, the same thing big O notation says, here's something that's, I'm not sure what it is, but I know it's not too big. I know it's not bigger than some constant times N squared or something like that. So I write big O of N squared. And now I learned how to add big O of N squared to big O of N cubed. And I know how to take big O of N squared, add one to it, and square that, and how to take logarithms and exponentials where I have big O's in the middle of them. And that turned out to be hugely valuable in all of the work that I was trying to do as I'm trying to figure out how good an algorithm is. So have there been algorithms in your journey that perform very differently in practice than they do in theory? Well, the worst case of a combinatorial algorithm is almost always horrible. But we have SAT solvers that are solving problems where, in fact, one of the last exercises in that part of my book was to figure out a problem that has 100 variables that's difficult for a SAT solver. But you would think that a problem with 100 variables requires you to do two to the 100th operations, because that's the number of possibilities when you have 100 variables. Two to the 100th is way bigger than we can handle. 10 to the 17th is a lot. You've mentioned over the past few years that you believe P may be equal to NP, but that it's not really, if somebody does prove that P equals NP, it will not directly lead to an actual algorithm to solve difficult problems. Can you explain your intuition here? Has it been changed? And in general, on the difference between easy and difficult problems of P and NP and so on? Yeah, so the popular idea is if an algorithm exists, then somebody will find it. And it's just a matter of writing it down. But many more algorithms exist than anybody can understand or ever make use of. Or discover, yeah. Because they're just way beyond human comprehension. The total number of algorithms is more than mind boggling. So we have situations now where we know that algorithms exist, but we don't have the foggiest idea what the algorithms are. There are simple examples based on game playing where you say, well, there must be an algorithm that exists to win the game of Hex, for either the first player or the second player, because Hex is always either a win for the first player or the second player. Well, what's the game of Hex? There's a game of Hex which is based on putting pebbles onto a hexagonal board and the white player tries to get a white path from left to right and the black player tries to get a black path from bottom to top. And how does capture occur? Just so I understand. And there's no capture. You just put pebbles down one at a time. But there's no draws because after all the white and black are played, there's either gonna be a white path across from east to west or a black path from bottom to top. So it's a perfect information game and people take turns like tic tac toe. And the hex board can be different sizes. But anyway, there's no possibility of a draw and players move one at a time. And so it's gotta be either a first player win or a second player win. Mathematically, you follow out all the trees and there's either always a win for the first player or for the second player, okay.
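A few identities of the kind described above, written out in the usual notation; these are standard facts about O-terms as n grows, not anything specific to the book:

```latex
O(n^2) + O(n^3) = O(n^3), \qquad
\bigl(O(n^2) + 1\bigr)^2 = O(n^4),
\qquad
e^{O(1/n)} = 1 + O(1/n), \qquad
\log\bigl(1 + O(1/n)\bigr) = O(1/n) \quad (n \to \infty).
```

The point is exactly the one made with the zero-one-or-two analogy: each O-term stands for some quantity we refuse to pin down beyond a bound, and it can be carried through formulas like an ordinary number.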
And it's finite. The game is finite. So there's an algorithm that will decide. You can show it has to be one or the other because the second player could mimic the first player with kind of a pairing strategy. And so you can show that it has to be one way or the other. But we don't know any algorithm anyway. We don't know the third or the fourth. There are cases where you can prove the existence of a solution but nobody knows any way how to find it. But more like the algorithm question, there's a very powerful theorem in graph theory by Robertson and Seymour that says that every class of graphs that is closed under taking minors has a polynomial time algorithm to determine whether it's in this class or not. Now a class of graphs, for example, planar graphs. These are graphs that you can draw in a plane without crossing lines. And with a planar graph, taking minors means that you can shrink an edge into a point or you can delete an edge. And so if you start with a planar graph and shrink any edge to a point, it's still planar. Delete an edge, it's still planar. Okay, now, but there are millions of different ways to describe a family of graphs that remains closed under taking minors. And Robertson and Seymour proved that for any such family of graphs, there's a finite number of minimal graphs that are obstructions, so that if a graph is not in the family, then there has to be a way to shrink it down until you get one of these bad minimal graphs that's not in the family. In the case of planar graphs, one bad minimal graph is the complete graph on five points, where every point connects to every other, and the other is the graph you get by trying to connect three utilities to three houses without crossing lines. And so there are two bad graphs that are not planar, and every non planar graph contains one of these two bad graphs by shrinking and removing edges. Sorry, can you say it again? So they proved that there's a finite number of these bad graphs. There's always a finite number. So somebody says, here's a family. It's hard to believe. And they present a sequence of 20 papers. I mean, it's deep work, but it's. Because that's for any arbitrary class. So for any arbitrary class that's closed under taking minors. That's closed under, maybe I'm not understanding because it seems like a lot of them are closed under taking minors. Almost all the important classes of graphs are. There are tons of such graphs, but also hundreds of them that arise in applications. I have a book over here called classes of graphs and it's amazing how many different classes of graphs that people have looked at. So why do you bring up this theorem or this proof? So now there are lots of algorithms that are known for special classes of graphs. For example, if I have a certain, if I have a chordal graph, then I can color it efficiently. If I have some kind of graphs, it'll make a great network. So you'd like to test, somebody gives you a graph and says, oh, is it in this family of graphs? If so, then I can go to the library and find an algorithm that's gonna solve my problem on that graph. Okay, so we wanna have an algorithm that says, you give me a graph, I'll tell you whether it's in this family or not, okay? And so all I have to do is test whether or not this given graph has a minor that's one of the bad ones. A minor is everything you can get by shrinking and removing edges. And given any minor, there's a polynomial time algorithm saying, I can tell whether this is a minor of you.
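For the planar case discussed above, the two bad minors are the complete graph K5 and the three-utilities graph K3,3, and membership in the family can be tested efficiently. A small illustration, my own example using the networkx library's built-in planarity test rather than anything from the book:

```python
import networkx as nx

k5 = nx.complete_graph(5)                  # five vertices, every pair connected
k33 = nx.complete_bipartite_graph(3, 3)    # three utilities to three houses
grid = nx.grid_2d_graph(4, 4)              # an ordinary planar grid graph

for name, g in [("K5", k5), ("K3,3", k33), ("4x4 grid", grid)]:
    is_planar, _ = nx.check_planarity(g)
    print(name, "planar:", is_planar)      # K5: False, K3,3: False, grid: True
```

The remarkable part of the Robertson-Seymour result is that such a finite obstruction set exists for every minor-closed family, even when, as described next, nobody knows what the obstructions actually are.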
And there's a finite number of bad cases. So I just try, does it have this bad case? Polynomial time, I got the answer. Does it have this bad case? Polynomial time, I got the answer. Total polynomial time. And so I've solved the problem. However, all we know is that the number of minors is finite. We might only know one or two of those minors, and if we've got 20 of them, we don't know whether there might be a 21st or a 25th, all we know is that it's finite. So here we have a polynomial time algorithm that we don't know. That's a really great example of what you worry about, or why you think P equals NP won't be useful. But still, why do you hold the intuition that P equals NP? Because you have to rule out so many possible algorithms as not working. You can take the graph and represent it in terms of certain prime numbers, and then you can multiply those together, and then you can take the bitwise AND and construct some certain constant in polynomial time. And that's a perfectly valid algorithm. And there's so many algorithms of that kind. A lot of times we take data and we get coincidences, some fairly random looking number actually is useful because it happens to solve a problem. It's like the number of hairs on your head. It seems unlikely that two people are gonna have the same number of hairs on their head, but you can count how many people there are and how many hairs can be on a head, so there must be people walking around in the country that have the same number of hairs on their head. Well, that's a kind of a coincidence that you might say, also, this particular combination of operations just happens to prove that the graph has a Hamiltonian path. I see lots of cases where unexpected things happen when you have enough possibilities. So because the space of possibility is so huge, your intuition just says you can't rule them all out. And so that's the reason for my intuition. It's by no means a proof. I mean, some people say, well, P can't equal NP because you've had all these smart people. The smartest designers of algorithms have been racking their brains for years and years and there's million dollar prizes out there and none of them, nobody has thought of the algorithm. So there must be no such algorithm. On the other hand, I can use exactly the same logic and I can say, well, P must be equal to NP because there are so many smart people out there who have been trying to prove it unequal to NP and they've all failed. This kind of reminds me of the discussion about the search for aliens. They've been trying to look for them and we haven't found them yet, therefore they don't exist. But you can show that there's so many planets out there that they very possibly could exist. Yeah, right, and then there's also the possibility that they exist but they all discovered machine learning or something and then blew each other up. Well, on that small, quick tangent, let me ask, do you think there's intelligent life out there in the universe? I have no idea. Do you hope so? Do you think about it? I don't spend my time thinking about things that I could never know, really. And yet you do enjoy the fact that there's many things you don't know. You do enjoy the mystery of things. I enjoy the fact that I have limits, yeah. But I don't take time to answer unsolvable questions. Got it.
You have taken on some tough questions that may seem unsolvable, but they're in the space. It gives me a thrill when I can get further than I ever thought I could, yeah. But much like with religion, these. I'm glad that there's no proof that God exists or not. I mean, I think. It would spoil the mystery. It would be too dull, yeah. So to quickly talk about the other art of artificial intelligence, what's your view? Artificial intelligence community has developed as part of computer science and in parallel with computer science since the 60s. What's your view of the AI community from the 60s to now? So all the way through, it was the people who were inspired by trying to mimic intelligence or to do things that were somehow the greatest achievements of intelligence that had been inspiration to people who have pushed the envelope of computer science maybe more than any other group of people. So all the way through, it's been a great source of good problems to sink teeth into. Sink teeth into and getting partial answers and then more and more successful answers over the years. So this has been the inspiration for lots of the great discoveries of computer science. Are you yourself captivated by the possibility of creating, of algorithms having echoes of intelligence in them? Not as much as most of the people in the field, I guess, I would say, but that's not to say that they're wrong or that it's just, you asked about my own personal preferences, but the thing that I worry about is when people start believing that they've actually succeeded and because the, it seems to me, there's a huge gap between really understanding something and being able to pretend to understand something and give the illusion of understanding something. Do you think it's possible to create without understanding? Yeah. So to. Oh, I do that all the time too, I mean. So I use random numbers, but there's still this great gap. I don't assert that it's impossible, but I don't see anything coming any closer to really the kind of stuff that I would consider intelligence. So you've mentioned something that, on that line of thinking, which I very much agree with, so The Art of Computer Programming as the book is focused on single processor algorithms, and for the most part, you mentioned. That's only because I set the table of contents in 1962, you have to remember. For sure, there's no. I'm glad I didn't wait until 1965 or something. That's, one book, maybe we'll touch on the Bible, but one book can't always cover the entirety of everything. So I'm glad the table of contents for The Art of Computer Programming is what it is. But you did mention that you thought that an understanding of the way ant colonies are able to perform incredibly organized tasks might well be the key to understanding human cognition. So these fundamentally distributed systems. So what do you think is the difference between the way Don Knuth would sort a list and an ant colony would sort a list or perform an algorithm? Sorting a list isn't the same as cognition, though, but I know what you're getting at is. Well, the advantage of ant colonies, at least we can see what they're doing. We know which ant has talked to which other ant, and it's much harder with the brains to know to what extent neurons are passing signal. So I'm just saying that ant colony might be, if they have the secret of cognition, think of an ant colony as a cognitive single being rather than as a colony of lots of different ants. 
I mean, just like the cells of our brain and the microbiome and all that are interacting entities, but somehow I consider myself to be a single person. Well, an ant colony, you can say, might be cognitive somehow. It's some suggestion. Yeah, I mean, okay, I smash a certain ant and the organism's saying, hmm, that stung. What was that? But if we're going to crack the secret of cognition, it might be that we could do so by psyching out how ants do it, because we have a better chance to measure their communicating by pheromones and by touching each other and by sight, but not by much more subtle phenomena like electric currents going through. But even a simpler version of that, what are your thoughts on, maybe, Conway's Game of Life? Okay, so Conway's Game of Life is able to simulate any computable process. And any deterministic process is... I like how you went there. I mean, that's not its most powerful thing, I would say. I mean, it can simulate it, but the magic is that the individual units are distributed and extremely simple. Yes, we understand exactly what the primitives are. The primitives, just like with the ant colony, even simpler, though. But still, it doesn't say that I understand life. I mean, it gives me a better insight into what does it mean to have a deterministic universe? What does it mean to have free choice, for example? Do you think God plays dice? Yes. I don't see any reason why God should be forbidden from using the most efficient ways to... I mean, we know that dice are extremely important in efficient algorithms. There are things that couldn't be done well without randomness. And so, I don't see any reason why God should be prohibited. When the algorithm requires it, you don't see why the physics should constrain it. So, in 2001, you gave a series of lectures at MIT about religion and science. No, that was in 1999. The book came out in 2001. So, in 1999, you spent a little bit of time in Boston, enough to give those lectures. And I read the 2001 version, most of it. It's quite fascinating to read. I recommend people, it's a transcription of your lectures. So, what did you learn about how ideas get started and grow from studying the history of the Bible? So, you've rigorously studied a very particular part of the Bible. What did you learn from this process about the way us human beings as a society develop and grow ideas, share ideas, and are defined by those ideas? Well, it's hard to summarize that. I wouldn't say that I learned a great deal of really definite things where I could make conclusions, but I learned more about what I don't know. You have a complex subject, which is really beyond human understanding. So, we give up on saying, I'm never gonna get to the end of the road and I'm never gonna understand it, but you say, but maybe it might be good for me to get closer and closer and learn more and more about something. And so, how can I do that efficiently? And the answer is, well, use randomness. And so, try a random subset that is within my grasp and study that in detail, instead of just studying parts that somebody tells me to study, or instead of studying nothing because it's too hard. So, I decided, for my own amusement once, that I would take a subset of the verses of the Bible and I would try to find out what the best thinkers have said about that small subset. And I had about, let's say, 60 verses out of roughly 30,000, I think it's one out of 500 or something like this. And so, then I went to the libraries, which are well indexed.
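Since the Game of Life comes up above as the example of extremely simple distributed units producing arbitrarily rich behavior, here is a minimal sketch of its update rule in a few lines; this is just the standard rule, nothing specific to the conversation. Each cell consults only its eight neighbors, yet the global system can simulate any computation.

```python
from collections import Counter

def life_step(alive):
    """One generation of Conway's Game of Life.
    `alive` is a set of (x, y) coordinates of cells that are currently on."""
    # Count how many live neighbors every candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# A glider: after four steps it is the same shape, shifted one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))
```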
For example, at the Boston Public Library, I would go once a week for a year, and I went a few times to the Andover-Harvard Library to look at things that weren't in the Boston Public, where scholars had looked, and you can go down the shelves and you can look in the index and say, oh, is this verse mentioned anywhere in this book? If so, look at page 105. So, in other words, I could learn not only about the Bible, but about the secondary literature about the Bible, the things that scholars have written about it. And so, that gave me a way to zoom in on parts of the thing, so that I could get more insight. And so, I look at it as a way of giving me some firm pegs on which I could hang pieces of information, but not as things where I would say, and therefore, this is true. In this random approach of sampling the Bible, what did you learn about the most central, one of the biggest accumulations of ideas in our history? It seemed to me that the main thrust was not the one that most people think of as saying, oh, don't have sex or something like this, but that the main thrust was to try to figure out how to live in harmony with God's wishes. I'm assuming that God exists, and as I say, I'm glad that there's no way to prove this, because I would run through the proof once, and then I'd forget it, and I would never speculate about spiritual things and mysteries otherwise, and I think my life would be very incomplete. So, I'm assuming that God exists, but a lot of the people say God doesn't exist, but that's still important to them. And so, in a way, that might still be, whether God is there or not, in some sense, God is important to them. One of the verses I studied, you can interpret it as saying that it's much better to be an atheist than not to care at all. So, I would say it's similar to the P equals NP discussion. You mentioned a mental exercise that I'd love for you to partake in yourself, a mental exercise of being God. So, if you were God, Don Knuth, how would you present yourself to the people of Earth? You mentioned your love of literature, and there's this book that I can really recommend to you. Yeah, the title, I think, is Blasphemy. It talks about God revealing Himself through a computer in Los Alamos, and it's the only book that I've ever read where the punchline was really the very last word of the book and explained the whole idea of the book. And so, I don't wanna give that away, but it's really very much about this question that you raised. But suppose God said, okay, my previous means of communication with the world are not the best for the 21st century, so what should I do now? And it's conceivable that God would choose the way that's described in this book. Another way to look at this exercise is looking at the human mind, looking at the human spirit, the human life in a systematic way. I think mostly you want to learn humility. You want to realize that once we solve one problem, that doesn't mean that all of a sudden other problems are going to drop out. And we have to realize that there are things beyond our ability. I see hubris all around. Yeah, well said. If you were to run program analysis on your own life, how did you do in terms of correctness, running time, resource use, asymptotically speaking, of course? Okay, yeah, well, I would say that question has not been asked me before. And I started out with library subroutines and learning how to be an automaton that was obedient, and I had the great advantage that I didn't have anybody to blame for my failures.
If I started not understanding something, I knew that I should stop playing ping pong, and it was my fault that I wasn't studying hard enough or something, rather than that somebody was discriminating against me in some way. And I don't know how to avoid the existence of biases in the world, but I know that that's an extra burden that I didn't have to suffer from. And then I found from parents, I learned the idea of service to other people as being more important than what I get out of stuff myself. I know that I need to be happy enough in order to be able to be of service, but I came to a philosophy finally that I phrase as, point eight is enough. There was a TV show once called Eight is Enough, which was about somebody had eight kids. But I say point eight is enough, which means if I can have a way of rating happiness, I think it's good design to have an organism that's happy about 80% of the time. And if it was 100% of the time, it would be like everybody's on drugs and everything collapses and nothing works because everybody's just too happy. Do you think you've achieved that point eight optimal balance? There are times when I'm down and I know that I've actually been programmed to be depressed a certain amount of time. And if that gets out of kilter and I'm more depressed than usual, sometimes I find myself trying to think, now, who should I be mad at today? There must be a reason why. But then I realize it's just my chemistry telling me that I'm supposed to be mad at somebody, and so I trigger it up and say, okay, go to sleep and get better. But if I'm not 100% happy, that doesn't mean that I should find somebody that's screwing me and try to silence them. But I'm saying, okay, I'm not 100% happy, but I'm happy enough to be part of a sustainable situation. So that's kind of the numerical analysis I do. You've converged towards the optimal, which for human life is a point eight. I hope it's okay to talk about, as you talked about previously, in 2006 you were diagnosed with prostate cancer. Has that encounter with mortality changed you in some way or the way you see the world? Yeah, it did. The first encounter with mortality was when my dad died, and I went through a month when I sort of came to be comfortable with the fact that I was going to die someday. And during that month, I don't know, I felt okay, but I couldn't sing. And I couldn't do original research either. I sort of remember after three or four weeks, the first time I started having a technical thought that made sense and was maybe slightly creative, I could sort of feel that something was starting to move again. So I felt very empty until I came to grips with it. I learned that this is sort of a standard grief process that people go through. Okay, so then now I'm at a point in my life, even more so than in 2006, where all of my goals have been fulfilled except for finishing the art of computer programming. I had one major unfulfilled goal. I'd wanted all my life to write a piece of music, and I had an idea for a certain kind of music that I thought ought to be written, at least somebody ought to try to do it. And I felt that it wasn't going to be easy, but I wanted proof of concept. I wanted to know if it was going to work or not, and so I spent a lot of time. And finally, I finished that piece, and we had the world premiere last year on my 80th birthday, and we had another premiere in Canada, and there's talk of concerts in Europe and various things. But that's done. 
It's part of the world's music now, and it's either good or bad, but I did what I was hoping to do. So the only thing that I have on my agenda is to try to do as well as I can with The Art of Computer Programming until I go senile. Do you think there's an element of 0.8 that might apply there? 0.8? Well, I look at it more that I got actually to 1.0 when that concert was over with. So in 2006, I was at 0.8, so when I was diagnosed with prostate cancer, then I said, okay, well, I've had all kinds of good luck all my life, and I have nothing to complain about, so I might die now, and we'll see what happens. And so quite seriously, I had no expectation that I deserved better. I didn't make any plans for the future. I came out of the surgery and spent some time learning how to walk again and so on. It was painful for a while, but I got home, and I realized I hadn't really thought about what to do next. I hadn't any expectation. I said, okay, hey, I'm still alive. Okay, now I can write some more books. But I didn't come with the attitude that this was terribly unfair, and I just said, okay, I was accepting whatever turned out. I'd gotten more than my share already, so why should I? When I got home, I realized that I had really not thought about the next step, what I would do after I would be able to work again. I was comfortable with the fact that this might be the end, but I was hoping that I would still be able to learn about satisfiability and also someday even write music. I didn't start seriously on the music project until 2012. So I'm going to be in huge trouble if I don't talk to you about this. In the 70s, you created the TeX typesetting system, together with the METAFONT language for font description and the Computer Modern family of typefaces. That has basically defined the methodology and the aesthetic of countless research fields, math, physics, beyond computer science, and so on. Okay, well, first of all, thank you. I think I speak for a lot of people in saying that. But a question in terms of beauty. There's a beauty to the typography that you've created, and yet beauty is hard to quantify. How does one create beautiful letters and beautiful equations? Perhaps there are no words to describe the process. So the great Harvard mathematician George D. Birkhoff wrote a book in the 30s called Aesthetic Measure where he would have pictures of vases and underneath would be a number. And this was how beautiful the vase was. And he had a formula for this. And he actually also wrote about music. So I thought maybe as part of my musical composition I would try to program his algorithms so that I would write something that had the highest number by his score. Well, it wasn't quite rigorous enough for a computer to do. But anyway, people have tried to put a numerical value on beauty, and he did probably the most serious attempt. And George Gershwin's teacher also wrote two volumes where he talked about his method of composing music. But you're talking about another kind of beauty, the beauty in letters and letterforms. Elegance and whatever that curvature is. Right. And so that's in the eye of the beholder, as they say. But striving for excellence in whatever definition you want to give to beauty, then you try to get as close to that as you can somehow. I guess I'm trying to ask, and there may not be a good answer, what loose definitions were you operating under with the community of people that you were working with? Well, the loose definition, I wanted it to appeal to me. To you personally. Yeah. That's a good start, right?
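For reference, the formula Birkhoff proposed in Aesthetic Measure is usually written as a ratio of order to complexity; as the conversation suggests, deciding how to score the order and the complexity of a particular vase or melody is exactly where it stops being rigorous enough to hand to a computer:

```latex
M = \frac{O}{C}
\qquad \text{(aesthetic measure } M \text{, order } O \text{, complexity } C \text{; Birkhoff, 1933)}
```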
Yeah. No, and it failed that test when Volume Two came out with the new printing, and I was expecting it to be the happiest day of my life. And I felt like a burning, like how angry I was that I opened the book and it was in the same beige covers, but it didn't look right on the page. The number two was particularly ugly. I couldn't stand any page that had a two in its page number. And I was expecting that. I spent all this time making measurements and I had looked at stuff in different ways and I had great technology, but I wasn't done. I had to retune the whole thing after 1961. Has it ever made you happy, finally? Oh, yes. Or is it a 0.8? No, and so many books have come out that would never have been written without this. It's just a joy. But now, I mean, all these pages that are sitting up there, if I didn't like them, I would change them. Nobody else has this ability. They have to stick with what I gave them. Yeah. So, in terms of the other side of it, there's the typography, so the look of the type and the curves and the lines. What about the spacing? What about the? The spacing between the white space. Yeah. It seems like you could be a little bit more systematic about the layout or technical. Oh, yeah. You can always go further. I didn't stop at 0.8, but I stopped at about 0.98. It seems like you're not following your own rule for happiness. No, no, no. Of course, there's this, what is the Japanese word, wabi sabi or something, where the most beautiful works of art are those that have flaws because then the person who perceives them adds their own appreciation and that gives the viewer more satisfaction or so on. But no, no, with typography, I wanted it to look as good as I could in the vast majority of cases, and then when it doesn't, then I say, okay, that's 2% more work for the author. But I didn't want to say that my job was to get to 100% and take all the work away from the author. That's what I meant by that. So if you were to venture a guess, how much of the nature of reality do you think we humans understand? You mentioned you appreciate mystery. How much of the world about us is shrouded in mystery? If you were to put a number on it, what percent of it all do we understand? How many leading zeros, 0.00? I don't know. I think it's infinitesimal. How do we think about that and what do we do about that? Do we continue one step at a time? Yeah, we muddle through. I mean, we do our best. We realize that nobody's perfect and we try to keep advancing, but we don't spend time saying we're not there, we're not all the way to the end. Some mathematicians that would be in the office next to me when I was in the math department, they would never think about anything smaller than countable infinity. We intersected that countable infinity because I rarely got up to countable infinity. I was always talking about finite stuff. But even limiting to finite stuff, which the universe might be, there's no way to really know whether the universe isn't just made out of capital N, whatever units you want to call them, quarks or whatever, where capital N is some finite number. All of the numbers that are comprehensible are still way smaller than almost all finite numbers. I got this one paper called Supernatural Numbers where I guess you probably ran into something called Knuth arrow notation. Did you ever run into that? 
Anyway, so you take the number, I think it's like, and I called it Super K, I named it after myself, but arrow notation is something like 10 and then four arrows and a three or something like that. Now, the arrow notation: if you have no arrows, that just means multiplication, X times Y. If you have one arrow, that means exponentiation, so X one arrow Y means X times X times X times X, Y times. And X double arrow Y means X to the X to the X to the X, Y times. So I found out, by the way, that this notation was invented by a guy in 1830 and he was one of the English nobility who spent his time thinking about stuff like this. And it was exactly the same concept; I used arrows and he used a slightly different notation. But anyway, and then this Ackermann's function is based on the same kind of ideas, but Ackermann was 1920s. But anyway, you've got this number 10 quadruple arrow three. So that says, well, we take 10 to the 10 to the 10 to the 10 to the 10th and how many times do we do that? Oh, 10 double arrow two times or something. I mean, how tall is that stack? But then we do that again because that was only 10 quadruple arrow two. It gets to be a pretty large number. It gets way beyond comprehension. But it's so small compared to what finite numbers really are because I'm only using four arrows and a 10 and a three. I mean, imagine having that many arrows. The boundary between infinite and finite is incomprehensible for us humans anyway. Infinity is a useful way for us to think about extremely large things. And we can manipulate it, but we can never know that the universe is actually anywhere near that. So I realize how little we know. But we found an awful lot of things that are too hard for any one person to know, even in our small universe. Yeah, and we did pretty good. So when you go up to heaven and meet God and get to ask one question that would get answered, what question would you ask? What kind of browser do you have up here? No, actually, I don't think it's meaningful to ask this question, but I certainly hope we have good internet. Okay, on that note, that's beautiful actually. Don, thank you so much. It was a huge honor to talk to you. I really appreciate it. Well, thanks for the gamut of questions. Yeah, it was fun. Thanks for listening to this conversation with Donald Knuth, and thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast, you'll get $10, and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or connect with me on Twitter. And now, let me leave you with some words of wisdom from Donald Knuth. We should continually be striving to transform every art into a science, and in the process, we advance the art. Thank you for listening, and hope to see you next time.
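For reference, the arrow hierarchy described in that last exchange, in the usual notation, with one small worked case; these are the standard definitions rather than a quote from the conversation:

```latex
x \uparrow y = x^{y}, \qquad
x \uparrow\uparrow y = \underbrace{x^{x^{\cdot^{\cdot^{x}}}}}_{y\ \text{copies of}\ x}, \qquad
x \uparrow^{k+1} y = \underbrace{x \uparrow^{k} \bigl(x \uparrow^{k} (\cdots x)\bigr)}_{y\ \text{copies of}\ x}
```

```latex
10 \uparrow\uparrow 3 = 10^{10^{10}} = 10^{10\,000\,000\,000},
\qquad
10 \uparrow\uparrow\uparrow 2 = 10 \uparrow\uparrow 10 = \text{a tower of ten 10s}.
```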
Donald Knuth: Algorithms, Complexity, and The Art of Computer Programming | Lex Fridman Podcast #62
The following is a conversation with Stephen Kotkin, a professor of history at Princeton University and one of the great historians of our time, specializing in Russian and Soviet history. He has written many books on Stalin and the Soviet Union, including the first two of a three volume work on Stalin, and he is currently working on volume three. You may have noticed that I've been speaking with not just computer scientists, but physicists, engineers, historians, neuroscientists, and soon much more. To me, artificial intelligence is much bigger than deep learning, bigger than computing. It is our civilization's journey into understanding the human mind and creating echoes of it in the machine. To me, that journey must include a deep historical and psychological understanding of power. Technology puts some of the greatest power in the history of our civilization into the hands of engineers and computer scientists. This power must not be abused. And the best way to understand how such abuse can be avoided is to not be blind to the lessons of history. As Stephen Kotkin brilliantly articulates, Stalin was arguably one of the most powerful humans in history. I've read many books on Joseph Stalin, Vladimir Putin, and the wars of the 20th century. I hope you understand the value of such knowledge to all of us, especially to engineers and scientists who built the tools of power in the 21st century. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it 5 stars on Apple Podcast, follow on Spotify, support on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. I recently started doing ads at the end of the introduction, I'll do one or two minutes after introducing the episode, and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has an investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Your services are provided by Cash App Investing, a subsidiary of Square, and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and Lego competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you get Cash App from the App Store, Google Play, and use code LexPodcast, you'll get $10, and Cash App will also donate $10 to FIRST, which again, is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Stephen Kotkin. Do all human beings crave power? No. Human beings crave security. They crave love. They crave adventure. They crave power, but not equally. Some human beings nevertheless do crave power. For sure. What words is that deeply in the psychology of people? Is it something you're born with? Is it something you develop? Some people crave a position of leadership or of standing out, of being recognized, and that could be starting out in the school years on the schoolyard. It could be within their own family, not just in their peer group. 
Those kind of people we often see craving leadership positions from a young age often end up in positions of power. But they can be varied positions of power. You can have power in an institution where your power is purposefully limited. For example, there's a board or a consultative body or a separation of powers. Not everyone craves power whereby they're the sole power or they're their unconstrained power. That's a little bit less usual. We may think that everybody does, but not everybody does. Those people who do crave that kind of power, unconstrained, the ability to decide as much as life or death of other people, those people are not everyday people. They're not the people you encounter in your daily life for the most part. Those are extraordinary people. Most of them don't have the opportunity to live that dream. Very few of them, in fact, end up with the opportunity to live that dream. So percentage wise, in your sense, if we think of George Washington, for example, would most people given the choice of absolute power over a country versus maybe the capped power that the United States presidential role, at least at the founding of the country represented, what do you think most people would choose? Well, Washington was in a position to exercise far greater power than he did. And in fact, he didn't take that option. He was more interested in seeing institutionalization, of seeing the country develop strong institutions rather than an individual leader like himself have excess power. So that's very important. So like I said, not everyone craves unconstrained power, even if they're very ambitious. And of course, Washington was very ambitious. He was a successful general before he was a president. So that clearly comes from the influences on your life, where you grow up, how you grow up, how you raised, what kind of values are imparted to you along the way. You can understand power as the ability to share, or you can understand or the ability to advance something for the collective in a collective process, not an individual process. So power comes in many different varieties. And ambition doesn't always equate to despotic power. Right power is something different from ordinary institutional power that we see. The president of MIT does not have unconstrained power. The president of MIT rightly must consult with other members of the administration, with the faculty members, to a certain extent with the student body and certainly with the trustees of MIT. Those constraints make the institution strong and enduring and make the decisions better than they would be if he had unconstrained power. But you can't say that the president is not ambitious. Of course, the president is ambitious. We worry about unconstrained power. We worry about executive authority that's not limited. That's the definition of authoritarianism or tyranny. Unlimited or barely limited executive authority. Executive authority is necessary to carry out many functions. We all understand that. That's why MIT has an executive, has a president. But unlimited or largely unconstrained executive power is detrimental to even the person who exercises that power. So what do you think? It's an interesting notion. We kind of take it for granted that constraints on executive power is a good thing. But why is that necessarily true? So what is it about absolute power that does something bad to the human mind? So you know, the popular saying of absolute power corrupts absolutely. Is that the case? 
That the power in itself is the thing that corrupts the mind in some kind of way where it leads to a bad leadership over time? People make more mistakes when they're not challenged. When they don't have to explain things and get others to vote and go along with it. When they can make a decision without anybody being able to block their decision or to have input necessarily on their decision. You're more prone to mistakes. You're more prone to extremism. There's a temptation there. For example, we have separation of powers in the United States. The Congress, right, has authority that the president doesn't have. As for example, in budgeting, the so called power of the purse. This can be very frustrating. People want to see things happen and they complain that there's a do nothing Congress or that the situation is stalemated. But actually that's potentially a good thing. In fact, that's how our system was designed. Our system was designed to prevent things happening in government. And there's frustration with that, but ultimately that's the strength of the institutions we have. And so when you see unconstrained executive authority, there can be a lot of dynamism. A lot of things can get done quickly. But those things can be like, for example, what happened in China under Mao or what happened in the Soviet Union under Stalin or what happened in Haiti under Papa Doc and then Baby Doc or fill in the blank, right? What happens sometimes in corporations where a corporate leader is not constrained by the shareholders, by the board or by anything. And they can seem to be a genius for a while, but eventually it catches up to them. And so the idea of constraints on executive power is absolutely fundamental to the American system, American way of thinking. And not only America, obviously large other parts of the world that have a similar system, not an identical system, but a similar system of checks and balances on executive power. And so the case that I study, the only checks and balances on executive power are circumstantial. So for example, distances in the country, it's hard to do something over 5,000 miles or the amount of time in a day, it's hard for a leader to get to every single thing the leader wants to get to because there are only 24 hours in a day. Those are circumstantial constraints on executive power. They're not institutional constraints on executive power. One of the constraints on executive power that United States has versus Russia, maybe something you've implied and actually spoke directly to is there's something in the Russian people and the Soviet people that are attracted to authoritarian power, psychologically speaking, or at least the kind of leaders that sought authoritarian power throughout its history. And that desire for that kind of human is a lack of a constraint. In America, it seems as people, we desire somebody not like Stalin, somebody more like George Washington. So that's another constraint, the belief of the people, what they admire in a leader, what they seek in a leader. So maybe you can speak to, well, first of all, can you speak briefly to that psychology of, is there a difference between the Russian people and the American people in terms of just what we find attractive in a leader? Not as great a difference as it might seem. There are unfortunately many Americans who would be happy with an authoritarian leader in the country. It's by no means a majority. It's not even a plurality, but nonetheless, it's a real sentiment in the population. 
Sometimes because they feel frustrated because things are not getting done. Sometimes because they're against something that's happening in the political realm and they feel it has to be corrected and corrected quickly. It's a kind of impulse. People can regret the impulse later on, that the impulse is motivated by reaction to their environment. In the Russian case, we have also people who crave, sometimes known as a strong hand, an iron hand, an authoritarian leader, because they want things to be done and be done more quickly that align with their desires. But I'm not sure it's a majority in the country today. Certainly in Stalin's time, this was a widespread sentiment and people had few alternatives that they understood or could appeal to. Nowadays in the globalized world, the citizens of Russia can see how other systems have constraints on executive power and the life isn't so bad there. In fact, the life might even be better. So the impatience, the impulsive quality, the frustration does sometimes in people reinforce their craving for the unconstrained executive to quote, get things done or shake things up. Yes, that's true. But in the Russian case, I'm not sure it's cultural today. I think it might be more having to do with the failures, the functional failures of the kind of political system that they tried to institute after the Soviet collapse. And so it may be frustration with the version of constraints on executive power they got and how it didn't work the way it was imagined, which has led to a sense in which nonconstrained executive power could fix things. But I'm not sure that that's a majority sentiment in the Russian case, although it's hard to measure because under authoritarian regimes, a public opinion is shaped by the environment in which people live, which is very constrained in terms of public opinion. But on that point, why at least from a distance does there seem to nevertheless be support for the current Russian president Vladimir Putin? Is that have to do with the fact that measuring, getting good metrics and statistics on support is difficult in authoritarian governments, or is there still something appealing to that kind of power to the people? I think we have to give credit to President Putin for understanding the psychology of the Russians to whom he appeals. Many of them were the losers in the transition from communism. They were the ones whose pensions were destroyed by inflation or whose salaries didn't go up or whose regions were abandoned. They were not the winners for the most part, and so I think there's an understanding on his part of their psychology. Putin has grown in the position. He was not a public politician when he first started out. He was quite poor in public settings. He didn't have the kind of political instincts that he has now. He didn't have the appeal to traditional values and the Orthodox Church and some of the other dimensions of his rule today. So yes, we have to give some credit to Putin himself for this in addition to the frustrations and the mass of the people. But let's think about it this way in addition, without taking away the fact that he's become a better retail politician over time and that sentiment has shifted because of the disappointments with the transition with the population. When I ask my kids, am I a good dad? My kids don't have any other dad to measure me against. I'm the only dad they know, and I'm the only dad they can choose or not choose. If they don't choose me, they still get me as dad, right? 
So with Putin today, he's the only dad that the Russian people have. Now, if my kids were introduced to alternative fathers, they might be better than me. They might be more loving, more giving, funnier, richer, whatever it might be. They might be more appealing. There are some blood ties there for sure that I have with my kids, but they would at least be able to choose alternatives and then I would have to win their favor in that constellation of alternatives. If President Putin were up against real alternatives, if the population had real choice and that choice could express itself and have resources and have media and everything else the way he does, maybe he would be very popular and maybe his popularity would not be as great as it currently is. So the absence of alternatives is another factor that reinforces his authority and his popularity. Having said that, there are many authoritarian leaders who deny any alternatives to the population and are not very popular. So denial of alternatives doesn't guarantee you the popularity. You still have to figure out the mass psychology and be able to appeal to it. So in the Russian case, the winners from the transition live primarily in the big cities and are self employed or entrepreneurial. Even if they're not self employed, they're able to change careers. They have tremendous skills and talent and education and knowledge as well as these entrepreneurial or dynamic personalities. Putin also appealed to them. He did that with Medvedev and it was a very clever ruse. He himself appealed to the losers from the transition, the small towns, the rural, the people who were not well off and he had them for the most part. Not all. We don't want to generalize to say that he had every one of them because those people have views of their own, sometimes in contradiction with the president of Russia. And then he appealed to the opposite people, the successful urban base through the so called reformer Medvedev, the new generation, the technically literate prime minister who for a time was president. And so that worked very successfully for Putin. He was able to bridge a big divide in the society and gain a greater mass support than he would otherwise have had by himself. That ruse only worked through the time that Medvedev was temporarily president for a few years because of the Constitution, Putin couldn't do three consecutive terms and stepped aside in what they call castling in chess. When this was over, Putin had difficulty with his popularity. There were mass protests in the urban areas, precisely that group of the population that he had been able to win in part because of the Medvedev castling and now had had their delusions exposed and were disillusioned, and there were these mass protests in the urban areas, not just in the capital, by the way. And Putin had to, as it were, come up with a new way to fix his popularity, which happened to be the annexation of Crimea, from which he got a very significant bump. However, the trend is back in the other direction. It's diminishing again, although it's still high relative to other leaders around the world. So I wouldn't say that he's unpopular with the mass in Russia. There is some popularity there, there is some success, but I would say it's tough for us to gauge because of the lack of alternatives. And Putin is unpopular inside the state administration. At every level, the bureaucracy of the leadership. 
Because those people are well informed, and they understand that the country is declining, that the human capital is declining, the infrastructure is declining, the economy is not really growing, it's not really diversifying, Russia's not investing in its future. The state officials understand all of that, and then they see that the Putin clique is stealing everything in sight. So between the failure to invest in a future and the corruption of a narrow group around the president, there's disillusionment in the state apparatus because they see this more clearly or more closely than the mass of the population. They can't necessarily yet oppose this in public because they're people, they have families, they have careers, they have children who want to go to school or want a job. And so there are constraints on their ability to oppose the regime based upon what we might call cowardice or other people might call realism. I don't know how courageous people can be when their family, children, career are on the line. So it's very interesting dynamic to see the disillusionment inside the government with the president, which is not yet fully public for the most part, but could become public. And once again, if there's an alternative, if an alternative appears, things could shift quickly. And that alternative could come from inside the regime. From inside the regime. But the leadership, the party, the people that are now, as you're saying, opposed to Putin, nevertheless, maybe you can correct me, but it feels like there's, structurally is deeply corrupt. So each of the people we're talking about are, don't feel like a George Washington. Once again, the circumstances don't permit them to act that way necessarily, right? George Washington did great things, but in certain circumstances. A lot of the state officials in Russia for certain are corrupt. There's no question. Many of them, however, are patriotic and many of them feel badly about where the country has been going. They would prefer that the country was less corrupt. They would prefer that there were greater investment in all sorts of areas of Russia. They might even themselves steal less if they could be guaranteed that everybody else would steal less. There's a deep and abiding patriotism inside Russia, as well as inside the Russian regime. So they understand that Putin in many ways rescued the Russian state from the chaos of the 1990s. They understand that Russia was in very bad shape as an incoherent failing state almost when Putin took over and that he did some important things for Russia's stability and consolidation. There's also some appreciation that Putin stood up to the West and stood up to more powerful countries and regained a sense of pride and maneuverability for Russia in the international system. People appreciate that and it's real. It's not imagined that Putin accomplished that. The problem is the methods that he accomplished it with. He used the kind of methods, that is to say, taking other people's property, putting other people in jail for political reasons. He used the kind of methods that are not conducive to long term growth and stability. So he fixed the problem, but he fixed the problem and then created even bigger long term problems potentially. And moreover, all authoritarian regimes that use those methods are tempted to keep using them and using them and using them until they're the only ones who are the beneficiaries and the group narrows and narrows. The elite gets smaller and narrower. 
The interest groups get excluded from power and their ability to continue enjoying the fruits of the system and the resentment grows. And so that's the situation we have in Russia is a place that is stuck. It was to a certain extent rescued. It was rescued with methods that were not conducive to long term success and stability. The rescue you're referring to is the sort of the economic growth when Putin first took office. Yes, they had 10 years. They had a full decade of an average of 7% growth a year, which was phenomenal and is not attributable predominantly to oil prices. During President Putin's first term as president, the average price of oil was $35 a barrel. During his second term as president, the average price was $70 a barrel. So during those two terms, when Russia was growing at about 7% a year, oil prices were averaging somewhere around $50 a barrel, which is fine, but is not the reason because later on when oil prices were over $100 a barrel, Russia stagnated. So the initial growth, do you think Putin deserves some credit for that? Yes, he does because he introduced some important liberalizing measures. He lowered taxes. He allowed land to be bought and sold. He deregulated many areas of the economy. And so there was a kind of entrepreneurial burst that was partly attributable, partly attributable to government policy during his first term. But also he was consolidating political power. And as I said, the methods he used overall for the long term were not able to continue sustain that success. In addition, we have to remember that China played a really big role in the success of Russia in the first two terms of Putin's presidency because China's phenomenal growth created insatiable demand for just about everything that the Soviet Union used to produce. So fertilizers, cement, fill in the blank, chemicals, metals, China had insatiable demand for everything the Soviet Union once produced. And so China's raising of global demand overall brought Soviet era industry back from the dead. And so there was something that happened. Soviet era industry fell off a cliff in the 1990s. There was a decline in manufacturing and industrial production greater than in the Great Depression in the US. But a lot of that came back online in the 2000s. And that had to do with China's phenomenal growth. The trade between China and Russia was not always direct. So this was an indirect effect. But raising global prices for the commodities and the products, the kind of lower end, lower value products in manufacturing, not high end stuff, but lower end stuff like steel or iron or cement or fertilizer, where the value added is not spectacular, but nonetheless, which had been destroyed by the 1990s and after the Soviet collapse, this was brought back to life. Now, you can do that once. You can bring Soviet era industry back to life once. And that happened during Putin's first two terms, in addition to the liberalizing policies, which spurred entrepreneurialism in some small and medium business. The crash of the ruble in 1998, which made Russian products much cheaper abroad and made imports much more expensive, also facilitated the resuscitation, the revival of domestic manufacturing. So all of this came together for that spectacular 10 year, 7% on average economic growth. And moreover, people's wages after inflation, their disposable income grew more even than GDP grew. So disposable income after inflation, that is real income, was growing greater than 7%. In some cases, 10% a year. 
So there was a boom, and the Russian people felt it, and it happened during Putin's first two terms, and people were grateful, rightly so, for that. And those who don't want to give Putin credit, give oil prices all the credit. But I don't think that oil prices can explain this. Having said that, that doesn't mean that this was sustainable over the long term. So you've briefly mentioned, sort of implying the possibility, you know, Stalin held power for, let's say, 30 years. You briefly mentioned that as a question, will Putin be able to beat that record, to beat that? So can you talk about your sense of, is it possible that Putin holds power for that kind of duration? Let's hope not. Let's hope not for Russia's sake. The primary victims of President Putin's power are Russians. They're not Ukrainians, although to a certain extent, Ukraine has suffered because of Putin's actions. And they're not Americans, they're Russians. Moreover, Russia has lost a great deal of human talent. Tens of millions of people have left Russia since 1991 overall. Somewhere between five and 10 million people have left the country and are beyond the borders of the former Soviet Union. So they left the Soviet space entirely. Moreover, the people who left are not the poor people. They're not the uneducated. They're not the losers. The people who've left are the more dynamic parts of the population. The better educated, the more entrepreneurial. So that human capital loss that Russia has suffered is phenomenal. And in fact, right here where we're sitting at MIT, we have examples of people who are qualified good enough for MIT and have left Russia to come to MIT. You're looking at one of them. And the other aspect, just to quickly comment, is those same people like me, I'm not welcome back. No, you're not under the current regime. It was a big loss for Russia if you're patriotic, but not from the point of view of the Putin regime. That has to do, also factors into popularity. If the people who don't like you leave, they're not there to complain, to protest, to vote against you. And so your opposition declines when you let them leave. However, it's very costly in human capital terms. Hemorrhaging that much human capital is damaging, it's self damaging. And we've seen it accelerate. It was already high, but we've seen it accelerate in the last seven to eight years of President Putin's rule. And those people are not going back of their own volition. But even if they wanted to go back, as you just said, they'd be unwelcome. That's a big cost to pay for this regime. And so whatever benefits this regime might or might not have given to the country, the disadvantages, the downside, the costs are also really high. So we don't want Putin lasting in power as long as Stalin. It would be better if Russia were able to choose among options, to choose a new leader among options. Many people speculate that President Putin will name a successor the way Yeltsin named Putin as his successor, President Boris Yeltsin. And then Putin will leave the stage and allow the successor to take over. That might seem like a good solution, but once again, we don't need a system where you hang on for as long as possible and then nominate who's going to take over. We need a system that has the kind of corrective mechanisms that democracies and markets have along with rule of law. A corrective mechanism is really important because all leaders make mistakes. But when you can't correct for the mistakes, then the mistakes get compounded. 
Putin could well, he seems to be healthy, he could well last as many years as Stalin. It's hard to predict because events intercede sometimes and create circumstances that are unforeseen and leaders get overthrown or have a heart attack or whatever. There's a palace insurrection where ambitious leaders on the inside for both personal power and patriotic reasons try to push aside an aging leader. There are many scenarios in which Putin could not last that long, but unfortunately, right now, you could also imagine potentially him lasting that long, which as I said, if you're patriotic about Russia, is not an outcome you would wish for the country. It's, I guess, a very difficult question, but what practically do you feel is a way out of the Putin regime, a way out of the corruption that deeply underlies the state? If you look from a history perspective, is a revolution required? Is violence required, whether from within or external to the country? Or is a powerful, inspiring leader enough to step in and bring democracy and kind of the free world to Russia? So Russia is not a failed country. It's a middle income country with tremendous potential and has proven many times in the past that when it gets in a bad way, it can reverse its trajectory. Moreover, violence is rarely ever a solution. Violence may break an existing trend, but it's rare that violence produces a nonviolent, sustainable, positive outcome. It happens, but it doesn't happen frequently. Violent upheaval is not always a way to institutionalize a better path forward because you need institutions. People can protest as they did throughout the Middle East, and the protests didn't necessarily lead to better systems because the step from protest to new, strong, consolidated institutions is a colossal leap, not a small step. What we need and what we see from history in situations like this is a group within the power structures which is patriotic, that sees things going down. That is to say, that sees things not developing relative to neighbors, relative to richer countries, relative to more successful countries, and they want to change the trajectory of Russia. And if they can, in a coalition fashion, unseat the current regime for a new power sharing arrangement, which once again can be frustrating because you can't do changes immediately, you can't do things overnight, but that's the point. Constraints on your ability to change everything immediately and to force change overnight are what lead to long term success potentially. That's the sustainability of change. So Russia needs stronger institutions. It needs a court system as well as democratic institutions. It needs functioning, open, dynamic markets rather than monopolies. It needs meritocracy and banks to award loans on the basis of business plans, not on the basis of political criteria or corrupt bribery or whatever it might be. So Russia needs those kinds of functioning institutions that take time, are sometimes slow, don't lead to a revolutionary transformation, but lead to potentially long term sustainable growth without upheaval, without violence, without getting into a situation where all of a sudden you need a miracle again. Every time, Russia seems to need a miracle, and that's the problem; the solution would be not needing a miracle. Now having said that, the potential is there. The civilization that we call Russia is amazingly impressive. It has delivered world class culture, world class science. 
It's a great power. It's not a great power with a strong base right now, but nonetheless it is a great power as it acts in the world. So I wouldn't underestimate Russia's abilities here and I wouldn't write off Russia. I don't see it under the current regime, a renewal of the country. But if we can have from within the regime an evolution rather than a revolution in a positive direction, and maybe get a George Washington figure who is strong enough to push through institutionalization rather than personalism. So if I could ask about one particular individual, it'd be just interesting to get your comment, but also as a representative of potential leaders, I just on this podcast talked to Gary Kasparov, who I'm not sure if you're familiar with his, his ongoings. So besides being a world class chess player, he's also a very outspoken activist, sort of seeing Putin, truly seeing Putin as an enemy of the free world of democracy, of balanced government in Russia. What do you think of people like him specifically, or just people like him trying as leaders to step in, to run for president, to symbolize a new chapter in Russia's future? So we don't need individuals. Some individuals are very impressive and they have courage and they protest and they criticize and they organize. We need institutions. We need a Duma or a parliament that functions. We need a court system that functions. That is to say where there are a separation of powers, impartial professional civil service, impartial professional judiciary. Those are the things Russia needs. It's rare that you get that from an individual, no matter how impressive, right? We had Andrei Sakharov, who was an extraordinary individual, who developed the hydrogen bomb under a Soviet regime, was a world class physicist, was then upset about how his scientific knowledge and scientific achievements were being put to use and rebelled to try to put limits, constraints, civilizing humane limits and constraints on some of the implications of his extraordinary science. But Sakharov, even if he had become the leader of the country, which he did not become, he was more of a moral or spiritual leader, it still wouldn't have given you a judiciary. It still wouldn't have given you a civil service. It still wouldn't have given you a Duma or functioning parliament. You need a leader in coalition with other leaders. You need a bunch of leaders, a whole group, and they have to be divided a little bit so that not one of them can destroy all the others. And they have to be interested in creating institutions, not solely or predominantly in their personal power. And so I have no objection to outstanding individuals and to the work that they do. But I think in institutional terms, and they need to think that way too in order to be successful. So if we go back to the echoes of that after the Russian Revolution with Stalin, with Lenin and Stalin, maybe you can correct me, but there was a group of people there in that same kind of way looking to establish institutions that were beautifully built around an ideology that they believed is good for the world. So sort of echoing that idea of what we're talking about, what Russia needs now, can you, first of all, you've described a fascinating thought, which is Stalin is having amassed arguably more power than any man in history, which is an interesting thing to think about. But can you tell about his journey to getting that power after the Russian Revolution? 
How does that perhaps echo to our current discussion about institutions and so on? And just in general, the story I think is fascinating of how one man is able to get more power than any other man in history. It is a great story, not necessarily from a moral point of view, but if you're interested in power, for sure it's an incredible story. So we have to remember that Stalin is also a product of circumstances, not solely his own individual drive, which is very strong. For example, World War I breaks the czarist regime, the czarist order, imperial Russian state. Stalin has no participation whatsoever in World War I. He spends World War I in exile in Siberia. Until the downfall of the czarist autocracy in February 1917, Stalin is in Eastern Siberian exile. He's only able to leave Eastern Siberia when that regime falls. He never fights in the war. He's called up briefly towards the end of the war and is disqualified on physical grounds because of physical deformities from being drafted. The war continues after the czarist regime has been toppled in the capital and there's been a revolution. The war continues and that war is very radicalizing. The peasants begin to seize the land after the czar falls, essentially destroying much of the gentry class. Stalin has nothing to do with that. The peasants have their own revolution, seizing the land, not in law, but in fact, de facto not de jure land ownership. So there are these really large processes underway that Stalin is alive during, but not a driver of. The most improbable thing happens, which is a very small group of people around the figure of Vladimir Lenin announces that it has seized power. Now by this time in October 1917, the government that has replaced the czar, the so called provisional government, has failed. And so there's not so much power to seize from the provisional government. What Lenin does is he does a coup on the left. That is to say, Soviets or councils, as we would call them in English, which represent people's power or the masses participating in politics, a kind of radical grassroots democracy are extremely popular all over the country and not dominated by any one group, but predominantly socialist or predominantly leftist. Russia has an election during the war, a free and fair election for the most part, despite the war at the end of 1917, in December 1917, and three quarters plus of the country votes socialist in some form or another. So the battle was over the definition of socialism and who had the right to participate in defining socialism, not only what it would be, but who had the right to decide. So there's a coup by Lenin's group known as the Bolsheviks against all the other socialists. And so Lenin declares a seizure of power whereby the old government has failed, people's power, the councils known as the Soviets are going to take their place, and Lenin seizes power in the name of the Soviets. So it's a coup against the left, against the rest of the left, not against the provisional government that has replaced the czar, which has already failed. And so Stalin is able to come to power along with Lenin in this crazy seizure of power on the left against the rest of the left in October 1917, which we know is the October Revolution, and I call the October coup as many other historians call. The October Revolution happened after the seizure of power. 
What's interesting about this episode is that the leftists who seize power in the name of the Soviets, in the name of the masses, in the name of people's power, they retain their hold. Many times in history, there's a seizure of power by the left, and they fail. They collapse. They're cleaned out by an army or what we call forces of order, by counter revolutionary forces. Lenin's revolution, Lenin's coup is successful. It is able to hold power, not just seize power. They win a civil war, and they're entrenched in the heart of the country already by 1921. Stalin is part of that group. Lenin needs somebody to run this new regime in the kind of nitty gritty way. Lenin is the leader, the undisputed leader in the Bolshevik party, which changes its name to communists in 1918. He makes Stalin the general secretary of the communist party. He creates a new position, which hadn't existed before, a kind of day to day political manager, a right hand man. Not because Lenin is looking to replace himself. He's looking to institutionalize a helpmate, a right hand man. He does this in the spring of 1922. Stalin is named to this position, which Lenin has created expressly for Stalin. So there has been a coup on the left whereby the Bolsheviks who become communists have seized power against the rest of the socialists and anarchists and the entire left. And then there's an institutionalization of a position known as general secretary of the communist party, right hand man of Lenin. Less than six weeks after Lenin has created this position and installed Stalin, Lenin has a stroke, a major stroke, and never really returns as a full actor to power before he dies of a fourth stroke in January 1924. So a position is created for Stalin to run things on Lenin's behalf. And then Lenin has a stroke. And so Stalin now has this new position, general secretary, but he's the right hand of a person who's no longer exercising day to day control over affairs. Stalin then uses this new position to create a personal dictatorship inside the Bolshevik dictatorship, which is the remarkable story I tried to tell. So is there anything nefarious about any of what you just described? It seems convenient that the position is created just for Stalin. There were a few other brilliant people, arguably more brilliant than Stalin, in the vicinity of Lenin. Why was Stalin chosen? Why did Lenin all of a sudden fall ill? It's perhaps a conspiratorial question, but is there anything nefarious about any of this historical trajectory to power that Stalin took in creating the personal dictatorship? So history is full of contingency and surprise. After something happens, we all think it's inevitable. It had to happen that way. Everything was leading up to it. So Hitler seizes power in Germany in 1933, and the Nazi regime gets institutionalized by several of his moves after being named chancellor. And so all German history becomes a story of the Nazi rise to power, Hitler's rise to power. Every trend and tendency is bent into that outcome. Things which don't seem related to that outcome all of a sudden get bent in that direction. And other trends that were going on are no longer examined because they didn't lead to that outcome. But Hitler's becoming chancellor of Germany in 1933 was not inevitable. It was contingent. He was offered the position by the traditional conservatives. He's part of the radical right, and the traditional right named him chancellor. 
The Nazi party never outright won an election that was free and fair before Hitler came to power. And in fact, its votes on the eve of Hitler becoming chancellor declined relative to the previous election. So there's contingency in history, and so Lenin's illness, his stroke, the neurological and blood problems that he had were not a structure in history. In other words, if Lenin had been a healthier figure, Stalin might never have become the Stalin that we know. That's not to say that all history is accidental, just that we need to relate the structural, the larger structural factors to the contingent factors. Why did Lenin pick Stalin? Well, Stalin was a very effective organizer, and the position was an organizational position. Stalin could get things done. He would carry out assignments no matter how difficult. He wouldn't complain that it was hard work or too much work. He wouldn't go off womanizing and drinking and ignore his responsibilities. Lenin chose Stalin among other options because he thought Stalin was the better option. Once again, he wasn't choosing his successor because he didn't know he was going to have this stroke. Lenin had some serious illnesses, but he had never had a major stroke before. So the choice was made based upon Stalin's organizational skills and promise against the others who were in the regime. Now, they can seem more brilliant than Stalin, but he was more effective, and I'm not sure they were very brilliant. Well, he was exceptionally competent actually at the tasks for running a government, the executive branch, right, of a dictator. Yes. He turned out to be very adept at being a dictator. And so if he had been chosen by Lenin and had not been very good, he would have been pushed aside by others. Yeah. You can get a position by accident. You can be named because you're someone's friend or someone's relative, but to hold that position, to hold that position in difficult circumstances, and then to build effectively a superpower on all that bloodshed, right, you have to be skilled in some way. It can't be just the accident that brings you to power because if accident brings you to power, it won't last. Just like we discovered with Putin, he had some qualities that we didn't foresee at the beginning, and he's been able to hold power, not just be named. Now, Putin and Stalin are very different people. These are very different regimes. I wouldn't put them in the same sentence. My point is not that one resembles the other. My point is that when people come to power for contingent reasons, they don't stay in power unless they're able to manage it. And Stalin was able to build a personal dictatorship inside that dictatorship. He was cunning, he was ruthless, and he was a workaholic. He was very diligent. He had a phenomenal memory, and so he could remember people's names and faces and events. And this was very advantageous for him as he built the machine that became the Soviet state and bureaucracy. One of the things, maybe you can correct me if I'm wrong, what you've made me realize is this wasn't some kind of manipulative personality trying to gain more power solely, like kind of an evil picture of a person, but he truly believed in communism. As far as I can understand, again, you can correct me if I'm wrong, but he wanted to build a better world by infusing communism into the country, perhaps into the whole world. So maybe my question is what role does communism as an idea, as an ideology play in all of this? 
What was the power in the people of the time, in the Russian people, actually just the whole 20th century? You're right. Stalin was a true believer, and this is very important. He was also hungry for power and for personal power, but just as you said, not for power's sake, not only for power. He was interested in enacting communism in reality and also in building a powerful state. He was a statist, a traditional Russian statist in the imperial sense, and this won him a lot of followers. The fact that they knew he was a hardcore true believing communist won him a lot of followers among the communists, and the fact that he was a hardcore defender of Russian state interests now in the Soviet guise also won him a lot of followers. Sometimes those groups overlapped, the communists and the Russian patriots, and sometimes they were completely different groups, but both of them shared an admiration for Stalin's dedication to those goals and his abilities to enact them. And so it's very important to understand that however thirsty he was for power, and he was very thirsty for power, that he was also driven by ideals. Now I don't necessarily think that everyone around Stalin shared those ideals. We have to be careful not to make everybody into a communist true believer, not to make everybody into a great statist Russian patriot, but they were widespread and powerful attractions for a lot of people. And so Stalin's ability to communicate to people that he was dedicated to those pursuits and his ability to drive towards them were part of his appeal. Where he also resorted to manipulation, he also resorted to violence, he lied, he spoke out of all sides of his mouth, he slandered other people, he sabotaged potential rivals. He used every underhanded method, and then some, in order to build his personal dictatorship. Now he justified this, as you said, by appeals to communism and to Soviet power. To himself as well too. To himself and to others. And so he justified it in his own mind and to others, but certainly any means, right, were acceptable to him to achieve these ends. And he identified his personal power with communism and with Russian glory in the world. So he felt that he was the only one who could be trusted, who could be relied upon to build these things. Now, we put ourselves back in that time period. The Great Depression was a very difficult time for the capitalist system. There was mass unemployment, a lot of hardship, fascism, Nazism, Imperial Japan. There were a lot of associations that were negative with the kind of capitalist system that was not a hundred percent, not a monolith, but had a lot of authoritarian incarnations. There was imperialism, colonies that even the democratic rule of law capitalist states had non democratic, non rule of law colonies under their rule. So the image and reality of capitalism during that time period between World War I and World War II was very different from how it would become later. And so in that time period, in that interwar conjuncture after World War I, before World War II, communism held some appeal inside the Soviet Union for sure, but even outside the Soviet Union because the image and reality of capitalism disappointed many people. Now, in the end, communism was significantly worse. Many more victims and the system of course would eventually implode. But nonetheless, there were real problems that communism tried to address. It didn't solve those problems. It was not a solution, but it didn't come out of nowhere. 
It came out of the context of that interwar period. And so Stalin's rule, some people saw it as potentially a better option than imperialism, fascism and the Great Depression. Having said that, they were wrong. It turned out that Stalin wasn't a better alternative to markets and private property and rule of law and democracy. However, that didn't become clearer to people until after World War II, after Nazism had been defeated, Imperial Japan had been defeated, fascist Italy had been defeated and decolonization had happened around the world, and there was a middle class economic boom in the period from the late 40s through the 70s that created a kind of mass middle class in many societies. So capitalism rose from the ashes as it were, and this changed the game for Stalin and communism. Communism is about an alternative to capitalism, and if that alternative is not superior, there's no reason for communism to exist. But if capitalism is in foul odor, if people have a bad opinion, a strong critique of capitalism, there can be appeal to alternatives, and that's kind of what happened with Stalin's rule. But after World War II, the context changed a lot, capitalism was very different, much more successful, nonviolent compared to what it was in the interwar period. And the Soviet Union had a tough time competing against that new context. Now today we see similarly that the image and reality of capitalism is in question again, which leads some people to find an answer in socialism as an alternative. So you just kind of painted a beautiful picture of comparison. This is the way we think about ideologies, by what's working better. Do you separate in your mind the ideals of communism from the Stalinist implementation of communism, and again, capitalism from the American implementation of capitalism? And as we look at the 21st century, where, yes, this idea of socialism as a potential political system, or economic system, we would operate under in the United States is rising up again as an idea. So how do we think about that again in the 21st century, about these ideas, fundamental deep ideas of communism and capitalism? Yeah, so in the Marxist schema, there was something called feudalism, which was supposedly destroyed by the bourgeoisie who created capitalism. And then the working class was supposed to destroy capitalism and create socialism. But socialism wasn't the end stage. The end stage was going to be communism. So that's why the communist party in the Soviet Union first built socialism: transcending capitalism, the next stage was socialism, and the end game, the final stage, was communism. So their version of socialism was derived from Marx. And Marx argued that the problem was capitalism had been very beneficial for a while. It had produced greater wealth and greater opportunity than feudalism had. But then it had come to serve only the narrow interests of the so called bourgeoisie or the capitalists themselves. And so for humanity's sake, the universal class, the working class, needed to overthrow capitalism in order for greater productivity, greater wealth to be produced for all of humanity to flourish on a higher level. So you couldn't have socialism unless you destroyed capitalism. So that meant no markets, no private property, no so called parliaments or bourgeois parliaments as they were called. So you got socialism in Marx's schema by transcending, by eliminating capitalism. Now Marx also called for freedom. 
He said that this elimination of markets and private property and bourgeois parliaments would produce greater freedom in addition to greater abundance. However, everywhere this was tried, it produced tyranny and mass violence, death and shortages. Everywhere it was tried. There's no exception in historical terms. And so it's very interesting. Marx insisted that capitalism had to be eliminated. You couldn't have markets. Markets were chaos. You needed planning. You couldn't have hiring of wage labor. That was wage slavery. You couldn't have private property because that was a form of theft. So in the Marxist scheme, somehow you were going to eliminate capitalism and get to freedom. It turned out you didn't get to freedom. So then people said, well, you can't blame Marx because he said we needed freedom. He was pro freedom. So it's kind of like dropping a nuclear bomb. You say you're going to drop a nuclear bomb, but you want to minimize civilian casualties. So the dropping of the nuclear bomb is the elimination of markets, private property and parliaments. But you're going to bring freedom, or you're going to minimize civilian casualties. So you drop the nuclear bomb, you eliminate capitalism, and you get famine, deportation, no constraints on executive power and not abundance, but shortages. And people say, well, that's not what Marx said. That's not what I said. I said I wanted to minimize civilian casualties. The nuclear bomb goes off and there's mass civilian casualties. And you keep saying, but I said, drop the bomb, but minimize civilian casualties. So that's where we are. That's history, not philosophy. I'm speaking about historical examples, all the cases that we have. Marx was not a theorist of inequality. Marx was a theorist of alienation, of dehumanization, of fundamental constraints or what he called fetters on productivity and on wealth, which he all attributed to capitalism. Marx wasn't bothered by inequality. He was bothered by something deeper, something worse, right? Those socialists who figured this out, who understood that if you drop the nuclear bomb, there was no way to minimize civilian casualties, all socialists who came to understand that if you eliminated capitalism, markets, private property and parliaments, if you eliminated that, you wouldn't get freedom, those Marxists, those socialists became what we would call social democrats, or people who would use the state to regulate the market, not to eliminate the market. They would use the state to redistribute income, not to destroy private property and markets. And so this in the Marxist schema was apostasy because they were accepting markets and private property. They were accepting alienation and wage slavery. They were accepting capitalism in principle, but they wanted to fix it. They wanted to ameliorate. They wanted to regulate. And so they became what was denounced as revisionists, not true Marxists, not real revolutionaries, but parliamentary road, parliamentarians. We know this as normal politics, normal social democratic politics from the European case or from the American case, but they are not asking to eliminate capitalism, blaming capitalism, blaming markets and private property. So this rift among the socialists, the ones who are for elimination of capitalism, transcending capitalism, otherwise you could never, ever get to abundance and freedom in the Marxist schema, versus those who accept capitalism but want to regulate and redistribute. 
That rift on the left has been with us almost from the beginning. It's a kind of civil war on the left between the Leninists and the social democrats or the revisionists as they're known pejoratively by the Leninists. We have the same confusion today in the world today where people also cite Marx saying capitalism is a dead end and we need to drop that nuclear bomb and get freedom, get no civilian casualties versus those who say, yes, there are inequities. There's a lack of equality of opportunity. There are many other issues that we need to deal with and we can fix those issues. We can regulate, we can redistribute. I'm not advocating this as a political position. I'm not taking a political position myself. I'm just saying that there's a confusion on the left between those who accept capitalism and want to regulate it versus those who think capitalism is inherently evil and if we eliminate it we'll get to a better world when in fact history shows that if you eliminate capitalism you get to a worse world. The problems might be real, but the solutions are worse. From history's lessons, now we have deep painful lessons, but there's not that many of them. You know, our history is relatively short as a human species. Do we have a good answer on the left of Leninist, Marxist versus Social Democrat versus capitalism versus any anarchy? Do we have sufficient samples from history to make better decisions about the future of our politics and economics? For sure. We have the American Revolution, which was a revolution not about class, not about workers, not about a so called universal class of the working class, elimination of capitalism markets and the bourgeoisie, but was about the category citizen. It was about universal humanity where everyone in theory could be part of it as a citizen. The revolution fell short of its own ideals. Not everyone was a citizen. For example, if you didn't own property, you were a male but didn't own property. You didn't have full rights of a citizen. If you were a female, whether you own property or not, you weren't a full citizen. If you were imported from Africa against your will, you were a slave and not a citizen. And so not everyone was afforded the rights in actuality that were declared in principle. However, over time, the category citizen could expand and slaves could be emancipated and they could get the right to vote. They could become citizens. Nonproperty owning males could get the right to vote and become full citizens. Females could get the right to vote and become full citizens. In fact, eventually my mother was able to get a credit card in her own name in the 1970s without my father having to co sign the paperwork. It took a long time. But nonetheless, the category citizen can expand and it can become a universal category. So we have that, the citizen universal humanity model of the American Revolution, which was deeply flawed at the time it was introduced, but fixable over time. We also had that separation of powers and constraint on executive power that we began this conversation with. That was also institutionalized in the American Revolution because they were afraid of tyranny. They were afraid of unconstrained executive power. So they built a system that would contain that, constrain it institutionally, not circumstantially. So that's a great gift. Within that universal category of citizen, which has over time come closer to fulfilling its original promise. 
And within those institutional constraints, that separation of powers, constraint on executive power, within that we've developed what we might call normal politics, left right politics. People can be in favor of redistribution, and government action and people can be in favor of small government, hands off government, no redistribution or less redistribution. That's the normal left right political spectrum, where you respect the institutions and separation of powers. And you respect the universal category of citizenship and equality before the law and everything else. I don't see any problems with that whatsoever. I see that as a great gift, not just to this country, but around the world and other places besides the United States have developed this. The problems arise at the extremes, the far left and the far right that don't recognize the legitimacy either of capitalism or of democratic rule of law institutions. And they want to eliminate constraints on executive power. They want to control the public sphere or diminish the independence of the media. They want to take away markets or private property and redistribution becomes something bigger than just redistribution. It becomes actually that original Marxist idea of transcending capitalism. So I'm not bothered by the left or the right. I think they're normal and we should have that debate. We're a gigantic, diverse country of many different political points of view. I'm troubled only by the extremes that are against the system qua system that want to get rid of it and supposedly that will be the bright path to the future. History tells us that the far left and the far right are wrong about that. But once again, this doesn't mean that you have to be a social democrat. You could be a libertarian. You could be a conservative. You could be a centrist. You could be conservative on some issues and liberal on other issues. All of that comes under what I would presume to be normal politics. And I see that as the important corrective mechanism. Normal politics and market economies, non monopolistic, open, free and dynamic market economies. I don't like concentrations of power politically and I don't like concentrations of power economically. I like competition in the political realm. I like competition in the economic realm. This is not perfect. It's constantly needs to be protected and reinvented and there are flaws that are fundamental and need to be adjusted and addressed and everything else, especially equality of opportunity. Equality of outcome is unreachable and is a mistake because it produces perverse and unintended consequences. Equality of outcome attempts, attempts to make people equal on the outcome side, but attempts to make them more equal on the front end, on the opportunity side. That's really, really important for a healthy society. That's where we've fallen down. Our schools are not providing equality of opportunity for the majority of people in all of our school systems. And so I see problems there. I see a need to invest in ourselves, invest in infrastructure, invest in human capital, create greater equality of opportunity, but also to make sure that we have good governance because governance is the variable that enables you to do all these other things. I've watched quite a bit, returning back to Putin, I've watched quite a few interviews with Putin and conversations, especially because I speak Russian fluently, I can understand often the translations lose a lot. 
I find the man putting morality aside very deep and interesting. And I found almost no interview with him to get at that depth. I was very hopeful for the Oliver Stone documentary and with him, and to me, because I deeply respect Oliver Stone as a filmmaker in general, but it was a complete failure in my eyes, that interview. The lack of, I mean, I suppose you could toss it up to a language barrier, but a complete lack of diving deep into the person is what I saw. My question is a strange one, but if you were to sit down with Putin and have a conversation, or perhaps if you were to sit down with Stalin and have a conversation, what kind of questions would you ask? This wouldn't be televised unless you want it to be. So this is only you, so you're allowed to ask about some of the questions that are sort of not socially acceptable, meaning putting morality aside, getting into depth of the human character. What would you ask? So once again, they're very different personalities and very different time periods and very different regimes. So what I would talk to Stalin about and Putin about are not in the same category necessarily. So let's take Putin. So I would ask him where he thinks this is going, where he thinks Russia is going to be in 25 years or 50 years. What's the long term vision? What does he anticipate the current trends are going to produce? Is he under the illusion that Russia is on the upswing, that things are actually going pretty well, that in 25 years Russia is going to still be a great power with a tremendous dynamic economy and a lot of high tech and a lot of human capital and wonderful infrastructure and a very high standard of living and a secure borders and sense of security at home. Does he think the current path is leading in that direction and if not, if he understands that the current trajectory does not provide for those kinds of circumstances, does it bother him? Does he worry about that? Does he care about the future 25 or 50 years from now? Deep down, what do you think his answer is? The honest answer? He thinks he's on that trajectory already or he doesn't care about that long term trajectory. So that's the mystery for me with him. He's clever. He has tremendous sources of information. He has great experience now as a world leader having served for effectively longer than Leonid Brezhnev's long 18 year reign. And so Putin has accumulated a great deal of experience at the highest level compared to where he started. And so I'm interested to understand how he sees this long term evolution or non evolution of Russia and whether he believes he's got them on the right trajectory or whether if he doesn't believe that he cares. I have no idea because I've never spoken to him about this, but I would love to hear the answer. Sometimes you have to ask questions not directly like that, but you have to come a little bit sideways. You can elicit answers from people by making them feel comfortable and coming sideways with them. And just a quick question. So that's talking about Russia, Putin's role in Russia. Do you think it's interesting to ask, and you could say the same for Stalin, the more personal question of how do you feel yourself about this whole thing? About your life, about your legacy, looking at the person that's one of the most powerful and important people in the history of civilization, both Putin and Stalin, you could argue. Yeah. Once you experience power at that level, it becomes something that's almost necessary for you as a human being. It's a drug. 
It's an aphrodisiac. It's a feeling. You know, you go to the gym to exercise and the endorphins, the chemicals get released. And even if you're tired or you're sore, you get this massive chemical change, which has very dynamic effects on how you feel and the kind of level of energy you have for the rest of the day. And if you do that for a long time and then you don't do it for a while, you're like a drug addict not getting your fix. You miss it. Your body misses that release of endorphins to a certain extent. That's how power works for people like Putin. That's how power works for people who run universities or are secretaries of state or run corporations, fill in the blank. In whatever ways power is exercised, it becomes almost a drug for people. It becomes something that's difficult for them to give up. It becomes a part of who they are. It becomes necessary for their sense of self and well being. The greatest people, the people I admire the most are the ones that can step away from power, can give up the drug, can be satisfied, can be stronger even by walking away from continued power when they had the option to continue. So with a person like Putin, once again, I don't know him personally, so I have no basis to judge this. This is a general statement observable with many people and in historical terms. With a person like Putin who's exercised this much power for this long, it's something that becomes a part of who you are and you have a hard time imagining yourself without it. You begin to conflate your personal power with the well being of the nation. You begin to think that the more power you have, the better off the country is this conflation. You begin to be able to not imagine, you can no longer imagine what it would be like just to be an ordinary citizen or an ordinary person running a company even, something much smaller than a country. So I anticipate that without knowing for sure that he would be in that category of person, but you'd want to explore that with questions with him about, so what's his day look like from beginning to end? Just take me through a typical day of yours. What do you do on a day? How does it start? What are the ups? What are the downs? What are the parts of the day you look forward to the most? What are the parts of the day you don't look forward to that much? What do you consider a good day? What do you consider a bad day? How do you know that what you're doing is having the effects that you intend? How do you follow up? How do you gather the information, the reaction? How do you get people to tell you to your face things that they know are uncomfortable or that you might not want to hear? Those kind of questions. And through that window, through that kind of questioning, you get a window into a man with power. So let me ask about Stalin because you've done more research, there's another amazing interview you've had, the introduction was that you know more about Stalin than Stalin himself. You've done an incredible amount of research on Stalin. So if you could talk to him, get sort of direct research, what question would you ask of Stalin? I have so many questions, I don't even know where I would begin. The thing about studying a person like Stalin, who's an immense creature, right? He's exercising the power of life and death over hundreds of millions of people. He's making decisions about novels and films and turbines and submarines and packs with Hitler or deals with Churchill and Roosevelt and occupation of Mongolia or occupation of North Korea. 
He's making phenomenally consequential decisions over all spheres of life, all areas of endeavor and over much of the globe, much of the landmass of the earth. And so what's that like? Does he sometimes reflect on the amount of power and responsibility he has that he can exercise? Does he sometimes think about what it means that a single person has that kind of power? And does it have an effect on his relations with others, his sense of self, the kinds of things he values in life? Does he sometimes think it's a mistake that he's accumulated this much power? Does he sometimes wish he had a simpler life? Or is he once again so drunk, so enamored, so caught up with chemically and spiritually with exercising this kind of power that he couldn't live without it? And then what were you thinking, I would ask him, in certain decisions that he made? What were you thinking on certain dates and certain circumstances where you made a decision and could have made a different decision? Can you recall your thought processes? Can you bring the decision back? Was it seat of the pants? Was it something you'd been planning? Did you just improvise or did you have a strategy? What were you guided by? Whose examples did you look to? When you picked up these books that you read and you read the books and you made pencil marks in them, is it because you absorbed the lesson there? Or did it really not become a permanent lesson and it was just something that you checked and it was like a reflex? So I have many specific questions about many specific events and people and circumstances that I have tried to figure out with the surviving source materials that we have in abundance. But I would still like to delve into his mindset and reconstruct his mind. The closer you get to Stalin, in some ways, the more elusive he can become. And especially around World War II, you've already illuminated a lot of interesting aspects about Stalin's role in the war, but it would be interesting to ask even more questions about how seat of the pants or deliberate some of the decisions have been. If I could ask just one quick question, one last quick question, and you're constrained in time and answering it, do you think there will always be evil in the world? Do you think there will always be war? Unfortunately, yes. There are conflicting interests, conflicting goals that people have. Most of the time, those conflicts can be resolved peacefully. That's why we build strong institutions to resolve different interests and conflicts peacefully. But the fact, the enduring fact of conflicting interests and conflicting desires, that can never be changed. So the job that we have for humanity's sake is to make those conflicting interests, those conflicting desires, to make them, to put them in a context where they can be resolved peacefully, and not in a zero sum fashion. So we can't get there on the global scale. So there's always going to be the kind of conflict that sometimes gets violent. What we don't want is a conflict among the strongest powers. Great power conflict is unbelievably bad. There are no words to describe it. At least 55 million people died in World War II. If we have a World War III, a war between the United States and China, or whatever it might be, who knows what the number could be? 155 million, 255 million, 555 million, I don't even want to think about it. And so it's horrible when wars break out in the humanitarian catastrophes. For example, Yemen and Syria and several other places I could name today. 
It's just horrible what you see there. And the scale is colossal for those places. But it's not planetary scale. And so avoiding planetary scale destruction is really important for us. And so having those different interests be somehow managed in a way that no one sees advantage in a violent resolution. And a part of that is remembering history, so they should read your books. Stephen, thank you so much. It was a huge honor talking to you today. I really enjoyed it. Thank you for the opportunity. My pleasure. Thanks for listening to this conversation with Stephen Kotkin. A thank you to our presenting sponsor, Cash App. Download it and use code LexPodcast. You'll get $10 and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to become future leaders and innovators. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or connect with me on Twitter. And now let me leave you with words from Joseph Stalin, spoken shortly before the death of Lenin and at the beginning of Stalin's rise to power. First, in Russian: Я считаю, что совершенно неважно, кто и как будет в партии голосовать. Но вот что чрезвычайно важно, это кто и как будет считать голоса. I consider it completely unimportant who in the party will vote or how, but what is extraordinarily important is who will count the votes and how. Thank you for listening and hope to see you next time.
Stephen Kotkin: Stalin, Putin, and the Nature of Power | Lex Fridman Podcast #63
The following is a conversation with Grant Sanderson. He's a math educator and creator of 3Blue1Brown, a popular YouTube channel that uses programmatically animated visualizations to explain concepts in linear algebra, calculus, and other fields of mathematics. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give us five stars on Apple Podcast, follow on Spotify, support on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has an investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square, and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Grant Sanderson. If there's intelligent life out there in the universe, do you think their mathematics is different than ours? Jumping right in. I think it's probably very different. There's an obvious sense the notation is different, right? I think notation can guide what the math itself is. I think it has everything to do with the form of their existence, right? Do you think they have basic arithmetic? Sorry, I interrupted. Yeah, so I think they count, right? I think notions like one, two, three, the natural numbers, that's extremely, well, natural. That's almost why we put that name to it. As soon as you can count, you have a notion of repetition, right? Because you can count by two, two times or three times. And so you have this notion of repeating the idea of counting, which brings you addition and multiplication. I think the way that we extend it to the real numbers, there's a little bit of choice in that. So there's this funny number system called the surreal numbers that captures the idea of continuity. It's a distinct mathematical object. You could very well model the universe and motion of planets with that as the back end of your math, right? And you still have kind of the same interface with the front end of what physical laws you're trying to, or what physical phenomena you're trying to describe with math. And I wonder if the little glimpses that we have of what choices you can make along the way based on what different mathematicians have brought to the table is just scratching the surface of what the different possibilities are if you have a completely different mode of thought, right? Or a mode of interacting with the universe.
And you think notation is a key part of the journey that we've taken through math. I think that's the most salient part that you'd notice at first. I think the mode of thought is gonna influence things more than like the notation itself. But notation actually carries a lot of weight when it comes to how we think about things, more so than we usually give it credit for. I would be comfortable saying. Do you have a favorite or least favorite piece of notation in terms of its effectiveness? Yeah, yeah, well, so least favorite, one that I've been thinking a lot about that will be a video I don't know when, but we'll see. The number e, we write the function e to the x, this general exponential function with a notation e to the x that implies you should think about a particular number, this constant of nature, and you repeatedly multiply it by itself. And then you say, oh, what's e to the square root of two? And you're like, oh, well, we've extended the idea of repeated multiplication. That's all nice, that's all nice and well. But very famously, you have like e to the pi i, and you're like, well, we're extending the idea of repeated multiplication into the complex numbers. Yeah, you can think about it that way. In reality, I think that it's just the wrong way of notationally representing this function, the exponential function, which itself could be represented a number of different ways. You can think about it in terms of the problem it solves, a certain very simple differential equation, which often yields way more insight than trying to twist the idea of repeated multiplication, like take its arm and put it behind its back and throw it on the desk and be like, you will apply to complex numbers, right? That's not, I don't think that's pedagogically helpful. So the repeated multiplication is actually missing the main point, the power of e to the x. I mean, what it addresses is things where the rate at which something changes depends on its own value, but more specifically, it depends on it linearly. So for example, if you have like a population that's growing and the rate at which it grows depends on how many members of the population are already there, it looks like this nice exponential curve. It makes sense to talk about repeated multiplication because you say, how much is there after one year, two years, three years, you're multiplying by something. The relationship can be a little bit different sometimes where let's say you've got a ball on a string, like a game of tetherball going around a rope, right? And you say, its velocity is always perpendicular to its position. That's another way of describing its rate of change is being related to where it is, but it's a different operation. You're not scaling it, it's a rotation. It's this 90 degree rotation. That's what the whole idea of like complex exponentiation is trying to capture, but it's obfuscated in the notation when what it's actually saying, like if you really parse something like e to the pi i, what it's saying is choose an origin, always move perpendicular to the vector from that origin to you, okay? Then when you walk pi times that radius, you'll be halfway around. Like that's what it's saying. It's kind of the, you turn 90 degrees and you walk, you'll be going in a circle. That's the phenomenon that it's describing, but trying to twist the idea of repeatedly multiplying a constant into that. 
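One way to put the point just made into symbols, as a standard rendering rather than anything quoted from the conversation: define the exponential function by the simple differential equation it solves, and the rotation reading of e to the pi i falls out of it.

    \frac{d}{dt}\exp(t) = \exp(t), \qquad \exp(0) = 1
    e = \exp(1) \quad \text{(growth along the real direction)}
    \exp(i\theta) = \cos\theta + i\sin\theta, \qquad \exp(i\pi) = -1 \ \text{(halfway around the unit circle)}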
Like I can't even think of the number of human hours of like intelligent human hours that have been wasted trying to parse that to their own liking and desire among like scientists or electrical engineers or students everywhere, which if the notation were a little different or the way that this whole function was introduced from the get go were framed differently, I think could have been avoided, right? And you're talking about the most beautiful equation in mathematics, but it's still pretty mysterious, isn't it? Like you're making it seem like it's a notational. It's not mysterious. I think the notation makes it mysterious. I don't think it's, I think the fact that it represents, it's pretty, it's not like the most beautiful thing in the world, but it's quite pretty. The idea that if you take the linear operation of a 90 degree rotation, and then you do this general exponentiation thing to it, that what you get are all the other kinds of rotation, which is basically to say, if your velocity vector is perpendicular to your position vector, you walk in a circle, that's pretty. It's not the most beautiful thing in the world, but it's quite pretty. The beauty of it, I think comes from perhaps the awkwardness of the notation somehow still nevertheless coming together nicely, because you have like several disciplines coming together in a single equation. Well, I think. In a sense, like historically speaking. That's true. You've got, so like the number E is significant. Like it shows up in probability all the time. It like shows up in calculus all the time. It is significant. You're seeing it sort of mated with pi, this geometric constant and I, like the imaginary number and such. I think what's really happening there is the way that E shows up is when you have things like exponential growth and decay, right? It's when this relation that something's rate of change has to itself is a simple scaling, right? A similar law also describes circular motion. Because we have bad notation, we use the residue of how it shows up in the context of self reinforcing growth, like a population growing or compound interest. The constant associated with that is awkwardly placed into the context of how rotation comes about, because they both come from pretty similar equations. And so what we see is the E and the pi juxtaposed a little bit closer than they would be with a purely natural representation, I would think. Here's how I would describe the relation between the two. You've got a very important function we might call exp. That's like the exponential function. When you plug in one, you get this nice constant called E that shows up in like probability and calculus. If you try to move in the imaginary direction, it's periodic and the period is tau. So those are these two constants associated with the same central function, but for kind of unrelated reasons, right? And not unrelated, but like orthogonal reasons. One of them is what happens when you're moving in the real direction. One's what happens when you move in the imaginary direction. And like, yeah, those are related. They're not as related as the famous equation seems to think it is. It's sort of putting all of the children in one bed and they'd kind of like to sleep in separate beds if they had the choice, but you see them all there and there is a family resemblance, but it's not that close. So actually thinking of it as a function is the better idea. And that's a notational idea. And yeah, and like, here's the thing. 
The constant E sort of stands as this numerical representative of calculus, right? Calculus is the like study of change. So at the very least there's a little cognitive dissonance using a constant to represent the science of change. I never thought of it that way. Yeah. Right? Yeah. It makes sense why the notation came about that way. Because this is the first way that we saw it in the context of things like population growth or compound interest. It is nicer to think about as repeated multiplication. That's definitely nicer. But it's more that that's the first application of what turned out to be a much more general function that maybe the intelligent life your initial question asked about would have come to recognize as being much more significant than the single use case, which lends itself to repeated multiplication notation. But let me jump back for a second to aliens and the nature of our universe. Okay. Do you think math is discovered or invented? So we're talking about the different kind of mathematics that could be developed by the alien species. The implied question is, yeah, is math discovered or invented? Is fundamentally everybody going to discover the same principles of mathematics? So the way I think about it, and everyone thinks about it differently, but here's my take. I think there's a cycle at play where you discover things about the universe that tell you what math will be useful. And that math itself is invented in a sense, but of all the possible maths that you could have invented, it's discoveries about the world that tell you which ones are. So like a good example here is the Pythagorean theorem. When you look at this, do you think of that as a definition or do you think of that as a discovery? From the historical perspective, right, it's a discovery because they were, but that's probably because they were using physical object to build their intuition. And from that intuition came the mathematics. So the mathematics wasn't in some abstract world detached from physics, but I think more and more math has become detached from, you know, when you even look at modern physics from string theory to even general relativity, I mean, all math behind the 20th and 21st century physics kind of takes a brisk walk outside of what our mind can actually even comprehend in multiple dimensions, for example, anything beyond three dimensions, maybe four dimensions. No, no, no, no, higher dimensions can be highly, highly applicable. I think this is a common misinterpretation that if you're asking questions about like a five dimensional manifold, that the only way that that's connected to the physical world is if the physical world is itself a five dimensional manifold or includes them. Well, wait, wait, wait a minute, wait a minute. You're telling me you can imagine a five dimensional manifold? No, no, that's not what I said. I would make the claim that it is useful to a three dimensional physical universe, despite itself not being three dimensional. So it's useful meaning to even understand a three dimensional world, it'd be useful to have five dimensional manifolds. Yes, absolutely, because of state spaces. But you're saying there in some deep way for us humans, it does always come back to that three dimensional world for the usefulness that the dimensional world and therefore it starts with a discovery, but then we invent the mathematics that helps us make sense of the discovery in a sense. Yes, I mean, just to jump off of the Pythagorean theorem example, it feels like a discovery. 
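As one concrete instance of the state-space point above: a double pendulum swings in ordinary space, but its state is naturally a point in a four-dimensional space. The example and names below are illustrative, a sketch rather than anything taken from the conversation.

    # A double pendulum lives in ordinary 2D/3D space, but describing "where the
    # system is right now" takes four numbers, i.e. one point of a 4D state space.
    from dataclasses import dataclass

    @dataclass
    class DoublePendulumState:
        theta1: float  # angle of the first arm (radians)
        omega1: float  # angular velocity of the first arm
        theta2: float  # angle of the second arm
        omega2: float  # angular velocity of the second arm

        def as_point(self):
            """The whole physical situation packed into one point of R^4."""
            return (self.theta1, self.omega1, self.theta2, self.omega2)

    # Questions about the motion (is it chaotic? does it revisit a configuration?)
    # become questions about curves in that higher-dimensional space.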
You've got these beautiful geometric proofs where you've got squares and you're modifying the areas, it feels like a discovery. If you look at how we formalize the idea of 2D space as being R2, right, all pairs of real numbers, and how we define a metric on it and define distance, you're like, hang on a second, we've defined a distance so that the Pythagorean theorem is true, so that suddenly it doesn't feel that great. But I think what's going on is the thing that informed us what metric to put on R2, to put on our abstract representation of 2D space, came from physical observations. And the thing is, there's other metrics you could have put on it. We could have consistent math with other notions of distance, it's just that those pieces of math wouldn't be applicable to the physical world that we study because they're not the ones where the Pythagorean theorem holds. So we have a discovery, a genuine bonafide discovery that informed the invention, the invention of an abstract representation of 2D space that we call R2 and things like that. And then from there, you just study R2 as an abstract thing that brings about more ideas and inventions and mysteries which themselves might yield discoveries. Those discoveries might give you insight as to what else would be useful to invent and it kind of feeds on itself that way. That's how I think about it. So it's not an either or. It's not that math is one of these or it's one of the others. At different times, it's playing a different role. So then let me ask the Richard Feynman question then, along that thread, is what do you think is the difference between physics and math? There's a giant overlap. There's a kind of intuition that physicists have about the world that's perhaps outside of mathematics. It's this mysterious art that they seem to possess, we humans generally possess. And then there's the beautiful rigor of mathematics that allows you to, I mean, just like as we were saying, invent frameworks of understanding our physical world. So what do you think is the difference there and how big is it? Well, I think of math as being the study of abstractions over patterns and pure patterns in logic. And then physics is obviously grounded in a desire to understand the world that we live in. I think you're gonna get very different answers when you talk to different mathematicians because there's a wide diversity in types of mathematicians. There are some who are motivated very much by pure puzzles. They might be turned on by things like combinatorics. And they just love the idea of building up a set of problem solving tools applying to pure patterns. There are some who are very physically motivated, who try to invent new math or discover math in veins that they know will have applications to physics or sometimes computer science. And that's what drives them. Like chaos theory is a good example of something that's pure math, that's purely mathematical. A lot of the statements being made, but it's heavily motivated by specific applications to largely physics. And then you have a type of mathematician who just loves abstraction. They just love pulling it to the more and more abstract things, the things that feel powerful. These are the ones that initially invented like topology and then later on get really into category theory and go on about like infinite categories and whatnot. These are the ones that love to have a system that can describe truths about as many things as possible. 
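To make the earlier point about metrics concrete, a minimal sketch: the Pythagorean (p = 2) distance is the one that physical measurement singles out, but other internally consistent notions of distance can be put on the same pairs of numbers. The function name and sample points are made up for illustration.

    def p_distance(a, b, p):
        """Distance between 2D points a and b under the p-norm."""
        return (abs(a[0] - b[0]) ** p + abs(a[1] - b[1]) ** p) ** (1.0 / p)

    a, b = (0.0, 0.0), (3.0, 4.0)
    print(p_distance(a, b, 2))  # 5.0 -- the Pythagorean answer, the one rulers in our world agree with
    print(p_distance(a, b, 1))  # 7.0 -- "taxicab" distance, perfectly consistent math, different geometry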
People from those three different veins of motivation into math are gonna give you very different answers about what the relation at play here is. Cause someone like Vladimir Arnold, who has written a lot of great books, many about like differential equations and such, he would say, math is a branch of physics. That's how he would think about it. And of course he was studying like differential equations related things because that is the motivator behind the study of PDEs and things like that. But you'll have others who, like especially the category theorists who aren't really thinking about physics necessarily. It's all about abstraction and the power of generality. And it's more of a happy coincidence that that ends up being useful for understanding the world we live in. And then you can get into like, why is that the case? It's sort of surprising that that which is about pure puzzles and abstraction also happens to describe the very fundamentals of quarks and everything else. So why do you think the fundamentals of quarks and the nature of reality is so compressible into clean, beautiful equations that are for the most part simple, relatively speaking, a lot simpler than they could be? So you have, we mentioned somebody like Stephen Wolfram who thinks that sort of there's incredibly simple rules underlying our reality, but it can create arbitrary complexity. But there is simple equations. What, I'm asking a million questions that nobody knows the answer to, but. I have no idea, why is it simple? It could be the case that there's like a filter iteration at play. The only things that physicists find interesting are the ones that are simple enough they could describe it mathematically. But as soon as it's a sufficiently complex system, like, oh, that's outside the realm of physics, that's biology or whatever have you. And of course, that's true. Maybe there's something where it's like, of course there will always be something that is simple when you wash away the like non important parts of whatever it is that you're studying. Just from like an information theory standpoint, there might be some like, you get to the lowest information component of it. But I don't know, maybe I'm just having a really hard time conceiving of what it would even mean for the fundamental laws to be like intrinsically complicated, like some set of equations that you can't decouple from each other. Well, no, it could be that sort of we take for granted that the laws of physics, for example, are for the most part the same everywhere or something like that, right? As opposed to the sort of an alternative could be that the rules under which the world operates is different everywhere. It's like a deeply distributed system where just everything is just chaos, not in a strict definition of chaos, but meaning like just it's impossible for equations to capture, for to explicitly model the world as cleanly as the physical does. I mean, we almost take it for granted that we can describe, we can have an equation for gravity, for action at a distance. We can have equations for some of these basic ways the planet's moving. Just the low level at the atomic scale, how the materials operate, at the high scale, how black holes operate. But it doesn't, it seems like it could be, there's infinite other possibilities where none of it could be compressible into such equations. So it just seems beautiful. It's also weird, probably to the point you're making, that it's very pleasant that this is true for our minds. 
So it might be that our minds are biased to just be looking at the parts of the universe that are compressible. And then we can publish papers on and have nice E equals m c squared equations. Right, well, I wonder would such a world with uncompressible laws allow for the kind of beings that can think about the kind of questions that you're asking? That's true. Right, like an anthropic principle coming into play in some weird way here? I don't know, like I don't know what I'm talking about at all. Maybe the universe is actually not so compressible, but the way our brain evolved, we're only able to perceive the compressible parts. I mean, we are, so this is the sort of Chomsky argument. We are just descendants of apes, like really limited biological systems. So it totally makes sense that we're really limited little computers, calculators, that are able to perceive certain kinds of things and the actual world is much more complicated. Well, but we can do pretty awesome things, right? Like we can fly spaceships and we have to have some connection of reality to be able to take our potentially oversimplified models of the world, but then actually twist the world to our will based on it. So we have certain reality checks that like physics isn't too far afield simply based on what we can do. Yeah, the fact that we can fly is pretty good. It's great, yeah, like it's a proof of concept that the laws we're working with are working well. So I mentioned to the internet that I'm talking to you and so the internet gave some questions. So I apologize for these, but do you think we're living in a simulation, that the universe is a computer or the universe is a computation running on a computer? It's conceivable. What I don't buy is, you know, you'll have the argument that, well, let's say that it was the case that you can have simulations. Then the simulated world would itself eventually get to a point where it's running simulations. And then the second layer down would create a third layer down and on and on and on. So probabilistically, you just throw a dart at one of those layers, we're probably in one of the simulated layers. I think if there's some sort of limitations on like the information processing of whatever the physical world is, like it quickly becomes the case that you have a limit to the layers that could exist there because like the resources necessary to simulate a universe like ours clearly is a lot just in terms of the number of bits at play. And so then you can ask, well, what's more plausible? That there's an unbounded capacity of information processing in whatever the like highest up level universe is, or that there's some bound to that capacity, which then limits like the number of levels available. How do you place some kind of probability distribution on like what the information capacity is? I have no idea. But I don't, like people almost assume a certain uniform probability over all of those meta layers that could conceivably exist when it's a little bit like a Pascal's wager on like you're not giving a low enough prior to the mere existence of that infinite set of layers. Yeah, that's true. But it's also very difficult to contextualize the amount. So the amount of information processing power required to simulate like our universe seems like amazingly huge. But you can always raise two to the power of that. Yeah, like numbers get big. And we're easily humbled by basically everything around us.
So it's very difficult to kind of make sense of anything actually when you look up at the sky and look at the stars and the immensity of it all, to make sense of the smallness of us, the unlikeliness of everything that's on this earth coming to be, then basically anything could be, all laws of probability go out the window to me, I guess because the amount of information under which we're operating is very low. We basically know nothing about the world around us, relatively speaking. And so when I think about the simulation hypothesis, I think it's just fun to think about it. But it's also, I think, a thought experiment that's kind of interesting for thinking about the power of computation, whether the limits of a Turing machine, sort of the limits of our current computers, when you start to think about artificial intelligence, how far can we get with computers? And that's kind of where the simulation hypothesis is useful to me as a thought experiment: is the universe just a computer? Is it just a computation? Is all of this just a computation? And sort of the same kind of tools we apply to analyzing algorithms, can that be applied? If we scale further and further and further, will the arbitrary power of those systems start to create some interesting aspects that we see in our universe? Or does something fundamentally different need to be created? Well, it's interesting that in our universe, it's not arbitrarily large, the power, that you can place limits on, for example, how many bits of information can be stored per unit area. Right, like all of the physical laws, you've got general relativity and quantum coming together to give you a certain limit on how many bits you can store within a given range before it collapses into a black hole. The idea that there even exists such a limit is at the very least thought provoking, when naively you might assume, oh, well, technology could always get better and better, we could get cleverer and cleverer, and you could just cram as much information as you want into like a small unit of space, that makes me think, it's at least plausible that whatever the highest level of existence is doesn't admit too many simulations or ones that are at the scale of complexity that we're looking at. Obviously, it's just as conceivable that they do and that there are many, but I guess what I'm channeling is the surprise that I felt upon learning that fact, that there are, that information is physical in this way. There's a finiteness to it. Okay, let me just even go off on that. From a mathematics perspective and a psychology perspective, how do you mix, are you psychologically comfortable with the concept of infinity? I think so. Are you okay with it? I'm pretty okay, yeah. Are you okay? No, not really, it doesn't make any sense to me. I don't know, like how many words, how many possible words do you think could exist that are just like strings of letters? So that's a sort of mathematical statement that's beautiful, and we use infinity in basically everything we do, everything we do in science, math, and engineering, yes. But you said exist, the question is, you said letters or words? I said words. Words. To bring words into existence to me, you have to start like saying them or like writing them or like listing them. That's an instantiation. Okay, how many abstract words exist? Well, the idea of an abstract. The idea of abstract notions and ideas. I think we should be clear on terminology. I mean, you think about intelligence a lot, like artificial intelligence.
Would you not say that what it's doing is a kind of abstraction? That like abstraction is key to conceptualizing the universe? You get this raw sensory data. I need something that every time you move your face a little bit and they're not pixels, but like analog of pixels on my retina changed entirely, that I can still have some coherent notion of this is Lex, I'm talking to Lex, right? What that requires is you have a disparate set of possible images hitting me that are unified in a notion of Lex, right? That's a kind of abstraction. It's a thing that could apply to a lot of different images that I see and it represents it in a much more compressed way and one that's like much more resilient to that. I think in the same way, if I'm talking about infinity as an abstraction, I don't mean nonphysical woo woo, like ineffable or something. What I mean is it's something that can apply to a multiplicity of situations that share a certain common attribute in the same way that the images of like your face on my retina share enough common attributes that I can put the single notion to it. Like in that way, infinity is an abstraction and it's very powerful and it's only through such abstractions that we can actually understand like the world and logic and things. And in the case of infinity, the way I think about it, the key entity is the property of always being able to add one more. Like no matter how many words you can list, you just throw an A at the end of one and you have another conceivable word. You don't have to think of all the words at once. It's that property, the oh, I could always add one more that gives it this nature of infiniteness in the same way that there's certain like properties of your face that give it the Lexness, right? So like infinity should be no more worrying than the I can always add one more sentiment. That's a really elegant, much more elegant way than I could put it. So thank you for doing that as yet another abstraction. And yes, indeed, that's what our brain does. That's what intelligent systems do. That's what programming does. That's what science does is build abstraction on top of each other. And yet there is at a certain point abstractions that go into the quote woo, right? Sort of, and because we're now, it's like we built this stack of, you know, the only thing that's true is the stuff that's on the ground. Everything else is useful for interpreting this. And at a certain point you might start floating into ideas that are surreal and difficult and take us into areas that are disconnected from reality in a way that we could never get back. What if instead of calling these abstract, how different would it be in your mind if we called them general? And the phenomenon that you're describing is overgeneralization. When you try to have a concept or an idea that's so general as to apply to nothing in particular in a useful way, does that map to what you're thinking of when you think of? First of all, I'm playing little just for the fun of it. Devil's advocate. And I think our cognition, our mind is unable to visualize. So you do some incredible work with visualization and video. I think infinity is very difficult to visualize for our mind. We can delude ourselves into thinking we can visualize it, but we can't. I don't, I mean, I don't, I would venture to say it's very difficult. 
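A toy rendering of that "I can always add one more" property, purely for illustration: the code never holds all the words at once, it only knows the rule for producing the next one.

    from itertools import islice

    def words():
        """Yields word after word; whatever word you have, appending a letter gives another."""
        w = "a"
        while True:
            yield w
            w += "a"

    print(list(islice(words(), 5)))  # ['a', 'aa', 'aaa', 'aaaa', 'aaaaa']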
And so there's some concepts of mathematics, like maybe multiple dimensions, we could sort of talk about that are impossible for us to truly intuit, like, and it just feels dangerous to me to use these as part of our toolbox of abstractions. On behalf of your listeners, I almost fear we're getting too philosophical. Right? Heck no. Heck no. I think to that point for any particular idea like this, there's multiple angles of attack. I think the, when we do visualize infinity, what we're actually doing, you know, you write dot, dot, dot, right? One, two, three, four, dot, dot, dot, right? Those are symbols on the page that are insinuating a certain infinity. What you're capturing with a little bit of design there is the I can always add one more property, right? I think I'm just as uncomfortable as you are if you try to concretize it so much that you have a bag of infinitely many things that I actually think of, no, not one, two, three, four, dot, dot, dot, one, two, three, four, five, six, seven, eight. I try to get them all in my head and you realize, oh, you know, your brain would literally collapse into a black hole, all of that. And I honestly feel this with a lot of math that I try to read where I don't think of myself as like particularly good at math in some ways. Like I get very confused often when I am going through some of these texts. And often what I'm feeling in my head is like, this is just so damn abstract. I just can't wrap my head around it. I just want to put something concrete to it that makes me understand. And I think a lot of the motivation for the channel is channeling that sentiment of, yeah, a lot of the things that you're trying to read out there, it's just so hard to connect to anything that you spend an hour banging your head against a couple of pages and you come out not really knowing anything more other than some definitions maybe and a certain sense of self defeat, right? One of the reasons I focus so much on visualizations is that I'm a big believer in, I'm sorry, I'm just really harping on this idea of abstraction, being clear about your layers of abstraction, right? It's always tempting to start an explanation from the top to the bottom, okay? You give the definition of a new theorem. You're like, this is the definition of a vector space. For example, that's how we'll start a course. These are the properties of a vector space. First from these properties, we will derive what we need in order to do the math of linear algebra or whatever it might be. I don't think that's how understanding works at all. I think how understanding works is you start at the lowest level you can get at where rather than thinking about a vector space, you might think of concrete vectors that are just lists of numbers or picturing it as like an arrow that you draw, which is itself like even less abstract than numbers because you're looking at quantities, like the distance of the x coordinate, the distance of the y coordinate. It's as concrete as you could possibly get and it has to be if you're putting it in a visual, right? It's an actual arrow. It's an actual vector. You're not talking about like a quote unquote vector that could apply to any possible thing. You have to choose one if you're illustrating it. And I think this is the power of being in a medium like video or if you're writing a textbook and you force yourself to put a lot of images is with every image, you're making a choice. With each choice, you're showing a concrete example.
With each concrete example, you're aiding someone's path to understanding. I'm sorry to interrupt you, but you just made me realize that that's exactly right. So the visualizations you're creating while you're sometimes talking about abstractions, the actual visualization is an explicit low level example. Yes. So there's an actual, like in the code, you have to say what the vector is, what's the direction of the arrow, what's the magnitude of the, yeah. So that's, you're going, the visualization itself is actually going to the bottom of that. And I think that's very important. I also think about this a lot in writing scripts where even before you get to the visuals, the first instinct is to, I don't know why, I just always do, I say the abstract thing, I say the general definition, the powerful thing, and then I fill it in with examples later. Always, it will be more compelling and easier to understand when you flip that. And instead, you let someone's brain do the pattern recognition. You just show them a bunch of examples. The brain is gonna feel a certain similarity between them. Then by the time you bring in the definition, or by the time you bring in the formula, it's articulating a thing that's already in the brain that was built off of looking at a bunch of examples with a certain kind of similarity. And what the formula does is articulate what that kind of similarity is, rather than being a high cognitive load set of symbols that needs to be populated with examples later on, assuming someone's still with you. What is the most beautiful or awe inspiring idea you've come across in mathematics? I don't know, man. Maybe it's an idea you've explored in your videos, maybe not. What just gave you pause? What's the most beautiful idea? Small or big. So I think often, the things that are most beautiful are the ones that you have a little bit of understanding of, but certainly not an entire understanding. It's a little bit of that mystery that is what makes it beautiful. What was the moment of the discovery for you personally, almost just that leap of aha moment? So something that really caught my eye, I remember when I was little, there were these, I think the series was called like wooden books or something, these tiny little books that would have just a very short description of something on the left and then a picture on the right. I don't know who they're meant for, but maybe it's like loosely children or something like that. But it can't just be children, because of some of the things I was describing. On the last page of one of them, somewhere tiny in there was this little formula that on the left hand had a sum over all of the natural numbers. It's like one over one to the S plus one over two to the S plus one over three to the S on and on to the infinity. Then on the other side had a product over all of the primes and it was a certain thing had to do with all the primes. And like any good young math enthusiast, I'd probably been indoctrinated with how chaotic and confusing the primes are, which they are. And seeing this equation where on one side you have something that's as understandable as you could possibly get, the counting numbers. And on the other side is all the prime numbers. It was like this, whoa, they're related like this? There's a simple description that includes all the primes getting wrapped together like this. This is like the Euler product for the Zeta function, as I like later found out. 
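Written out, the identity being described is the Euler product for the zeta function; this is a standard rendering of the formula, not the book's exact typography.

    \sum_{n=1}^{\infty} \frac{1}{n^{s}} \;=\; \prod_{p\ \mathrm{prime}} \frac{1}{1 - p^{-s}}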
The equation itself essentially encodes the fundamental theorem of arithmetic that every number can be expressed as a unique set of primes. To me still there's, I mean, I certainly don't understand this equation or this function all that well. The more I learn about it, the prettier it is. The idea that you can, this is sort of what gets you representations of primes, not in terms of primes themselves, but in terms of another set of numbers. They're like the non trivial zeros of the Zeta function. And again, I'm very kind of in over my head in a lot of ways as I like try to get to understand it. But the more I do, it always leaves enough mystery that it remains very beautiful to me. So whenever there's a little bit of mystery just outside of the understanding that, and by the way, the process of learning more about it, how does that come about? Just your own thought or are you reading? Reading, yeah. Or is the process of visualization itself revealing more to you? Visuals help. I mean, one time when I was just trying to understand like analytic continuation and playing around with visualizing complex functions, this is what led to a video about this function. It's titled something like Visualizing the Riemann Zeta Function. It's one that came about because I was programming and tried to see what a certain thing looked like. And then I looked at it and I'm like, whoa, that's elucidating. And then I decided to make a video about it. But I mean, you try to get your hands on as much reading as you can. You know, in this case, I think if anyone wants to start to understand it, if they have like a math background like they studied some in college or something like that, like the Princeton Companion to Math has a really good article on analytic number theory. And that itself has a whole bunch of references and, you know, everything has more references and it gives you this like tree to start piling through. And like, you know, you try to understand, I try to understand things visually as I go. That's not always possible, but it's very helpful when it is. You recognize when there's common themes, like in this case, cousins of the Fourier transform that come into play and you realize, oh, it's probably pretty important to have deep intuitions of the Fourier transform, even if it's not explicitly mentioned in like these texts. And you try to get a sense of what the common players are. But I'll emphasize again, like, I feel very in over my head when I try to understand the exact relation between like the zeros of the Riemann Zeta function and how they relate to the distribution of primes. I definitely understand it better than I did a year ago. I definitely understand it only about one hundredth as well as the experts on the matter do, I assume. But the slow path towards getting there is, it's fun, it's charming, and like to your question, very beautiful. And the beauty is in the, what, in the journey versus the destination? Well, it's that each thing doesn't feel arbitrary. I think that's a big part, is that you have these unpredictable, no, yeah, these very unpredictable patterns or these intricate properties of like a certain function. But at the same time, it doesn't feel like humans ever made an arbitrary choice in studying this particular thing. So, you know, it feels like you're speaking to patterns themselves or nature itself. That's a big part of it.
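For anyone who wants to try the kind of experiment described above, here is a rough sketch of plotting the magnitude of the zeta function on a grid in the complex plane. It assumes the mpmath and matplotlib libraries are available, the grid and color scale are arbitrary choices, and it runs slowly, since every grid point is a separate high-precision evaluation.

    import numpy as np
    import matplotlib.pyplot as plt
    from mpmath import zeta

    re = np.linspace(-2, 4, 120)
    im = np.linspace(0.1, 40, 200)
    mag = np.array([[abs(complex(zeta(complex(x, y)))) for x in re] for y in im])

    plt.imshow(np.log(mag), origin="lower",
               extent=[re[0], re[-1], im[0], im[-1]], aspect="auto")
    plt.xlabel("Re(s)")
    plt.ylabel("Im(s)")
    plt.show()  # the dark spots that line up near Re(s) = 1/2 are the non-trivial zeros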
I think things that are too arbitrary, it's just hard for those to feel beautiful because this is sort of what the word contrived is meant to apply to, right? And when they're not arbitrary, it means you can have a clean abstraction and intuition that allows you to comprehend it. Well, to one of your first questions, it makes you feel like if you came across another intelligent civilization, that they'd be studying the same thing. Maybe with different notation. Certainly, yeah, but yeah. Like that's what, I think if you talked to that other civilization, they're probably also studying the zeros of the Riemann Zeta function or like some variant thereof that is like a clearly equivalent cousin or something like that. But that's probably on their docket. Whenever somebody does a lot of something amazing, I'm gonna ask the question that you've already been asked a lot and that you'll get more and more asked in your life. But what was your favorite video to create? Oh, favorite to create. One of my favorites is, the title is Who Cares About Topology? You want me to pull it up or no? If you want, sure, yeah. It is about, well, it starts by describing an unsolved problem that's still unsolved in math called the inscribed square problem. You draw any loop and then you ask, are there four points on that loop that make a square? Totally useless, right? This is not answering any physical questions. It's mostly interesting that we can't answer that question. And it seems like such a natural thing to ask. Now, if you weaken it a little bit and you ask, can you always find a rectangle? You choose four points on this curve, can you find a rectangle? That's hard, but it's doable. And the path to it involves things like looking at a torus, this surface with a single hole in it, like a donut, or looking at a Möbius strip. In ways that feel so much less contrived than when I first, as like a little kid, learned about these surfaces and shapes, like a Möbius strip and a torus. Like what you learn is, oh, this Möbius strip, you take a piece of paper, put a twist, glue it together, and now you have a shape with one edge and just one side. And as a student, you should think, who cares, right? Like, how does that help me solve any problems? I thought math was about problem solving. So what I liked about the piece of math that this was describing that was in this paper by a mathematician named Vaughan was that it arises very naturally. It's clear what it represents. It's doing something. It's not just playing with construction paper. And the way that it solves the problem is really beautiful. So kind of putting all of that down and concretizing it, right? Like I was talking about how when you have to put visuals to it, it demands that what's on screen is a very specific example of what you're describing. The construction here is very abstract in nature. You describe this very abstract kind of surface in 3D space. So then I found myself, in this case, not programming; I was using a grapher that's like built into OS X for the 3D stuff to draw that surface, and you realize, oh man, the topology argument is very non constructive. I have to make a lot of, you have to do a lot of extra work in order to make the surface show up. But then once you see it, it's quite pretty and it's very satisfying to see a specific instance of it. And you also feel like, ah, I've actually added something on top of what the original paper was doing; it shows something that's completely correct.
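As a rough numerical companion to the construction just described, here is the pair-to-(midpoint, distance) idea in code. The particular closed curve is an arbitrary made-up example, numpy is assumed, and this is only a sketch of the mapping the argument is built on, not the paper's proof.

    import numpy as np

    def loop(t):
        """An arbitrary closed curve in the plane, parameterized by t in [0, 1)."""
        return np.array([np.cos(2 * np.pi * t) + 0.3 * np.cos(6 * np.pi * t),
                         np.sin(2 * np.pi * t) + 0.3 * np.sin(4 * np.pi * t)])

    def pair_signature(t1, t2):
        """Send an unordered pair of points on the loop to (midpoint_x, midpoint_y, distance).
        Two distinct pairs with the same signature form the diagonals of an inscribed
        rectangle, since a rectangle's diagonals share their midpoint and their length."""
        a, b = loop(t1), loop(t2)
        mid = (a + b) / 2.0
        return (mid[0], mid[1], float(np.linalg.norm(a - b)))

    # Sweeping pair_signature over all pairs traces out the surface in 3D space that
    # the video draws; a self-intersection of that surface is an inscribed rectangle.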
That's a very beautiful argument, but you don't see what it looks like. And I found something satisfying in seeing what it looked like that could only ever have come about from the forcing function of getting some kind of image on the screen to describe the thing I was talking about. So you almost weren't able to anticipate what it's gonna look like. I had no idea. I had no idea. And it was wonderful, right? It was totally, it looks like a Sydney Opera House or some sort of Frank Gehry design. And it was, you knew it was gonna be something and you can say various things about it. Like, oh, it touches the curve itself. It has a boundary that's this curve on the 2D plane. It all sits above the plane. But before you actually draw it, it's very unclear what the thing will look like. And to see it, it's very, it's just pleasing, right? So that was fun to make, very fun to share. I hope that it has elucidated for some people out there where these constructs of topology come from, that it's not arbitrary play with construction paper. So let's, I think this is a good sort of example to talk a little bit about your process. You have a list of ideas. So that's sort of the curse of having an active and brilliant mind is I'm sure you have a list that's growing faster than you can utilize. Now I'm ahead, absolutely. But there's some sorting procedure depending on mood and interest and so on. But okay, so you pick an idea and then you have to try to write a narrative arc that sort of, how do I elucidate? How do I make this idea beautiful and clear and explain it? And then there's a set of visualizations that will be attached to it. Sort of, you've talked about some of this before, but sort of writing the story, attaching the visualizations. Can you talk through interesting, painful, beautiful parts of that process? Well, the most painful is if you've chosen a topic that you do want to do, but then it's hard to think of, I guess how to structure the script. This is sort of where I have been on one for like the last two or three months. And I think that ultimately the right resolution is just like set it aside and instead do some other things where the script comes more naturally. Because you sort of don't want to overwork a narrative. The more you've thought about it, the less you can empathize with the student who doesn't yet understand the thing you're trying to teach. Who is the judger in your head? Sort of the person, the creature, the essence that's saying this sucks or this is good. And you mentioned kind of the student you're thinking about. Can you, who is that? What is that thing? That says, the perfectionist that says this thing sucks. You need to work on that for another two, three months. I don't know. I think it's my past self. I think that's the entity that I'm most trying to empathize with is like you take who I was, because that's kind of the only person I know. Like you don't really know anyone other than versions of yourself. So I start with the version of myself that I know who doesn't yet understand the thing, right? And then I just try to view it with fresh eyes, a particular visual or a particular script. Like, is this motivating? Does this make sense? Which has its downsides, because sometimes I find myself speaking to motivations that only myself would be interested in. I don't know, like I did this project on quaternions where what I really wanted was to understand what are they doing in four dimensions? Can we see what they're doing in four dimensions, right? 
And I came up with a way of thinking about it that really answered the question in my head that made me very satisfied and being able to think about concretely with a 3D visual, what are they doing to a 4D sphere? And so I'm like, great, this is exactly what my past self would have wanted, right? And I make a thing on it. And I'm sure it's what some other people wanted too. But in hindsight, I think most people who wanna learn about quaternions are like robotics engineers or graphics programmers who want to understand how they're used to describe 3D rotations. And like their use case was actually a little bit different than my past self. And in that way, like, I wouldn't actually recommend that video to people who are coming at it from that angle of wanting to know, hey, I'm a robotics programmer. Like, how do these quaternion things work to describe position in 3D space? I would say there are other great resources for that. If you ever find yourself wanting to say like, but hang on, in what sense are they acting in four dimensions? Then come back. But until then, that's a little different. Yeah, it's interesting because you have incredible videos on neural networks, for example. And from my sort of perspective, because, I mean, it's sort of my field, and I've also looked at the basic introduction of neural networks like a million times from different perspectives. And it made me realize that there's a lot of ways to present it. So you were sort of, you did an incredible job, but you could also do it differently and it would also be incredible. Like to create a beautiful presentation of a basic concept requires sort of creativity, requires genius and so on, but you can take it from a bunch of different perspectives. And that video on neural networks made me realize that. And just as you're saying, you kind of have a certain mindset, a certain view, but from a, if you take a different view from a physics perspective, from a neuroscience perspective, talking about neural networks or from a robotics perspective, or from, let's see, from a pure learning, statistics perspective. So you can create totally different videos. And you've done that with a few concepts actually, where you have taken different cuts, like at the Euler equation, right? You've taken different views of that. I think I've made three videos on it and I definitely will make at least one more. Right? Never enough. Never enough. So you don't think it's the most beautiful equation in mathematics? Like I said, as we represent it, it's one of the most hideous. It involves a lot of the most hideous aspects of our notation. I talked about E, the fact that we use pi instead of tau, the fact that we call imaginary numbers imaginary, and then, hence, I actually wonder if we use the I because of imaginary. I don't know if that's historically accurate, but at least a lot of people, they read the I and they think imaginary. Like all three of those facts, it's like those are things that have added more confusion than they needed to, and we're wrapping them up in one equation. Like boy, that's just very hideous, right? The idea is that it does tie together when you wash away the notation. Like it's okay, it's pretty, it's nice, but it's not like mind blowing greatest thing in the universe, which is maybe what I was thinking of when I said, like once you understand something, it doesn't have the same beauty.
Like I feel like I understand Euler's formula, and I feel like I understand it enough to sort of see the version that just woke up that hasn't really gotten itself dressed in the morning that's a little bit groggy, and there's bags under its eyes. So you're past the dating stage, you're no longer dating, right? I'm still dating the Zeta function, and like she's beautiful and right, and like we have fun, and it's that high dopamine part, but like maybe at some point we'll settle into the more mundane nature of the relationship where I like see her for who she truly is, and she'll still be beautiful in her own way, but it won't have the same romantic pizzazz, right? Well, that's the nice thing about mathematics. I think as long as you don't live forever, there'll always be enough mystery and fun with some of the equations. Even if you do, the rate at which questions comes up is much faster than the rate at which answers come up, so. If you could live forever, would you? I think so, yeah. So you think, you don't think mortality is the thing that makes life meaningful? Would your life be four times as meaningful if you died at 25? So this goes to infinity. I think you and I, that's really interesting. So what I said is infinite, not four times longer. I said infinite. So the actual existence of the finiteness, the existence of the end, no matter the length, is the thing that may sort of, from my comprehension of psychology, it's such a deeply human, it's such a fundamental part of the human condition, the fact that there is, that we're mortal, that the fact that things end, it seems to be a crucial part of what gives them meaning. I don't think, at least for me, it's a very small percentage of my time that mortality is salient, that I'm aware of the end of my life. What do you mean by me? I'm trolling. Is it the ego, is it the id, or is it the superego? The reflective self, the Wernicke's area that puts all this stuff into words. Yeah, a small percentage of your mind that is actually aware of the true motivations that drive you. But my point is that most of my life, I'm not thinking about death, but I still feel very motivated to make things and to interact with people, experience love or things like that. I'm very motivated, and it's strange that that motivation comes while death is not in my mind at all. And this might just be because I'm young enough that it's not salient. Or it's in your subconscious, or that you've constructed an illusion that allows you to escape the fact of your mortality by enjoying the moment, sort of the existential approach to life. Could be. Gun to my head, I don't think that's it. Yeah, another sort of way to say gun to the head is sort of the deep psychological introspection of what drives us. I mean, that's, in some ways to me, I mean, when I look at math, when I look at science, is a kind of an escape from reality in a sense that it's so beautiful. It's such a beautiful journey of discovery that it allows you to actually, it sort of allows you to achieve a kind of immortality of explore ideas and sort of connect yourself to the thing that is seemingly infinite, like the universe, right? That allows you to escape the limited nature of our little, of our bodies, of our existence. What else would give this podcast meaning? That's right. If not the fact that it will end. This place closes in 40 minutes. And it's so much more meaningful for it. How much more I love this room because we'll be kicked out. 
So I understand, just because you're trolling me doesn't mean I'm wrong. But I take your point. I take your point. Boy, that would be a good Twitter bio. Just because you're trolling me doesn't mean I'm wrong. Yeah, and it's sort of a difference in backgrounds. I'm a bit Russian, so we're a bit melancholic and seem to maybe assign a little too much value to suffering and mortality and things like that. Makes for a better novel, I think. Oh yeah, you need some sort of existential threat to drive a plot. So how do you know when the video is done, when you're working on it? That's pretty easy actually, because I'll write the script. I want there to be some kind of aha moment in there. And then hopefully the script can revolve around some kind of aha moment. And then from there, you're putting visuals to each sentence that exists, and then you narrate it, you edit it all together. So given that there's a script, the end becomes quite clear. And as I animate it, I often change certainly the specific words, but sometimes the structure itself. But it's a very deterministic process at that point. It makes it much easier to predict when something will be done. How do you know when a script is done? For problem solving videos, that's quite simple. It's once you feel like someone who didn't understand the solution now could. For things like neural networks, that was a lot harder because, like you said, there's so many angles at which you could attack it. And there, it's just at some point you feel like this asks a meaningful question and it answers that question, right? What is the best way to learn math for people who might be at the beginning of that journey? I think that's a question that a lot of folks kind of ask and think about. And it's not even just for folks who are at the beginning of their journey; they might actually be deep in their career, at some point they've taken college calculus and so on, but still wanna sort of explore math. What would be your advice in terms of education at all ages? Your temptation will be to spend more time like watching lectures or reading. Try to force yourself to do more problems than you naturally would. That's a big one. Like the focused time that you're spending should be on solving specific problems, and seek out entities that have well curated lists of problems. So go into like a textbook almost, and the problems at the back of a chapter kind of thing. So if you can take a little look through those questions at the end of the chapter before you read the chapter, a lot of them won't make sense. Some of them might, and those are the best ones to think about. A lot of them won't, but just take a quick look and then read a little bit of the chapter and then maybe take a look again and things like that. And don't consider yourself done with the chapter until you've actually worked through a couple exercises. And this is so hypocritical, right? Cause I like put out videos that pretty much never have associated exercises. I just view myself as a different part of the ecosystem, which means I'm kind of admitting that you're not really learning, or at least this is only a partial part of the learning process, if you're watching these videos. I think if someone's at the very beginning, like I do think Khan Academy does a good job. They have a pretty large set of questions you can work through.
Just the very basics, sort of just picking up, getting comfortable with the very basic linear algebra, calculus or so on, Khan Academy. Programming is actually I think a great, like learn to program and like let the way that math is motivated from that angle push you through. I know a lot of people who didn't like math got into programming in some way and that's what turned them on to math. Maybe I'm biased cause like I live in the Bay area, so I'm more likely to run into someone who has that phenotype. But I am willing to speculate that that is a more generalizable path. So you yourself kind of in creating the videos are using programming to illuminate a concept, but for yourself as well. So would you recommend somebody try to make a, sort of almost like try to make videos? Like you do as a way to learn? So one thing I've heard before, I don't know if this is based on any actual study. This might be like a total fictional anecdote of numbers, but it rings in the mind as being true. You remember about 10% of what you read, you remember about 20% of what you listen to, you remember about 70% of what you actively interact with in some way, and then about 90% of what you teach. This is a thing I heard again, those numbers might be meaningless, but they ring true, don't they, right? I'm willing to say I learned nine times better if I'm teaching something than reading. That might even be a low ball, right? So doing something to teach or to like actively try to explain things is huge for consolidating the knowledge. Outside of family and friends, is there a moment you can remember that you would like to relive because it made you truly happy or it was transformative in some fundamental way? A moment that was transformative. Or made you truly happy? Yeah, I think there's times, like music used to be a much bigger part of my life than it is now, like when I was a, let's say a teenager, and I can think of some times in like playing music. There was one, like my brother and a friend of mine, so this slightly violates the family and friends, but it was the music that made me happy. They were just accompanying. We like played a gig at a ski resort such that you like take a gondola to the top and like did a thing. And then on the gondola ride down, we decided to just jam a little bit. And it was just like, I don't know, the gondola sort of came over a mountain and you saw the city lights and we're just like jamming, like playing some music. I wouldn't describe that as transformative. I don't know why, but that popped into my mind as a moment of, in a way that wasn't associated with people I love, but more with like a thing I was doing, something that was just, it was just happy and it was just like a great moment. I don't think I can give you anything deeper than that. Well, as a musician myself, I'd love to see, as you mentioned before, music enter back into your work, back into your creative work. I'd love to see that. I'm certainly allowing it to enter back into mine. And it's a beautiful thing for a mathematician, for a scientist to allow music to enter their work. I think only good things can happen. All right, I'll try to promise you a music video by 2020. By 2020? By the end of 2020. Okay, all right, good. Give myself a longer window. All right, maybe we can like collaborate on a band type situation. What instruments do you play? The main instrument I play is violin, but I also love to dabble around on like guitar and piano. Beautiful, me too, guitar and piano. 
So in a mathematician's lament, Paul Lockhart writes, the first thing to understand is that mathematics is an art. The difference between math and the other arts, such as music and painting, is that our culture does not recognize it as such. So I think I speak for millions of people, myself included, in saying thank you for revealing to us the art of mathematics. So thank you for everything you do and thanks for talking today. Wow, thanks for saying that. And thanks for having me on. Thanks for listening to this conversation with Grant Sanderson. And thank you to our presenting sponsor, Cash App. Download it, use code LEXPodcast. You'll get $10 and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to become future leaders and innovators. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, support on Patreon, or connect with me on Twitter. And now, let me leave you with some words of wisdom from one of Grant's and my favorite people, Richard Feynman. Nobody ever figures out what this life is all about, and it doesn't matter. Explore the world. Nearly everything is really interesting if you go into it deeply enough. Thank you for listening, and hope to see you next time.
Grant Sanderson: 3Blue1Brown and the Beauty of Mathematics | Lex Fridman Podcast #64
The following is a conversation with Daniel Kahneman, winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision making. He's the author of the popular book Thinking, Fast and Slow that summarizes in an accessible way his research of several decades, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is the dichotomy between two modes of thought. What he calls system one is fast, instinctive, and emotional. System two is slower, more deliberative, and more logical. The book delineates cognitive biases associated with each of these two types of thinking. His study of the human mind and its peculiar and fascinating limitations is both instructive and inspiring for those of us seeking to engineer intelligent systems. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, follow on Spotify, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say one dollar's worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating at Charity Navigator, which means that donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10 and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now here's my conversation with Daniel Kahneman. You tell a story of an SS soldier early in the war, World War II, in Nazi occupied France in Paris, where you grew up. He picked you up and hugged you and showed you a picture of a boy. Maybe not realizing that you were Jewish. Not maybe, certainly not. So I told you I'm from the Soviet Union, which was significantly impacted by the war as well, and I'm Jewish as well. What do you think World War II taught us about human psychology broadly? Well, I think the only big surprise is the extermination policy, genocide, by the German people. That's when you look back on it, and I think that's a major surprise. It's a surprise because... It's a surprise that they could do it. It's a surprise that enough people willingly participated in that. This is a surprise. Now it's no longer a surprise, but it's changed many people's views, I think, about human beings. Certainly for me, the Eichmann trial, that teaches you something because it's very clear that if it could happen in Germany, it could happen anywhere. It's not that the Germans were special.
This could happen anywhere. So what do you think that is? Do you think we're all capable of evil? We're all capable of cruelty? I don't think in those terms. I think that what is certainly possible is you can dehumanize people so that you treat them not as people anymore, but as animals. And the same way that you can slaughter animals without feeling much of anything, it can be the same. And when you feel that, I think, the combination of dehumanizing the other side and having uncontrolled power over other people, I think that doesn't bring out the most generous aspect of human nature. So that Nazi soldier, he was a good man. And he was perfectly capable of killing a lot of people, and I'm sure he did. But what did the Jewish people mean to Nazis? So what was it, this dismissal of Jewish people as unworthy? Again, this is surprising that it was so extreme, but it's not one thing in human nature. I don't want to call it evil, but the distinction between the in group and the out group, that is very basic. So that's built in. The loyalty and affection towards the in group and the willingness to dehumanize the out group, that is in human nature. And I think we probably didn't need the Holocaust to teach us that. But the Holocaust is a very sharp lesson of what can happen to people and what people can do. So the effect of the in group and the out group. It's clear. Those were people you could shoot. They were not human. There was no empathy, or very, very little empathy left. So occasionally, there might have been. And very quickly, by the way, the empathy disappeared, if there was any initially. And the fact that everybody around you was doing it, the whole group doing it, everybody shooting Jews, I think that makes it permissible. Now, whether it could happen in every culture, or whether the Germans were just particularly efficient and disciplined so they could get away with it, that's an interesting question. Are these artifacts of history or is it human nature? I think that's really human nature. You put some people in a position of power relative to other people, and then they become less human, they become different. But in general, in war, outside of concentration camps in World War Two, it seems that war brings out darker sides of human nature, but also the beautiful things about human nature. Well, I mean, what it brings out is the loyalty among soldiers. I mean, it brings out the bonding, male bonding, I think is a very real thing that happens. And there is a certain thrill to friendship, and there is certainly a certain thrill to friendship under risk and to shared risk. And so people have very profound emotions, up to the point where it gets so traumatic that little is left. So let's talk about psychology a little bit. In your book, Thinking, Fast and Slow, you describe two modes of thought, system one, the fast, instinctive, and emotional one, and system two, the slower, deliberate, logical one. At the risk of asking Darwin to discuss the theory of evolution, can you describe the distinguishing characteristics of the two systems for people who have not read your book? Well, I mean, the word system is a bit misleading, but at the same time that it's misleading, it's also very useful. But what I call system one, it's easier to think of it as a family of activities. And primarily, the way I describe it is there are different ways for ideas to come to mind.
And some ideas come to mind automatically, and the standard example is two plus two, and then something happens to you. And in other cases, you've got to do something, you've got to work in order to produce the idea. And my example, I always give the same pair of numbers, 27 times 14, I think. You have to perform some algorithm in your head, some steps. Yes, and it takes time. It's a very different experience. Nothing comes to mind, except something comes to mind, which is the algorithm, I mean, that you've got to perform. And then it's work, and it engages short term memory, it engages executive function, and it makes you incapable of doing other things at the same time. So the main characteristic of system two is that there is mental effort involved, and there is a limited capacity for mental effort, whereas system one is effortless, essentially. That's the major distinction. So, you know, it's really convenient to talk about two systems, but you also mentioned just now, and in general, that there are no two distinct systems in the brain from a neurobiological, even from a psychology perspective. But why, from the experiments you've conducted, does there seem to be kind of an emergent two modes of thinking? So at some point, these kinds of systems came into a brain architecture. Maybe mammals share it. Or do you not think of it at all in those terms, that it's all a mush and these two things just emerge? Evolutionary theorizing about this is cheap and easy. The way I think about it is that it's very clear that animals have a perceptual system, and that includes an ability to understand the world, at least to the extent that they can predict. They can't explain anything, but they can anticipate what's going to happen. And that's a key form of understanding the world. And my crude idea is that what I call system two, well, system two grew out of this. And, you know, there is language and there is the capacity of manipulating ideas and the capacity of imagining futures and of imagining counterfactual things that haven't happened and to do conditional thinking. And there are really a lot of abilities that without language and without the very large brain that we have compared to others would be impossible. Now, system one is more like what the animals are, but system one also can talk. I mean, it has language. It understands language. Indeed, it speaks for us. I mean, you know, I'm not choosing every word as a deliberate process. The words, I have some idea and then the words come out and that's automatic and effortless. And many of the experiments you've done show that, listen, system one exists and it does speak for us and we should be careful about the voice it provides. Well, I mean, you know, we have to trust it because of the speed at which it acts. If we were dependent on system two for survival, we wouldn't survive very long because it's very slow. Yeah. Crossing the street. Crossing the street. I mean, many things depend on their being automatic. One very important aspect of system one is that it's not instinctive. You use the word instinctive. It contains skills that clearly have been learned. So that skilled behavior like driving a car or speaking, in fact, skilled behavior has to be learned. And so, you know, you don't come equipped with driving. You have to learn how to drive and you have to go through a period where driving is not automatic before it becomes automatic. So. Yeah.
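For reference, the kind of multi-step algorithm being performed in the 27 times 14 example above, written out as one possible decomposition (the particular steps are just an illustration):

\[
27 \times 14 = 27 \times 10 + 27 \times 4 = 270 + 108 = 378
\]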
You construct, I mean, this is where you talk about heuristic and biases is you, to make it automatic, you create a pattern and then system one essentially matches a new experience against the previously seen pattern. And when that match is not a good one, that's when the cognitive, all the mess happens, but it's most of the time it works. And so it's pretty. Most of the time, the anticipation of what's going to happen next is correct. And most of the time the plan about what you have to do is correct. And so most of the time everything works just fine. What's interesting actually is that in some sense, system one is much better at what it does than system two is at what it does. That is there is that quality of effortlessly solving enormously complicated problems, which clearly exists so that the chess player, a very good chess player, all the moves that come to their mind are strong moves. So all the selection of strong moves happens unconsciously and automatically and very, very fast. And all that is in system one. So system two verifies. So along this line of thinking, really what we are are machines that construct a pretty effective system one. You could think of it that way. So we're not talking about humans, but if we think about building artificial intelligence systems, robots, do you think all the features and bugs that you have highlighted in human beings are useful for constructing AI systems? So both systems are useful for perhaps instilling in robots? What is happening these days is that actually what is happening in deep learning is more like a system one product than like a system two product. I mean, deep learning matches patterns and anticipate what's going to happen. So it's highly predictive. What deep learning doesn't have and many people think that this is the critical, it doesn't have the ability to reason. So there is no system two there. But I think very importantly, it doesn't have any causality or any way to represent meaning and to represent real interactions. So until that is solved, what can be accomplished is marvelous and very exciting, but limited. That's actually really nice to think of current advances in machine learning as essentially system one advances. So how far can we get with just system one? If we think of deep learning in artificial intelligence systems? I mean, you know, it's very clear that deep mind has already gone way beyond what people thought was possible. I think the thing that has impressed me most about the developments in AI is the speed. It's that things, at least in the context of deep learning, and maybe this is about to slow down, but things moved a lot faster than anticipated. The transition from solving chess to solving Go, that's bewildering how quickly it went. The move from Alpha Go to Alpha Zero is sort of bewildering the speed at which they accomplished that. Now, clearly, there are many problems that you can solve that way, but there are some problems for which you need something else. Something like reasoning. Well, reasoning and also, you know, one of the real mysteries, psychologist Gary Marcus, who is also a critic of AI. I mean, what he points out, and I think he has a point, is that humans learn quickly. Children don't need a million examples, they need two or three examples. So, clearly, there is a fundamental difference. 
And what enables a machine to learn quickly, what you have to build into the machine, because it's clear that you have to build some expectations or something into the machine to make it ready to learn quickly, that at the moment seems to be unsolved. I'm pretty sure that DeepMind is working on it, but if they have solved it, I haven't heard yet. They're trying to, actually; them and OpenAI are trying to start to use neural networks to reason. So, assemble knowledge. Of course, causality, temporal causality, is out of reach to most everybody. You mentioned that the benefit of System 1 is essentially that it's fast, it allows us to function in the world. Fast and skilled, yeah. It's skill. And it has a model of the world. You know, in a sense, I mean, the early phase of AI attempted to model reasoning. And they were moderately successful, but, you know, reasoning by itself doesn't get you much. Deep learning has been much more successful in terms of, you know, what they can do. But now, it's an interesting question, whether it's approaching its limits. What do you think? I think absolutely. So, I just talked to Yann LeCun. He mentioned, you know, he thinks that we're not going to hit the limits with neural networks, that ultimately, this kind of System 1 pattern matching will start to look like System 2 without significant transformation of the architecture. So, I'm more with the majority of the people who think that, yes, neural networks will hit a limit in their capability. On the one hand, I have heard him say essentially that, you know, what they have accomplished is not a big deal, that they have just touched the surface, that basically, you know, they can't do unsupervised learning in an effective way. But you're telling me that he thinks that, within the current architecture, you can do causality and reasoning? So, he's very much a pragmatist, in a sense saying that we're very far away, that there's still, I think there's this idea that he describes, that we can only see one or two mountain peaks ahead, and there might be either a few more after or thousands more after. Yeah, so that kind of idea. I heard that metaphor. Yeah, right. But nevertheless, he doesn't see the final answer looking fundamentally different from the one that we currently have. So, neural networks being a huge part of that. Yeah, I mean, that's very likely because pattern matching is so much of what's going on. And you can think of neural networks as processing information sequentially. Yeah, I mean, you know, there is an important aspect to, for example, you get systems that translate and they do a very good job, but they really don't know what they're talking about. And for that, I'm really quite surprised. For that, you would need an AI that has sensation, an AI that is in touch with the world. Yes, self awareness, and maybe even something that resembles consciousness, those kinds of ideas. Certainly awareness, you know, awareness of what's going on, so that the words have meaning, or are in touch with some perception or some action. Yeah, so that's a big thing for Yann, and it's what he refers to as grounding to the physical space. So we're talking about the same thing. Yeah, so how do you ground? I mean, without grounding, you get a machine that doesn't know what it's talking about, because it is, ultimately, talking about the world. The question, the open question, is what it means to ground.
I mean, we're very human centric in our thinking, but what does it mean for a machine to understand what it means to be in this world? Does it need to have a body? Does it need to have a finiteness like we humans have? All of these elements, it's a very open question. You know, I'm not sure about having a body, but having a perceptual system... having a body would be very helpful too, I mean, if you think about mimicking a human. But having perception, that seems to be essential, so that you can build, you can accumulate knowledge about the world. You can imagine a human completely paralyzed, and there's a lot that the human brain could learn, you know, with a paralyzed body. So if we got a machine that could do that, that would be a big deal. And then the flip side of that, something you see in children, and something in the machine learning world called active learning, is being able to play with the world. How important for developing System 1 or System 2 do you think it is to play with the world? To be able to interact with the world? A lot of what you learn is you learn to anticipate the outcomes of your actions. I mean, you can see that in how babies learn, you know, with their hands, how they learn, you know, to connect, you know, the movements of their hands with something that clearly is something that happens in the brain, and the ability of the brain to learn new patterns. So, you know, it's the kind of thing that you get with artificial limbs, that you connect it and then people learn to operate the artificial limb, you know, really impressively quickly, at least from what I hear. So we have a system that is ready to learn the world through action. At the risk of going into way too mysterious of land, what do you think it takes to build a system like that? Obviously, we're very far from understanding how the brain works, but how difficult is it to build this mind of ours? You know, I mean, I think that Yann LeCun's answer that we don't know how many mountains there are, I think that's a very good answer. I think that, you know, if you look at what Ray Kurzweil is saying, that strikes me as off the wall. But I think people are much more realistic than that, where actually Demis Hassabis and Yann are, and so the people actually doing the work are fairly realistic, I think. To maybe phrase it another way, from a perspective not of building it, but of understanding it, how complicated are human beings, in the following sense? You know, I work with autonomous vehicles and pedestrians, so we try to model pedestrians. How difficult is it to model a human being, their perception of the world, the two systems they operate under, sufficiently to be able to predict whether the pedestrian is going to cross the road or not? I'm, you know, I'm fairly optimistic about that, actually, because what we're talking about is a huge amount of information that every vehicle has, and that feeds into one system, into one gigantic system. And so anything that any vehicle learns becomes part of what the whole system knows. And with a system multiplier like that, there is a lot that you can do. So human beings are very complicated, and the system is going to make mistakes, but humans make mistakes. I think that they'll be able to, I think they are able to anticipate pedestrians, otherwise a lot of accidents would happen.
They're able to, you know, they're able to get into a roundabout and into traffic, so they must be able to expect, or to anticipate, how people will react when they're sneaking in. And there's a lot of learning that's involved in that. Currently, the pedestrians are treated as things that cannot be hit, and they're not treated as agents with whom you interact in a game theoretic way. So, I mean, it's a totally open problem, and every time somebody tries to solve it, it seems to be harder than we think. And nobody's really tried to seriously solve the problem of that dance, because I'm not sure if you've thought about the problem of pedestrians, but you're really putting your life in the hands of the driver. You know, there is a dance, there's part of the dance that would be quite complicated, but for example, when I cross the street and there is a vehicle approaching, I look the driver in the eye, and I think many people do that. And, you know, that's a signal that I'm sending, and I would be sending that signal to an autonomous vehicle, and it had better understand it, because it means I'm crossing. So, and there's another thing you do, that actually, so I'll tell you what you do, because I've watched hundreds of hours of video on this: when you step in the street, you do that before you step in the street, and when you step in the street, you actually look away. Look away. Yeah. Now, what is that? What that's saying is, I mean, you're trusting that the car that hasn't slowed down yet will slow down. Yeah. And you're telling him, I'm committed. I mean, this is like in a game of chicken, so I'm committed, and if I'm committed, I'm looking away. So, there is, you just have to stop. So, the question is whether a machine that observes that needs to understand mortality. Here, I'm not sure that it's got to understand so much as it's got to anticipate. And here, but you know, you're surprising me, because here I would think that maybe you can anticipate without understanding, because I think this is clearly what's happening in playing Go or in playing chess. There's a lot of anticipation, and there is zero understanding. Exactly. So, I thought that you didn't need a model of the human and a model of the human mind to avoid hitting pedestrians, but you are suggesting that actually... There you go, yeah. You do. Then it's a lot harder than I thought. And I have a follow up question to see where your intuition lies. It seems that almost every robot human collaboration system is a lot harder than people realize. So, do you think it's possible for robots and humans to collaborate successfully? We talked a little bit about semi autonomous vehicles, like in the Tesla autopilot, but just in tasks in general. If you think, we talked about current neural networks being kind of system one, do you think those same systems can borrow humans for system two type tasks and collaborate successfully? Well, I think that in any system where humans and the machine interact, the human will be superfluous within a fairly short time. That is, if the machine is advanced enough so that it can really help the human, then it may not need the human for a long time. Now, it would be very interesting if there are problems that for some reason the machine cannot solve, but that people could solve. Then you would have to build into the machine an ability to recognize that it is in that kind of problematic situation and to call the human.
That cannot be easy without understanding. That is, it must be very difficult to program a recognition that you are in a problematic situation without understanding the problem. That's very true. In order to understand the full scope of situations that are problematic, you almost need to be smart enough to solve all those problems. It's not clear to me how much the machine will need the human. I think the example of chess is very instructive. I mean, there was a time at which Kasparov was saying that human machine combinations will beat everybody. Even Stockfish doesn't need people, and Alpha Zero certainly doesn't need people. The question is, just like you said, how many problems are like chess and how many problems are not like chess? Every problem probably in the end is like chess. The question is, how long is that transition period? That's a question I would ask you. Autonomous vehicles, just driving, that problem is probably a lot more complicated to solve than Go. Because it's open. That's not surprising to me, because there is a hierarchical aspect to this, which is recognizing a situation and then, within the situation, bringing up the relevant knowledge. For that hierarchical type of system to work, you need a more complicated system than we currently have. A lot of people think, because as human beings, this is probably the cognitive biases, they think of driving as pretty simple because they think of their own experience. This is actually a big problem for AI researchers, or people thinking about AI, because they evaluate how hard a particular problem is based on very limited knowledge, based on how hard it is for them to do the task. And then they take it for granted. Maybe you can speak to that, because most people tell me driving is trivial, and that humans in fact are terrible at driving, is what people tell me. And I see humans, and humans are actually incredible at driving, and driving is really terribly difficult. Is that just another element of the effects that you've described in your work on the psychology side? No, I mean, I haven't really, I would say that my research has contributed nothing to understanding the ecology and to understanding the structure of situations and the complexity of problems. So all we know is, it's very clear that Go, it's endlessly complicated, but it's very constrained. And in the real world, there are far fewer constraints and many more potential surprises. You say that's obvious, but it's not always obvious to people, right? So when you think about... Well, I mean, you know, people thought that reasoning was hard and perceiving was easy, but, you know, they quickly learned that actually modeling vision was tremendously complicated, and modeling, even proving theorems, was relatively straightforward. To push back on that a little bit on the quickly part, it took several decades to learn that, and most people still haven't learned that. I mean, our intuition, of course, AI researchers have, but if you drift a little bit outside the specific AI field, the intuition is still that perception is easy to solve. No, I mean, that's true. The intuitions of the public haven't changed radically. And they are, as you said, they're evaluating the complexity of problems by how difficult it is for them to solve the problems. And that's got very little to do with the complexities of solving them in AI. How do you think, from the perspective of an AI researcher, do we deal with the intuitions of the public?
So in trying to think, arguably, the combination of hype investment and the public intuition is what led to the AI winters. I'm sure that same could be applied to tech or that the intuition of the public leads to media hype, leads to companies investing in the tech, and then the tech doesn't make the company's money. And then there's a crash. Is there a way to educate people to fight the, let's call it system one thinking? In general, no. I think that's the simple answer. And it's going to take a long time before the understanding of what those systems can do becomes public knowledge. And then the expectations, there are several aspects that are going to be very complicated. The fact that you have a device that cannot explain itself is a major, major difficulty. And we're already seeing that. I mean, this is really something that is happening. So it's happening in the judicial system. So you have system that are clearly better at predicting parole violations than judges, but they can't explain their reasoning. And so people don't want to trust them. We seem to in system one, even use cues to make judgements about our environment. So this explainability point, do you think humans can explain stuff? No, but I mean, there is a very interesting aspect of that. Humans think they can explain themselves. So when you say something and I ask you, why do you believe that? Then reasons will occur to you. But actually, my own belief is that in most cases, the reasons have very little to do with why you believe what you believe. So that the reasons are a story that comes to your mind when you need to explain yourself. But people traffic in those explanations I mean, the human interaction depends on those shared fictions and, and the stories that people tell themselves. You just made me actually realize and we'll talk about stories in a second. That not to be cynical about it, but perhaps there's a whole movement of people trying to do explainable AI. And really, we don't necessarily need to explain AI doesn't need to explain itself. It just needs to tell a convincing story. Yeah, absolutely. It doesn't necessarily, the story doesn't necessarily need to reflect the truth as it might, it just needs to be convincing. There's something to that. You can say exactly the same thing in a way that sounds cynical or doesn't sound cynical. Right. But the objective of having an explanation is to tell a story that will be acceptable to people. And, and, and for it to be acceptable and to be robustly acceptable, it has to have some elements of truth. But, but the objective is for people to accept it. It's quite brilliant, actually. But so on the, on the stories that we tell, sorry to ask me, ask you the question that most people know the answer to, but you talk about two selves in terms of how life is lived, the experienced self and remembering self. Can you describe the distinction between the two? Well, sure. I mean, the, there is an aspect of, of life that occasionally, you know, most of the time we just live and we have experiences and they're better and they're worse and it goes on over time. And mostly we forget everything that happens or we forget most of what happens. Then occasionally you, when something ends or at different points, you evaluate the past and you form a memory and the memory is schematic. It's not that you can roll a film of an interaction. You construct, in effect, the elements of a story about an, about an episode. 
So there is the experience and there is the story that is created about the experience. And that's what I call the remembering. So I had the image of two selves. So there is a self that lives and there is a self that evaluates life. Now the paradox and the deep paradox in that is that we have one system or one self that does the living, but the other system, the remembering self is all we get to keep. And basically decision making and, and everything that we do is governed by our memories, not by what actually happened. It's, it's governed by, by the story that we told ourselves or by the story that we're keeping. So that's, that's the distinction. I mean, there's a lot of brilliant ideas about the pursuit of happiness that come out of that. What are the properties of happiness which emerge from a remembering self? There are, there are properties of how we construct stories that are really important. So that I studied a few, but, but a couple are really very striking. And one is that in stories, time doesn't matter. There's a sequence of events or there are highlights or not. And, and how long it took, you know, they lived happily ever after or three years later or something. It, time really doesn't matter. And in stories, events matter, but time doesn't. That, that leads to a very interesting set of problems because time is all we got to live. I mean, you know, time is the currency of life. And yet time is not represented basically in evaluated memories. So that, that creates a lot of paradoxes that I've thought about. Yeah. They're fascinating. But if you were to give advice on how one lives a happy life based on such properties, what's the optimal? You know, I gave up, I abandoned happiness research because I couldn't solve that problem. I couldn't, I couldn't see. And in the first place, it's very clear that if you do talk in terms of those two selves, then that what makes the remembering self happy and what makes the experiencing self happy are different things. And I, I asked the question of, suppose you're planning a vacation and you're just told that at the end of the vacation, you'll get an amnesic drug, so you remember nothing. And they'll also destroy all your photos. So there'll be nothing. Would you still go to the same vacation? And, and it's, it turns out we go to vacations in large part to construct memories, not to have experiences, but to construct memories. And it turns out that the vacation that you would want for yourself, if you knew, you will not remember is probably not the same vacation that you will want for yourself if you will remember. So I have no solution to these problems, but clearly those are big issues. And you've talked about, you've talked about sort of how many minutes or hours you spend about the vacation. It's an interesting way to think about it because that's how you really experience the vacation outside the being in it. But there's also a modern, I don't know if you think about this or interact with it. There's a modern way to, um, magnify the remembering self, which is by posting on Instagram, on Twitter, on social networks. A lot of people live life for the picture that you take, that you post somewhere. And now thousands of people share and potentially potentially millions. And then you can relive it even much more than just those minutes. Do you think about that magnification much? You know, I'm too old for social networks. I, you know, I've never seen Instagram, so I cannot really speak intelligently about those things. I'm just too old. 
But it's interesting to watch the exact effects you've described. Make a very big difference. I mean, and it will make, it will also make a difference. And that I don't know whether, uh, it's clear that in some ways the devices that serve us are supplant functions. So you don't have to remember phone numbers. You don't have, you really don't have to know facts. I mean, the number of conversations I'm involved with, somebody says, well, let's look it up. Uh, so it's, it's in a way it's made conversations. Well it's, it means that it's much less important to know things. You know, it used to be very important to know things. This is changing. So the requirements of that, that we have for ourselves and for other people are changing because of all those supports and because, and I have no idea what Instagram does, but it's, uh, well, I'll tell you, I wish I could just have the, my remembering self could enjoy this conversation, but I'll get to enjoy it even more by having watched, by watching it and then talking to others. It'll be about a hundred thousand people as scary as this to say, well, listen or watch this, right? It changes things. It changes the experience of the world that you seek out experiences which could be shared in that way. It's in, and I haven't seen, it's, it's the same effects that you described. And I don't think the psychology of that magnification has been described yet because it's a new world. But the sharing, there was a, there was a time when people read books and, uh, and, and you could assume that your friends had read the same books that you read. So there was kind of invisible sharing. There was a lot of sharing going on and there was a lot of assumed common knowledge and, you know, that was built in. I mean, it was obvious that you had read the New York Times. It was obvious that you had read the reviews. I mean, so a lot was taken for granted that was shared. And, you know, when there were, when there were three television channels, it was obvious that you'd seen one of them probably the same. So sharing, sharing always was always there. It was just different. At the risk of, uh, inviting mockery from you, let me say that I'm also a fan of Sartre and Camus and existentialist philosophers. And, um, I'm joking of course about mockery, but from the perspective of the two selves, what do you think of the existentialist philosophy of life? So trying to really emphasize the experiencing self as the proper way to, or the best way to live life. I don't know enough philosophy to answer that, but it's not, uh, you know, the emphasis on, on experience is also the emphasis in Buddhism. Yeah, right. That's right. So, uh, that's, you just have got to, to experience things and, and, and not to evaluate and not to pass judgment and not to score, not to keep score. So, uh, If, when you look at the grand picture of experience, you think there's something to that, that one, one of the ways to achieve contentment and maybe even happiness is letting go of any of the things, any of the procedures of the remembering self. Well, yeah, I mean, I think, you know, if one could imagine a life in which people don't score themselves, uh, it, it feels as if that would be a better life as if the self scoring and you know, how am I doing a kind of question, uh, is not, is not a very happy thing to have. But I got out of that field because I couldn't solve that problem and, and that was because my intuition was that the experiencing self, that's reality. 
But then it turns out that what people want for themselves is not experiences. They want memories and they want a good story about their life. And so you cannot have a theory of happiness that doesn't correspond to what people want for themselves. And when I realized that this was where things were going, I really sort of left the field of research. Do you think there's something instructive about this emphasis on reliving memories in building AI systems? So currently, artificial intelligence systems are more like the experiencing self in that they react to the environment. There's some pattern formation, like learning and so on, but you really don't construct memories, except in reinforcement learning, every once in a while, that you replay over and over. Yeah, but, you know, that in principle would not be... Do you think that's useful? Do you think it's a feature or a bug of human beings that we look back? Oh, I think that's definitely a feature. That's not a bug. I mean, you have to look back in order to look forward. So, without looking back, you couldn't really intelligently look forward. You're looking for the echoes of the same kind of experience in order to predict what the future holds. Yeah. So, though, Viktor Frankl in his book Man's Search for Meaning, I'm not sure if you've read it, describes his experience in the concentration camps during World War II as a way to describe that finding, identifying a purpose in life, a positive purpose in life, can save one from suffering. First of all, do you connect with the philosophy that he describes there? Not really. I mean, I can really see that somebody who has that feeling of purpose and meaning and so on, that could sustain you. I in general don't have that feeling, and I'm pretty sure that if I were in a concentration camp, I'd give up and die, you know? So he talks, he is a survivor. Yeah. And, you know, he survived with that. And I'm not sure how essential to survival this sense is, but I do know, when I think about myself, that I would have given up. Oh, this isn't going anywhere. And there is a sort of character that manages to survive in conditions like that. And then, because they survive, they tell stories, and it sounds as if they survived because of what they were doing. We have no idea. They survived because of the kind of people that they are, and the kind of people who survive would tell themselves stories of a particular kind. So I'm not... So you don't think seeking purpose is a significant driver in our being? Oh, I mean, it's a very interesting question, because when you ask people whether it's very important to have meaning in their life, they say, oh yes, that's the most important thing. But when you ask people, what kind of a day did you have, and, you know, what were the experiences that you remember, you don't get much meaning. You get social experiences. And some people say that, for example, in taking care of children, the fact that they are your children and you're taking care of them makes a very big difference. I think that's entirely true. But it's more because of a story that we're telling ourselves, which is a very different story when we're taking care of our children or when we're taking care of other things. Jumping around a little bit, in doing a lot of experiments, let me ask a question.
Most of the work I do, for example, is in the real world, but most of the clean, good science that you can do is in the lab. So, that distinction, do you think we can understand the fundamentals of human behavior through controlled experiments in the lab? If we talk about pupil diameter, for example, it's much easier to do when you can control lighting conditions, right? So when we look at driving, lighting variation destroys almost completely your ability to use pupil diameter. But in the lab, for, as I mentioned, semi autonomous or autonomous vehicles, in driving simulators, we don't capture true, honest human behavior in that particular domain. So what's your intuition? How much of human behavior can we study in this controlled environment of the lab? A lot, but you'd have to verify it, you know, that your conclusions are basically limited to the situation, to the experimental situation. Then you have to make the big inductive leap to the real world. And that's the flair. That's where the difference, I think, between the good psychologists and others that are mediocre is: in the sense that your experiment captures something that's important and something that's real, and others are just running experiments. So what is that? Like the birth of an idea, to its development in your mind, to something that leads to an experiment. Is that similar to maybe what Einstein or a good physicist do? Is it your intuition? You basically use your intuition to build it up. Yeah, but I mean, you know, it's very skilled intuition. I mean, I just had that experience actually. I had an idea that turns out to be a very good idea a couple of days ago, and you have a sense of that building up. So I'm working with a collaborator, and he essentially was saying, you know, what are you doing? What's going on? And I really couldn't exactly explain it, but I knew this is going somewhere. But, you know, I've been around that game for a very long time. And so you develop that anticipation that, yes, this is worth following up. That's part of the skill. Is that something you can reduce to words, in describing a process, in the form of advice to others? No. Follow your heart, essentially. I mean, you know, it's like trying to explain what it's like to drive. You've got to break it apart, and then you lose the experience. You mentioned collaboration. You've written about your collaboration with Amos Tversky, and this is you writing: the 12 or 13 years in which most of our work was joint were years of interpersonal and intellectual bliss. Everything was interesting. Almost everything was funny. And there was the recurrent joy of seeing an idea take shape. So many times in those years, we shared the magical experience of one of us saying something, which the other one would understand more deeply than the speaker had done. Contrary to the old laws of information theory, it was common for us to find that more information was received than had been sent. I have almost never had the experience with anyone else. If you have not had it, you don't know how marvelous collaboration can be. So let me ask perhaps a silly question. How does one find and create such a collaboration? That may be like asking, how does one find love? Yeah, you have to be lucky. And I think you have to have the character for that, because I've had many collaborations.
I mean, none were as exciting as with Amos, but I've had, and I'm having, very good ones. So it's a skill. I think I'm good at it. Not everybody is good at it. And then it's the luck of finding people who are also good at it. Is there advice, in a form, for a young scientist who also seeks to violate this law of information theory? I really think so much luck is involved. And those really serious collaborations, at least in my experience, are a very personal experience. And I have to like the person I'm working with. Otherwise, I mean, there is that kind of collaboration which is like an exchange, a commercial exchange of: I give you this, you give me that. But the real ones are interpersonal. They're between people who like each other and who like making each other think and who like the way that the other person responds to your thoughts. You have to be lucky. But I already noticed that, even just me showing up here, you've quickly started digging in on a particular problem I'm working on, and already new information started to emerge. Is that a process, just the process of curiosity, of talking to people about problems and seeing? I'm curious about anything to do with AI and robotics. And I knew you were dealing with that. So I was curious. Just follow your curiosity. Jumping around on the psychology front, the dramatic sounding terminology of the replication crisis, but really just the effect that, at times, studies are not fully generalizable. They don't. You are being polite. It's worse than that. Is it? So I'm actually not fully familiar with how bad it is, right? So what do you think is the source? Where do you think? I think I know what's going on actually. I mean, I have a theory about what's going on, and what's going on is that there is, first of all, a very important distinction between two types of experiments. One type is within subject, so the same person has two experimental conditions. And the other type is between subjects, where some people are in this condition, other people are in that condition. They're different worlds. And between subject experiments are much harder to predict and much harder to anticipate. And they're also more expensive because you need more people. So between subject experiments are where the problem is. It's not so much in within subject experiments, it's really between. And there is a very good reason why the intuitions of researchers about between subject experiments are wrong. And that's because when you are a researcher, you're in a within subject situation. That is, you are imagining the two conditions and you see the causality and you feel it. But in the between subject condition, they live in one condition and the other one is just nowhere. So our intuitions are very weak about between subject experiments. And that, I think, is something that people haven't realized. And in addition, because of that, we have no idea about the power of experimental manipulations, because the same manipulation is much more powerful when you are in the two conditions than when you live in only one condition. And so the experimenters have very poor intuitions about between subject experiments. And there is something else which is very important, I think, which is that almost all psychological hypotheses are true. That is, in the sense that, you know, directionally, if you have a hypothesis that A really causes B, it's not true that A causes the opposite of B.
Maybe A just has very little effect, but hypotheses are true mostly, except mostly they're very weak. They're much weaker than you think when you are imagining them. So the reason I'm excited about that is that I recently heard about some friends of mine who essentially funded 53 studies of behavioral change by 20 different teams of people, with a very precise objective of changing the number of times that people go to the gym. And the success rate was zero. Not one of the 53 studies worked. Now, what's interesting about that is those are the best people in the field and they have no idea what's going on. So they're not calibrated. They think that it's going to be powerful because they can imagine it, but actually it's just weak, because you are focusing on your manipulation and it feels powerful to you. There's a thing that I've written about that's called the focusing illusion. That is that when you think about something, it looks very important, more important than it really is. More important than it really is. But if you don't see that effect, the 53 studies, doesn't that mean you just report that? So what was, I guess, the solution to that? Well, I mean, the solution is for people to trust their intuitions less, or to try out their intuitions before. I mean, experiments have to be pre registered, and by the time you run an experiment, you have to be committed to it, and you have to run the experiment seriously enough and in public. And so this is happening. The interesting thing is what happens before, how people prepare themselves and how they run pilot experiments. It's going to change the way psychology is done, and it's already happening. Do you have a hope for, this might connect to the study sample size. Yeah. Do you have a hope for the internet? Well, I mean, you know, this is really happening. MTurk, everybody's running experiments on MTurk, and it's very cheap and very effective. Do you think that changes psychology, essentially? Because you're thinking you cannot run 10,000 subjects. Eventually it will. I mean, you know, I can't put my finger on how exactly, but that's been true in psychology: whenever an important new method came in, it changed the field. And MTurk is really a method, because it makes it very much easier to do some things. There are undergrad students who'll ask me, you know, how big a neural network should be for a particular problem. So let me ask you an equivalent question. How many subjects does a study need for it to have a conclusive result? Well, it depends on the strength of the effect. So if you're studying visual perception or the perception of color, many of the classic results in color perception were done on three or four people. And I think one of them was partly colorblind. But on vision, you know, it's highly reliable. You don't need a lot of replications for some types of neurological experiments. When you're studying weaker phenomena, and especially when you're studying them between subjects, then you need a lot more subjects than people have been running. And that's one of the things that are happening in psychology now: the statistical power of experiments is increasing rapidly. Does the problem with between subject experiments go away as the number of subjects goes to infinity? Well, I mean, you know, going to infinity is an exaggeration, but the standard number of subjects for an experiment in psychology was 30 or 40.
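A minimal power-analysis sketch can put rough numbers on the sample sizes discussed here. The assumptions below (two-sided t-tests, a 0.05 significance level, 80% power, and the specific effect sizes) are illustrative and not from the conversation:

```python
# Rough power-analysis sketch (illustrative assumptions, not from the conversation):
# how many subjects a between-subject design needs versus a within-subject design,
# for weak, medium, and strong effects, at alpha = 0.05 and 80% power.
from statsmodels.stats.power import TTestIndPower, TTestPower

between = TTestIndPower()  # two independent groups (between-subject)
within = TTestPower()      # paired / one-sample t-test (within-subject)

for d in (0.2, 0.5, 0.8):  # Cohen's d: weak, medium, strong effect
    n_between = between.solve_power(effect_size=d, alpha=0.05, power=0.8)
    n_within = within.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: ~{n_between:.0f} subjects per group between-subject, "
          f"~{n_within:.0f} subjects within-subject")

# For a weak effect (d = 0.2) this comes out to roughly 390 subjects per group,
# far above the traditional 30 or 40; for d = 0.8, about 26 per group suffice.
```

Under these assumptions, the traditional 30 or 40 subjects can only reliably detect fairly strong effects, which is the order-of-magnitude point made next.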
And for a weak effect, that's simply not enough. And you may need a couple of hundred. I mean, it's that sort of order of magnitude. What are the major disagreements in theories and effects that you've observed throughout your career that still stand today? You've worked on several fields, but what still is out there as a major disagreement that pops into your mind? I've had one extreme experience of, you know, controversy with somebody who really doesn't like the work that Amos Tversky and I did. And he's been after us for 30 years or more, at least. Do you want to talk about it? Well, I mean, his name is Gerd Gigerenzer. He's a well known German psychologist. And that's the one controversy, which I, it's been unpleasant. And no, I don't particularly want to talk about it. But is there is there open questions, even in your own mind, every once in a while? You know, we talked about semi autonomous vehicles. In my own mind, I see what the data says, but I also constantly torn. Do you have things where you or your studies have found something, but you're also intellectually torn about what it means? And there's maybe disagreements within your own mind about particular things. I mean, it's, you know, one of the things that are interesting is how difficult it is for people to change their mind. Essentially, you know, once they are committed, people just don't change their mind about anything that matters. And that is surprisingly, but it's true about scientists. So the controversy that I described, you know, that's been going on like 30 years and it's never going to be resolved. And you build a system and you live within that system and other other systems of ideas look foreign to you and there is very little contact and very little mutual influence. That happens a fair amount. Do you have a hopeful advice or message on that? Thinking about science, thinking about politics, thinking about things that have impact on this world, how can we change our mind? I think that, I mean, on things that matter, which are political or really political or religious and people just don't, don't change their mind. And by and large, and there's very little that you can do about it. The, what does happen is that if leaders change their minds. So for example, the public, the American public doesn't really believe in climate change, doesn't take it very seriously. But if some religious leaders decided this is a major threat to humanity, that would have a big effect. So that we have the opinions that we have, not because we know why we have them, but because we trust some people and we don't trust other people. And so it's much less about evidence than it is about stories. So the way, one way to change your mind isn't at the individual level, is that the leaders of the communities you look up with, the stories change and therefore your mind changes with them. So there's a guy named Alan Turing, came up with a Turing test. What do you think is a good test of intelligence? Perhaps we're drifting in a topic that we're maybe philosophizing about, but what do you think is a good test for intelligence, for an artificial intelligence system? Well, the standard definition of artificial general intelligence is that it can do anything that people can do and it can do them better. What we are seeing is that in many domains, you have domain specific devices or programs or software, and they beat people easily in a specified way. What we are very far from is that general ability, general purpose intelligence. 
In machine learning, people are approaching something more general. I mean, for example, Alpha Zero was much more general than Alpha Go, but it's still extraordinarily narrow and specific in what it can do. So we're quite far from something that can, in every domain, think like a human, except better. What aspect, so the Turing test has been criticized as a natural language conversation that is too simplistic. It's easy to quote unquote pass under the constraints specified. What aspect of conversation would impress you if you heard it? Is it humor? What would impress the heck out of you if you saw it in conversation? Yeah, I mean, certainly wit would be impressive, and humor would be more impressive than just factual conversation, which I think is easy. And allusions would be interesting and metaphors would be interesting. I mean, but new metaphors, not practiced metaphors. So there is a lot that would be sort of impressive that is completely natural in conversation, but that you really wouldn't expect. Does the possibility of creating a human level intelligence or superhuman level intelligence system excite you, scare you? How does it make you feel? I find the whole thing fascinating. Absolutely fascinating. So exciting. I think. And exciting. It's also terrifying, you know, but I'm not going to be around to see it. And so I'm curious about what is happening now, but I also know that predictions about it are silly. We really have no idea what it will look like 30 years from now. No idea. Speaking of silly, bordering on the profound, let me ask the question of, in your view, what is the meaning of it all? The meaning of life? These descendants of great apes that we are. Why, what drives us as a civilization, as a human being, as a force behind everything that you've observed and studied? Is there any answer or is it all just a beautiful mess? There is no answer that I can understand, and I'm not actively looking for one. Do you think an answer exists? No. There is no answer that we can understand. I'm not qualified to speak about what we cannot understand, but I know that we cannot understand reality, you know. I mean, there are a lot of things that we can do. I mean, you know, gravity waves, I mean, that's a big moment for humanity. And when you imagine that ape, you know, being able to go back to the Big Bang, that's, that's, but... But the why. Yeah, the why. It's bigger than us. The why is hopeless, really. Danny, thank you so much. It was an honor. Thank you for speaking today. Thank you. Thanks for listening to this conversation. And thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast, you'll get $10 and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to become future leaders and innovators. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, follow on Spotify, support it on Patreon, or simply connect with me on Twitter. And now, let me leave you with some words of wisdom from Daniel Kahneman. Intelligence is not only the ability to reason, it is also the ability to find relevant material in memory and to deploy attention when needed. Thank you for listening and hope to see you next time.
Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI | Lex Fridman Podcast #65
The following is a conversation with Ayanna Howard. She's a roboticist, a professor at Georgia Tech, and director of the Human Automation Systems Lab, with research interests in human robot interaction, assistive robots in the home, therapy gaming apps, and remote robotic exploration of extreme environments. Like me, in her work, she cares a lot about both robots and human beings, and so I really enjoyed this conversation. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, follow on Spotify, support it on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode, and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Broker services are provided by Cash App Investing, a subsidiary of Square and Member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and Lego competitions. They educate and inspire hundreds of thousands of students in over 110 countries, and have a perfect rating at Charity Navigator, which means that donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, which again, is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Ayanna Howard. What or who is the most amazing robot you've ever met, or perhaps had the biggest impact on your career? I haven't met her, but I grew up with her, but of course, Rosie. And I think it's because also... Who's Rosie? Rosie from the Jetsons. She is all things to all people, right? Think about it. Like anything you wanted, it was like magic, it happened. So people not only anthropomorphize, but project whatever they wish for the robot to be onto Rosie. But also, I mean, think about it. She was socially engaging. She every so often had an attitude, right? She kept us honest. She would push back sometimes when George was doing some weird stuff. But she cared about people, especially the kids. She was like the perfect robot. And you've said that people don't want their robots to be perfect. Can you elaborate on that? What do you think that is? Just like you said, Rosie pushed back a little bit every once in a while. Yeah, so I think it's that. So if you think about robotics in general, we want them because they enhance our quality of life. And usually that's linked to something that's functional. Even if you think of self driving cars, why is there a fascination? Because people really do hate to drive. Like there's the Saturday driving where I can just speed, but then there's the I have to go to work every day and I'm in traffic for an hour. I mean, people really hate that. And so robots are designed to basically enhance our ability to increase our quality of life.
And so the perfection comes from this aspect of interaction. If I think about how we drive, if we drove perfectly, we would never get anywhere, right? So think about how many times you had to run past the light because you see the car behind you is about to crash into you. Or that little kid kind of runs into the street, and so you have to cross to the other side because there's no cars, right? Like if you think about it, we are not perfect drivers. Some of it is because it's our world. And so if you have a robot that is perfect in that sense of the word, they wouldn't really be able to function with us. Can you linger a little bit on the word perfection? So from the robotics perspective, what does that word mean, and how is sort of the optimal behavior, as you're describing it, different than what we think of as perfection? Yeah, so perfection, if you think about it from the more theoretical point of view, is really tied to accuracy, right? So if I have a function, can I complete it at 100% accuracy with zero errors? And so that's kind of, if you think about perfection in the sense of the word. And in the self driving car realm, do you think, from a robotics perspective, we kind of think that perfection means following the rules perfectly, sort of defining, staying in the lane, changing lanes. When there's a green light, you go. When there's a red light, you stop. And that's the, and being able to perfectly see all the entities in the scene. That's the limit of what we think of as perfection. And I think that's where the problem comes in, is that when people think about perfection for robotics, the ones that are the most successful are the ones that are quote unquote perfect. Like I said, Rosie is perfect, but she actually wasn't perfect in terms of accuracy; she was perfect in terms of how she interacted and how she adapted. And I think that's some of the disconnect, is that we really want perfection with respect to its ability to adapt to us. We don't really want perfection with respect to 100% accuracy with respect to the rules that we just made up anyway, right? And so I think there's this disconnect sometimes between what we really want and what happens. And we see this all the time, like in my research, right? Like the quote unquote optimal interactions are when the robot is adapting based on the person, not 100% following what's optimal based on the rules. Just to linger on autonomous vehicles for a second, just your thoughts, maybe off the top of your head: how hard is that problem, do you think, based on what we just talked about? There's a lot of folks in the automotive industry, they're very confident, from Elon Musk to Waymo to all these companies. How hard is it to solve that last piece? The last mile. The gap between the perfection and the human definition of how you actually function in this world. Yeah, so this is a moving target. So I remember when all the big companies started to heavily invest in this, and there were a number of even roboticists, as well as folks who were putting money in, the VCs and corporations, Elon Musk being one of them, that said self driving cars on the road with people within five years. That was a little while ago. And now people are saying five years, 10 years, 20 years, some are saying never, right? I think if you look at some of the things that are being successful, it's these basically fixed environments where you still have some anomalies, right? You still have people walking, you still have stores, but you don't have other drivers, right?
Like other human drivers are, it's a dedicated space for the cars. Because if you think about robotics in general, where has always been successful? I mean, you can say manufacturing, like way back in the day, right? It was a fixed environment, humans were not part of the equation, we're a lot better than that. But like when we can carve out scenarios that are closer to that space, then I think that it's where we are. So a closed campus where you don't have self driving cars and maybe some protection so that the students don't jet in front just because they wanna see what happens. Like having a little bit, I think that's where we're gonna see the most success in the near future. And be slow moving. Right, not 55, 60, 70 miles an hour, but the speed of a golf cart, right? So that said, the most successful in the automotive industry robots operating today in the hands of real people are ones that are traveling over 55 miles an hour and in unconstrained environments, which is Tesla vehicles, so Tesla autopilot. So I would love to hear sort of your, just thoughts of two things. So one, I don't know if you've gotten to see, you've heard about something called smart summon where Tesla system, autopilot system, where the car drives zero occupancy, no driver in the parking lot slowly sort of tries to navigate the parking lot to find itself to you. And there's some incredible amounts of videos and just hilarity that happens as it awkwardly tries to navigate this environment, but it's a beautiful nonverbal communication between machine and human that I think is a, it's like, it's some of the work that you do in this kind of interesting human robot interaction space. So what are your thoughts in general about it? So I do have that feature. Do you drive a Tesla? I do, mainly because I'm a gadget freak, right? So I say it's a gadget that happens to have some wheels. And yeah, I've seen some of the videos. But what's your experience like? I mean, you're a human robot interaction roboticist, you're a legit sort of expert in the field. So what does it feel for a machine to come to you? It's one of these very fascinating things, but also I am hyper, hyper alert, right? Like I'm hyper alert, like my butt, my thumb is like, oh, okay, I'm ready to take over. Even when I'm in my car or I'm doing things like automated backing into, so there's like a feature where you can do this automating backing into a parking space, or bring the car out of your garage, or even, you know, pseudo autopilot on the freeway, right? I am hypersensitive. I can feel like as I'm navigating, like, yeah, that's an error right there. Like I am very aware of it, but I'm also fascinated by it. And it does get better. Like I look and see it's learning from all of these people who are cutting it on, like every time I cut it on, it's getting better, right? And so I think that's what's amazing about it is that. This nice dance of you're still hyper vigilant. So you're still not trusting it at all. Yeah. And yet you're using it. On the highway, if I were to, like what, as a roboticist, we'll talk about trust a little bit. How do you explain that? You still use it. Is it the gadget freak part? Like where you just enjoy exploring technology? Or is that the right actually balance between robotics and humans is where you use it, but don't trust it. And somehow there's this dance that ultimately is a positive. Yeah, so I think I'm, I just don't necessarily trust technology, but I'm an early adopter, right? 
So when it first comes out, I will use everything, but I will be very, very cautious of how I use it. Do you read about it or do you explore it by just try it? Do you like crudely, to put it crudely, do you read the manual or do you learn through exploration? I'm an explorer. If I have to read the manual, then I do design. Then it's a bad user interface. It's a failure. Elon Musk is very confident that you kind of take it from where it is now to full autonomy. So from this human robot interaction, where you don't really trust and then you try and then you catch it when it fails to, it's going to incrementally improve itself into full where you don't need to participate. What's your sense of that trajectory? Is it feasible? So the promise there is by the end of next year, by the end of 2020 is the current promise. What's your sense about that journey that Tesla's on? So there's kind of three things going on though. I think in terms of will people go like as a user, as a adopter, will you trust going to that point? I think so, right? Like there are some users and it's because what happens is when you're hypersensitive at the beginning and then the technology tends to work, your apprehension slowly goes away. And as people, we tend to swing to the other extreme, right? Because it's like, oh, I was like hyper, hyper fearful or hypersensitive and it was awesome. And we just tend to swing. That's just human nature. And so you will have, I mean, and I... That's a scary notion because most people are now extremely untrusting of autopilot. They use it, but they don't trust it. And it's a scary notion that there's a certain point where you allow yourself to look at the smartphone for like 20 seconds. And then there'll be this phase shift where it'll be like 20 seconds, 30 seconds, one minute, two minutes. It's a scary proposition. But that's people, right? That's just, that's humans. I mean, I think of even our use of, I mean, just everything on the internet, right? Like think about how reliant we are on certain apps and certain engines, right? 20 years ago, people have been like, oh yeah, that's stupid. Like that makes no sense. Like, of course that's false. Like now it's just like, oh, of course I've been using it. It's been correct all this time. Of course aliens, I didn't think they existed, but now it says they do, obviously. 100%, earth is flat. So, okay, but you said three things. So one is the human. Okay, so one is the human. And I think there will be a group of individuals that will swing, right? I just. Teenagers. Teenage, I mean, it'll be, it'll be adults. There's actually an age demographic that's optimal for technology adoption. And you can actually find them. And they're actually pretty easy to find. Just based on their habits, based on, so if someone like me who wasn't a roboticist would probably be the optimal kind of person, right? Early adopter, okay with technology, very comfortable and not hypersensitive, right? I'm just hypersensitive cause I designed this stuff. So there is a target demographic that will swing. The other one though, is you still have these humans that are on the road. That one is a harder, harder thing to do. And as long as we have people that are on the same streets, that's gonna be the big issue. And it's just because you can't possibly, I wanna say you can't possibly map the, some of the silliness of human drivers, right? Like as an example, when you're next to that car that has that big sticker called student driver, right? 
Like you are like, oh, either I'm going to like go around. Like we are, we know that that person is just gonna make mistakes that make no sense, right? How do you map that information? Or if I am in a car and I look over and I see two fairly young looking individuals and there's no student driver bumper and I see them chit chatting to each other, I'm like, oh, that's an issue, right? So how do you get that kind of information and that experience into basically an autopilot? And there's millions of cases like that where we take little hints to establish context. I mean, you said kind of beautifully poetic human things, but there's probably subtle things about the environment about it being maybe time for commuters to start going home from work and therefore you can make some kind of judgment about the group behavior of pedestrians, blah, blah, blah, and so on and so on. Or even cities, right? Like if you're in Boston, how people cross the street, like lights are not an issue versus other places where people will actually wait for the crosswalk. Seattle or somewhere peaceful. But what I've also seen sort of just even in Boston that intersection to intersection is different. So every intersection has a personality of its own. So certain neighborhoods of Boston are different. So we kind of, and based on different timing of day, at night, it's all, there's a dynamic to human behavior that we kind of figure out ourselves. We're not able to introspect and figure it out, but somehow our brain learns it. We do. And so you're saying, is there a shortcut? Is there a shortcut, though, for a robot? Is there something that could be done, you think, that, you know, that's what we humans do. It's just like bird flight, right? That's the example they give for flight. Do you necessarily need to build a bird that flies or can you do an airplane? Is there a shortcut to it? So I think the shortcut is, and I kind of, I talk about it as a fixed space, where, so imagine that there's a neighborhood that's a new smart city or a new neighborhood that says, you know what? We are going to design this new city based on supporting self driving cars. And then doing things, knowing that there's anomalies, knowing that people are like this, right? And designing it based on that assumption that like, we're gonna have this. That would be an example of a shortcut. So you still have people, but you do very specific things to try to minimize the noise a little bit as an example. And the people themselves become accepting of the notion that there's autonomous cars, right? Right, like they move into, so right now you have like a, you will have a self selection bias, right? Like individuals will move into this neighborhood knowing like this is part of like the real estate pitch, right? And so I think that's a way to do a shortcut. One, it allows you to deploy. It allows you to collect then data with these variances and anomalies, cause people are still people, but it's a safer space and it's more of an accepting space. I.e. when something in that space might happen because things do, because you already have the self selection, like people would be, I think a little more forgiving than other places. And you said three things, did we cover all of them? The third is legal law, liability, which I don't really want to touch, but it's still of concern. And the mishmash with like with policy as well, sort of government, all that whole. That big ball of stuff. Yeah, gotcha. So that's, so we're out of time now. 
Do you think from a robotics perspective, you know, if you're kind of honest of what cars do, they kind of threaten each other's life all the time. So cars are various. I mean, in order to navigate intersections, there's an assertiveness, there's a risk taking. And if you were to reduce it to an objective function, there's a probability of murder in that function, meaning you killing another human being and you're using that. First of all, it has to be low enough to be acceptable to you on an ethical level as an individual human being, but it has to be high enough for people to respect you to not sort of take advantage of you completely and jaywalk in front of you and so on. So, I mean, I don't think there's a right answer here, but what's, how do we solve that? How do we solve that from a robotics perspective when danger and human life is at stake? Yeah, as they say, cars don't kill people, people kill people. People kill people. Right. So I think. And now robotic algorithms would be killing people. Right, so it will be robotics algorithms that are pro, no, it will be robotic algorithms don't kill people. Developers of robotic algorithms kill people, right? I mean, one of the things is people are still in the loop and at least in the near and midterm, I think people will still be in the loop at some point, even if it's a developer. Like we're not necessarily at the stage where robots are programming autonomous robots with different behaviors quite yet. It's a scary notion, sorry to interrupt, that a developer has some responsibility in the death of a human being. That's a heavy burden. I mean, I think that's why the whole aspect of ethics in our community is so, so important, right? Like, because it's true. If you think about it, you can basically say, I'm not going to work on weaponized AI, right? Like people can say, that's not what I'm gonna do. But yet you are programming algorithms that might be used in healthcare algorithms that might decide whether this person should get this medication or not. And they don't and they die. Okay, so that is your responsibility, right? And if you're not conscious and aware that you do have that power when you're coding and things like that, I think that's just not a good thing. Like we need to think about this responsibility as we program robots and computing devices much more than we are. Yeah, so it's not an option to not think about ethics. I think it's a majority, I would say, of computer science. Sort of, it's kind of a hot topic now, I think about bias and so on, but it's, and we'll talk about it, but usually it's kind of, it's like a very particular group of people that work on that. And then people who do like robotics are like, well, I don't have to think about that. There's other smart people thinking about it. It seems that everybody has to think about it. It's not, you can't escape the ethics, whether it's bias or just every aspect of ethics that has to do with human beings. Everyone. So think about, I'm gonna age myself, but I remember when we didn't have like testers, right? And so what did you do? As a developer, you had to test your own code, right? Like you had to go through all the cases and figure it out and then they realized that, we probably need to have testing because we're not getting all the things. And so from there, what happens is like most developers, they do a little bit of testing, but it's usually like, okay, did my compiler bug out? Let me look at the warnings. Okay, is that acceptable or not, right? 
Like that's how you typically think about it as a developer, and you'll just assume that it's going to go to another process and they're gonna test it out. But I think we need to go back to those early days. When you're a developer, you're developing, there should be this step that says, okay, let me look at the ethical outcomes of this, because there isn't a second round of testing, ethical testers, right, it's you. We did it back in the early coding days. I think that's where we are with respect to ethics. Like let's go back to what were good practices, and only because we were just developing the field. Yeah, and it's a really heavy burden. I've had to feel it recently in the last few months, but I think it's a good one to feel. Like I've gotten a message, more than one, from people. You know, I've unfortunately gotten some attention recently, and I've gotten messages that say that I have blood on my hands because of working on semi autonomous vehicles. So the idea that you have semi autonomy means people will lose vigilance and so on. That's just humans being humans, as we described. And because of that, because of this idea that we're creating automation, there will be people hurt because of it. And I think that's a beautiful thing. I mean, it's, you know, there's many nights where I wasn't able to sleep because of this notion. You know, you really do think about people that might die because of this technology. Of course, you can then start rationalizing, saying, well, you know what, 40,000 people die in the United States every year and we're ultimately trying to save lives. But the reality is the code you've written might kill somebody. And that's an important burden to carry with you as you design the code. I don't even think of it as a burden if we train this concept correctly from the beginning. And not to say that coding is like being a medical doctor, but think about it. Medical doctors, if they've been in situations where their patient didn't survive, right? Do they give up and go away? No, every time they come in, they know that there might be a possibility that this patient might not survive. And so when they approach every decision, that's in the back of their head. And so why is it that we aren't teaching this? And those are tools though, right? They are given some of the tools to address that so that they don't go crazy. But we don't give those tools, so it does feel like a burden versus something of, I have a great gift and I can do great, awesome good, but with it comes great responsibility. I mean, that's what we teach in terms of, if you think about the medical schools, right? Great gift, great responsibility. I think if we just change the messaging a little: great gift, being a developer, great responsibility. And this is how you combine those. But do you think, I mean, this is really interesting. It's outside, I actually have no friends who are sort of surgeons or doctors. I mean, what does it feel like to make a mistake in a surgery and have somebody die because of that? Like, is that something you could be taught in medical school, sort of how to be accepting of that risk? So, because I do a lot of work with healthcare robotics, I have not lost a patient, for example. The first one's always the hardest, right? But they really teach the value, right? So, they teach responsibility, but they also teach the value.
Like, you're saving 40,000, but in order to really feel good about that, when you come to a decision, you have to be able to say at the end, I did all that I could possibly do, right? Versus a, well, I just picked the first widget, right? Like, so every decision is actually thought through. It's not a habit, it's not a, let me just take the best algorithm that my friend gave me, right? It's a, is this it, is this the best? Have I done my best to do good, right? And so... You're right, and I think burden is the wrong word. It's a gift, but you have to treat it extremely seriously. Correct. So, on a slightly related note, in a recent paper, The Ugly Truth About Ourselves and Our Robot Creations, you discuss, you highlight some biases that may affect the function of various robotic systems. Can you talk through, if you remember, examples of some? There's a lot of examples. I usually... What is bias, first of all? Yeah, so bias, which is different than prejudice. So, bias is that we all have these preconceived notions about everything from particular groups to habits to identity, right? So, we have these predispositions, and so when we address a problem, we look at a problem and make a decision, those preconceived notions might affect our outputs, our outcomes. So, there the bias can be positive and negative, and then is prejudice the negative kind of bias? Prejudice is the negative, right? So, prejudice is that not only are you aware of your bias, but you then take it and have a negative outcome, even though you're aware, like... And there could be gray areas too. There's always gray areas. That's the challenging aspect of all ethical questions. So, I always like... So, there's a funny one, and in fact, I think it might be in the paper, because I think I talk about self driving cars, but think about this. Think about teenagers, right? Typically, insurance companies charge quite a bit of money if you have a teenage driver. So, you could say that's an age bias, right? But no one will claim... I mean, parents will be grumpy, but no one really says that that's not fair. That's interesting. We don't... That's right, that's right. Everybody in human factors and safety research, almost, I mean, is quite ruthlessly critical of teenagers. And we don't question, is that okay? Is that okay to be ageist in this kind of way? It is, and it is ageist, right? It's definitely ageist, there's no question about it. And so, this is the gray area, right? Because you know that teenagers are more likely to be in accidents, and so, there's actually some data to it. But then, if you take that same example, and you say, well, I'm going to make the insurance higher for an area of Boston, because there's a lot of accidents. And then, they find out that that's correlated with socioeconomics. Well, then it becomes a problem, right? Like, that is not acceptable, but yet, the teenager one, which is based on age, is, right? We figure that out as a society by having conversations, by having discourse. I mean, throughout history, the definition of what is ethical or not has changed, and hopefully, always for the better. Correct, correct. So, in terms of bias or prejudice in algorithms, what examples do you sometimes think about? So, I think quite a bit about the medical domain, just because historically, right? The healthcare domain has had these biases, typically based on gender and ethnicity, primarily. A little on age, but not so much.
Historically, if you think about FDA and drug trials, it's harder to find women that aren't childbearing, and so you may not test drugs on them at the same level. Right, so there's these things. And so, if you think about robotics, right? Something as simple as, I'd like to design an exoskeleton, right? What should the material be? What should the weight be? What should the form factor be? Who are you gonna design it around? I will say that in the US, women's average height and weight is slightly different than guys'. So, who are you gonna choose? Like, if you're not thinking about it from the beginning, as, okay, when I design this and I look at the algorithms and I design the control system and the forces and the torques, if you're not thinking about, well, you have different types of body structure, you're gonna design to what you're used to. Oh, this fits all the folks in my lab, right? So, thinking about it from the very beginning is important. What about sort of algorithms that train on data kind of thing? Sadly, our society already has a lot of negative bias. And so, if we collect a lot of data, even if it's in a balanced way, that's going to contain the same bias that our society contains. And so, yeah, are there things there that bother you? Yeah, so you actually said something. You had said how we have biases, but hopefully we learn from them and we become better, right? And so, that's where we are now, right? So, the data that we're collecting is historic. So, it's based on these things when we knew it was bad to discriminate, but that's the data we have, and we're trying to fix it now, but we're fixing it based on the data that was used in the first place. Fix it in post. Right, and so the decisions, and you can look at everything from the whole aspect of predictive policing, criminal recidivism. There was a recent paper on healthcare algorithms which had kind of a sensational title. I'm not pro sensationalism in titles, but again, you read it, right? So, it makes you read it, but I'm like, really? Like, ugh, you could have. What's the topic of the sensationalism? I mean, what's underneath it? What's, if you could sort of educate me on what kind of bias creeps into the healthcare space. Yeah, so. I mean, you already kind of mentioned. Yeah, so this one, the headline was racist AI algorithms. Okay, like, okay, that's totally a clickbait title. And so you looked at it, and so there was data that these researchers had collected. I believe, I wanna say it was either Science or Nature. It had just been published, but they didn't have a sensational title. It was the media. And so they had looked at demographics, I believe, between black and white women, right? And they showed that there was a discrepancy in the outcomes, right? And it was tied to ethnicity, tied to race. The piece that the researchers did actually went through the whole analysis, but of course... I mean, the journalists with AI are problematic across the board, let's say. And so this is a problem, right? And so there's this thing about, oh, AI, it has all these problems. We're doing it on historical data and the outcomes are uneven based on gender or ethnicity or age. But what I am always saying is, yes, we need to do better, right? We need to do better. It is our duty to do better. But the worst AI is still better than us. Like, you take the best of us and we're still worse than the worst AI, at least in terms of these things. And that's actually not discussed, right?
And so I think, and that's why the sensational title, right? And so it's like, so then you can have individuals go like, oh, we don't need to use this AI. I'm like, oh, no, no, no, no. I want the AI instead of the doctors that provided that data, because it's still better than that, right? I think that's really important to linger on, is the idea that this AI is racist. It's like, well, compared to what? Sort of, I think we set, unfortunately, way too high of a bar for AI algorithms. And in the ethical space where perfect is, I would argue, probably impossible. Then if we set the bar of perfection, essentially, of it has to be perfectly fair, whatever that means, it means we're setting it up for failure. But that's really important to say what you just said, which is, well, it's still better than it is. And one of the things I think that we don't get enough credit for, just in terms of as developers, is that you can now poke at it, right? So it's harder to say, is this hospital, is this city doing something, right? Until someone brings in a civil case, right? Well, with AI, it can process through all this data and say, hey, yes, there was an issue here, but here it is, we've identified it, and then the next step is to fix it. I mean, that's a nice feedback loop versus waiting for someone to sue someone else before it's fixed, right? And so I think that power, we need to capitalize on a little bit more, right? Instead of having the sensational titles, have the, okay, this is a problem, and this is how we're fixing it, and people are putting money to fix it because we can make it better. I look at like facial recognition, how Joy, she basically called out a couple of companies and said, hey, and most of them were like, oh, embarrassment, and the next time it had been fixed, right, it had been fixed better, right? And then it was like, oh, here's some more issues. And I think that conversation then moves that needle to having much more fair and unbiased and ethical aspects, as long as both sides, the developers are willing to say, okay, I hear you, yes, we are going to improve, and you have other developers who are like, hey, AI, it's wrong, but I love it, right? Yes, so speaking of this really nice notion that AI is maybe flawed but better than humans, so just made me think of it, one example of flawed humans is our political system. Do you think, or you said judicial as well, do you have a hope for AI sort of being elected for president or running our Congress or being able to be a powerful representative of the people? So I mentioned, and I truly believe that this whole world of AI is in partnerships with people. And so what does that mean? I don't believe, or maybe I just don't, I don't believe that we should have an AI for president, but I do believe that a president should use AI as an advisor, right? Like, if you think about it, every president has a cabinet of individuals that have different expertise that they should listen to, right? Like, that's kind of what we do. And you put smart people with smart expertise around certain issues, and you listen. I don't see why AI can't function as one of those smart individuals giving input. So maybe there's an AI on healthcare, maybe there's an AI on education and right, like all of these things that a human is processing, right? Because at the end of the day, there's people that are human that are going to be at the end of the decision. 
And I don't think as a world, as a culture, as a society, that we would totally, and this is us, like this is some fallacy about us, but we need to see that leader, that person as human. And most people don't realize that like leaders have a whole lot of advice, right? Like when they say something, it's not that they woke up, well, usually they don't wake up in the morning and be like, I have a brilliant idea, right? It's usually a, okay, let me listen. I have a brilliant idea, but let me get a little bit of feedback on this. Like, okay. And then it's a, yeah, that was an awesome idea or it's like, yeah, let me go back. We already talked through a bunch of them, but are there some possible solutions to the bias that's present in our algorithms beyond what we just talked about? So I think there's two paths. One is to figure out how to systematically do the feedback and corrections. So right now it's ad hoc, right? It's a researcher identify some outcomes that are not, don't seem to be fair, right? They publish it, they write about it. And the, either the developer or the companies that have adopted the algorithms may try to fix it, right? And so it's really ad hoc and it's not systematic. There's, it's just, it's kind of like, I'm a researcher, that seems like an interesting problem, which means that there's a whole lot out there that's not being looked at, right? Cause it's kind of researcher driven. And I don't necessarily have a solution, but that process I think could be done a little bit better. One way is I'm going to poke a little bit at some of the corporations, right? Like maybe the corporations when they think about a product, they should, instead of, in addition to hiring these, you know, bug, they give these. Oh yeah, yeah, yeah. Like awards when you find a bug. Yeah, security bug, you know, let's put it like we will give the, whatever the award is that we give for the people who find these security holes, find an ethics hole, right? Like find an unfairness hole and we will pay you X for each one you find. I mean, why can't they do that? One is a win win. They show that they're concerned about it, that this is important and they don't have to necessarily dedicate it their own like internal resources. And it also means that everyone who has like their own bias lens, like I'm interested in age. And so I'll find the ones based on age and I'm interested in gender and right, which means that you get like all of these different perspectives. But you think of it in a data driven way. So like sort of, if we look at a company like Twitter, it gets, it's under a lot of fire for discriminating against certain political beliefs. Correct. And sort of, there's a lot of people, this is the sad thing, cause I know how hard the problem is and I know the Twitter folks are working really hard at it. Even Facebook that everyone seems to hate are working really hard at this. You know, the kind of evidence that people bring is basically anecdotal evidence. Well, me or my friend, all we said is X and for that we got banned. And that's kind of a discussion of saying, well, look, that's usually, first of all, the whole thing is taken out of context. So they present sort of anecdotal evidence. And how are you supposed to, as a company, in a healthy way, have a discourse about what is and isn't ethical? How do we make algorithms ethical when people are just blowing everything? 
Like they're outraged about a particular anecdotal piece of evidence that's very difficult to sort of contextualize in the big data driven way. Do you have a hope for companies like Twitter and Facebook? Yeah, so I think there's a couple of things going on, right? First off, remember this whole aspect of we are becoming reliant on technology. We're also becoming reliant on a lot of these, the apps and the resources that are provided, right? So some of it is kind of anger, like I need you, right? And you're not working for me, right? Not working for me, all right. But I think, and so some of it, and I wish that there was a little bit of change of rethinking. So some of it is like, oh, we'll fix it in house. No, that's like, okay, I'm a fox and I'm going to watch these hens because I think it's a problem that foxes eat hens. No, right? Like be good citizens and say, look, we have a problem. And we are willing to open ourselves up for others to come in and look at it and not try to fix it in house. Because if you fix it in house, there's conflict of interest. If I find something, I'm probably going to want to fix it and hopefully the media won't pick it up, right? And that then causes distrust because someone inside is going to be mad at you and go out and talk about how, yeah, they canned the resume survey because it, right? Like be nice people. Like just say, look, we have this issue. Community, help us fix it. And we will give you like, you know, the bug finder fee if you do. Did you ever hope that the community, us as a human civilization on the whole is good and can be trusted to guide the future of our civilization into a positive direction? I think so. So I'm an optimist, right? And, you know, there were some dark times in history always. I think now we're in one of those dark times. I truly do. In which aspect? The polarization. And it's not just US, right? So if it was just US, I'd be like, yeah, it's a US thing, but we're seeing it like worldwide, this polarization. And so I worry about that. But I do fundamentally believe that at the end of the day, people are good, right? And why do I say that? Because anytime there's a scenario where people are in danger and I will use, so Atlanta, we had a snowmageddon and people can laugh about that. People at the time, so the city closed for, you know, little snow, but it was ice and the city closed down. But you had people opening up their homes and saying, hey, you have nowhere to go, come to my house, right? Hotels were just saying like, sleep on the floor. Like places like, you know, the grocery stores were like, hey, here's food. There was no like, oh, how much are you gonna pay me? It was like this, such a community. And like people who didn't know each other, strangers were just like, can I give you a ride home? And that was a point I was like, you know what, like. That reveals that the deeper thing is, there's a compassionate love that we all have within us. It's just that when all of that is taken care of and get bored, we love drama. And that's, I think almost like the division is a sign of the times being good, is that it's just entertaining on some unpleasant mammalian level to watch, to disagree with others. And Twitter and Facebook are actually taking advantage of that in a sense because it brings you back to the platform and they're advertiser driven, so they make a lot of money. So you go back and you click. Love doesn't sell quite as well in terms of advertisement. It doesn't. 
So you've started your career at NASA Jet Propulsion Laboratory, but before I ask a few questions there, have you happened to have ever seen Space Odyssey, 2001 Space Odyssey? Yes. Okay, do you think HAL 9000, so we're talking about ethics. Do you think HAL did the right thing by taking the priority of the mission over the lives of the astronauts? Do you think HAL is good or evil? Easy questions. Yeah. HAL was misguided. You're one of the people that would be in charge of an algorithm like HAL. Yeah. What would you do better? If you think about what happened was there was no fail safe, right? So perfection, right? Like what is that? I'm gonna make something that I think is perfect, but if my assumptions are wrong, it'll be perfect based on the wrong assumptions, right? That's something that you don't know until you deploy and then you're like, oh yeah, messed up. But what that means is that when we design software, such as in Space Odyssey, when we put things out, that there has to be a fail safe. There has to be the ability that once it's out there, we can grade it as an F and it fails and it doesn't continue, right? There's some way that it can be brought in and removed in that aspect. Because that's what happened with HAL. It was like assumptions were wrong. It was perfectly correct based on those assumptions and there was no way to change it, change the assumptions at all. And the change to fall back would be to a human. So you ultimately think like human should be, it's not turtles or AI all the way down. It's at some point, there's a human that actually. I still think that, and again, because I do human robot interaction, I still think the human needs to be part of the equation at some point. So what, just looking back, what are some fascinating things in robotic space that NASA was working at the time? Or just in general, what have you gotten to play with and what are your memories from working at NASA? Yeah, so one of my first memories was they were working on a surgical robot system that could do eye surgery, right? And this was back in, oh my gosh, it must've been, oh, maybe 92, 93, 94. So it's like almost like a remote operation. Yeah, it was remote operation. In fact, you can even find some old tech reports on it. So think of it, like now we have DaVinci, right? Like think of it, but these were like the late 90s, right? And I remember going into the lab one day and I was like, what's that, right? And of course it wasn't pretty, right? Because the technology, but it was like functional and you had this individual that could use a version of haptics to actually do the surgery and they had this mockup of a human face and like the eyeballs and you can see this little drill. And I was like, oh, that is so cool. That one I vividly remember because it was so outside of my like possible thoughts of what could be done. It's the kind of precision and I mean, what's the most amazing of a thing like that? I think it was the precision. It was the kind of first time that I had physically seen this robot machine human interface, right? Versus, cause manufacturing had been, you saw those kind of big robots, right? But this was like, oh, this is in a person. There's a person and a robot like in the same space. I'm meeting them in person. Like for me, it was a magical moment that I can't, it was life transforming that I recently met Spot Mini from Boston Dynamics. Oh, see. 
I don't know why, but on the human robot interaction front, for some reason I realized how easy it is to anthropomorphize, and it was, I don't know, it was almost like falling in love, this feeling of meeting. And I've obviously seen these robots a lot on video and so on, but meeting in person, just having that one on one time, is different. So have you had a robot like that in your life that made you maybe fall in love with robotics? Sort of like meeting in person. I mean, I've loved robotics since, yeah, I was a 12 year old. Like, I'm gonna be a roboticist. Actually, I called it cybernetics. But so my motivation was Bionic Woman. I don't know if you know that. And so, I mean, that was like a seminal moment, but I didn't meet, like that was TV, right? Like it wasn't like I was in the same space and I met her and I was like, oh my gosh, you're like real. Just lingering on Bionic Woman, which, by the way, because I read that about you, I watched bits of it, and it's just so, no offense, terrible. It's cheesy if you look at it now. It's cheesy, no. I've seen a couple of reruns lately. But of course, at the time, it probably captured the imagination. But the sound effects. Especially when you're younger, it just catches you. But which aspect, did you think of it, you mentioned cybernetics, did you think of it as robotics or did you think of it as almost constructing artificial beings? Like, is it the intelligent part that captured your fascination or was it the whole thing? Like even just the limbs and just the. So for me, in another world, I probably would have been more of a biomedical engineer, because what fascinated me was the parts, like the bionic parts, the limbs, those aspects of it. Are you especially drawn to humanoid or humanlike robots? I would say humanlike, not humanoid, right? And when I say humanlike, I think it's this aspect of that interaction, whether it's social and it's like a dog, right? Like that's humanlike because it understands us, it interacts with us at that very social level. You know, humanoids are part of that, but only if they interact with us as if we are human. Okay, but just to linger on NASA for a little bit, what do you think, maybe if you have other memories, but also what do you think is the future of robots in space? We mentioned HAL, but there's incredible robots that NASA's working on. In general, thinking about, as human civilization ventures out into space, what do you think the future of robots is there? Yeah, so I mean, there's the near term. For example, they just announced the rover that's going to the moon, which, you know, that's kind of exciting, but that's like near term. You know, my favorite, favorite, favorite series is Star Trek, right? You know, I really hope, and even Star Trek, like if I calculate the years, I wouldn't be alive, but I would really, really love to be in that world. Like, even if it's just at the beginning, like, you know, like voyage, like adventure one. So basically living in space. Yeah. With what robots, what are robots? With Data. What role? Data would have to be, even though that wasn't, you know, that was like later, but. So Data is a robot that has humanlike qualities. Right, without the emotion chip. Yeah. You don't like emotion. Well, so Data with the emotion chip was kind of a mess, right? It took a while for that thing to adapt, but, and so why was that an issue? The issue is that emotions make us irrational agents. That's the problem.
And yet he could think through things, even if it was based on an emotional scenario, right? Based on pros and cons. But as soon as you made him emotional, one of the metrics he used for evaluation was his own emotions, not people around him, right? Like, and so. We do that as children, right? So we're very egocentric when we're young. We are very egocentric. And so isn't that just an early version of the emotion chip then, I haven't watched much Star Trek. Except I have also met adults, right? And so that is a developmental process. And I'm sure there's a bunch of psychologists that can go through, like you can have a 60 year old adult who has the emotional maturity of a 10 year old, right? And so there's various phases that people should go through in order to evolve and sometimes you don't. So how much psychology do you think, a topic that's rarely mentioned in robotics, but how much does psychology come to play when you're talking about HRI, human robot interaction? When you have to have robots that actually interact with humans. Tons. So we, like my group, as well as I read a lot in the cognitive science literature, as well as the psychology literature. Because they understand a lot about human, human relations and developmental milestones and things like that. And so we tend to look to see what's been done out there. Sometimes what we'll do is we'll try to match that to see, is that human, human relationship the same as human robot? Sometimes it is, and sometimes it's different. And then when it's different, we have to, we try to figure out, okay, why is it different in this scenario? But it's the same in the other scenario, right? And so we try to do that quite a bit. Would you say that's, if we're looking at the future of human robot interaction, would you say the psychology piece is the hardest? Like if, I mean, it's a funny notion for you as, I don't know if you consider, yeah. I mean, one way to ask it, do you consider yourself a roboticist or a psychologist? Oh, I consider myself a roboticist that plays the act of a psychologist. But if you were to look at yourself sort of, 20, 30 years from now, do you see yourself more and more wearing the psychology hat? Another way to put it is, are the hard problems in human robot interactions fundamentally psychology, or is it still robotics, the perception manipulation, planning, all that kind of stuff? It's actually neither. The hardest part is the adaptation and the interaction. So it's the interface, it's the learning. And so if I think of, like I've become much more of a roboticist slash AI person than when I, like originally, again, I was about the bionics. I was electrical engineer, I was control theory, right? And then I started realizing that my algorithms needed like human data, right? And so then I was like, okay, what is this human thing? How do I incorporate human data? And then I realized that human perception had, like there was a lot in terms of how we perceive the world. And so trying to figure out how do I model human perception for my, and so I became a HRI person, human robot interaction person, from being a control theory and realizing that humans actually offered quite a bit. And then when you do that, you become more of an artificial intelligence, AI. And so I see myself evolving more in this AI world under the lens of robotics, having hardware, interacting with people. 
So you're a world class expert researcher in robotics, and yet others, you know, there's a few, it's a small but fierce community of people, but most of them don't take the journey into the H of HRI, into the human. So why did you brave into the interaction with humans? It seems like a really hard problem. It's a hard problem, and it's very risky as an academic. And I knew that when I started down that journey, that it was very risky as an academic in this world that was nascent, it was just developing. We didn't even have a conference, right, at the time. Because it was the interesting problems. That was what drove me. It was the fact that I looked at what interests me in terms of the application space and the problems. And that pushed me into trying to figure out what people were and what humans were and how to adapt to them. If those problems weren't so interesting, I'd probably still be sending rovers to glaciers, right? But the problems were interesting. And the other thing was that they were hard, right? So it's, I like having to go into a room and being like, I don't know what to do. And then going back and saying, okay, I'm gonna figure this out. I do not, I'm not driven when I go in like, oh, there are no surprises. Like, I don't find that satisfying. If that was the case, I'd go someplace and make a lot more money, right? I think I stay in academia and choose to do this because I can go into a room and be like, that's hard. Yeah, I think just from my perspective, maybe you can correct me on it, but if I just look at the field of AI broadly, it seems that human robot interaction has one of the largest numbers of open problems, especially relative to how few people are willing to acknowledge them, because most people are just afraid of the human so they don't even acknowledge how many open problems there are. But in terms of difficult problems to solve, exciting spaces, it seems to be incredible for that. It is, and it's exciting. You've mentioned trust before. What role does trust, from interacting with autopilot to the medical context, what role does trust play in human robot interactions? So some of the things I study in this domain are not just trust, but really over trust. How do you think about over trust? Like what is, first of all, what is trust and what is over trust? Basically, the way I look at it is, trust is not what you click on a survey, trust is about your behavior. So if you interact with the technology based on the decision or the actions of the technology as if you trust that decision, then you're trusting. And even in my group, we've done surveys that ask, do you trust robots? Of course not. Would you follow this robot in a burning building? Of course not. And then you look at their actions and you're like, clearly your behavior does not match what you think or what you think you would like to think. And so I'm really concerned about the behavior because that's really at the end of the day, when you're in the world, that's what will impact others around you. It's not whether before you went onto the street, you clicked on like, I don't trust self driving cars. Yeah, from an outsider perspective, it's always frustrating to me. Well, I read a lot, so I'm an insider in a certain philosophical sense. It's frustrating to me how often trust is used in surveys and how people make claims out of any kind of finding they make based on somebody clicking on an answer.
Yeah, trust is behavior, you said it beautifully. I mean, the action, your own behavior is what trust is. I mean, everything else is not even close. It's almost like absurd comedic poetry that you weave around your actual behavior. So some people can say they trust, you know, I trust my wife, husband or not, whatever, but the actions are what speak volumes. You bug their car, you probably don't trust them. I trust them, I'm just making sure. No, no, that's, yeah. Like even if you think about cars, I think it's a beautiful case. I came here at some point, I'm sure, on either Uber or Lyft, right? I remember when it first came out, right? I bet if they had had a survey, would you get in the car with a stranger and pay them? Yes. How many people do you think would have said, like, really? Wait, even worse, would you get in the car with a stranger at 1 a.m. in the morning to have them drop you home as a single female? Yeah. Like how many people would say, that's stupid. Yeah. And now look at where we are. I mean, people put kids, right? Like, oh yeah, my child has to go to school and yeah, I'm gonna put my kid in this car with a stranger. I mean, it's just fascinating how, like, what we think we think is not necessarily matching our behavior. Yeah, and certainly with robots, with autonomous vehicles and all the kinds of robots you work with, that's, it's, yeah, it's, the way you answer it, especially if you've never interacted with that robot before, if you haven't had the experience, you being able to respond correctly on a survey is impossible. But what do you, what role does trust play in the interaction, do you think? Like, is it good to, is it good to trust a robot? What does over trust mean? Or is it, is it good to kind of how you feel about autopilot currently, which is like, from a roboticist's perspective, is like, oh, still very cautious? Yeah, so this is still an open area of research, but basically what I would like in a perfect world is that people trust the technology when it's working 100%, and people will be hypersensitive and identify when it's not. But of course we're not there. That's the ideal world. But what we find is that people swing, right? They tend to swing, which means that if my first, and like, we have some papers, like first impressions are everything, right? If my first instance with technology, with robotics is positive, it mitigates any risk, it correlates with like best outcomes, it means that I'm more likely to either not see it when it makes some mistakes or faults, or I'm more likely to forgive it. And so this is a problem because technology is not 100% accurate, right? It's not 100% accurate, although it may be perfect. How do you get that first moment right, do you think? There's also an education about the capabilities and limitations of the system. Do you have a sense of how you educate people correctly in that first interaction? Again, this is an open ended problem. So one of the studies that has actually given me some hope, that I'm trying to figure out how to bring into robotics, there was a research study that showed, for medical AI systems giving information to radiologists, like, here you need to look at these areas on the X ray. What they found was that when the system provided one choice, there was this aspect of either no trust or over trust, right? Like I don't believe it at all, or a yes, yes, yes, yes. And they would miss things, right?
Instead, when the system gave them multiple choices, like here are the three, even if it knew, like, it had estimated that the top area you need to look at was some place on the X ray, if it gave like one plus others, the trust was maintained and the accuracy of the entire population increased, right? So basically it was a, you're still trusting the system, but you're also putting in a little bit of like, your human expertise, like your human decision processing into the equation. So it helps to mitigate that over trust risk. Yeah, so there's a fascinating balance to strike there. We haven't figured it out. Again, in robotics this is still open research. This is an exciting, open area of research, exactly. So what are some exciting applications of human robot interaction? You started a company, maybe you can talk about the exciting efforts there, but in general also in what other spaces can robots interact with humans and help? Yeah, so besides healthcare, cause you know, that's my biased lens. My other biased lens is education. I think that, well, one, we definitely, we in the US, you know, we're doing okay with teachers, but there's a lot of school districts that don't have enough teachers. If you think about the teacher student ratio for at least public education in some districts, it's crazy. It's like, how can you have learning in that classroom, right? Because you just don't have the human capital. And so if you think about robotics, bringing that into classrooms, as well as the afterschool space, where they offset some of this lack of resources in certain communities, I think that's a good place. And then turning to the other end is using these systems for workforce retraining and dealing with some of the things that are going to come out later on of job loss, like thinking about robots and AI systems for retraining and workforce development. I think those are exciting areas that can be pushed even more, and it would have a huge, huge impact. What would you say are some of the open problems in education, sort of, it's exciting. So young kids and the older folks or just folks of all ages who need to be retrained, who need to sort of open themselves up to a whole nother area of work. What are the problems to be solved there? How do you think robots can help? We have the engagement aspect, right? So we can figure out the engagement. That's not a... What do you mean by engagement? So identifying whether a person is focused, is like, that we can figure out. What we can also figure out, and there's some positive results in this, is personalized adaptation based on the concepts, right? So imagine, I think about, I have an agent and I'm working with a kid learning, I don't know, algebra two, can that same agent then switch and teach some type of new coding skill to a displaced mechanic? Like, what does that actually look like, right? Like hardware might be the same, content is different, two different target demographics of engagement. Like how do you do that? How important do you think personalization is in human robot interaction? And not just a mechanic or student, but like literally to the individual human being. I think personalization is really important, but a caveat is that I think we'd be okay if we can personalize to the group, right? And so if I can label you along some certain dimensions, then even though it may not be you specifically, I can put you in this group. So the sample size, this is how they best learn, this is how they best engage. Even at that level, it's really important.
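As an illustrative aside for readers, and not Howard's actual system: the "personalize to the group" idea can be read as labeling a learner along a few coarse dimensions, mapping them to a group, and reusing whatever engagement strategy has worked best for that group. A minimal sketch, where the dimensions, group names, and strategies are all invented:

```python
# Hypothetical sketch of group-level personalization: bucket a learner into a
# coarse group and look up the strategy that has historically engaged that
# group best. All labels, thresholds, and strategies are invented.
from dataclasses import dataclass

@dataclass
class Learner:
    prefers_visual: bool
    attention_span_minutes: int

def assign_group(learner: Learner) -> str:
    """Map a learner to a coarse group rather than building a fully individual model."""
    if learner.prefers_visual and learner.attention_span_minutes < 10:
        return "visual_short_sessions"
    if learner.prefers_visual:
        return "visual_long_sessions"
    return "text_based"

# Strategy that has worked best for each group (a made-up lookup table).
BEST_STRATEGY = {
    "visual_short_sessions": "five-minute animated examples with frequent quizzes",
    "visual_long_sessions": "longer worked video walkthroughs",
    "text_based": "step-by-step written exercises",
}

learner = Learner(prefers_visual=True, attention_span_minutes=7)
print(BEST_STRATEGY[assign_group(learner)])
```

The design point is the one made in the conversation: the system does not need a model of you specifically, only a group whose members tend to learn and engage the same way.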
And it's because, I mean, it's one of the reasons why educating in large classrooms is so hard, right? You teach to the median, but there's these individuals that are struggling and then you have highly intelligent individuals and those are the ones that are usually kind of left out. So highly intelligent individuals may be disruptive and those who are struggling might be disruptive because they're both bored. Yeah, and if you narrow the definition of the group or in the size of the group enough, you'll be able to address their individual, it's not individual needs, but really the most important group needs, right? And that's kind of what a lot of successful recommender systems do with Spotify and so on. So it's sad to believe, but as a music listener, probably in some sort of large group, it's very sadly predictable. You have been labeled. Yeah, I've been labeled and successfully so because they're able to recommend stuff that I like. Yeah, but applying that to education, right? There's no reason why it can't be done. Do you have a hope for our education system? I have more hope for workforce development. And that's because I'm seeing investments. Even if you look at VC investments in education, the majority of it has lately been going to workforce retraining, right? And so I think that government investments is increasing. There's like a claim and some of it's based on fear, right? Like AI is gonna come and take over all these jobs. What are we gonna do with all these nonpaying taxes that aren't coming to us by our citizens? And so I think I'm more hopeful for that. Not so hopeful for early education because it's still a who's gonna pay for it. And you won't see the results for like 16 to 18 years. It's hard for people to wrap their heads around that. But on the retraining part, what are your thoughts? There's a candidate, Andrew Yang running for president and saying that sort of AI, automation, robots. Universal basic income. Universal basic income in order to support us as we kind of automation takes people's jobs and allows you to explore and find other means. Like do you have a concern of society transforming effects of automation and robots and so on? I do. I do know that AI robotics will displace workers. Like we do know that. But there'll be other workers that will be defined new jobs. What I worry about is, that's not what I worry about. Like will all the jobs go away? What I worry about is the type of jobs that will come out. Like people who graduate from Georgia Tech will be okay. We give them the skills, they will adapt even if their current job goes away. I do worry about those that don't have that quality of an education. Will they have the ability, the background to adapt to those new jobs? That I don't know. That I worry about, which will create even more polarization in our society, internationally and everywhere. I worry about that. I also worry about not having equal access to all these wonderful things that AI can do and robotics can do. I worry about that. People like me from Georgia Tech from say MIT will be okay, right? But that's such a small part of the population that we need to think much more globally of having access to the beautiful things, whether it's AI in healthcare, AI in education, AI in politics, right? I worry about that. And that's part of the thing that you were talking about is people that build the technology have to be thinking about ethics, have to be thinking about access and all those things. And not just a small subset. 
Let me ask some philosophical, slightly romantic questions. People that listen to this will be like, here he goes again. Okay, do you think one day we'll build an AI system that a person can fall in love with and it would love them back? Like in the movie, Her, for example. Yeah, although she kind of didn't fall in love with him, or she fell in love with like a million other people, something like that. You're the jealous type, I see. We humans are the jealous type. Yes, so I do believe that we can design systems where people would fall in love with their robot, with their AI partner. That I do believe. Because it's actually, and I don't like to use the word manipulate, but as we see, there are certain individuals that can be manipulated if you understand the cognitive science about it, right? Right, so I mean, if you think of all close relationships and love in general as a kind of mutual manipulation, that dance, the human dance. I mean, manipulation has a negative connotation. And that's why I don't like to use that word particularly. I guess another way to phrase it is, what you're getting at is it could be algorithmized or something, it could be a... The relationship building part can be. I mean, just think about it. We have, and I don't use dating sites, but from what I heard, there are some individuals that have been dating and have never seen each other, right? In fact, there's a show I think that tries to like weed out fake people. Like there's a show that comes out, right? Because like people start faking. Like, what's the difference if that person on the other end is an AI agent, right? And having a communication and you building a relationship remotely, like there's no reason why that can't happen. In terms of human robot interaction, so what role, you've kind of mentioned with Data that emotion can be problematic if not implemented well, I suppose. What role does emotion and some other human like things, the imperfect things, come into play here for good human robot interaction and something like love? Yeah, so in this case, and you had asked, can an AI agent love a human back? I think they can emulate love back, right? And so what does that actually mean? It just means that if you think about their programming, they might put the other person's needs in front of theirs in certain situations, right? You look at, think about it as a return on investment. Like, what's my return on investment? As part of that equation, that person's happiness has some type of algorithmic weighting to it. And the reason why is because I care about them, right? That's the only reason, right? But if I care about them and I show that, then my final objective function is length of time of the engagement, right? So you can think of how to do this actually quite easily. And so. But that's not love? Well, so that's the thing. I think it emulates love because we don't have a classical definition of love. Right, but, and we don't have the ability to look into each other's minds to see the algorithm. And I mean, I guess what I'm getting at is, is it possible that, especially if that's learned, especially if there's some mystery and black box nature to the system, how is that, you know? How is it any different? How is it any different in terms of, sort of, if the system says, I'm conscious, I'm afraid of death, and it does indicate that it loves you. Another way to sort of phrase it, I'd be curious to see what you think. Do you think there'll be a time when robots should have rights?
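As an illustrative aside, and not Howard's actual formulation: the "return on investment" framing above can be read as a simple weighted objective in which the partner's estimated happiness enters the agent's per-step reward, while the quantity ultimately being maximized is how long the engagement lasts. A minimal sketch, with every name and weight hypothetical:

```python
# Hypothetical sketch of the objective described above: the partner's estimated
# happiness is weighted into the agent's per-step reward, and the final
# objective is the length of the engagement. All values are invented.

def step_reward(partner_happiness: float, w_happiness: float = 0.8) -> float:
    """Per-interaction reward, dominated by the partner's estimated happiness."""
    return w_happiness * partner_happiness

def engagement_objective(happiness_trace: list[float]) -> float:
    """Final objective: reward accumulated for as long as the person stays engaged
    (modeled here as estimated happiness staying positive)."""
    total = 0.0
    for h in happiness_trace:
        if h <= 0.0:  # the person disengages and the episode ends
            break
        total += step_reward(h)
    return total

# A relationship that stays positive accumulates more objective value than one
# that collapses early.
print(engagement_objective([0.9, 0.8, 0.85, 0.9]))   # engaged throughout
print(engagement_objective([0.9, -0.1, 0.8, 0.9]))   # disengages after one step
```

The design choice worth noticing is that the partner's happiness matters only as a term that keeps the engagement going, which is exactly the distinction the conversation then pokes at when asking whether emulated love counts as love.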
You've kind of phrased the robot in a very roboticist way, and just a really good way, but saying, okay, well, there's an objective function and I could see how you can create a compelling human robot interaction experience that makes you believe that the robot cares for your needs and even something like loves you. But what if the robot says, please don't turn me off? What if the robot starts making you feel like there's an entity, a being, a soul there, right? Do you think there'll be a future, hopefully you won't laugh too much at this, but where they do ask for rights? So I can see a future if we don't address it in the near term where these agents, as they adapt and learn, could say, hey, this should be something that's fundamental. I hopefully think that we would address it before it gets to that point. So you think that's a bad future? Is that a negative thing, where they ask, we're being discriminated against? I guess it depends on what role they have attained at that point, right? And so if I think about now. Careful what you say, because the robots 50 years from now will be listening to this and you'll be on TV saying, this is what roboticists used to believe. Well, right? And so this is my, and as I said, I have a biased lens and my robot friends will understand that. So if you think about it, and I actually put this in kind of the, as a roboticist, you don't necessarily think of robots as human with human rights, but you could think of them either in the category of property, or you can think of them in the category of animals, right? And so both of those have different types of rights. So animals have their own rights as a living being, but they can't vote, they can't write, they can be euthanized, but as humans, if we abuse them, we go to jail, right? So they do have some rights that protect them, but don't give them the rights of like citizenship. And then if you think about property, the rights are associated with the person, right? So if someone vandalizes your property or steals your property, like there are some rights, but it's associated with the person who owns that. If you think about it back in the day, and if you remember, we talked about how society has changed, women were property, right? They were not thought of as having rights. They were thought of as property of, like their... Yeah, assaulting a woman meant assaulting the property of somebody else. Exactly, and so what I envision is that we will establish some type of norm at some point, but that it might evolve, right? Like if you look at women's rights now, like there are still some countries that don't have them, and the rest of the world is like, why, that makes no sense, right? And so I do see a world where we do establish some type of grounding. It might be based on property rights, it might be based on animal rights. And if it evolves that way, I think we will have this conversation at that time, because that's the way our society traditionally has evolved. Beautifully put. Just out of curiosity, Anki, Jibo, Mayfield Robotics with their robot Kuri, SciFutures, Rethink Robotics, all these amazing robotics companies led, created by incredible roboticists, have all gone out of business recently. Why do you think they didn't last long? Why is it so hard to run a robotics company, especially one like these, which are fundamentally HRI, human robot interaction robots? Or personal robots? Each one has a story. Only one of them I don't understand, and that was Anki.
That's actually the only one I don't understand. I don't understand it either. No, no, I mean, I look like from the outside, I've looked at their sheets, I've looked at the data that's. Oh, you mean like business wise, you don't understand, I got you. Yeah. Yeah, and like I look at all, I look at that data, and I'm like, they seem to have like product market fit. Like, so that's the only one I don't understand. The rest of it was product market fit. What's product market fit? Just that of, like how do you think about it? Yeah, so although Rethink Robotics was getting there, right? But I think it's just the timing, it just, their clock just timed out. I think if they'd been given a couple more years, they would have been okay. But the other ones were still fairly early by the time they got into the market. And so product market fit is, I have a product that I wanna sell at a certain price. Are there enough people out there, the market, that are willing to buy the product at that market price for me to be a functional, viable, profit bearing company? Right? So product market fit. If it costs you a thousand dollars and everyone wants it and only is willing to pay a dollar, you have no product market fit. Even if you could sell, you know, enough at a dollar, cause you can't. So how hard is it for robots? Sort of, maybe if you look at iRobot, the company that makes Roombas, vacuum cleaners, can you comment on, did they find the right product market fit? Like, are people willing to pay for robots is also another kind of question underlying all this. So if you think about iRobot and their story, right? Like when they first, they had enough of a runway, right? When they first started, they weren't doing vacuum cleaners, right? They were contracts primarily, government contracts, designing robots. Or military robots. Yeah, I mean, that's what they were. That's how they started, right? And then. They still do a lot of incredible work there. But yeah, that was the initial thing that gave them enough funding to. To then try to, the vacuum cleaner is what I've been told was not like their first rendezvous in terms of designing a product, right? And so they were able to survive until they got to the point that they found a product price market fit, right? And even with, if you look at the Roomba, the price point now is different than when it was first released, right? It was an early adopter price, but they found enough people who were willing to fund it. And I mean, I forgot what their loss profile was for the first couple of years, but they became profitable in sufficient time that they didn't have to close their doors. So they found the right, there's still people willing to pay a large amount of money, so over $1,000 for a vacuum cleaner. Unfortunately for them, now that they've proved everything out, figured it all out, now there's competitors. Yeah, and so that's the next thing, right? The competition, and they have quite a number, even internationally. Like there's some products out there, you can go to Europe and be like, oh, I didn't even know this one existed. So this is the thing though, like with any market, I would, this is not a bad time, although as a roboticist, it's kind of depressing, but I actually think about things like with, I would say that all of the companies that are now in the top five or six, they weren't the first to the stage, right? Like Google was not the first search engine, sorry, AltaVista, right? Facebook was not the first, sorry, MySpace, right?
Like think about it, they were not the first players. Those first players, like they're not in the top five, 10 of Fortune 500 companies, right? They proved, they started to prove out the market, they started to get people interested, they started the buzz, but they didn't make it to that next level. But the second batch, right? The second batch, I think might make it to the next level. When do you think the Facebook of robotics? The Facebook of robotics. Sorry, I take that phrase back, because people deeply, for some reason, well, I know why, but I think it's exaggerated, distrust Facebook because of the privacy concerns and so on. And with robotics, one of the things you have to make sure of, all the things we talked about, is to be transparent and have people deeply trust you to let a robot into their lives, into their home. When do you think the second batch of robots will come? Is it five, 10 years, 20 years that we'll have robots in our homes and robots in our hearts? So if I think about, and because I try to follow the VC kind of space in terms of robotic investments, and right now, and I don't know if they're gonna be successful, I don't know if this is the second batch, but there's only one batch that's focused on like the first batch, right? And then there's all these self driving Xs, right? And so I don't know if they're a first batch of something or if like, I don't know quite where they fit in, but there's a number of companies, the co robot, I call them co robots, that are still getting VC investments. Some of them have some of the flavor of like Rethink Robotics. Some of them have some of the flavor of like Kuri. What's a co robot? So basically a robot and human working in the same space. So some of the companies are focused on manufacturing, so having a robot and human working together in a factory. Some of these co robots are robots and humans working in the home, working in clinics. Like there's different versions of these companies in terms of their products, but they're all, so Rethink Robotics would be like one of the first, at least well known, companies focused on this space. So I don't know if this is a second batch or if this is still part of the first batch, that I don't know. And then you have all these other companies in this self driving space. And I don't know if that's a first batch or again, a second batch. Yeah. So there's a lot of mystery about this now. Of course, it's hard to say that this is the second batch until it proves out, right? Correct. Yeah, we need a unicorn. Yeah, exactly. Why do you think people are so afraid, at least in popular culture, of legged robots like those worked on at Boston Dynamics, or just robotics in general? If you were to psychoanalyze that fear, what do you make of it? And should they be afraid, sorry? So should people be afraid? I don't think people should be afraid. But with a caveat, I don't think people should be afraid given that most of us in this world understand that we need to change something, right? So given that. Now, if things don't change, be very afraid. What is the dimension of change that's needed? So changing, thinking about the ramifications, thinking about like the ethics, thinking about like the conversations that are going on, right? It's no longer a, we're gonna deploy it and forget that this is a car that can kill pedestrians that are walking across the street, right? We're not in that stage. We're putting these out on the roads. There are people out there. A car could be a weapon.
Like people are now, solutions aren't there yet, but people are thinking about this as we need to be ethically responsible as we send these systems out, robotics, medical, self driving. And military too. And military. Which is not as often talked about, but it's really where probably these robots will have a significant impact as well. Correct, correct. Right, making sure that they can think rationally, even having the conversations, who should pull the trigger, right? But overall you're saying if we start to think more and more as a community about these ethical issues, people should not be afraid. Yeah, I don't think people should be afraid. I think that the return on investment, the impact, positive impact will outweigh any of the potentially negative impacts. Do you have worries of existential threats of robots or AI that some people kind of talk about and romanticize about in the next decade, the next few decades? No, I don't. Singularity would be an example. So my concept is that, so remember, robots, AI, is designed by people. It has our values. And I always correlate this with a parent and a child. So think about it, as a parent, what do we want? We want our kids to have a better life than us. We want them to expand. We want them to experience the world. And then as we grow older, our kids think and know they're smarter and better and more intelligent and have better opportunities. And they may even stop listening to us. They don't go out and then kill us, right? Like, think about it. It's because we, it's instilled in them values. We instilled in them this whole aspect of community. And yes, even though you're maybe smarter and have more money and dah, dah, dah, it's still about this love, caring relationship. And so that's what I believe. So even if like, you know, we've created the singularity in some archaic system back in like 1980 that suddenly evolves, the fact is it might say, I am smarter, I am sentient. These humans are really stupid, but I think it'll be like, yeah, but I just can't destroy them. Yeah, for sentimental value. It's still just to come back for Thanksgiving dinner every once in a while. Exactly. That's such, that's so beautifully put. You've also said that The Matrix may be one of your more favorite AI related movies. Can you elaborate why? Yeah, it is one of my favorite movies. And it's because it represents kind of all the things I think about. So there's a symbiotic relationship between robots and humans, right? That symbiotic relationship is that they don't destroy us, they enslave us, right? But think about it, even though they enslaved us, they needed us to be happy, right? And in order to be happy, they had to create this cruddy world that they then had to live in, right? That's the whole premise. But then there were humans that had a choice, right? Like you had a choice to stay in this horrific, horrific world where it was your fantasy life with all of the anomalies, perfection, but not accurate. Or you can choose to be on your own and like have maybe no food for a couple of days, but you were totally autonomous. And so I think of that as, and that's why. So it's not necessarily us being enslaved, but I think about us having the symbiotic relationship. Robots and AI, even if they become sentient, they're still part of our society and they will suffer just as much as we. And there will be some kind of equilibrium that we'll have to find some symbiotic relationship. 
Right, and then you have the ethicists, the robotics folks that are like, no, this has got to stop, I will take the other pill in order to make a difference. So if you could hang out for a day with a robot, real or from science fiction, movies, books, safely, and get to pick his or her, their brain, who would you pick? Gotta say it's Data. Data. I was gonna say Rosie, but I'm not really interested in her brain. I'm interested in Data's brain. Data pre or post emotion chip? Pre. But don't you think it'd be a more interesting conversation post emotion chip? Yeah, it would be drama. And I'm human, I deal with drama all the time. But the reason why I wanna pick Data's brain is because I could have a conversation with him and ask, for example, how can we fix this ethics problem? And he could go through like the rational thinking and through that, he could also help me think through it as well. And so there's like these fundamental questions I think I could ask him that he would help me also learn from. And that fascinates me. I don't think there's a better place to end it. Ayanna, thank you so much for talking to us, it was an honor. Thank you, thank you. This was fun. Thanks for listening to this conversation and thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast, you'll get $10 and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to become future leaders and innovators. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, follow on Spotify, support on Patreon or simply connect with me on Twitter. And now let me leave you with some words of wisdom from Arthur C. Clarke. Whether we are based on carbon or on silicon makes no fundamental difference. We should each be treated with appropriate respect. Thank you for listening and hope to see you next time.
Ayanna Howard: Human-Robot Interaction & Ethics of Safety-Critical Systems | Lex Fridman Podcast #66
The following is a conversation with Paul Krugman, Nobel Prize winner in economics, professor at CUNY, and columnist at the New York Times. His academic work centers around international economics, economic geography, liquidity traps, and currency crises. But he also is an outspoken writer and commentator on the intersection of modern day politics and economics, which places him in the middle of the tense, divisive, modern day political discourse. If you have clicked dislike on this video and started writing a comment of derision before listening to the conversation, I humbly ask that you please unsubscribe from this channel and from this podcast. Not because you're a conservative, a libertarian, a liberal, a socialist, an anarchist, but because you're not open to new ideas, at least in this case, especially at its most difficult, from people with whom you largely disagree. I do my best to stay away from politics of the day because political discourse is filled with a degree of emotion and self assured certainty that to me is not conducive to exploring questions that nobody knows the definitive right answer to. The role of government, the impact of automation, the regulation of tech, the medical system, guns, war, trade, foreign policy, are not easy topics and have no clear answers, despite the certainty of the so called experts, the pundits, the trolls, the media personalities, and the conspiracy theorists. Please listen, empathize, and allow yourself to explore ideas with curiosity and without judgment and without derision. I will speak with many more economists and political thinkers, trying to stay away from the political battles of the day and instead look at the long arc of history and the lessons it reveals. In this, I appreciate your patience and support. This show is presented by Cash App, the number one finance app in the App Store. Cash App lets you send money to friends, buy bitcoin, and invest in the stock market with fractional share trading, allowing you to buy one dollar's worth of a stock no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. Get Cash App from the App Store and Google Play and use the code LEXPODCAST. You'll get ten dollars and Cash App will also donate ten dollars to FIRST, one of my favorite organizations that is helping to advance robotics and STEM education for young people around the world. Since Cash App does fractional share trading, let me say that to me, it's a fascinating concept. The order execution algorithm that works behind the scenes to create the abstraction of fractional orders for the investor is an algorithmic marvel, so big props to the Cash App engineers for that. I like it when tech teams solve complicated problems to provide, in the end, a simple, effortless interface that abstracts away all the details of the underlying algorithm. And now, here's my conversation with Paul Krugman. What does a perfect world, a utopia, from an economics perspective look like? Wow, I don't really, I don't believe in perfection. I mean, somebody once said that his ideal was slightly imaginary Sweden. I mean, I like an economy that has a really high safety net for people, good environmental regulation, and something that's kind of like some of the better run countries in the world, but with fixing all of the smaller things that are wrong with them. What about wealth distribution?
Well, obviously, you know, total equality is neither possible nor, I think, especially desirable, but I think you want one where, basically one where nobody is hurting and where everybody lives in the same material universe. Everybody is basically living in the same society, so I think it's a bad thing to have people who are so wealthy that they're really not in the same world as the rest of us. What about competition? Do you see the value of competition? What may be its limits? Oh, competition is great when it can work. I mean, I remember, I'm old enough to remember when there was only one phone company and there was really limited choice, and I think the arrival of multiple phone carriers and all that has actually, you know, it's been a really good thing, and that's true across many areas, but not every industry is, not every activity is suitable for competition. So, there are some things like healthcare where competition actually doesn't work, and so it's not one size fits all. That's interesting. Why does competition not work in healthcare? Oh, there's a long list. I mean, there's a famous paper by Kenneth Arrow from 1963, which still holds up very well, where he kind of runs down the list of things you need for competition to work well. Basically, both sides to every transaction being well informed, having the ability to make intelligent decisions, understanding what's going on, and healthcare fails on every dimension. Healthcare, so not health insurance, healthcare. Well, both healthcare and health insurance, health insurance being part of it, but no, health insurance is really the idea that there's effective competition between health insurers is wrong, and healthcare, I mean, the idea that you can comparison shop for major surgery is just, you know, when people say things like that, you wonder, are you living in the same world I'm living in? You know, that piece of well informed, that was always an interesting piece for me, just observing as an outsider, because so much beautiful, such a beautiful world is possible when everybody's well informed. My question for you is, how hard is it to be well informed about anything, whether it's healthcare, or any kind of purchasing decisions, or just life in general in this world? Oh, information, you know, it varies hugely. I mean, there's more information at your fingertips than ever before in history. The trouble is, first of all, that some of that information isn't true. So it's really hard. And then some of it is just too hard to understand. So if I'm buying a car, I can actually probably do a pretty good job of looking up, you know, going to consumer reports, reviews, you can get a pretty good idea of what you're getting when you get a car. If I'm going in for surgery, first of all, you know, fairly often it happens without your being able to plan it. But also, there's a, you know, medical school takes many, many years, and going on the internet for some advice is not usually a very good substitute. So speaking about news and not being able to trust certain sources of information, how much disagreement is there about, I mentioned utopia, perfection in the beginning, but how much disagreement is there about what utopia looks like? Or is most of the disagreement simply about the path to get there? Oh, I think there's two levels of disagreement. One, maybe not utopia, but justice. You know, what is a just society? And that's, there are different views. 
I mean, I teach my students that there are, you know, broadly speaking, two views of justice. One focuses on outcomes. You ask yourself, a just society is the one you would choose, the one that you would choose to live in if you didn't know who you were going to be. That's kind of John Rawls. And the other focuses on process, that a just society is one in which there is no coercion except where absolutely necessary. And there's no objective way to choose between those. I'm pretty much a Rawlsian, and I think many people are. But anyway, so there's a legitimate dispute about what we mean by a just society anyway. But then there's also a lot of dispute about what actually works. There's a range of legitimate dispute. I mean, any card carrying economist will say that incentives matter, but how much do they matter? How much does a higher tax rate actually deter people from working? How much does a stronger safety net actually lead people to get lazy? I have a pretty strong view that the evidence points to conclusions that are considerably to the left of where most of our politicians are. But there is legitimate room for disagreement on those things. So you've mentioned outcomes. What are some metrics you think about that you keep in mind, like the Gini coefficient, but really anything that measures how good we're doing, whatever we're trying to do? What are the metrics you keep an eye on? Well, actually I'm not a fan of the Gini coefficient, not because... What is the Gini coefficient? Yeah, the Gini coefficient is a measure of inequality, and it is commonly used because it's a single number. It usually tracks with other measures, but the trouble is there's no sort of natural interpretation of it. If you ask me what does a society with a Gini of 0.45 look like as opposed to a society with a Gini of 0.25? I can kind of tell you, you know, when 0.25 is Denmark and 0.45 is Brazil, but there's no sort of easy way to do that mapping. I mean, I look at things like, first of all, what is the income of the median family? What is the income of the top 1%? How many people are in poverty by various measures of poverty? And then I think you want to look at questions like how healthy are people? How is life expectancy doing? And how satisfied are people with their lives? Because, and that sounds like a squishy number, not so much happiness, it turns out that life satisfaction is a better measure than happiness. But life satisfaction, that varies quite a lot. And I think it's meaningful, if not too rigorous, to say, look, according to polling, people in Denmark are pretty satisfied with their lives and people in the United States, not so much so. And of course, Sweden wins every time. No, actually Denmark wins these days. Denmark and Norway tend to win these days. Sweden doesn't do badly. None of these are perfect. But look, I think by and large, there's a bit of a pornography test. How do you know a decent society? Well, you kind of know it when you see it. Right. Where does America stand on that? We have a remarkable... There are a lot of virtues to America, but there's a level of harshness, brutality, an ability for somebody who just has bad luck to fall off the edge, that really shouldn't be happening in a country as rich as ours. So we have somehow managed to produce a crueller society than almost any other wealthy country for no good reason. What do you think is lacking in the safety net that the United States provides?
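For readers who want the Gini coefficient made concrete: it can be computed from any list of incomes as the mean absolute difference across all pairs, scaled by twice the mean income. A minimal sketch; both income lists below are invented for illustration and are not actual Danish or Brazilian data:

```python
# Illustrative Gini coefficient: 0 means everyone has the same income,
# values near 1 mean one person has almost everything.
# Both income lists are invented for illustration only.

def gini(incomes: list[float]) -> float:
    """Mean absolute difference over all pairs, divided by twice the mean income."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_diff = sum(abs(a - b) for a in incomes for b in incomes)
    return total_diff / (2 * n * n * mean)

fairly_equal = [30, 35, 40, 45, 50, 55, 60, 65]   # hypothetical low-inequality society
very_unequal = [5, 6, 7, 8, 10, 12, 40, 300]      # hypothetical high-inequality society

print(round(gini(fairly_equal), 2))   # about 0.14
print(round(gini(very_unequal), 2))   # about 0.73
```

This also illustrates the complaint above: the single number summarizes the whole distribution, but by itself it does not tell you what the society actually looks like.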
You said there's a harshness to it. And what are the benefits and maybe limits of a safety net in a country like ours? Well, every other advanced country has some universal guarantee of adequate healthcare. The United States is the only place where citizens can actually fail to get basic healthcare because they can't afford it. It's not hard to do. Everybody else does it, but we don't. We've gotten a little bit better at it than we were, but still, that's a big deal. We have remarkably weak support for children. Most countries have substantial safety... Parents of young children get much more support elsewhere. They get often nothing in the US. We have limited care for people. Long term care for the elderly is a very hit and miss thing. But I think that the really big issues are that we don't take care of children who make the mistake of having the wrong parents, and we don't take care of people who make the mistake of getting sick. And those are things that a rich country should be doing. Sorry for sort of a difficult question, but what you just said kind of feels like the right thing to do in terms of a just society. But is it also good for the economic health of society to take care of the people who are the unfortunate members of society? By and large, it looks like doing the right thing in terms of justice is also the right thing in terms of economics. If we're talking about a society that has extremely high tax rates that deter, remove all incentives to provide a safety net that is so generous that why bother working or striving, that could be a problem. But I don't actually know any society that looks like that. Even in European countries with very generous safety nets, people work and innovate and do all of these things. And there's a lot of evidence now that lacking those basics is actually destructive, that children who grow up without adequate health care, without adequate nutrition, are developmentally challenged. They don't live up to their potential as adults. So the United States actually probably pays a price. We're harsh, we're cruel, and we actually make ourselves poorer as a society, not just the individuals, by being so harsh and cruel. Okay, so Invisible Hand, Smith, where does that fit in? The power of just people acting selfishly and somehow everything taking care of itself to where the economy grows, nobody, there's no cruelty, no injustice, that the markets regulate themselves. Is there power to that idea and what are its limits? There's a lot of power to that. I mean, there's a reason why I don't think sensible people want the government running steel mills or they want the government to own the farms, right? The markets are a pretty effective way of getting incentives aligned, of inducing people to do stuff that works. And the Invisible Hand is saying that farmers aren't growing crops because they want to feed people, they're growing crops because they can make money by it, but it actually turns out to be a pretty good way of getting an agrarian of getting agricultural products grown. So the Invisible Hand is an important part, but there's nothing mystical about it. It's a mechanism, it's a way to organize economic activity, which works well given a bunch of preconditions, which means that it actually works well for agriculture, it works well for manufacturing, it works well for many services, it doesn't work well for healthcare, it doesn't work well for education. 
So having a society which is kind of three quarters Invisible Hand and one quarter Visible Hand, something on that order, seems to be the balance that works best. You don't want to romanticize or make something mystical out of it. This is one way to organize stuff that happens to have broad but not universal application. So then forgive me for romanticizing it, but it does seem pretty magical. I kind of have an intuitive understanding of what happens when you have like five, ten, maybe even a hundred people together, the dynamics of that. But the fact that this large society of people, for the most part acting in a self interested way and maybe electing representatives for themselves, that it all kind of seems to work, it's pretty magical. The fact that right now there's a wide assortment of fresh fruit and vegetables in the local markets up and down the street, who's planning that? And the answer is nobody. That's the Invisible Hand at work and that's great. And that's a lesson that Adam Smith figured out more than 200 years ago and it continues to apply. But even Adam Smith has a section in his book about why it's important to regulate banks. So the Invisible Hand has its limits. And that example is actually a powerful one in terms of the supermarket and fruit. That was my experience coming from Russia, from the Soviet Union, when I first entered a supermarket and just seeing the assortment of fruit, bananas. I don't think I'd seen bananas before, first of all, but just the selection of fresh fruit was just mind blowing. Beyond words and the fact that, like you said, I don't know what made that happen. Well, there is some magic to the market. I'm showing my age, but the old movie quote, sometimes the magic works and sometimes it doesn't. And you have to have some idea of when it doesn't. So how do you get regulation? What can government at its best do? Government, strangely enough in this country today, seems to get a bad rap. Everybody's against the government. Yeah. Well, a lot of money has been spent on making people hate the government. But the reality is government does some things pretty well. I mean, government does health insurance pretty well. So much so, I mean, given our anti government bias, it really is true that there are people out there saying, don't let the government get its hands on Medicare. So people actually love the government health insurance program far more than they love private health insurance. Basic education. It turns out that your local public high school is the right place to have students trained, and private, certainly for profit, education is by and large a nightmare of rip offs and grift and people not getting what they thought they were paying for. It's a judgment call. And it's funny, there are things, I mean, everybody talks about the DMV as being, do you want the economy run like the DMV? Actually, my experiences with the DMV have always been positive. Maybe I'm just going to the right DMVs, but in fact, a lot of government works pretty well. So to some extent, you can do these things on a priori grounds. You can talk about the logic of why healthcare is not going to be handled well by the market, but partly it's just experience. We tried, or at least some countries have tried, nationalizing their steel industries, that didn't go well. But we've tried privatizing education and that didn't go well. So you find out what works. What about this new world of tech? How do you see, what do you think works for tech? Is it more regulation or less regulation?
There are some things that need more regulation. I mean, we're finding out that the world of social media is one in which competitive forces aren't working very well and trusting the companies to regulate themselves isn't working very well. But I'm on the whole a tech skeptic, not in the sense that I think the tech doesn't work and it doesn't do stuff, but the idea that we're living through greater technological change than ever before is really an illusion. Ever since the beginning of the industrial revolution, we've had a series of epochal shifts in the nature of work and in the kinds of jobs that are available. And it's not at all clear that what's happening now is any bigger or faster or harder to cope with than past shocks. It is a popular notion in today's sort of public discourse that automation is going to have a huge impact on the job market now, that there is something transformational happening now. Can you talk about that, maybe elaborate a little bit more? Do you not see the software revolutions happening now with machine learning, availability of data, that kind of automation, being able to sort of process, clean, find patterns in data, and you don't see that disrupting any one sector to a point where there's a huge loss of jobs? There may be some things. I mean, actually, you know, translators, there's really reduced demand for translators because machine translation ain't perfect, but it ain't bad. There are some kinds of things that are changed, but overall productivity growth has actually been slow in recent years. It's been much slower than in some past periods. So the idea that automation is taking away all the jobs, the counterpart would be that we would be able to produce stuff with many fewer workers than before, and that's not happening. There are a few isolated sectors. There are some kinds of jobs that are going away, but that keeps on happening. I mean, New York City used to have thousands and thousands of longshoremen taking stuff off ships and putting them on ships. They're almost all gone now. Now you have the giant cranes taking containers on and off ships in Elizabeth, New Jersey. That's not robots. It doesn't sound high tech, but it actually pretty much destroyed an occupation. Well, you know, it wasn't fun for the longshoremen, to say the least. But it's not, we coped, we moved on, and that sort of thing happens all the time. You mean farmers. We used to be a nation which was mostly farmers. There are now very few farmers left. And the reason is not that we've stopped eating, it's that farming has become so efficient that we don't need a lot of farmers, and we coped with that too. So the idea that there's something qualitatively different about what's happening now so far isn't true. So your intuition is there are going to be lots of jobs, but it's just the thing that just continues. There's nothing qualitatively different about this moment. Some jobs will be lost, others will be created, as has always been the case so far. I mean, maybe there's a singularity. Maybe there's a moment when the machines get smarter than we are, and Skynet kills us all or something, right? But that's not visible in anything we're seeing now. You mentioned the metric of productivity. Could you explain that a little bit? Because it's a really interesting one. I've heard you mention that before in connection with automation. So what is that metric? And if there is something qualitatively different, what should we see in that metric? Well, okay, productivity. First of all, production.
We do have a measure of the economy's total production, you know, real GDP, which is itself a little bit of a construct because it's quite literally adding apples and oranges. So we have to add together various things, which we basically do by using market prices, but we try to adjust for inflation. But it's a reasonable measure of how much the economy is producing. Sorry to interrupt. Is it goods and services? It's goods and services. It's everything. Okay. Productivity is, you divide that total output by the number of hours worked. So we're basically asking how much stuff does the average worker produce in an hour of work. And if you're seeing really rapid technological progress, then you'd expect to see productivity rising at a rapid clip, which we did, you know, for the generation after World War II, productivity rose, you know, 2% a year on a sustained basis. Then it dropped down for a while. Then there was kind of a decade of fairly rapid growth from the mid nineties to the mid two thousands. And then it dropped off again, and it's not impressive right now. So you're just not seeing an epochal shift in the economy. So let me then ask you about the psychology of blaming automation. A few months ago, you wrote in the New York Times, quote, the other day I found myself, as I often do at a conference, discussing lagging wages and soaring inequality. There was a lot of interesting discussion, but one thing that struck me was how many of the participants just assumed that robots are a big part of the problem, that machines are taking away the good jobs or even jobs in general. For the most part, this wasn't even presented as a hypothesis, just as part of what everyone knows. So maybe you can psychoanalyze the public intellectuals or economists, or us actually in the general public. Why is this happening? Why has this assumption just infiltrated public discourse? There's a couple of things. One is that the particular technologies that are advancing now are ones that are a lot more visible to the chattering class. When containerization did away with the jobs of longshoremen, well, not a whole lot of college professors are close friends with longshoremen. And so we see this one. Then there's a second thing, which is, we just went through a severe financial crisis and a period of very high unemployment. It's finally come down. There's really no question that that high unemployment was about macroeconomics. It was about a failure of demand. But macroeconomics is really not intuitive. I mean, people just have a hard time wrapping their minds around it. And among other things, people have a hard time believing that something as trivial as, well, people who just aren't spending enough can lead to the kind of mass misery that we saw in the 1930s or the not quite so severe, but still serious misery that we saw after 2008. And there's always a tendency to say it must be something big. It must be technological change. That means we don't need workers anymore. There was a lot of that in the 30s. And that same thing happened after 2008, the assumption that it has to be some deep cause, not something as trivial as a failure of investor confidence and inadequate monetary and fiscal response. And the last thing on wages. A lot of what's happened on wages is at some level political. It's the collapse of the union movement. It's policies that have squeezed workers' bargaining power. 
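To make the productivity measure described above concrete, here is a minimal sketch in Python. The GDP and hours figures are made-up placeholders rather than real statistics, and this illustrates only the definition of output per hour, not any official methodology.

```python
# Minimal sketch of the productivity measure: output per hour worked,
# and its average annual growth rate. All numbers are made-up placeholders.

def productivity(real_gdp: float, hours_worked: float) -> float:
    """Output per hour worked."""
    return real_gdp / hours_worked

def annual_growth_rate(start_value: float, end_value: float, years: int) -> float:
    """Compound average growth rate per year."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical economy: real GDP in billions of dollars, hours in billions.
prod_start = productivity(real_gdp=15_000, hours_worked=230)
prod_end = productivity(real_gdp=19_000, hours_worked=260)

growth = annual_growth_rate(prod_start, prod_end, years=10)
print(f"output per hour: {prod_start:.1f} -> {prod_end:.1f}")
print(f"average productivity growth: {growth:.2%} per year")
```

With these invented numbers, productivity grows a bit over 1% a year, which is the kind of pace Krugman characterizes as unimpressive next to the postwar 2% figure.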
And for kind of obvious reasons, there are a lot of influential people who don't want to hear that story. They want it to be an inevitable force of nature. Technology has made it impossible to have people earn middle class wages. And so they don't like the story that says, actually, no, it's kind of the political decisions that we made that have caused this income stagnation. And so they're a receptive audience for technological determinism. So what comes first in your view, the economy or politics, in terms of what has impact on the other? Oh, look, everything interacts. That's one of the rules that I was taught in economics. Everything affects everything else in at least two ways. I mean, clearly the economy drives a lot of political stuff. But also clearly politics has a huge impact on the economy. We look at the decline of unions in America and say, well, the world has changed and unions don't have a role. But two thirds of workers in Denmark are unionized. And Denmark has the same technology and faces the same global economy that we do. It's just a difference in political choices that leads to that difference. So I actually teach a course here at CUNY called Economics of the Welfare State, which is about things like healthcare and retirement and to some extent wage policy and so on. And the message I keep on trying to drive home is that, look, all advanced countries have got roughly equal competence. We all have the same technology, but we make very different choices. Not that America always makes the wrong choices. We do some things pretty well. Our retirement system is one of the better ones. But the point is that there's a huge amount of political choice involved in the shape of the economy. What is a welfare state? Well, welfare state is the old term, but it basically refers to all the programs that are there to mitigate, if you like, the risks and injustices of the market economy. So in the US, the welfare state is Social Security, Medicare, Medicaid, minimum wages, food stamps. When you say welfare state, my first sort of feeling is a negative one. Even though I like all, I probably generally, at least theoretically, like all the welfare programs. Well, it's been demonized. And to some extent, I'm doing a little bit of thumbing my nose at all of that by just using the term welfare state. Although it's not. I see. Yeah, I got you. But everybody, every advanced country actually has a lot of welfare state, even the US. I mean, that's a fundamental part of the fabric of our society. Social Security, Medicare, Medicaid are just things we take for granted as part of the scene. There are people on the right wing who are saying, oh, it's all socialism. And well, words, I guess, mean what you want them to mean. And just today, I told my class about the record that Ronald Reagan made in 1961, warning that Medicare would destroy American freedom. And that sort of didn't happen. On the topic of the welfare state, what are your thoughts on universal basic income, and that sort of not a targeted, but a universal safety net of this kind? There's always a trade off when we talk about social safety net programs. There's always a trade off between universality, which is clean, but means that you're giving a lot of money to people who don't necessarily need it, and some kind of targeting, which makes it easier to deal with the crucial problems with limited resources. But both have incentive problems and kind of political and I would say even psychological issues. 
So the great thing about Social Security and Medicare is no questions asked. You don't have to prove that you need them. It just comes. I'm on Medicare, allegedly. I mean, it's run through my New York Times health insurance, but I didn't have to file an application with the Medicare office to prove that I needed it. It just happened when I turned 65. That's good for dignity and it's also good for the political support because everybody gets Medicare. On the other hand, we can do that with health care. But to give everybody a guarantee of an income that's enough to live on comfortably, that's a lot of money. What about enough income to carry you over through difficult periods, like if you lose a job or that kind of thing? Well, we have unemployment insurance and I think our unemployment insurance is too short lived and too stingy. It would be better to have a more comprehensive unemployment insurance benefit. But the trouble with something like universal basic income is that either the bar is too low, so it's really not something you can live on, or it's an enormously expensive program. And so at this point, I think that we can do far better by building on the kinds of safety net programs we have. I mean, food stamps, earned income tax credit. We should have a lot more family support policies. Those things can do a lot more to really diminish the amount of misery in this country. UBI is something that is being... I mean, it goes kind of hand in hand with this belief that the robots are going to take all of our jobs. And if that was really happening, then I might reconsider my views on UBI, but I don't see that happening. So are you happy with the discourse that's going on now in terms of politics? So you mentioned a few political candidates. Is the kind of thing going on both on Twitter and debates and the media, through the written word, through the spoken word, how do you assess the public discourse now in terms of politics? We're in a fragmented world. More so than before? More so than ever before. So at this point, the public discourse that you see if Fox News is your principal news source is very different from the one you get if you read the New York Times. On the whole, my sense is that mainstream political reporting, policy reporting is (a) not too great, but (b) better than it's ever been. Because when I first got into the pundit business, it was just awful. Lots of things just never got covered. And if things did get covered, it was always both sides. The line that came out of my writing during the 2000 campaign was that if one of the candidates said that the earth was flat, the headline would be, views differ on shape of planet. I mean, that's less true. There's still a fair bit of that out there, but it's less true than there used to be. And there are more people reporting, writing on policy issues who actually understand them than ever before. So that's good. But still, how well informed the typical voter actually is, is unclear. I mean, the Democratic debates, I'm hoping that we finally get down to not having 27 people on the stage or whatever it is they have, but they're reasonably substantive, certainly better than before. And while there's still a lot of theater criticism instead of actual analysis in the reporting, it's not as totally dominant as in the past. Can I ask maybe a dumb question, but from an open minded perspective: when people on the left and people on the right, I think, view the others as sometimes complete idiots, what do we do with that? 
Is it possible that the people on the right are correct about what they currently believe? Is that kind of open mindedness helpful, or is this division long term productive for us, to sort of have this food fight? Well, the trouble you have to confront is that there's a lot of stuff that just is false out there but commands extensive political allegiance. So the idea, well, both sides need to listen to each other respectfully. I'm happy to do that when there's a view that is worthy of respect, but a lot of stuff is not. So take economics, which is something where I think I know something, and I'm not sure that I'm always right. In fact, I know I've been wrong plenty of times. But I think that there is a difference between economic views that are within the realm of we can actually have an interesting discussion and those that are just crank doctrines or things that are purely being disseminated because people are being paid to disseminate them. So there are plenty of good, serious center right economists that I'm happy to talk to. None of those center right economists has any role in the Trump administration. The Trump administration and by and large Republicans in Congress only want to listen to people who are cranks. And so I think it's being dishonest with my readers to pretend otherwise. There's no way I can reach out to people who think that reading Ayn Rand novels is how you learn about monetary economics. Let me linger on that point. So if you look at Ayn Rand, okay, so you said center right. What about extreme? People who have like radical views, you think they're not grounded in any kind of data, in any kind of reality. I'm just sort of curious about how open we should be to ideas that seem radical. Oh, radical ideas are fine, but then you have to ask, is there some basis for the radicalism? And if it's something that is not grounded in anything, then, and particularly, by the way, if it's something that's been refuted by evidence again and again, and the people just keep saying it, if it's a zombie idea, and there's a lot of those out there, then there comes a point when it's not worth trying to fake respect for it. I see. So, through the scientific process, you've shown that this idea does not hold water, but it lives on anyway. I like the idea of zombie ideas. It's like the idea that the earth is flat, for example, which has been for the most part disproven. Yeah. But it lives on, and is actually growing in popularity currently. Yeah. And there's a lot of that out there, and you can't wish it away, and you're not being fair to either yourself, or if you're somebody who writes for the public, you're not being fair to your readers to pretend otherwise. So quantum mechanics is a strange theory, but it's testable, and so while being strange, it's widely accepted amongst physicists. How robust and testable are economics theories if we compare them to quantum mechanics and physics and so on? Okay, economics, look, it's a complex system, and it's also one in which, by and large, you don't get to do experiments, and so economics is never going to be like quantum mechanics. That said, you get natural experiments, you get tests of rival doctrines. In the immediate aftermath of the financial crisis, there was one style, one basic theory of macroeconomics, which ultimately goes back to John Maynard Keynes, that made a few predictions. It said that, under these circumstances: 
Printing money will not be inflationary. Running big budget deficits will not cause a rise in interest rates. Slashing government spending, austerity policies, will lead to depressions if tried. Other people had exactly the opposite predictions, and we got a fairly robust test, and one theory won. Interest rates stayed low. Inflation stayed low. Austerity, countries that implemented harsh austerity policies suffered severe economic downturns. You don't get much, you know, that's pretty clear, and that's not going to be true on everything, but there's a lot of empirical work. I mean, the younger economists these days are very heavily data based, and that's great, and I think that's the way to go. What theories of economics is there currently a lot of disagreement about, would you say? Oh, first of all, there's just a lot less disagreement, really, among serious researchers in economics than people imagine. We actually, we can track that. The Chicago Booth School has a panel, an ideologically diverse panel, and they regularly pose questions, and on most things, there's remarkable consensus. There's a lot of things where people imagine that there's dispute, but the illusion of dispute is something that's basically being fed by political forces, and there isn't really. I mean, there are, I think, questions about what are effective ways to regulate technology industries. We really don't know the answers there. Or, look, I don't follow every part. Minimum wages. I think there's pretty overwhelming evidence that a modest increase in the minimum wage from current levels would not have any noticeable adverse effect on jobs. But if you ask, how high could it go? $12 seems pretty safe, given what we know. Is $15 okay? There's some legitimate disagreement there. I think probably, but people have a point. $20? Where is the line at which it starts to become a problem? And the answer is, truly, we don't know. It's fascinating. Economics is cool in that sense, because you're trying to predict something that hasn't been done before, the effects of something that hasn't been done before. Yeah, you're going out of sample, and we have good reason to believe that it's nonlinear, that there comes a point at which it doesn't work the way it has in the past. So, as an economist, how do you see science and technological innovation? When I took various economics courses in college, technological innovation seemed like a no brainer way of growing an economy, and we should invest in it aggressively. I may be biased, but of the various ways to grow an economy, it seems like the easiest way, especially long term. Is that correct? And if so, why aren't we doing it more? Well, okay, the first question is, yeah, I mean, it's pretty much overwhelming. We think we can more or less measure this, although there are some assumptions involved, but it's something like 70 to 80% of the growth in per capita income is basically the advance of knowledge. It's not just the crude accumulation of capital. It is the fact that we get smarter. A lot of that, by the way, is more prosaic kinds of technology. So, you know, I like to talk about things like containerization, or, you know, in an earlier period, the invention of the flat pack cardboard box. 
That had to be invented, and now all of your deliveries from Amazon are made possible by the existence of that technology. The web stuff is important too, but what would we do without cardboard boxes? But all of that stuff is really important in driving economic progress. Well, why don't we invest more? Why don't we invest more in, again, more prosaic stuff? Why haven't we built another goddamn rail tunnel under the Hudson River, for which the need is so totally overwhelmingly obvious? How do you think about, first of all, I don't even know what the word prosaic means, but I inferred it, but how do you think about prosaic? Is it really the most basic, dumb technology innovation, or is it just the lowest hanging fruit of where benefit can be gained? When I say prosaic, I mean stuff that is not sexy and fancy and high tech. It's building bridges and tunnels, inventing the cardboard box, or, I don't know, where do we put E-ZPass in there? It is actually using some modern technology and all that, but I don't think we're going to make a movie about the guy, whoever it was, that invented E-ZPass. But it's actually a pretty significant productivity booster. To me, it always seemed like it's something that everybody should be able to agree on and just invest. So in the same way, the investment in the military and the DOD is huge. So everyone kind of, not everyone, but there's an agreement amongst people that somehow a large defense is important. It always seemed to me like that should be shifted towards, if you want to grow the prosperity of the nation, you should be investing in knowledge. Yes, prosaic stuff, investing in infrastructure and so on. I mean, sorry to linger on it, but do you have any intuition? Do you have a hope that that changes? Do you have intuition why it's not changing? It's unclear. More than intuition, I have a theory. I'm reasonably certain that I understand why we don't do it. And it's because we have a real values dispute about the welfare state, about how much the government should do to help the unfortunate. And politicians believe, probably rightly, that there's a kind of halo effect that surrounds any kind of government intervention. That even though providing people with enhanced social security benefits is really very different from building a tunnel under the Hudson River, politicians of both parties seem to believe that if the government is seen to be successful at doing one kind of thing, it will make people think more favorably of doing other kinds of things. And so we have conservatives tend to be opposed to any kind of increase in government spending, except military, no matter how obviously a good idea it is, because they fear that it's the thin end of the wedge for bigger government in general. And to some extent, liberals tend to favor spending on these things, partly because they see it as a way of proving that government can do things well, and therefore it can turn to broader social goals. What you might have thought would be a technocratic discussion about government investment, both in research and in infrastructure, is contaminated by the fact that government is government and people link it to other government actions. Perhaps a silly question, but as a species, we're currently working on venturing out into space, one day colonizing Mars. So when we start a society on Mars from scratch, what political and economic system should it operate under? Oh, I'm a big believer in... 
First of all, I don't think we're actually going to do that, but let's hypothesize that we colonize Mars or something. Look, representative democracy versus pure democracy. Well, yeah, pure democracy where people vote directly on everything is really problematic, because people don't have time to try and master every issue. I mean, we can see what government by referendum looks like. There's a lot of that in California, and it doesn't work so good because it's hard to explain to people that the various things they vote for may conflict. So representative democracy, it's got lots of problems. And kind of the Winston Churchill thing, right? It's the worst system we know, except for all the others. So yeah, sticking with the representative and basically the American system of regulation and markets and the economy we have going on is a pretty good one for Mars. If you start from scratch. If you're gonna start from scratch, you wouldn't want a Senate where 16% of the population has half the seats. You probably would want one which is actually more representative than what we have. And the details, it's unclear. When times are good, all of the various representative democracy systems, whether it's parliamentary democracies or a US style system, whether you have a prime minister or the head of state as an elected president, they all kind of work well, when times are good, and they all have different modes of breakdown. So I'm not sure I know what the answer is. But something like that is given what we've seen through history, it's the least bad system out there. I'm a big fan of the TV series The Expanse, and it's kind of gratifying that out there, it's the Martian Congressional Republic. In a brief sense, so amongst many things, you're also an expert at international trade. What do you make of the complexity? So I can understand trade between two people, say two neighboring farmers. It seems pretty straightforward to me. But internationally, we need to start talking about nations and nations trading seems to be very complicated. So from a high level, why is it so complicated? What are all the different factors that weigh the objectives that need to be considered in international trade? And maybe feeding that into a question of, do you have concerns about the two giants right now of the U.S. and China, and the tension that's going on with the international trade there with the trade war? Well, first of all, international trade is not really that different from trade among individuals. It's vastly more complex, and there are many more players. But in the end, the reasons why countries trade are pretty much the same as the reasons why individuals trade. Countries trade because they're different, and they can derive mutual advantage from concentrating on the things they do relatively well. And also, there are economies of scale. Individuals have to decide whether to be a surgeon or an accountant. It's probably not a good idea to try and be both, and countries benefit from specializing just because of the inherent advantages of specialization. Now, the fact that it's a big world, and we're talking about millions of products being traded, and in today's world, often trade involves many stages. So that made in China iPhone is actually assembled from components that are made all over the world. But it doesn't really change the fundamentals all that much. There's a recurrent... I mean, the dirty little secret of international trade conflict is that actually it's not... 
Conflicts among countries are really not that important. Most trade is beneficial to both sides and to both countries, but it has big impacts on the distribution of income within countries. So the growth of US trade with China has made both US and China richer, but it's been pretty bad for people who were employed in the North Carolina furniture industry, who did find that their jobs were displaced by a wave of imports from China. And so that's where the complexity comes in. We have some real problems with China, although they don't really involve trade so much as things like respect for intellectual property. It's not at all clear to me that those real problems that we do have with China have anything to do with the current trade war. The current trade war seems to be driven instead by a fundamentally wrong notion that when we sell goods to China, that's good, and when we buy goods from China, that's bad. And that's misunderstanding the whole point. Is trade with China in both directions a good thing? Yeah, we would be poorer if it wasn't for it. But there are downsides, as there are for any economic change. It's like any new technology makes us richer, but often hurts some people. Trade with China makes us richer, but hurts some people. And I wouldn't undo what has happened, but I wish we had had a better policy for supporting and compensating the losers from that growth. So we live in a time of radicalization of political ideas, Twitter mobs, and so on. And yet here you are in the midst of it, both tweeting and writing in the New York Times articles with strong opinions, riding this chaotic wave of public discourse. Do you ever hesitate or feel a tinge of fear for exploring your ideas publicly and unapologetically? Oh, I feel fear all the time. It's not too hard to imagine scenarios in which I might personally find myself kind of in the crosshairs. And I mean, I am the king of hate mail. I get amazing correspondence. Does it affect you? It did. It did when it started. These days I've developed a very thick skin. In fact, if I don't get a wave of hate mail after a column, then I've probably wasted that day. So what do you make of that as a person who's putting ideas out there? If you look at the history of ideas, the way it works is you write about ideas, you put them out there. But now when there's so much hate mail, so much division, what advice do you have for yourself and for others trying to have a discussion about ideas, difficult ideas? Well, I don't know about advice for others. I mean, for most economists, you know, just do your research. We can't all be public intellectuals and we shouldn't try to be. And in fact, I'm glad that I didn't get into this business until I was in my late 40s. I mean, it's probably best to spend your decades of greatest intellectual flexibility addressing deep questions, not confronting Twitter mobs. And as for the rest, I think when you're writing about stuff, the answer is kind of dance like no one's watching: write like nobody's reading. Write what you think is right, while obviously trying to make it comprehensible and persuasive. But don't let yourself get intimidated by the fact that some people are going to say nasty things. You can't do your job if you are worried about criticism. Well, I think I speak for a lot of people in saying that I hope that you keep dancing like nobody's watching on Twitter and New York Times and books. So Paul, it's been an honor. 
Thank you so much for talking to me. Great. Thanks for listening to this conversation with Paul Krugman. And thank you to our presenting sponsor, Cash App. Download it and use code LexPodcast. You'll get $10, and $10 will go to FIRST, an organization that inspires and educates young minds to become science and technology innovators of tomorrow. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, follow on Spotify, support on Patreon, or simply connect with me on Twitter at Lex Fridman. And now, let me leave you with some words from Adam Smith in The Wealth of Nations, one of the most influential philosophers and economists in our history. It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest. We address ourselves not to their humanity, but to their self love, and never talk to them of our necessities, but of their advantages. Thank you for listening and hope to see you next time.
Paul Krugman: Economics of Innovation, Automation, Safety Nets & UBI | Lex Fridman Podcast #67
The following is a conversation with Cristos Goodrow, Vice President of Engineering at Google and Head of Search and Discovery at YouTube, also known as the YouTube Algorithm. YouTube has approximately 1.9 billion users, and every day people watch over 1 billion hours of YouTube video. It is the second most popular search engine behind Google itself. For many people, it is not only a source of entertainment, but also how we learn new ideas, from math and physics videos to podcasts to debates, opinions, ideas from out of the box thinkers and activists on some of the most tense, challenging, and impactful topics in the world today. YouTube and other content platforms receive criticism from both viewers and creators, as they should, because the engineering task before them is hard, and they don't always succeed, and the impact of their work is truly world changing. To me, YouTube has been an incredible wellspring of knowledge. I've watched hundreds, if not thousands, of lectures that changed the way I see many fundamental ideas in math, science, engineering, and philosophy. But it does put a mirror to ourselves, and keeps the responsibility of the steps we take in each of our online educational journeys in the hands of each of us. The YouTube algorithm has an important role in that journey of helping us find new, exciting ideas to learn about. That's a difficult and an exciting problem for an artificial intelligence system. As I've said in lectures and other forums, recommendation systems will be one of the most impactful areas of AI in the 21st century, and YouTube is one of the biggest recommendation systems in the world. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, follow on Spotify, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode, and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say, $1 worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and Lego competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means that donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Cristos Goodrow. YouTube is the world's second most popular search engine, behind Google, of course. We watch more than 1 billion hours of YouTube videos a day, more than Netflix and Facebook video combined. YouTube creators upload over 500,000 hours of video every day. 
Average lifespan of a human being, just for comparison, is about 700,000 hours. So, what's uploaded every single day is just enough for a human to watch in a lifetime. So, let me ask an absurd philosophical question. If from birth, when I was born, and there's many people born today with the internet, I watched YouTube videos nonstop, do you think there are trajectories through YouTube video space that can maximize my average happiness, or maybe education, or my growth as a human being? I think there are some great trajectories through YouTube videos, but I wouldn't recommend that anyone spend all of their waking hours or all of their hours watching YouTube. I mean, I think about the fact that YouTube has been really great for my kids, for instance. My oldest daughter, she's been watching YouTube for several years. She watches Tyler Oakley and the Vlogbrothers, and I know that it's had a very profound and positive impact on her character. And my younger daughter, she's a ballerina, and her teachers tell her that YouTube is a huge advantage for her because she can practice a routine and watch professional dancers do that same routine and stop it and back it up and rewind and all that stuff, right? So, it's been really good for them. And then even my son is a sophomore in college. He got through his linear algebra class because of a channel called 3Blue1Brown, which helps you understand linear algebra, but in a way that would be very hard for anyone to do on a whiteboard or a chalkboard. And so, I think that those experiences, from my point of view, were very good. And so, I can imagine really good trajectories through YouTube, yes. Have you looked at, do you think broadly about that trajectory over a period? Because YouTube has grown up now. So, over a period of years. You just kind of gave a few anecdotal examples, but I used to watch certain shows on YouTube. I don't anymore. I've moved on to other shows. Ultimately, you want people to, from YouTube's perspective, to stay on YouTube, to grow as human beings on YouTube. So, you have to think not just what makes them engage today or this month, but also for a period of years. Absolutely. That's right. I mean, if YouTube is going to continue to enrich people's lives, then it has to grow with them, and people's interests change over time. And so, I think we've been working on this problem, and I'll just say it broadly as like how to introduce diversity and introduce people who are watching one thing to something else they might like. We've been working on that problem all the eight years I've been at YouTube. It's a hard problem because, I mean, of course, it's trivial to introduce diversity that doesn't help. Yeah, just add a random video. I could just randomly select a video from the billions that we have. It's likely not to even be in your language. So, the likelihood that you would watch it and develop a new interest is very, very low. And so, what you want to do when you're trying to increase diversity is find something that is not too similar to the things that you've watched, but also something that you might be likely to watch. And that balance, finding that spot between those two things, is quite challenging. So, the diversity of content, diversity of ideas, it's a really difficult thing, almost impossible to define, right? Like, what's different? So, how do you think about that? 
So, two examples: I'm a huge fan of 3Blue1Brown, say, and then one kind of diversity is, I wasn't even aware of a channel called Veritasium, which is a great science, physics, whatever channel. So, one version of diversity is showing me Derek's Veritasium channel, which I was really excited to discover. I actually now watch a lot of his videos. Okay, so you're a person who's watching some math channels and you might be interested in some other science or math channels. So, like you mentioned, the first kind of diversity is just show you some things from other channels that are related, but not just, you know, not all the 3Blue1Brown channel, throw in a couple others. So, that's maybe the first kind of diversity that we started with many, many years ago. Taking a bigger leap is about, I mean, the mechanisms we use for that is we basically cluster videos and channels together, mostly videos. We do almost everything at the video level. And so, we'll make some kind of a cluster via some embedding process and then measure what is the likelihood that users who watch one cluster might also watch another cluster that's very distinct. So, we may come to find that people who watch science videos also like jazz. This is possible, right? And so, because of that relationship that we've identified through the embeddings and then the measurement of the people who watch both, we might recommend a jazz video once in a while. So, there's this cluster in the embedding space of jazz videos and science videos. And so, you kind of try to look at aggregate statistics where if a lot of people that jump from the science cluster to the jazz cluster tend to remain as engaged or become more engaged, then that means those two, they should hop back and forth and they'll be happy. Right. There's a higher likelihood that a person who's watching science would like jazz than the person watching science would like, I don't know, backyard railroads or something else, right? And so, we can try to measure these likelihoods and use that to make the best recommendation we can. So, okay. So, we'll talk about the machine learning of that, but I have to linger on things that neither you or anyone have an answer to. There's gray areas of truth, which is, for example, now I can't believe I'm going there, but politics. It happens so that certain people believe certain things and they're very certain about them. Let's move outside the red versus blue politics of today's world, but there's different ideologies. For example, in college, I read and studied quite a lot of Ayn Rand, and that's a particular philosophical ideology I found interesting to explore. Okay. So, that was that kind of space. I've kind of moved on from that cluster intellectually, but it nevertheless is an interesting cluster. I was born in the Soviet Union. Socialism, communism is a certain kind of political ideology that's really interesting to explore. Again, objectively, there's a set of beliefs about how the economy should work and so on. And so, it's hard to know what's true or not in terms of people within those communities are often advocating that this is how we achieve utopia in this world, and they're pretty certain about it. So, how do you try to manage politics in this chaotic, divisive world? Not favoring particular positions or any kind of ideas, in terms of filtering what people should watch next, and in terms of also not letting certain things be on YouTube. This is an exceptionally difficult responsibility. Well, the responsibility to get this right is our top priority. 
And it first comes down to making sure that we have good, clear rules of the road, right? Like, just because we have freedom of speech doesn't mean that you can literally say anything, right? Like, we as a society have accepted certain restrictions on our freedom of speech. There are things like libel laws and things like that. And so, where we can draw a clear line, we do, and we continue to evolve that line over time. However, as you pointed out, wherever you draw the line, there's going to be a border line. And in that border line area, we are going to maybe not remove videos, but we will try to reduce the recommendations of them or the proliferation of them by demoting them. Alternatively, in those situations, try to raise what we would call authoritative or credible sources of information. So, we're not trying to, I mean, you mentioned Ayn Rand and communism. Those are two valid points of view that people are going to debate and discuss. And of course, people who believe in one or the other of those things are going to try to persuade other people to their point of view. And so, we're not trying to settle that or choose a side or anything like that. What we're trying to do is make sure that the people who are expressing those points of view and offering those positions are authoritative and credible. So, let me ask a question about people I don't like personally. You heard me. I don't care if you leave comments on this. But sometimes, they're brilliantly funny, which is trolls. So, people who kind of mock, I mean, the internet, Reddit, is full of mock style comedy where people just kind of make fun of, point out that the emperor has no clothes. And there's brilliant comedy in that, but sometimes it can get cruel and mean. So, on that, on the mean point, and sorry to look at the comments, but I'm going to, and sorry to linger on these things that have no good answers. But actually, I totally hear you that this is really important and that you're trying to solve it. But how do you reduce the meanness of people on YouTube? I understand that anyone who uploads YouTube videos has to become resilient to a certain amount of meanness. Like I've heard that from many creators. And we are trying in various ways, comment ranking, allowing certain features to block people, to reduce or make that meanness or that trolling behavior less effective on YouTube. Yeah. And so, I mean, it's very important, but it's something that we're going to keep having to work on, and as we improve it, like maybe we'll get to a point where people don't have to suffer this sort of meanness when they upload YouTube videos. I hope we do, but it just does seem to be something that you have to be able to deal with as a YouTube creator nowadays. Do you have a hope that, so you mentioned two things that I kind of agree with. So there's like a machine learning approach of ranking comments based on whatever, based on how much they contribute to the healthy conversation. Let's put it that way. Then the other is almost an interface question of how does the creator filter, block, or how do humans themselves, the users of YouTube, manage their own conversation? Do you have hope that these two tools will create a better society without limiting freedom of speech too much, without sort of attacking, even like saying that people, what do you mean limiting, sort of curating speech? I mean, I think that that overall is our whole project here at YouTube. Right. 
Like we fundamentally believe and I personally believe very much that YouTube can be great. It's been great for my kids. I think it can be great for society. But it's absolutely critical that we get this responsibility part right. And that's why it's our top priority. Susan Wojcicki, who's the CEO of YouTube, she says something that I personally find very inspiring, which is that we want to do our jobs today in a manner so that people 20 and 30 years from now will look back and say, YouTube, they really figured this out. They really found a way to strike the right balance between the openness and the value that the openness has and also making sure that we are meeting our responsibility to users in society. So the burden on YouTube actually is quite incredible. And the one thing is that people don't give enough credit to the seriousness and the magnitude of the problem, I think. So I personally hope that you do solve it because a lot is in your hands, a lot is riding on your success or failure. So besides, of course, running a successful company, you're also curating the content of the internet and the conversation on the internet. That's a powerful thing. So one thing that people wonder about is how much of it can be solved with pure machine learning. So looking at the data, studying the data and creating algorithms that curate the comments, curate the content, and how much of it needs human intervention, meaning people here at YouTube in a room sitting and thinking about what is the nature of truth, what are the ideals that we should be promoting, that kind of thing. So algorithm versus human input, what's your sense? I mean, my own experience has demonstrated that you need both of those things. Algorithms, I mean, you're familiar with machine learning algorithms and the thing they need most is data and the data is generated by humans. And so, for instance, when we're building a system to try to figure out which are the videos that are misinformation or borderline policy violations, well, the first thing we need to do is get human beings to make decisions about which of those videos are in which category. And then we use that data and basically take that information that's determined and governed by humans and extrapolate it or apply it to the entire set of billions of YouTube videos. And we couldn't get to all the videos on YouTube well without the humans, and we couldn't use the humans to get to all the videos of YouTube. So there's no world in which you have only one or the other of these things. And just as you said, a lot of it comes down to people at YouTube spending a lot of time trying to figure out what are the right policies, what are the outcomes based on those policies, are they the kinds of things we want to see? And then once we kind of get an agreement or build some consensus around what the policies are, well, then we've got to find a way to implement those policies across all of YouTube. And that's where both the human beings, we call them evaluators or reviewers, come into play to help us with that. And then once we get a lot of training data from them, then we apply the machine learning techniques to take it even further. Do you have a sense that these human beings have a bias in some kind of direction? I mean, that's an interesting question. We do, sort of, in autonomous vehicles and computer vision in general, a lot of annotation, and we rarely ask what bias the annotators have. Even in the sense that they're better at annotating certain things than others. 
For example, people are much better at annotating, at segmenting, cars in a scene versus segmenting bushes or trees. There are specific mechanical reasons for that, but also because it's a semantic gray area. And just for a lot of reasons, people are just terrible at annotating trees. Okay, so in the same kind of sense, do you think of, in terms of people reviewing videos or annotating the content of videos, is there some kind of bias that you're aware of or seek out in that human input? Well, we take steps to try to overcome these kinds of biases or biases that we think would be problematic. So for instance, like we ask people to have a bias towards scientific consensus. That's something that we instruct them to do. We ask them to have a bias towards demonstration of expertise or credibility or authoritativeness. But there are other biases that we want to make sure to try to remove. And there's many techniques for doing this. One of them is you send the same thing to be reviewed to many people. So, that's one technique. Another is that you make sure that the people that are doing these sorts of tasks are from different backgrounds and different areas of the United States or of the world. But then, even with all of that, it's possible for certain kinds of what we would call unfair biases to creep into machine learning systems, primarily, as you said, because maybe the training data itself comes in in a biased way. So, we also have worked very hard on improving the machine learning systems to remove and reduce unfair biases when it goes against or involves some protected class, for instance. Thank you for exploring with me some of the more challenging things. I'm sure there's a few more that we'll jump back to. But let me jump into the fun part, which is maybe the basics of the quote, unquote, YouTube algorithm. What does the YouTube algorithm look at to make recommendations for what to watch next, from a machine learning perspective? Or when you search for a particular term, how does it know what to show you next? Because it seems to, at least for me, do an incredible job of both. Well, that's kind of you to say. It didn't used to do a very good job, but it's gotten better over the years. Even I observed that it's improved quite a bit. Those are two different situations. Like when you search for something, YouTube uses the best technology we can get from Google to make sure that the YouTube search system finds what someone's looking for. And of course, the very first thing that one thinks about is, okay, well, does the word occur in the title, for instance? But there are much more sophisticated things where we're mostly trying to do some syntactic match or maybe a semantic match based on words that we can add to the document itself. For instance, maybe is this video watched a lot after this query? That's something that we can observe and then as a result, make sure that that document would be retrieved for that query. Now, when you talk about what kind of videos would be recommended to watch next, that's something, again, we've been working on for many years and probably the first real attempt to do that well was to use collaborative filtering. Can you describe what collaborative filtering is? Sure. It's just basically what we do is we observe which videos get watched close together by the same person. 
And if you observe that, and if you can imagine creating a graph where the videos that get watched close together by the most people are very close to one another in this graph and videos that don't frequently get watched close together by the same person or the same people are far apart, then you end up with this graph that we call the related graph that basically represents videos that are very similar or related in some way. And what's amazing about that is that it puts all the videos that are in the same language together, for instance, and we didn't even have to think about language. It just does it, right? And it puts all the videos that are about sports together and it puts most of the music videos together and it puts all of these sorts of videos together just because that's sort of the way the people using YouTube behave. So that already cleans up a lot of the problem. It takes care of the lowest hanging fruit, which happens to be a huge one of just managing these millions of videos. That's right. I remember a few years ago I was talking to someone who was trying to propose that we do a research project concerning people who are bilingual, and this person was making this proposal based on the idea that YouTube could not possibly be good at recommending videos well to people who are bilingual. And so she was telling me about this and I said, well, can you give me an example of what problem do you think we have on YouTube with the recommendations? And so she said, well, I'm a researcher in the US and when I'm looking for academic topics, I want to see them in English. And so she searched for one, found a video, and then looked at the watch next suggestions and they were all in English. And so she said, oh, I see. YouTube must think that I speak only English. And so she said, now I'm actually originally from Turkey and sometimes when I'm cooking, let's say I want to make some baklava, I really like to watch videos that are in Turkish. And so she searched for a video about making the baklava and then selected it and it was in Turkish and the watch next recommendations were in Turkish. And she just couldn't believe how this was possible and how is it that you know that I speak both these two languages and put all the videos together? And it's just as a sort of an outcome of this related graph that's created through collaborative filtering. So for me, one of my huge interests is just human psychology, right? And that's such a powerful platform on which to utilize human psychology to discover what people, individual people, want to watch next. But it's also just fascinating to me. You know, Google search has the ability to look at your own history, and I've done that before, just looking at what I've searched for over many, many years. And it's a fascinating picture of who I am, actually. And I don't think anyone's ever summarized it. I personally would love that, a summary of who I am as a person on the internet, presented to me, because I think it reveals, I think it puts a mirror to me or to others. You know, that's actually quite revealing and interesting, you know, just, maybe, it's a joke, but not really, the number of cat videos I've watched or videos of people falling, you know, stuff that's absurd, that kind of stuff. It's really interesting. And of course it's really good for the machine learning aspect to figure out what to show next. But it's interesting. 
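As a rough, hypothetical sketch of the collaborative filtering idea described above, and not YouTube's actual implementation, here is how one might build a toy related graph from co-watch data, where videos frequently watched close together by the same users end up strongly connected. The histories, video ids, and window size are invented for illustration.

```python
from collections import defaultdict
from itertools import combinations

# Toy watch histories: user -> ordered list of watched video ids.
# Entirely made-up data, purely for illustration.
histories = {
    "user_a": ["linalg_1", "linalg_2", "jazz_live"],
    "user_b": ["linalg_1", "linalg_2", "physics_1"],
    "user_c": ["jazz_live", "jazz_solo", "linalg_2"],
}

WINDOW = 2  # only count videos watched within this many positions of each other

def build_related_graph(histories, window=WINDOW):
    """Edge weight = number of times two videos were watched close together."""
    cowatch = defaultdict(int)
    for videos in histories.values():
        for i, j in combinations(range(len(videos)), 2):
            if j - i <= window:
                pair = tuple(sorted((videos[i], videos[j])))
                cowatch[pair] += 1
    return cowatch

def related(video, graph, top_k=3):
    """Videos most strongly co-watched with the given video."""
    neighbors = []
    for (a, b), weight in graph.items():
        if video == a:
            neighbors.append((b, weight))
        elif video == b:
            neighbors.append((a, weight))
    return sorted(neighbors, key=lambda x: -x[1])[:top_k]

graph = build_related_graph(histories)
print(related("linalg_2", graph))  # e.g. [('linalg_1', 2), ('jazz_live', 2), ...]
```

Even with this toy data, videos in the same topic end up most strongly connected, which mirrors the point above that the related graph groups languages and topics together without anyone having to define them.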
Have you, just as a tangent, played around with the idea of giving a map to people, sort of, as opposed to just using this information to show what's next, showing them here are the clusters you've loved over the years kind of thing? Well, we do provide the history of all the videos that you've watched. Yes. So you can definitely search through that and look through it and search through it to see what it is that you've been watching on YouTube. We have actually in various times experimented with this sort of cluster idea, finding ways to demonstrate or show people what topics they've been interested in or what clusters they've watched from. It's interesting that you bring this up because in some sense, the way the recommendation system of YouTube sees a user is exactly as the history of all the videos they've watched on YouTube. And so you can think of yourself or any user on YouTube as kind of like a DNA strand of all your videos, right? That sort of represents you. You can also think of it as maybe a vector in the space of all the videos on YouTube. And so now once you think of it as a vector in the space of all the videos on YouTube, then you can start to say, okay, well, which other vectors are close to me and to my vector? And that's one of the ways that we generate some diverse recommendations, because you're like, okay, well, these people seem to be close with respect to the videos they've watched on YouTube, but here's a topic or a video that one of them has watched and enjoyed, but the other one hasn't. That could be an opportunity to make a good recommendation. I got to tell you, I mean, I know I'm going to ask for things that are impossible, but I would love to cluster human beings. I would love to know who has similar trajectories as me, because you probably would want to hang out, right? There's a social aspect there. Like actually, some of the most fascinating people I find on YouTube have like no followers, and I start following them and they create incredible content. And on that topic, I just love to ask: there's some videos that just blow my mind in terms of quality and depth, and just in every regard are amazing videos, and they have like 57 views, okay? How do you get videos of quality to be seen by many eyes? So the measure of quality, is it just something, yeah, how do you know that something is good? Well, I mean, I think it depends initially on what sort of video we're talking about. So in the realm of, let's say you mentioned politics and news, in that realm, you know, quality news or quality journalism relies on having a journalism department, right? Like you have to have actual journalists and fact checkers and people like that. And so in that situation, and in others, maybe science or in medicine, quality has a lot to do with the authoritativeness and the credibility and the expertise of the people who make the video. Now, if you think about the other end of the spectrum, you know, what is the highest quality prank video or what is the highest quality Minecraft video, right? That might be the one that people enjoy watching the most and watch to the end, or it might be the one that when we ask people the next day after they watched it, were they satisfied with it? And so we, especially in the realm of entertainment, have been trying to get at better and better measures of quality or satisfaction or enrichment since I came to YouTube. And we started with, well, you know, the first approximation is the one that gets more views. 
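Purely as a hypothetical illustration of the user-as-a-vector idea just described, and again not YouTube's actual system, one could represent a user as the average of their watched-video embeddings and look at nearby users for videos the first user hasn't seen. The embeddings and histories here are made up.

```python
import numpy as np

# Made-up video embeddings; in a real system these would be learned.
video_vecs = {
    "linalg_1": np.array([1.0, 0.0, 0.0]),
    "linalg_2": np.array([0.9, 0.1, 0.0]),
    "jazz_live": np.array([0.0, 1.0, 0.0]),
    "cooking_1": np.array([0.0, 0.0, 1.0]),
}

def user_vector(watched):
    """Represent a user as the average of their watched-video embeddings."""
    return np.mean([video_vecs[v] for v in watched], axis=0)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

me = ["linalg_1", "linalg_2"]
neighbor = ["linalg_2", "jazz_live"]        # a nearby user
candidates = set(neighbor) - set(me)        # what they watched that I haven't

similarity = cosine(user_vector(me), user_vector(neighbor))
print(f"user similarity: {similarity:.2f}, candidate recommendations: {candidates}")
```

The diversity idea from the conversation shows up in the last two lines: a nearby user's history, minus my own, is a natural pool of not-too-similar but plausibly appealing recommendations.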
But you know, we both know that things can get a lot of views and not really be that high quality, especially if people are clicking on something and then immediately realizing that it's not that great and abandoning it. And that's why we moved from views to thinking about the amount of time people spend watching it with the premise that like, you know, in some sense, the time that someone spends watching a video is related to the value that they get from that video. It may not be perfectly related, but it has something to say about how much value they get. But even that's not good enough, right? Because I myself have spent time clicking through channels on television late at night and ended up watching Under Siege 2 for some reason I don't know. And if you were to ask me the next day, are you glad that you watched that show on TV last night? I'd say, yeah, I wish I would have gone to bed or read a book or almost anything else, really. And so that's why some people got the idea a few years ago to try to survey users afterwards. And so we get feedback data from those surveys and then use that in the machine learning system to try to not just predict what you're going to click on right now, what you might watch for a while, but what when we ask you tomorrow, you'll give four or five stars to. So just to summarize, what are the signals from a machine learning perspective that a user can provide? So you mentioned just clicking on the video views, the time watched, maybe the relative time watched, the clicking like and dislike on the video, maybe commenting on the video. All of those things. All of those things. And then the one I wasn't actually quite aware of, even though I might have engaged in it is a survey afterwards, which is a brilliant idea. Is there other signals? I mean, that's already a really rich space of signals to learn from. Is there something else? Well, you mentioned commenting, also sharing the video. If you think it's worthy to be shared with someone else you know. Within YouTube or outside of YouTube as well? Either. Let's see, you mentioned like, dislike. Like and dislike. How important is that? It's very important, right? We want, it's predictive of satisfaction. But it's not perfectly predictive. Subscribe. If you subscribe to the channel of the person who made the video, then that also is a piece of information and it signals satisfaction. Although over the years, we've learned that people have a wide range of attitudes about what it means to subscribe. We would ask some users who didn't subscribe very much, but they watched a lot from a few channels. We'd say, well, why didn't you subscribe? And they would say, well, I can't afford to pay for anything. We tried to let them understand like, actually it doesn't cost anything. It's free. It just helps us know that you are very interested in this creator. But then we've asked other people who subscribe to many things and don't really watch any of the videos from those channels. And we say, well, why did you subscribe to this if you weren't really interested in any more videos from that channel? And they might tell us, well, I just, you know, I thought the person did a great job and I just want to kind of give them a high five. And so. Yeah. That's where I sit. I go to channels where I just, this person is amazing. I like this person. But then I like this person and I really want to support them. That's how I click subscribe. Even though I mean never actually want to click on their videos when they're releasing it. 
I just love what they're doing. And it's maybe outside of my interest area and so on, which is probably the wrong way to use the subscribe button. But I just want to say congrats. This is great work. Well, so you have to deal with all the space of people that see the subscribe button is totally different. That's right. And so, you know, we can't just close our eyes and say, sorry, you're using it wrong. You know, we're not going to pay attention to what you've done. We need to embrace all the ways in which all the different people in the world use the subscribe button or the like and the dislike button. So in terms of signals of machine learning, using for the search and for the recommendation, you've mentioned title. So like metadata, like text data that people provide description and title and maybe keywords. Maybe you can speak to the value of those things in search and also this incredible fascinating area of the content itself. So the video content itself, trying to understand what's happening in the video. So YouTube released a data set that, you know, in the machine learning computer vision world, this is just an exciting space. How much is that currently? How much are you playing with that currently? How much is your hope for the future of being able to analyze the content of the video itself? Well, we have been working on that also since I came to YouTube. Analyzing the content. Analyzing the content of the video, right? And what I can tell you is that our ability to do it well is still somewhat crude. We can tell if it's a music video, we can tell if it's a sports video, we can probably tell you that people are playing soccer. We probably can't tell whether it's Manchester United or my daughter's soccer team. So these things are kind of difficult and using them, we can use them in some ways. So for instance, we use that kind of information to understand and inform these clusters that I talked about. And also maybe to add some words like soccer, for instance, to the video, if it doesn't occur in the title or the description, which is remarkable that often it doesn't. One of the things that I ask creators to do is please help us out with the title and the description. For instance, we were a few years ago having a live stream of some competition for World of Warcraft on YouTube. And it was a very important competition, but if you typed World of Warcraft in search, you wouldn't find it. World of Warcraft wasn't in the title? World of Warcraft wasn't in the title. It was match 478, you know, A team versus B team and World of Warcraft wasn't in the title. I'm just like, come on, give me. Being literal on the internet is actually very uncool, which is the problem. Oh, is that right? Well, I mean, in some sense, well, some of the greatest videos, I mean, there's a humor to just being indirect, being witty and so on. And actually being, you know, machine learning algorithms want you to be, you know, literal, right? You just want to say what's in the thing, be very, very simple. And in some sense that gets away from wit and humor. So you have to play with both, right? But you're saying that for now, sort of the content of the title, the content of the description, the actual text is one of the best ways for the algorithm to find your video and put them in the right cluster. That's right. And I would go further and say that if you want people, human beings to select your video in search, then it helps to have, let's say World of Warcraft in the title. 
Because why would a person, you know, if they're looking at a bunch, they type World of Warcraft and they have a bunch of videos, all of whom say World of Warcraft, except the one that you uploaded. Well, even the person is going to think, well, maybe this isn't somehow search made a mistake. This isn't really about World of Warcraft. So it's important not just for the machine learning systems, but also for the people who might be looking for this sort of thing. They get a clue that it's what they're looking for by seeing that same thing prominently in the title of the video. Okay. Let me push back on that. So I think from the algorithm perspective, yes, but if they typed in World of Warcraft and saw a video that with the title simply winning and the thumbnail has like a sad orc or something, I don't know, right? Like I think that's much, it gets your curiosity up. And then if they could trust that the algorithm was smart enough to figure out somehow that this is indeed a World of Warcraft video, that would have created the most beautiful experience. I think in terms of just the wit and the humor and the curiosity that we human beings naturally have. But you're saying, I mean, realistically speaking, it's really hard for the algorithm to figure out that the content of that video will be a World of Warcraft video. And you have to accept that some people are going to skip it. Yeah. Right? I mean, and so you're right. The people who don't skip it and select it are going to be delighted, but other people might say, yeah, this is not what I was looking for. And making stuff discoverable, I think is what you're really working on and hoping. So yeah. So from your perspective, put stuff in the title description. And remember the collaborative filtering part of the system starts by the same user watching videos together, right? So the way that they're probably going to do that is by searching for them. That's a fascinating aspect of it. It's like ant colonies. That's how they find stuff. So I mean, what degree for collaborative filtering in general is one curious ant, one curious user, essential? So just a person who is more willing to click on random videos and sort of explore these cluster spaces. In your sense, how many people are just like watching the same thing over and over and over and over? And how many are just like the explorers and just kind of like click on stuff and then help the other ant in the ant's colony discover the cool stuff? Do you have a sense of that at all? I really don't think I have a sense for the relative sizes of those groups. But I would say that people come to YouTube with some certain amount of intent. And as long as they, to the extent to which they try to satisfy that intent, that certainly helps our systems, right? Because our systems rely on kind of a faithful amount of behavior, right? And there are people who try to trick us, right? There are people and machines that try to associate videos together that really don't belong together, but they're trying to get that association made because it's profitable for them. And so we have to always be resilient to that sort of attempt at gaming the systems. So speaking to that, there's a lot of people that in a positive way, perhaps, I don't know, I don't like it, but like to want to try to game the system to get more attention. Everybody creators in a positive sense want to get attention, right? So how do you work in this space when people create more and more sort of click baity titles and thumbnails? 
Sort of, Veritasium, Derek, has made a video where he basically describes that it seems what works is to create a high quality video, a really good video, where people would want to watch it once they click on it, but have click baity titles and thumbnails to get them to click on it in the first place. And he's saying, I'm embracing this fact, I'm just going to keep doing it. And I hope you forgive me for doing it and you will enjoy my videos once you click on them. So in what sense do you see this kind of click bait style attempt to manipulate, to get people in the door to manipulate the algorithm or play with the algorithm or game the algorithm? I think that you can look at it as an attempt to game the algorithm. But even if you were to take the algorithm out of it and just say, okay, well, all these videos happen to be lined up, the algorithm didn't make any decision about which one to put at the top or the bottom, but they're all lined up there, which one are the people going to choose? And I'll tell you the same thing that I told Derek: I have a bookshelf and it has two kinds of books on it. Science books: I have my math books from when I was a student and they all look identical except for the titles on the covers. They're all yellow, they're all from Springer, and for every single one of them, the cover is totally the same. Yes. Right? Yeah. On the other hand, I have other more pop science type books and they all have very interesting covers and they have provocative titles and things like that. I wouldn't say that they're click baity because they are indeed good books. And I don't think that they cross any line, but that's just a decision you have to make. Like the person who wrote Classical Recursion Theory, Piergiorgio Odifreddi, he was fine with the yellow cover and the title and nothing more. Whereas I think other people who wrote a more popular type of book understand that they need to have a compelling cover and a compelling title. And I don't think there's anything really wrong with that. We do take steps to make sure that there is a line that you don't cross. And if you go too far, maybe your thumbnail is especially racy or it's all caps with too many exclamation points, we observe that users are sometimes offended by that. And so for the users who are offended by that, we will then depress or suppress those videos. Which reminds me, there's also another signal where users can say, I don't know if it was recently added, but I really enjoy it, just saying something like, I don't want to see this video anymore. Like there are certain videos that just rub me the wrong way, just jump out at me, and it's like, I don't want this. And it feels really good to clean that up, to be like, that's not for me. I don't know. I think that might've been recently added, but that's also a really strong signal. Yes, absolutely. Right. We don't want to make a recommendation that people are unhappy with. And that particular one makes me feel good as a user in general and as a machine learning person, because I feel like I'm helping the algorithm. My interactions on YouTube don't always feel like I'm helping the algorithm. Like I'm not reminded of that fact. Like for example, Tesla and Autopilot and Elon Musk create a feeling for their customers, for people that own Teslas, that they're helping the algorithm of Tesla vehicles. Like they are really proud they're helping the fleet learn.
I think YouTube doesn't always remind people that you're helping the algorithm get smarter. And for me, I love that idea. Like we're all collaboratively, like Wikipedia gives that sense that we're all together creating a beautiful thing. YouTube is a, doesn't always remind me of that. It's a, this conversation is reminding me of that, but. Well that's a good tip. We should keep that fact in mind when we design these features. I'm not sure I really thought about it that way, but that's a very interesting perspective. It's an interesting question of personalization that I feel like when I click like on a video, I'm just improving my experience. It would be great. It would make me personally, people are different, but make me feel great if I was helping also the YouTube algorithm broadly say something. You know what I'm saying? Like there's a, that I don't know if that's human nature, but you want the products you love, and I certainly love YouTube, like you want to help it get smarter, smarter, smarter because there's some kind of coupling between our lives together being better. If YouTube is better than I will, my life will be better. And there's that kind of reasoning. I'm not sure what that is and I'm not sure how many people share that feeling. That could be just a machine learning feeling. But on that point, how much personalization is there in terms of next video recommendations? So is it kind of all really boiling down to clustering? Like if I'm the nearest clusters to me and so on and that kind of thing, or how much is personalized to me, the individual completely? It's very, very personalized. So your experience will be quite a bit different from anybody else's who's watching that same video, at least when they're logged in. And the reason is that we found that users often want two different kinds of things when they're watching a video. Sometimes they want to keep watching more on that topic or more in that genre. And other times they just are done and they're ready to move on to something else. And so the question is, well, what is the something else? And one of the first things one can imagine is, well, maybe something else is the latest video from some channel to which you've subscribed. And that's going to be very different for you than it is for me. And even if it's not something that you subscribe to, it's something that you watch a lot. And again, that'll be very different on a person by person basis. And so even the Watch Next, as well as the homepage, of course, is quite personalized. So what, we mentioned some of the signals, but what does success look like? What does success look like in terms of the algorithm creating a great long term experience for a user? Or to put another way, if you look at the videos I've watched this month, how do you know the algorithm succeeded for me? I think, first of all, if you come back and watch more YouTube, then that's one indication that you found some value from it. So just the number of hours is a powerful indicator. Well, I mean, not the hours themselves, but the fact that you return on another day. So that's probably the most simple indicator. People don't come back to things that they don't find value in, right? There's a lot of other things that they could do. But like I said, ideally, we would like everybody to feel that YouTube enriches their lives and that every video they watched is the best one they've ever watched since they've started watching YouTube. 
And so that's why we survey them and ask them, is this one to five stars? And so our version of success is every time someone takes that survey, they say it's five stars. And if we ask them, is this the best video you've ever seen on YouTube? They say, yes, every single time. So it's hard to imagine that we would actually achieve that. Maybe asymptotically we would get there, but that would be what we think success is. It's funny. I've recently said somewhere, I don't know, maybe tweeted, but that Ray Dalio has this video on the economic machine, I forget what it's called, but it's a 30 minute video. And I said it's the greatest video I've ever watched on YouTube. It's like I watched the whole thing and my mind was blown as a very crisp, clean description of how the, at least the American economic system works. It's a beautiful video. And I was just, I wanted to click on something to say this is the best thing. This is the best thing ever. Please let me, I can't believe I discovered it. I mean, the views and the likes reflect its quality, but I was almost upset that I haven't found it earlier and wanted to find other things like it. I don't think I've ever felt that this is the best video I've ever watched. That was that. And to me, the ultimate utopia, the best experiences were every single video. Where I don't see any of the videos I regret and every single video I watch is one that actually helps me grow, helps me enjoy life, be happy and so on. So that's a heck of a, that's one of the most beautiful and ambitious, I think, machine learning tasks. So when you look at a society as opposed to the individual user, do you think of how YouTube is changing society when you have these millions of people watching videos, growing, learning, changing, having debates? Do you have a sense of, yeah, what the big impact on society is? I think it's huge, but do you have a sense of what direction we're taking this world? Well, I mean, I think openness has had an impact on society already. There's a lot of... What do you mean by openness? Well, the fact that unlike other mediums, there's not someone sitting at YouTube who decides before you can upload your video, whether it's worth having you upload it or worth anybody seeing it really, right? And so there are some creators who say, like, I wouldn't have this opportunity to reach an audience. Tyler Oakley often said that he wouldn't have had this opportunity to reach this audience if it weren't for YouTube. And so I think that's one way in which YouTube has changed society. I know that there are people that I work with from outside the United States, especially from places where literacy is low, and they think that YouTube can help in those places because you don't need to be able to read and write in order to learn something important for your life, maybe how to do some job or how to fix something. And so that's another way in which I think YouTube is possibly changing society. So I've worked at YouTube for eight, almost nine years now. And it's fun because I meet people and you tell them where you work, you say you work on YouTube and they immediately say, I love YouTube, right? Which is great, makes me feel great. But then of course, when I ask them, well, what is it that you love about YouTube? Not one time ever has anybody said that the search works outstanding or that the recommendations are great. 
What they always say when I ask them, what do you love about YouTube is they immediately start talking about some channel or some creator or some topic or some community that they found on YouTube and that they just love. And so that has made me realize that YouTube is really about the video and connecting the people with the videos. And then everything else kind of gets out of the way. So beyond the video, it's an interesting, because you kind of mentioned creator. What about the connection with just the individual creators as opposed to just individual video? So like I gave the example of Ray Dalio video that the video itself is incredible, but there's some people who are just creators that I love. One of the cool things about people who call themselves YouTubers or whatever is they have a journey. They usually, almost all of them, they suck horribly in the beginning and then they kind of grow and then there's that genuineness in their growth. So YouTube clearly wants to help creators connect with their audience in this kind of way. So how do you think about that process of helping creators grow, helping them connect with their audience, develop not just individual videos, but the entirety of a creator's life on YouTube? Well, I mean, we're trying to help creators find the biggest audience that they can find. And the reason why that's, you brought up creator versus video, the reason why creator channel is so important is because if we have a hope of people coming back to YouTube, well, they have to have in their minds some sense of what they're going to find when they come back to YouTube. If YouTube were just the next viral video and I have no concept of what the next viral video could be, one time it's a cat playing a piano and the next day it's some children interrupting a reporter and the next day it's some other thing happening, then it's hard for me to, when I'm not watching YouTube, say, gosh, I really would like to see something from someone or about something, right? And so that's why I think this connection between fans and creators is so important for both, because it's a way of sort of fostering a relationship that can play out into the future. Let me talk about kind of a dark and interesting question in general, and again, a topic that you or nobody has an answer to. But social media has a sense of, it gives us highs and it gives us lows in the sense that sort of creators often speak about having sort of burnout and having psychological ups and downs and challenges mentally in terms of continuing the creation process. There's a momentum, there's a huge excited audience that makes creators feel great. And I think it's more than just financial. I think it's literally just, they love that sense of community. It's part of the reason I upload to YouTube. I don't care about money, never will. What I care about is the community, but some people feel like this momentum, and even when there's times in their life when they don't feel, you know, for some reason don't feel like creating. So how do you think about burnout, this mental exhaustion that some YouTube creators go through? Is that something we have an answer for? Is that something, how do we even think about that? Well, the first thing is we want to make sure that the YouTube systems are not contributing to this sense, right? And so we've done a fair amount of research to demonstrate that you can absolutely take a break. 
If you are a creator and you've been uploading a lot, we have just as many examples of people who took a break and came back more popular than they were before as we have examples of going the other way. Yeah. Can we pause on that for a second? So the feeling that people have, I think, is if I take a break, everybody, the party will leave, right? So if you could just linger on that. So in your sense that taking a break is okay. Yes, taking a break is absolutely okay. And the reason I say that is because we have, we can observe many examples of being, of creators coming back very strong and even stronger after they have taken some sort of break. And so I just want to dispel the myth that this somehow necessarily means that your channel is going to go down or lose views. That is not the case. We know for sure that this is not a necessary outcome. And so we want to encourage people to make sure that they take care of themselves. That is job one, right? You have to look after yourself and your mental health. And I think that it probably, in some of these cases, contributes to better videos once they come back, right? Because a lot of people, I mean, I know myself, if I burn out on something, then I'm probably not doing my best work, even though I can keep working until I pass out. And so I think that the taking a break may even improve the creative ideas that someone has. Okay. I think that's a really important thing to sort of dispel. I think that applies to all of social media, like literally I've taken a break for a day every once in a while. Sorry. Sorry if that sounds like a short time, but even like, sorry, email, just taking a break from email, or only checking email once a day, especially when you're going through something psychologically in your personal life or so on, or really not sleeping much because of work deadlines, it can refresh you in a way that's profound. And so the same applies. It was there when you came back, right? It's there. And it looks different, actually, when you come back. You're sort of brighter eyed with some coffee, everything, the world looks better. So it's important to take a break when you need it. So you've mentioned kind of the YouTube algorithm that isn't E equals MC squared, it's not the single equation, it's potentially sort of more than a million lines of code. Is it more akin to what successful autonomous vehicles today are, which is they're just basically patches on top of patches of heuristics and human experts really tuning the algorithm and have some machine learning modules? Or is it becoming more and more a giant machine learning system with humans just doing a little bit of tweaking here and there? What's your sense? First of all, do you even have a sense of what is the YouTube algorithm at this point? And however much you do have a sense, what does it look like? Well, we don't usually think about it as the algorithm because it's a bunch of systems that work on different services. The other thing that I think people don't understand is that what you might refer to as the YouTube algorithm from outside of YouTube is actually a bunch of code and machine learning systems and heuristics, but that's married with the behavior of all the people who come to YouTube every day. So the people part of the code, essentially. Exactly. If there were no people who came to YouTube tomorrow, then the algorithm wouldn't work anymore. Right. That's the whole part of the algorithm. 
And so when people talk about, well, the algorithm does this, the algorithm does that, it's sometimes hard to understand, well, it could be the viewers are doing that. And the algorithm is mostly just keeping track of what the viewers do and then reacting to those things in sort of more fine grain situations. And I think that this is the way that the recommendation system and the search system and probably many machine learning systems evolve is you start trying to solve a problem and the first way to solve a problem is often with a simple heuristic. And you want to say, what are the videos we're going to recommend? Well, how about the most popular ones? That's where you start. And over time, you collect some data and you refine your situation so that you're making less heuristics and you're building a system that can actually learn what to do in different situations based on some observations of those situations in the past. And you keep chipping away at these heuristics over time. And so I think that just like with diversity, I think the first diversity measure we took was, okay, not more than three videos in a row from the same channel. It's a pretty simple heuristic to encourage diversity, but it worked, right? Who needs to see four, five, six videos in a row from the same channel? And over time, we try to chip away at that and make it more fine grain and basically have it remove the heuristics in favor of something that can react to individuals and individual situations. So how do you, you mentioned, you know, we know that something worked. How do you get a sense when decisions are kind of A, B testing that this idea was a good one, this was not so good? How do you measure that and across which time scale, across how many users, that kind of thing? Well, you mentioned the A, B experiments. And so just about every single change we make to YouTube, we do it only after we've run a A, B experiment. And so in those experiments, which run from one week to months, we measure hundreds, literally hundreds of different variables and measure changes with confidence intervals in all of them, because we really are trying to get a sense for ultimately, does this improve the experience for viewers? That's the question we're trying to answer. And an experiment is one way because we can see certain things go up and down. So for instance, if we noticed in the experiment, people are dismissing videos less frequently, or they're saying that they're more satisfied, they're giving more videos five stars after they watch them, then those would be indications that the experiment is successful, that it's improving the situation for viewers. But we can also look at other things, like we might do user studies, where we invite some people in and ask them, like, what do you think about this? What do you think about that? How do you feel about this? And other various kinds of user research. But ultimately, before we launch something, we're going to want to run an experiment. So we get a sense for what the impact is going to be, not just to the viewers, but also to the different channels and all of that. An absurd question. Nobody knows. Well, actually, it's interesting. Maybe there's an answer. But if I want to make a viral video, how do I do it? I don't know how you make a viral video. I know that we have in the past tried to figure out if we could detect when a video was going to go viral. And those were, you take the first and second derivatives of the view count and maybe use that to do some prediction. 
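As a concrete illustration of the first-and-second-derivative idea just mentioned, here is a toy check that flags a video whose hourly view growth is both large and accelerating. The thresholds and numbers are invented, and as the next remark makes clear, this kind of detector never worked especially well in practice.

```python
import numpy as np

def looks_viral(hourly_views, growth_threshold=10_000, accel_threshold=2_000):
    views = np.array(hourly_views, dtype=float)
    first_diff = np.diff(views)         # ~ first derivative: views gained per hour
    second_diff = np.diff(first_diff)   # ~ second derivative: is that growth accelerating?
    return first_diff[-1] > growth_threshold and second_diff[-1] > accel_threshold

steady = [1_000, 2_000, 3_000, 4_000, 5_000]        # linear growth, not accelerating
exploding = [1_000, 3_000, 9_000, 27_000, 81_000]   # the growth rate itself is exploding
print(looks_viral(steady), looks_viral(exploding))  # -> False True
```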
But I can't say we ever got very good at that kind of prediction. Oftentimes we look at where the traffic was coming from. If a lot of the viewership is coming from something like Twitter, then maybe it has a higher chance of becoming viral than if it were coming from search or something. But that was just trying to detect a video that might be viral. How to make one, I have no idea. You get your kids to interrupt you while you're on the news or something. Absolutely. But after the fact, on one individual video, sort of ahead of time predicting is a really hard task. But after the video went viral, in analysis, can you sometimes understand why it went viral? From the perspective of YouTube broadly, first of all, is it even interesting for YouTube that a particular video is viral, or does that not matter for the individual, for the experience of people? Well, I think people expect that if a video is going viral and it's something they would be interested in, then I think they would expect YouTube to recommend it to them. Right. So if something's going viral, it's good to just let the wave, let people ride the wave of its virality. Well, I mean, we want to meet people's expectations in that way, of course. So like I mentioned, I hung out with Derek Muller a while ago, a couple of months back. He's actually the person who suggested I talk to you on this podcast. All right. Well, thank you, Derek. At that time, he had just recently posted an awesome science video titled, why are 96 million black balls on this reservoir? And in a matter of, I don't know how long, but like a few days, he got 38 million views and it's still growing. Is this something you can analyze and understand, why it happened with this video, or with a particular video like it? I mean, we can surely see where it was recommended, where it was found, who watched it and those sorts of things. So it's actually, sorry to interrupt, it is the video which helped me discover who Derek is. I didn't know who he is before. So I remember, you know, usually I just have all of these technical, boring MIT Stanford talks in my recommendations because that's what I watch. And then all of a sudden there's this black balls and reservoir video with like an excited nerd, and I'm like, why is this being recommended to me? So I clicked on it and watched the whole thing and it was awesome. And then a lot of people had that experience, like why was I recommended this? But they all of course watched it and enjoyed it. So what's your sense of this wave of recommendation that comes with this viral video that ultimately people get to enjoy after they click on it? Well, I think it's the system, you know, basically doing what anybody who's recommending something would do, which is you show it to some people and if they like it, you say, okay, well, can I find some more people who are a little bit like them? Okay, I'm going to try it with them. Oh, they like it too. Let me expand the circle some more, find some more people. Oh, it turns out they like it too. And you just keep going until you get some feedback that says that, no, now you've gone too far. These people don't like it anymore. And so I think that's basically what happened. And you asked me about how to make a video go viral or make a viral video. I don't think that if you or I decided to make a video about 96 million balls that it would also go viral. It's possible that Derek made like the canonical video about those black balls in the lake. He did actually. Right.
And I don't know whether or not just following along is the secret. Yeah. But it's fascinating. I mean, just like you said, the algorithm sort of expanding that circle and then figuring out that more and more people did enjoy it and that sort of phase shift of just a huge number of people enjoying it and the algorithm quickly, automatically, I assume, figuring that out. I don't know, the dynamics of psychology of that is a beautiful thing. So what do you think about the idea of clipping? Too many people annoyed me into doing it, which is they were requesting it. They said it would be very beneficial to add clips in like the coolest points and actually have explicit videos. Like I'm re uploading a video, like a short clip, which is what the podcasts are doing. Do you see as opposed to, like I also add timestamps for the topics, do you want the clip? Do you see YouTube somehow helping creators with that process or helping connect clips to the original videos or is that just on a long list of amazing features to work towards? Yeah. I mean, it's not something that I think we've done yet, but I can tell you that I think clipping is great and I think it's actually great for you as a creator. And here's the reason. If you think about, I mean, let's say the NBA is uploading videos of its games. Well, people might search for warriors versus rockets or they might search for Steph Curry. And so a highlight from the game in which Steph Curry makes an amazing shot is an opportunity for someone to find a portion of that video. And so I think that you never know how people are going to search for something that you've created. And so you want to, I would say you want to make clips and add titles and things like that so that they can find it as easily as possible. Do you have a dream of a future, perhaps a distant future when the YouTube algorithm figures that out? Sort of automatically detects the parts of the video that are really interesting, exciting, potentially exciting for people and sort of clip them out in this incredibly rich space. Cause if you talk about, if you talk, even just this conversation, we probably covered 30, 40 little topics and there's a huge space of users that would find, you know, 30% of those topics really interesting. And that space is very different. It's something that's beyond my ability to clip out, right? But the algorithm might be able to figure all that out, sort of expand into clips. Do you have a, do you think about this kind of thing? Do you have a hope or dream that one day the algorithm will be able to do that kind of deep content analysis? Well, we've actually had projects that attempt to achieve this, but it really does depend on understanding the video well and our understanding of the video right now is quite crude. And so I think it would be especially hard to do it with a conversation like this. One might be able to do it with, let's say a soccer match more easily, right? You could probably find out where the goals were scored. And then of course you, you need to figure out who it was that scored the goal and, and that might require a human to do some annotation. But I think that trying to identify coherent topics in a transcript, like, like the one of our conversation is, is not something that we're going to be very good at right away. 
And I was speaking more to the general problem actually of being able to do both a soccer match and our conversation without explicit sort of almost my, my hope was that there exists an algorithm that's able to find exciting things in video. So Google now on Google search will help you find the segment of the video that you're interested in. So if you search for something like how to change the filter in my dishwasher, then if there's a long video about your dishwasher and this is the part where the person shows you how to change the filter, then, then it will highlight that area. And provide a link directly to it. And do you know if, from your recollection, do you know if the thumbnail reflects, like, what's the difference between showing the full video and the shorter clip? Do you know how it's presented in search results? I don't remember how it's presented. And the other thing I would say is that right now it's based on creator annotations. Ah, got it. So it's not the thing we're talking about. But folks are working on the more automatic version. It's interesting, people might not imagine this, but a lot of our systems start by using almost entirely the audience behavior. And then as they get better, the refinement comes from using the content. And I wish, I know there's privacy concerns, but I wish YouTube explored the space, which is sort of putting a camera on the users if they allowed it, right, to study their, like, I did a lot of emotion recognition work and so on, to study actual sort of richer signal. One of the cool things when you upload 360 like VR video to YouTube, and I've done this a few times, so I've uploaded myself, it's a horrible idea. Some people enjoyed it, but whatever. The video of me giving a lecture in 360 with a 360 camera, and it's cool because YouTube allows you to then watch where did people look at? There's a heat map of where, you know, of where the center of the VR experience was. And it's interesting because that reveals to you, like, what people looked at. It's not always what you were expecting. In the case of the lecture, it's pretty boring, it is what we were expecting, but we did a few funny videos where there's a bunch of people doing things, and everybody tracks those people. You know, in the beginning, they all look at the main person and they start spreading around and looking at the other people. It's fascinating. So that kind of, that's a really strong signal of what people found exciting in the video. I don't know how you get that from people just watching, except they tuned out at this point. Like, it's hard to measure this moment was super exciting for people. I don't know how you get that signal. Maybe comment, is there a way to get that signal where this was like, this is when their eyes opened up and they're like, like for me with the Ray Dalio video, right? Like at first I was like, okay, this is another one of these like dumb it down for you videos. And then you like start watching, it's like, okay, there's really crisp, clean, deep explanation of how the economy works. That's where I like set up and started watching, right? That moment, is there a way to detect that moment? The only way I can think of is by asking people to label it. You mentioned that we're quite far away in terms of doing video analysis, deep video analysis. Of course, Google, YouTube, you know, we're quite far away from solving autonomous driving problem too. So it's a... I don't know. I think we're closer to that. Well, the, you know, you never know. 
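On the question of detecting that moment, purely as a hypothetical illustration and not a described YouTube feature: one could turn watch-position logs into an attention profile, counting how many viewers were watching each second, so that rewatched or unusually well-retained seconds surface as candidate highlights. All of the data and the threshold below are made up.

```python
import numpy as np

video_length = 60  # seconds
# Hypothetical sessions: the seconds each viewer actually watched
# (skips and rewatches included).
sessions = [
    list(range(0, 40)),
    list(range(0, 60)) + list(range(20, 30)),   # rewatched seconds 20-30
    list(range(10, 35)),
    list(range(0, 25)) + list(range(18, 30)),   # also went back to around second 20
]

attention = np.zeros(video_length)
for watched_seconds in sessions:
    for s in watched_seconds:
        attention[s] += 1

# Candidate highlights: seconds whose attention is well above the video's average.
threshold = attention.mean() + attention.std()
highlight_seconds = np.where(attention >= threshold)[0]
print(highlight_seconds)  # clusters around the rewatched ~20-30 second region
```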
And the Wright brothers thought they're never, they're not going to fly for 50 years, three years before they flew. So what are the biggest challenges would you say? Is it the broad challenge of understanding video, understanding natural language, understanding the challenge before the entire machine learning community or just being able to understand data? Is there something specific to video that's even more challenging than understanding natural language understanding? What's your sense of what the biggest challenge is? Video is just so much information. And so precision becomes a real problem. It's like, you know, you're trying to classify something and you've got a million classes and the distinctions among them, at least from a machine learning perspective are often pretty small, right? Like, you know, you need to see this person's number in order to know which player it is. And there's a lot of players or you need to see, you know, the logo on their chest in order to know like which team they play for. And so, and that's just figuring out who's who, right? And then you go further and saying, okay, well, you know, was that a goal? Was it not a goal? Like, is that an interesting moment as you said, or is that not an interesting moment? These things can be pretty hard. So okay. So Yann LeCun, I'm not sure if you're familiar sort of with his current thinking and work. So he believes that self, what he's referring to as self supervised learning will be the solution sort of to achieving this kind of greater level of intelligence. In fact, the thing he's focusing on is watching video and predicting the next frame. So predicting the future of video, right? So for now we're very far from that, but his thought is because it's unsupervised or as he refers to as self supervised, you know, if you watch enough video, essentially if you watch YouTube, you'll be able to learn about the nature of reality, the physics, the common sense reasoning required by just teaching a system to predict the next frame. So he's confident this is the way to go. So for you, from the perspective of just working with this video, how do you think an algorithm that just watches all of YouTube, stays up all day and night watching YouTube would be able to understand enough of the physics of the world about the way this world works, be able to do common sense reasoning and so on? Well, I mean, we have systems that already watch all the videos on YouTube, right? But they're just looking for very specific things, right? They're supervised learning systems that are trying to identify something or classify something. And I don't know if, I don't know if predicting the next frame is really going to get there because I'm not an expert on compression algorithms, but I understand that that's kind of what compression video compression algorithms do is they basically try to predict the next frame and then fix up the places where they got it wrong. And that leads to higher compression than if you actually put all the bits for the next frame there. So I don't know if I believe that just being able to predict the next frame is going to be enough because there's so many frames and even a tiny bit of error on a per frame basis can lead to wildly different videos. So the thing is, the idea of compression is one way to do compression is to describe through text what's contained in the video. That's the ultimate high level of compression. 
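For concreteness, here is a minimal self-supervised sketch of the next-frame objective being discussed: frame t+1 serves as the target for frame t, so the only supervision is the video itself. The tiny model, the random tensors standing in for real video, and the hyperparameters are all illustrative assumptions; on random noise it will not learn anything useful, which is exactly why real video would be needed.

```python
import torch
import torch.nn as nn

# Fake "video": 16 clips, 8 frames each, 1 channel, 32x32 pixels.
video = torch.rand(16, 8, 1, 32, 32)

# A deliberately tiny per-frame predictor: next frame as a function of the current one.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    # Inputs are frames 0..6, targets are frames 1..7 -- no human labels involved.
    inputs = video[:, :-1].reshape(-1, 1, 32, 32)
    targets = video[:, 1:].reshape(-1, 1, 32, 32)
    prediction = model(inputs)
    loss = loss_fn(prediction, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(float(loss))
```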
So the idea is traditionally when you think of video image compression, you're trying to maintain the same visual quality while reducing the size. But if you think of deep learning from a bigger perspective of what compression is, is you're trying to summarize the video. And the idea there is if you have a big enough neural network, just by watching the next, trying to predict the next frame, you'll be able to form a compression of actually understanding what's going on in the scene. If there's two people talking, you can just reduce that entire video into the fact that two people are talking and maybe the content of what they're saying and so on. That's kind of the open ended dream. So I just wanted to sort of express that because it's interesting, compelling notion, but it is nevertheless true that video, our world is a lot more complicated than we get a credit for. I mean, in terms of search and discovery, we have been working on trying to summarize videos in text or with some kind of labels for eight years at least. And you know, and we're kind of so, so. So if you were to say the problem is a hundred percent solved and eight years ago was zero percent solved, where are we on that timeline would you say? Yeah. To summarize a video well, maybe less than a quarter of the way. So on that topic, what does YouTube look like 10, 20, 30 years from now? I mean, I think that YouTube is evolving to take the place of TV. I grew up as a kid in the seventies and I watched a tremendous amount of television and I feel sorry for my poor mom because people told her at the time that it was going to rot my brain and that she should kill her television. But anyway, I mean, I think that YouTube is at least for my family, a better version of television, right? It's one that is on demand. It's more tailored to the things that my kids want to watch. And also they can find things that they would never have found on television. And so I think that at least from just observing my own family, that's where we're headed is that people watch YouTube kind of in the same way that I watched television when I was younger. So from a search and discovery perspective, what do you, what are you excited about in the five, 10, 20, 30 years? Like what kind of things? It's already really good. I think it's achieved a lot of, of course we don't know what's possible. So it's the task of search of typing in the text or discovering new videos by the next recommendation. So I personally am really happy with the experience. I continuously, I rarely watch a video that's not awesome from my own perspective, but what's, what else is possible? What are you excited about? Well, I think introducing people to more of what's available on YouTube is not only very important to YouTube and to creators, but I think it will help enrich people's lives because there's a lot that I'm still finding out is available on YouTube that I didn't even know. I've been working YouTube eight years and it wasn't until last year that I learned that, that I could watch USC football games from the 1970s. Like I didn't even know that was possible until last year and I've been working here quite some time. So, you know, what was broken about, about that? That it took me seven years to learn that this stuff was already on YouTube even when I got here. So I think there's a big opportunity there. And then as I said before, you know, we want to make sure that YouTube finds a way to ensure that it's acting responsibly with respect to society and enriching people's lives. 
So we want to take all of the great things that it does and make sure that we are eliminating the negative consequences that might happen. And then lastly, if we could get to a point where all the videos people watch are the best ones they've ever watched, that'd be outstanding too. Do you see in many senses becoming a window into the world for people? It's especially with live video, you get to watch events. I mean, it's really, it's the way you experience a lot of the world that's out there is better than TV in many, many ways. So do you see becoming more than just video? Do you see creators creating visual experiences and virtual worlds that if I'm, I'm talking crazy now, but sort of virtual reality and entering that space, or is that at least for now totally outside what YouTube is thinking about? I mean, I think Google is thinking about virtual reality. I don't think about virtual reality too much. I know that we would want to make sure that YouTube is there when virtual reality becomes something or if virtual reality becomes something that a lot of people are interested in. But I haven't seen it really take off yet. Take off. Well, the future is wide open. Christos, I've been really looking forward to this conversation. It's been a huge honor. Thank you for answering some of the more difficult questions I've asked. I'm really excited about what YouTube has in store for us. It's one of the greatest products I've ever used and continues. So thank you so much for talking to me. It's my pleasure. Thanks for asking me. Thanks for listening to this conversation. And thank you to our presenting sponsor, Cash App. Download it. Use code LexPodcast. You'll get $10 and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to become future leaders and innovators. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, follow on Spotify, support on Patreon, or simply connect with me on Twitter. And now, let me leave you with some words of wisdom from Marcel Proust. The real voyage of discovery consists not in seeking new landscapes, but in having new eyes. Thank you for listening and hope to see you next time.
Cristos Goodrow: YouTube Algorithm | Lex Fridman Podcast #68
The following is a conversation with David Chalmers. He's a philosopher and cognitive scientist specializing in the areas of philosophy of mind, philosophy of language, and consciousness. He's perhaps best known for formulating the hard problem of consciousness, which could be stated as why does the feeling which accompanies awareness of sensory information exist at all? Consciousness is almost entirely a mystery. Many people who worry about AI safety and ethics believe that, in some form, consciousness can and should be engineered into AI systems of the future. So while there's much mystery, disagreement, discoveries yet to be made about consciousness, these conversations, while fundamentally philosophical in nature, may nevertheless be very important for engineers of modern AI systems to engage in. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LexPodcast. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar. Brokerage services are provided by Cash App Investing, a subsidiary of Square, and member SIPC. Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of fractional orders is an algorithmic marvel. So big props to the Cash App engineers for solving a hard problem that, in the end, provides an easy interface that takes a step up to the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier. If you get Cash App from the App Store or Google Play and use the code LexPodcast, you'll get $10, and Cash App will also donate $10 to FIRST, one of my favorite organizations that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with David Chalmers. Do you think we're living in a simulation? I don't rule it out. There's probably gonna be a lot of simulations in the history of the cosmos. If the simulation is designed well enough, it'll be indistinguishable from a non simulated reality. And although we could keep searching for evidence that we're not in a simulation, any of that evidence in principle could be simulated. So I think it's a possibility. But do you think the thought experiment is interesting or useful to calibrate how we think about the nature of reality? Yeah, I definitely think it's interesting and useful. In fact, I'm actually writing a book about this right now, all about the simulation idea, using it to shed light on a whole bunch of philosophical questions. So the big one is how do we know anything about the external world? Descartes said, maybe you're being fooled by an evil demon who's stimulating your brain into thinking all this stuff is real when in fact, it's all made up. Well, the modern version of that is, how do you know you're not in a simulation? Then the thought is, if you're in a simulation, none of this is real. So that's teaching you something about knowledge. How do you know about the external world?
I think there's also really interesting questions about the nature of reality right here. If we are in a simulation, is all this real? Is there really a table here? Is it really a microphone? Do I really have a body? The standard view would be, no, we don't. None of this would be real. My view is actually that's wrong. And even if we are in a simulation, all of this is real. That's why I called this reality 2.0. New version of reality, different version of reality, still reality. So what's the difference between quote unquote, real world and the world that we perceive? So we interact with the world by perceiving it. It only really exists through the window of our perception system and in our mind. So what's the difference between something that's quote unquote real, that exists perhaps without us being there, and the world as you perceive it? Well the world as we perceive it is a very simplified and distorted version of what's going on underneath. We already know that from just thinking about science. You don't see too many obviously quantum mechanical effects in what we perceive, but we still know quantum mechanics is going on under all things. So I like to think the world we perceive is this very kind of simplified picture of colors and shapes existing in space and so on. We know there's a, that's what the philosopher Wilfred Sellers called the manifest image. The world as it seems to us, we already know underneath all that is a very different scientific image with atoms or quantum wave functions or super strings or whatever the latest thing is. And that's the ultimate scientific reality. So I think of the simulation idea as basically another hypothesis about what the ultimate say quasi scientific or metaphysical reality is going on underneath the world of the manifest image. The world of the manifest image is this very simple thing that we interact with that's neutral on the underlying stuff of reality. Science can help tell us about that. Maybe philosophy can help tell us about that too. And if we eventually take the red pill and find out we're in a simulation, my view is that's just another view about what reality is made of. The philosopher Immanuel Kant said, what is the nature of the thing in itself? I've got a glass here and it's got all these, it appears to me a certain way, a certain shape, it's liquid, it's clear. And he said, what is the nature of the thing in itself? Well, I think of the simulation idea, it's a hypothesis about the nature of the thing in itself. It turns out if we're in a simulation, the thing in itself nature of this glass, it's okay, it's actually a bunch of data structures running on a computer in the next universe up. Yeah, that's what people tend to do when they think about simulation. They think about our modern computers and somehow trivially crudely just scaled up in some sense. But do you think the simulation, I mean, in order to actually simulate something as complicated as our universe that's made up of molecules and atoms and particles and quarks and maybe even strings, all of that would require something just infinitely many orders of magnitude more of scale and complexity. Do you think we're even able to even like conceptualize what it would take to simulate our universe? Or does it just slip into this idea that you basically have to build a universe, something so big to simulate it? Does it get this into this fuzzy area that's not useful at all? Yeah, well, I mean, our universe is obviously incredibly complicated. 
And for us within our universe to build a simulation of a universe as complicated as ours is gonna pose obvious problems. If the universe is finite, there's just no way that's gonna work. Maybe there's some cute way to make it work if the universe is infinite, maybe an infinite universe could somehow simulate a copy of itself, but that's gonna be hard. Nonetheless, given just that we are in a simulation, I think there's no particular reason why we have to think the simulating universe has to be anything like ours. You've said before that it might be, so you could think of it as turtles all the way down. You could think of the simulating universe as different than ours, but we ourselves could also create another simulating universe. So you said that there could be these kinds of levels of universes. And you've also mentioned this hilarious idea, maybe tongue in cheek, maybe not, that there may be simulations within simulations, arbitrarily stacked levels, and that there may be, that we may be in level 42. Oh yeah. Along those stacks, referencing The Hitchhiker's Guide to the Galaxy. If we're indeed in a simulation within a simulation at level 42, what do you think level zero looks like? The originating universe. I would expect that level zero is truly enormous. I mean, if it's finite, it's at some extraordinarily large finite capacity, but much more likely it's infinite. Maybe it's got some very high cardinality that enables it to support just any number of simulations. So high degree of infinity at level zero, slightly smaller degree of infinity at level one. So by the time you get down to us at level 42, maybe there's plenty of room for lots of simulations of finite capacity. If the top universe only has a small finite capacity, then obviously that's gonna put very, very serious limits on how many simulations you're gonna be able to get running. So I think we can certainly confidently say that if we're at level 42, then the top level's pretty damn big. So it gets more and more constrained as we get down levels, more and more simplified and constrained and limited in resources. Yeah, we still have plenty of capacity here. What was it Feynman said? He said there's plenty of room at the bottom. We're still a number of levels above the degree where there's room for fundamental computing capacity, physical computing capacity, quantum computing capacity at the bottom level. So we've got plenty of room to play with, and we probably have plenty of room for simulations of pretty sophisticated universes, perhaps none as complicated as our universe, unless our universe is infinite, but still at the very least for pretty serious finite universes, but maybe universes somewhat simpler than ours, unless of course we're prepared to take certain shortcuts in the simulation, which might then increase the capacity significantly. Do you think the human mind, us people, in terms of the complexity of simulation, is at the height of what the simulation might be able to achieve? Like if you look at incredible entities that could be created in this universe of ours, do you have an intuition about how incredible human beings are on that scale? I think we're pretty impressive, but we're not that impressive. Are we above average? I mean, I think human beings are at a certain point in the scale of intelligence which made many things possible. You get through evolution, through single cell organisms, through fish and mammals and primates, and something happens.
Once you get to human beings, we've just reached that level where we get to develop language, we get to develop certain kinds of culture, and we get to develop certain kinds of collective thinking that has enabled all this amazing stuff to happen, science and literature and engineering and culture and so on. So we're just at the beginning of that, on the evolutionary threshold. It's kind of like we just got there, who knows, a few thousand or tens of thousands of years ago. So we're probably just at the very beginning of what's possible there. So I'm inclined to think on the scale of intelligent beings, we're somewhere very near the bottom. I would expect that, for example, if we're in a simulation, then the simulators who created us have got the capacity to be far more sophisticated. If we're at level 42, who knows what the ones at level zero are like. It's also possible that this is the epitome of what is possible to achieve. So we as human beings see ourselves maybe as flawed, see all the constraints, all the limitations, but maybe that's the magical, the beautiful thing. Maybe those limitations are the essential elements for an interesting sort of edge of chaos, that interesting existence, and if you make us much more intelligent, if you make us much more powerful in any kind of dimension of performance, maybe you lose something fundamental that makes life worth living. So you kind of have this optimistic view that we're this little baby, that there's so much growth and potential, but this could also be it. The most amazing thing is us. Maybe what you're saying is consistent with what I'm saying. I mean, we could still have levels of intelligence far beyond us, but maybe those levels of intelligence, on your view, would be kind of boring. We kind of get so good at everything that life suddenly becomes unidimensional. So we're just inhabiting this one spot of like maximal romanticism in the history of evolution. You get to humans and it's like, yeah, and then in years to come, our super intelligent descendants are gonna look back at us and say, those were the days, when they just hit the point of inflection and life was interesting. I am an optimist. So I'd like to think that if there is super intelligence somewhere in the future, they'll figure out how to make life super interesting and super romantic. Well, you know what they're gonna do. What they're gonna do is realize how boring life is when you're super intelligent. So they create a new level of simulation and sort of live through the things they've created by watching them stumble about in their flawed ways. So maybe that's, so you create a new level of simulation every time you get really bored with how smart you are. This would be kind of sad though, because then the peak of their existence would be watching simulations for entertainment. Not unlike saying the peak of our existence now is Netflix. No, it's all right. A flip side of that could be that the peak of our existence for many people is having children and watching them grow. That becomes very meaningful. Okay, you create a simulation, that's like creating a family. Creating like, well, any kind of creation is kind of a powerful act. Do you think it's easier to simulate the mind or the universe? So I've heard several people, including Nick Bostrom, think about the idea that maybe you don't need to simulate the universe, you can just simulate the human mind.
Or in general, just the distinction between simulating the entirety of it, the entirety of the physical world, or just simulating the mind. Which one do you see as more challenging? Well, I think in some sense, the answer is obvious. It has to be simpler to simulate the mind than to simulate the universe, because the mind is part of the universe. And in order to fully simulate the universe, you're gonna have to simulate the mind. So unless we're talking about partial simulations. And I guess the question is which comes first? Does the mind come before the universe or does the universe come before the mind? So the mind could just be an emergent phenomena in this universe. So simulation is an interesting thing that it's not like creating a simulation perhaps requires you to program every single thing that happens in it. It's just defining a set of initial conditions and rules based on which it behaves. Simulating the mind requires you to have a little bit more, we're now in a little bit of a crazy land, but it requires you to understand the fundamentals of cognition, perhaps of consciousness, of perception of everything like that, that's not created through some kind of emergence from basic physics laws, but more requires you to actually understand the fundamentals of the mind. How about if we said to simulate the brain? The brain. Rather than the mind. So the brain is just a big physical system. The universe is a giant physical system. To simulate the universe at the very least, you're gonna have to simulate the brains as well as all the other physical systems within it. And it's not obvious that the problems are any worse for the brain than for, it's a particularly complex physical system. But if we can simulate arbitrary physical systems, we can simulate brains. There is this further question of whether, when you simulate a brain, will that bring along all the features of the mind with it? Like will you get consciousness? Will you get thinking? Will you get free will? And so on. And that's something philosophers have argued over for years. My own view is if you simulate the brain well enough, that will also simulate the mind. But yeah, there's plenty of people who would say no. You'd merely get like a zombie system, a simulation of a brain without any true consciousness. But for you, you put together a brain, the consciousness comes with it, arise. Yeah, I don't think it's obvious. That's your intuition. My view is roughly that yeah, what is responsible for consciousness, it's in the patterns of information processing and so on rather than say the biology that it's made of. There's certainly plenty of people out there who think consciousness has to be say biological. So if you merely replicate the patterns of information processing in a nonbiological substrate, you'll miss what's crucial for consciousness. I mean, I just don't think there's any particular reason to think that biology is special here. You can imagine substituting the biology for nonbiological systems, say silicon circuits that play the same role. The behavior will continue to be the same. And I think just thinking about what is the true, when I think about the connection, the isomorphisms between consciousness and the brain, the deepest connections to me seem to connect consciousness to patterns of information processing, not to specific biology. So I at least adopted as my working hypothesis that basically it's the computation and the information that matters for consciousness. 
Same time, we don't understand consciousness, so all this could be wrong. So the computation, the flow, the processing, manipulation of information, the process is where the consciousness, the software is where the consciousness comes from, not the hardware. Roughly the software, yeah. The patterns of information processing at least in the hardware, which we could view as software. It may not be something you can just like program and load and erase and so on in the way we can with ordinary software, but it's something at the level of information processing rather than at the level of implementation. So on that, what do you think of the experience of self, just the experience of the world in a virtual world, in virtual reality? Is it possible that we can create sort of offsprings of our consciousness by existing in a virtual world long enough? So yeah, can we be conscious in the same kind of deep way that we are in this real world by hanging out in a virtual world? Yeah, well, the kind of virtual worlds we have now are interesting but limited in certain ways. In particular, they rely on us having a brain and so on, which is outside the virtual world. Maybe I'll strap on my VR headset or just hang out in a virtual world on a screen, but my brain and then my physical environment might be simulated if I'm in a virtual world, but right now, there's no attempt to simulate my brain. There might be some non player characters in these virtual worlds that have simulated cognitive systems of certain kinds that dictate their behavior, but mostly, they're pretty simple right now. I mean, some people are trying to combine, put a bit of AI in their non player characters to make them smarter, but for now, inside virtual world, the actual thinking is interestingly distinct from the physics of those virtual worlds. In a way, actually, I like to think this is kind of reminiscent of the way that Descartes thought our physical world was. There's physics, and there's the mind, and they're separate. Now we think the mind is somehow connected to physics pretty deeply, but in these virtual worlds, there's a physics of a virtual world, and then there's this brain which is totally outside the virtual world that controls it and interacts it when anyone exercises agency in a video game, that's actually somebody outside the virtual world moving a controller, controlling the interaction of things inside the virtual world. So right now, in virtual worlds, the mind is somehow outside the world, but you could imagine in the future, once we have developed serious AI, artificial general intelligence, and so on, then we could come to virtual worlds which have enough sophistication, you could actually simulate a brain or have a genuine AGI, which would then presumably be able to act in equally sophisticated ways, maybe even more sophisticated ways, inside the virtual world to how it might in the physical world, and then the question's gonna come along, that would be kind of a VR, virtual world internal intelligence, and then the question is could they have consciousness, experience, intelligence, free will, all the things that we have, and again, my view is I don't see why not. To linger on it a little bit, I find virtual reality really incredibly powerful, just even the crude virtual reality we have now of perhaps there's psychological effects that make some people more amenable to virtual worlds than others, but I find myself wanting to stay in virtual worlds for the most part. You do? Yes. With a headset or on a desktop? 
No, with a headset. Really interesting, because I am totally addicted to using the internet and things on a desktop, but when it comes to VR, with a headset, I don't typically use it for more than 10 or 20 minutes. There's something just slightly aversive about it, I find, so I don't, right now, even though I have Oculus Rift and Oculus Quest and HTC Vive and Samsung, this and that. You just don't wanna stay in that world for long. Not for extended periods. You actually find yourself hanging out in that? Something about, it's both a combination of just imagination and considering the possibilities of where this goes in the future. It feels like I want to almost prepare my brain for it. I wanna explore sort of Disneyland when it's first being built in the early days, and it feels like I'm walking around almost imagining the possibilities, and something through that process allows my mind to really enter into that world, but you say that the brain is external to that virtual world. It is, strictly speaking, true, but... If you're in VR and you do brain surgery on an avatar, and you're gonna open up that skull, what are you gonna find? Sorry, nothing there. Nothing. The brain is elsewhere. You don't think it's possible to kind of separate them, and I don't mean in a sense like Descartes, like a hard separation, but basically, do you think it's possible, with the brain outside of the virtual realm, when you're wearing a headset, to create a new consciousness for prolonged periods of time? Really feel, like really, like forget that your brain is outside. So this is, okay, this is gonna be the case where the brain is still outside. It's still outside. But could living in the VR, I mean, we already find this, right, with video games. Exactly. They're completely immersive, and you get taken up by living in those worlds, and it becomes your reality for a while. So they're not completely immersive, they're just very immersive. Completely immersive. You don't forget the external world, no. Exactly, so that's what I'm asking. Do you think it's almost possible to really forget the external world? Really, really immerse yourself. To forget completely? Why would we forget? We've got pretty good memories. Maybe you can stop paying attention to the external world, but this already happens a lot. I go to work, and maybe I'm not paying attention to my home life. I go to a movie, and I'm immersed in that. So that degree of immersion, absolutely. But we still have the capacity to remember it. To completely forget the external world, I'm thinking that would probably take some, I don't know, some pretty serious drugs or something to make your brain do that. Is that possible? So, I mean, I guess what I'm getting at is, is consciousness truly a property that's tied to the physical brain? Or can you create sort of different offspring, copies of consciousnesses, based on the worlds that you enter? Well, the way we're doing it now, at least with standard VR, there's just one brain. It interacts with the physical world, plays a video game, puts on a video headset, interacts with this virtual world. And I think we'd typically say there's one consciousness here that nonetheless undergoes different environments, takes on different characters in different environments. This is already something that happens in the nonvirtual world. I might interact one way in my home life, my work life, my social life, and so on. So at the very least, that will happen in a virtual world very naturally.
People sometimes adopt the character of avatars very different from themselves, maybe even a different gender, different race, different social background. So that much is certainly possible. I would see that as a single consciousness taking on different personas. If you want literal splitting of consciousness into multiple copies, I think it's gonna take something more radical than that. Like maybe you can run different simulations of your brain in different realities and then expose them to different histories. And then you'd split yourself into 10 different simulated copies, which then undergo different environments and ultimately do become 10 very different consciousnesses. Maybe that could happen, but now we're not talking about something that's possible in the near term. We're gonna have to have brain simulations and AGI for that to happen. Got it. So before any of that happens, you fundamentally see it as a singular consciousness, even though it's experiencing different environments, virtual or not, it's still connected to the same set of memories, the same set of experiences, and therefore one sort of joint conscious system. Yeah, or at least no more multiple than the kind of multiple consciousness that we get from inhabiting different environments in a non virtual world. So you said as a child, you were a music color synesthete, where songs had colors for you. So what songs had what colors? You know, this is funny. I didn't pay much attention to this at the time, but I'd listen to a piece of music and I'd get some kind of imagery of a kind of color. The weird thing is mostly they were kind of murky, dark greens and olive browns, and the colors weren't all that interesting. I don't know what the reason is. I mean, my theory is that maybe different chords and tones provided different colors and they all tended to get mixed together into these somewhat uninteresting browns and greens. But every now and then there'd be something that had a really pure color. So there's just a few that I remember. There was Here, There and Everywhere by the Beatles, which was bright red and has this very distinctive tonality and chord structure at the beginning. So that was bright red. There was this song by the Alan Parsons Project called Ammonia Avenue that was kind of a pure, a pure blue. Anyway, I've got no idea how this happened. I didn't even pay that much attention until it went away when I was about 20. This synesthesia often goes away. So is it purely just the perception of a particular color, or was there a positive or negative experience? Like was blue associated with a positive and red with a negative? Or is it simply the perception of color associated with some characteristic of the song? For me, I don't remember a lot of association with emotion or with value. It was just this kind of weird and interesting fact. I mean, at the beginning, I thought this was something that happened to everyone, songs having colors. Maybe I mentioned it once or twice and people said, nope. I thought it was kind of cool when there was one that had one of these especially pure colors, but it was only much later, once I became a grad student thinking about the mind, that I read about this phenomenon called synesthesia, and I was like, hey, that's what I had. And now I occasionally talk about it in my classes, in my intro class, and it still happens sometimes. A student comes up and says, hey, I have that. I never knew about that. I never knew it had a name. You said that it went away at age 20 or so.
And that you have a journal entry from around then saying, songs don't have colors anymore. What happened? What happened? Yeah, it was definitely sad that it was gone. In retrospect, it was like, hey, that's cool. The colors have gone. Yeah, can you think about that for a little bit? Do you miss those experiences? Because it's a fundamentally different set of experiences that you no longer have. Or is it just a nice thing to have had? You don't see them as that fundamentally different than you visiting a new country and experiencing new environments. I guess for me, when I had these experiences, they were somewhat marginal. They were like a little bonus kind of experience. I know there are people who have much more serious forms of synesthesia than this for whom it's absolutely central to their lives. I know people who, when they experience new people, they have colors, maybe they have tastes and so on. Every time they see writing, it has colors. Some people, whenever they hear music, it's got a certain really rich color pattern. For some synesthetes, it's absolutely central. I think if they lost it, they'd be devastated. Again, for me, it was a very, very mild form of synesthesia, and it's like, yeah, it's like those interesting experiences you might get under different altered states of consciousness and so on. It's kind of cool, but not necessarily the single most important experiences in your life. Got it. So let's try to go to the very simplest question that you've answered many a time, but perhaps the simplest things can help us reveal, even in time, some new ideas. So what, in your view, is consciousness? What is qualia? What is the hard problem of consciousness? Consciousness, I mean, the word is used many ways, but the kind of consciousness that I'm interested in is basically subjective experience, what it feels like from the inside to be a human being or any other conscious being. I mean, there's something it's like to be me right now. I have visual images that I'm experiencing. I'm hearing my voice. I've got maybe some emotional tone. I've got a stream of thoughts running through my head. These are all things that I experience from the first person point of view. I've sometimes called this the inner movie in the mind. It's not a perfect metaphor. It's not like a movie in every way, and it's very rich. But yeah, it's just direct, subjective experience. And I call that consciousness, or sometimes philosophers use the word qualia, which you suggested. People tend to use the word qualia for things like the qualities of things like colors, redness, the experience of redness versus the experience of greenness, the experience of one taste or one smell versus another, the experience of the quality of pain. And yeah, a lot of consciousness is the experience of those qualities. Well, consciousness is bigger, the entirety of any kinds of experiences. Consciousness of thinking is not obviously qualia. It's not like specific qualities like redness or greenness, but still I'm thinking about my hometown. I'm thinking about what I'm gonna do later on. Maybe there's still something running through my head, which is subjective experience. Maybe it goes beyond those qualities or qualia. Philosophers sometimes use the word phenomenal consciousness for consciousness in this sense. I mean, people also talk about access consciousness, being able to access information in your mind, reflective consciousness, being able to think about yourself. 
But it looks like the really mysterious one, the one that really gets people going, is phenomenal consciousness. The fact that there's subjective experience and all this feels like something at all. And then the hard problem is how is it that, why is it that there is phenomenal consciousness at all? And how is it that physical processes in a brain could give you subjective experience? It looks like, on the face of it, you'd have all this big complicated physical system in a brain running without giving you subjective experience at all. And yet we do have subjective experience. So the hard problem is just explain that. Explain how that comes about. We haven't been able to build machines where a red light goes on that says it's now conscious. So how do we actually create that? Or how do humans do it? And how do we ourselves do it? We do every now and then create machines that can do this. We create babies that are conscious. They've got these brains. That brain does produce consciousness. But even though we can create it, we still don't understand why it happens. Maybe eventually we'll be able to create machines, AI machines, which as a matter of fact are conscious. But that won't necessarily make the hard problem go away any more than it does with babies. Because we still wanna know how and why is it that these processes give you consciousness? You just made me realize for a second, maybe it's a totally dumb realization, but nevertheless, that a useful way to think about the creation of consciousness is looking at a baby. So there's a certain point at which that baby is not conscious. The baby starts from maybe, I don't know, a few cells, right? There's a certain point at which consciousness arrives, it's conscious. Of course, we can't know exactly where that line is, but that's a useful idea, that we do create consciousness. Again, a really dumb thing for me to say, but not until now did I realize we do engineer consciousness. We get to watch the process happen. We don't know at which point it happens or where it is, but we do see the birth of consciousness. Yeah, I mean, there's a question, of course, of whether babies are conscious when they're born. And it used to be, it seems, at least some people thought they weren't, which is why they didn't give anesthetics to newborn babies when they circumcised them. And so now people think, oh, that would be incredibly cruel. Of course babies feel pain. And now the dominant view is that babies can feel pain. Actually, my partner Claudia works on this whole issue of whether there's consciousness in babies and of what kind. And she certainly thinks that newborn babies come into the world with some degree of consciousness. Of course, then you can just extend the question backwards to fetuses and suddenly you're into politically controversial territory. But the question also arises in the animal kingdom. Where does consciousness start or stop? Is there a line in the animal kingdom where the first conscious organisms are? It's interesting, over time, people are becoming more and more liberal about ascribing consciousness to animals. People used to think maybe only mammals could be conscious. Now most people seem to think, sure, fish are conscious. They can feel pain. And now we're arguing over insects. You'll find people out there who say plants have some degree of consciousness. So, you know, who knows where it's gonna end.
The far end of this chain is the view that every physical system has some degree of consciousness. Philosophers call that panpsychism. You know, I take that view. I mean, that's a fascinating way to view reality. So if you could talk about, if you can linger on panpsychism for a little bit, what does it mean? So it's not just plants are conscious. I mean, it's that consciousness is a fundamental fabric of reality. What does that mean to you? How are we supposed to think about that? Well, we're used to the idea that some things in the world are fundamental, right, in physics. Like what? We take things like space or time or space time, mass, charges, fundamental properties of the universe. You don't reduce them to something simpler. You take those for granted. You've got some laws that connect them. Here is how mass and space and time evolve. Theories like relativity or quantum mechanics or some future theory that will unify them both. But everyone says you gotta take some things as fundamental. And if you can't explain one thing, in terms of the previous fundamental things, you have to expand. Maybe something like this happened with Maxwell. He ended up with fundamental principles of electromagnetism and took charge as fundamental because it turned out that was the best way to explain it. So I at least take seriously the possibility something like that could happen with consciousness. Take it as a fundamental property, like space, time, and mass. And instead of trying to explain consciousness wholly in terms of the evolution of space, time, and mass, and so on, take it as a primitive and then connect it to everything else by some fundamental laws. Because there's this basic problem that the physics we have now looks great for solving the easy problems of consciousness, which are all about behavior. They give us a complicated structure and dynamics. They tell us how things are gonna behave, what kind of observable behavior they'll produce, which is great for the problems of explaining how we walk and how we talk and so on. Those are the easy problems of consciousness. But the hard problem was this problem about subjective experience just doesn't look like that kind of problem about structure, dynamics, how things behave. So it's hard to see how existing physics is gonna give you a full explanation of that. Certainly trying to get a physics view of consciousness, yes, there has to be a connecting point and it could be at the very axiomatic at the very beginning level. But first of all, there's a crazy idea that sort of everything has properties of consciousness. At that point, the word consciousness is already beyond the reach of our current understanding. Like far, because it's so far from, at least for me, maybe you can correct me, as far from the experiences that I have as a human being. To say that everything is conscious, that means that basically another way to put that, if that's true, then we understand almost nothing about that fundamental aspect of the world. How do you feel about saying an ant is conscious? Do you get the same reaction to that or is that something you can understand? I can understand ant, I can understand an atom, a particle. Plants? Plant, so I'm comfortable with living things on Earth being conscious because there's some kind of agency where they're similar size to me and they can be born and they can die. And that is understandable intuitively. Of course, you anthropomorphize, you put yourself in the place of the plant, but I can understand it. 
I mean, I'm not like, I don't believe actually that plants are conscious or that plants suffer, but I can understand that kind of belief, that kind of idea. How do you feel about robots? Like the kind of robots we have now? If I told you that a Roomba, or some deep neural network, had some degree of consciousness. I could understand that a Roomba has consciousness. I had just spent all day at iRobot. And I mean, I personally love robots and I have a deep connection with robots. So I probably also anthropomorphize them. There's something about the physical object. So it's different than a neural network, a neural network running as software. To me, the physical object, something about the human experience, allows me to really see that physical object as an entity. And if it moves in a way that I didn't program, where it feels like it's acting based on its own perception, and yes, self awareness and consciousness, even if it's a Roomba, then you start to assign it some agency, some consciousness. But to say that panpsychism, that consciousness is a fundamental property of reality, is a much bigger statement. It's like turtles all the way down. It doesn't end. I know it's full of mystery, but if you can linger on it, how do you think about reality if consciousness is a fundamental part of its fabric? The way you get there is from thinking, can we explain consciousness given the existing fundamentals? And if you can't, as it looks like at least right now, then you've got to add something. It doesn't follow that you have to add consciousness. Here's another interesting possibility: well, we'll add something else. Let's call it proto consciousness or X. And then it turns out space, time, mass plus X will somehow collectively give you the possibility for consciousness. I don't rule out that view either. I call that pan proto psychism, because maybe there's some other property, proto consciousness, at the bottom level. And if you can't imagine there's actually genuine consciousness at the bottom level, I think we should be open to the idea that there's this other thing, X, which maybe we can't imagine, that somehow gives you consciousness. But if we are playing along with the idea that there really is genuine consciousness at the bottom level, of course, this is going to be way out and speculative, but at least, say, if it was classical physics, you'd end up saying, well, every little atom, with a bunch of particles in space time, each of these particles has some kind of consciousness whose structure mirrors its physical properties, like its mass, its charge, its velocity, and so on. The structure of its consciousness would roughly correspond to that. And the physical interactions between particles, I mean, there's this old worry about physics. I mentioned this before with this issue about the manifest image. We don't really find out about the intrinsic nature of things. Physics tells us about how a particle relates to other particles and interacts. It doesn't tell us about what the particle is in itself. That was Kant's thing in itself. So here's a view. The nature in itself of a particle is something mental. A particle is actually a conscious, a little conscious subject with properties of its consciousness that correspond to its physical properties.
The laws of physics are actually ultimately relating these properties of conscious subjects. So in this view, a Newtonian world actually would be a vast collection of little conscious subjects at the bottom level, way, way simpler than we are, without free will or rationality or anything like that. But that's what the universe would be like. Now, of course, that's a vastly speculative view. No particular reason to think it's correct. Furthermore, with non Newtonian physics, say a quantum mechanical wave function, suddenly it starts to look different. It's not a vast collection of conscious subjects. Maybe there's ultimately one big wave function for the whole universe. Corresponding to that might be something more like a single conscious mind whose structure corresponds to the structure of the wave function. People sometimes call this cosmo psychism. And now, of course, we're in the realm of extremely speculative philosophy. There's no direct evidence for this, but if you want a picture of what that universe would be like, think, yeah, giant cosmic mind with enough richness and structure among it to replicate all the structure of physics. I think therefore I am at the level of particles, and with quantum mechanics at the level of the wave function. It's kind of an exciting, beautiful possibility, of course way out of reach of physics currently. It is interesting that some neuroscientists are beginning to take panpsychism seriously, that you find consciousness even in very simple systems. So for example, the integrated information theory of consciousness, a lot of neuroscientists are taking seriously. Actually, this new book by Christof Koch just came in, The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed. He basically endorses a panpsychist view where you get consciousness with some degree of information processing, or integrated information processing, in a system, and even very, very simple systems, like a couple of particles, will have some degree of this. So he ends up with some degree of consciousness in all matter. And the claim is that this theory can actually explain a bunch of stuff about the connection between the brain and consciousness. Now, that's very controversial. I think it's very, very early days in the science of consciousness. It's interesting that it's not just philosophy that might lead you in this direction, but there are ways of thinking quasi scientifically that lead you there too. But maybe it's different than panpsychism. What do you think? So Alan Watts has this quote that I'd like to ask you about. The quote is, through our eyes, the universe is perceiving itself. Through our ears, the universe is listening to its harmonies. We are the witnesses through which the universe becomes conscious of its glory, of its magnificence. So that's not panpsychism. Do you think that we are essentially the tools, the senses the universe created to be conscious of itself? It's an interesting idea. Of course, if you went for the giant cosmic mind view, then the universe was conscious all along. It didn't need us. We're just little components of the universal consciousness. Likewise, if you believe in panpsychism, then there was some little degree of consciousness at the bottom level all along. And we were just a more complex form of consciousness. So I think maybe the quote you mentioned works better.
If you're not a panpsychist, you're not a cosmo psychist, you think consciousness just exists at this intermediate level. And of course, that's the Orthodox view. That you would say is the common view? So is your own view with panpsychism a rare view? I think it's generally regarded certainly as a speculative view held by a fairly small minority of at least theorists, most philosophers and most scientists who think about consciousness are not panpsychists. There's been a bit of a movement in that direction the last 10 years or so. It seems to be quite popular, especially among the younger generation, but it's still very definitely a minority view. Many people think it's totally batshit crazy to use the technical term. But the philosophical term. So the Orthodox view, I think is still consciousness is something that humans have and some good number of nonhuman animals have, and maybe AIs might have one day, but it's restricted. On that view, then there was no consciousness at the start of the universe. There may be none at the end, but it is this thing which happened at some point in the history of the universe, consciousness developed. And yes, that's a very amazing event on this view because many people are inclined to think consciousness is what somehow gives meaning to our lives. Without consciousness, there'd be no meaning, no true value, no good versus bad and so on. So with the advent of consciousness, suddenly the universe went from meaningless to somehow meaningful. Why did this happen? I guess the quote you mentioned was somehow, this was somehow destined to happen because the universe needed to have consciousness within it to have value and have meaning. And maybe you could combine that with a theistic view or a teleological view. The universe was inexorably evolving towards consciousness. Actually, my colleague here at NYU, Tom Nagel, wrote a book called Mind and Cosmos a few years ago where he argued for this teleological view of evolution toward consciousness, saying this led the problems for Darwinism. It's got him on, this is very, very controversial. Most people didn't agree. I don't myself agree with this teleological view, but it is at least a beautiful speculative view of the cosmos. What do you think people experience? What do they seek when they believe in God from this kind of perspective? I'm not an expert on thinking about God and religion. I'm not myself religious at all. When people sort of pray, communicate with God, which whatever form, I'm not speaking to sort of the practices and the rituals of religion. I mean the actual experience of that people really have a deep connection with God in some cases. What do you think that experience is? It's so common, at least throughout the history of civilization, that it seems like we seek that. At the very least, it is an interesting conscious experience that people have when they experience religious awe or prayer and so on. Neuroscientists have tried to examine what bits of the brain are active and so on. But yeah, there's this deeper question of what are people looking for when they're doing this? And like I said, I've got no real expertise on this, but it does seem that one thing people are after is a sense of meaning and value, a sense of connection to something greater than themselves that will give their lives meaning and value. 
And maybe the thought is if there is a God, then God somehow is a universal consciousness who has invested this universe with meaning and somehow connection to God might give your life meaning. I guess I can kind of see the attractions of that, but it still makes me wonder why is it exactly that a universal consciousness, God, would be needed to give the world meaning? If universal consciousness can give the world meaning, why can't local consciousness give the world meaning too? So I think my consciousness gives my world meaning. Is the origin of meaning for your world. Yeah, I experience things as good or bad, happy, sad, interesting, important. So my consciousness invests this world with meaning. Without any consciousness, maybe it would be a bleak, meaningless universe. But I don't see why I need someone else's consciousness or even God's consciousness to give this universe meaning. Here we are, local creatures with our own subjective experiences. I think we can give the universe meaning ourselves. I mean, maybe to some people that feels inadequate. Our own local consciousness is somehow too puny and insignificant to invest any of this with cosmic significance. And maybe God gives you a sense of cosmic significance, but I'm just speculating here. So it's a really interesting idea that consciousness is the thing that makes life meaningful. If you could maybe just briefly explore that for a second. So I suspect just from listening to you now, you mean in an almost trivial sense, just the day to day experiences of life have, because of you attach identity to it, they become, I guess I wanna ask something I would always wanted to ask a legit world renowned philosopher. What is the meaning of life? So I suspect you don't mean consciousness gives any kind of greater meaning to it all. And more to day to day. But is there a greater meaning to it all? I think life has meaning for us because we are conscious. So without consciousness, no meaning, consciousness invests our life with meaning. So consciousness is the source of the meaning of life, but I wouldn't say consciousness itself is the meaning of life. I'd say what's meaningful in life is basically what we find meaningful, what we experience as meaningful. So if you find meaning and fulfillment and value in say, intellectual work, like understanding, then that's a very significant part of the meaning of life for you. If you find that in social connections or in raising a family, then that's the meaning of life for you. The meaning kind of comes from what you value as a conscious creature. So I think there's no, on this view, there's no universal solution. No universal answer to the question, what is the meaning of life? The meaning of life is where you find it as a conscious creature, but it's consciousness that somehow makes value possible. Experiencing some things as good or as bad or as meaningful, something comes from within consciousness. So you think consciousness is a crucial component, ingredient of assigning value to things? I mean, it's kind of a fairly strong intuition that without consciousness, there wouldn't really be any value if we just had a purely universe of unconscious creatures. Would anything be better or worse than anything else? Certainly when it comes to ethical dilemmas, you know about the old trolley problem. Do you kill one person or do you switch to the other track to kill five? 
Well, I've got a variant on this, the zombie trolley problem, where there's a one conscious being on one track and five humanoid zombies. Let's make them robots who are not conscious on the other track. Do you, given that choice, do you kill the one conscious being or the five unconscious robots? Most people have a fairly clear intuition here. Kill the unconscious beings because they basically, they don't have a meaningful life. They're not really persons, conscious beings at all. We don't have good intuition about something like an unconscious being. So in philosophical terms, you referred to as a zombie. It's a useful thought experiment construction in philosophical terms, but we don't yet have them. So that's kind of what we may be able to create with robots. And I don't necessarily know what that even means. Yeah, they're merely hypothetical. For now, they're just a thought experiment. They may never be possible. I mean, the extreme case of a zombie is a being which is physically, functionally, behaviorally identical to me, but not conscious. That's a mere, I don't think that could ever be built in this universe. The question is just could we, does that hypothetically make sense? That's kind of a useful contrast class to raise questions like, why aren't we zombies? How does it come about that we're conscious? And we're not like that. But there are less extreme versions of this like robots, which are maybe not physically identical to us, maybe not even functionally identical to us. Maybe they've got a different architecture, but they can do a lot of sophisticated things, maybe carry on a conversation, but they're not conscious. And that's not so far out. We've got simple computer systems, at least tending in that direction now. And presumably this is gonna get more and more sophisticated over years to come where we may have some pretty, it's at least quite straightforward to conceive of some pretty sophisticated robot systems that can use language and be fairly high functioning without consciousness at all. Then I stipulate that. I mean, we've caused, there's this tricky question of how you would know whether they're conscious. But let's say we've somehow solved that. And we know that these high functioning robots aren't conscious. Then the question is, do they have moral status? Does it matter how we treat them? What does moral status mean, sir? Basically it's that question. Can they suffer? Does it matter how we treat them? For example, if I mistreat this glass, this cup by shattering it, then that's bad. Why is it bad though? It's gonna make a mess. It's gonna be annoying for me and my partner. And so it's not bad for the cup. No one would say the cup itself has moral status. Hey, you hurt the cup and that's doing it a moral harm. Likewise, plants, well, again, if they're not conscious, most people think by uprooting a plant, you're not harming it. But if a being is conscious on the other hand, then you are harming it. So Siri, or I dare not say the name of Alexa. Anyway, so we don't think we're morally harming Alexa by turning her off or disconnecting her or even destroying her, whether it's the system or the underlying software system, because we don't really think she's conscious. On the other hand, you move to like the disembodied being in the movie, her, Samantha, I guess she was kind of presented as conscious. And then if you destroyed her, you'd certainly be committing a serious harm. 
So I think our strong sense is if a being is conscious and can undergo subjective experiences, then it matters morally how we treat them. So if a robot is conscious, it matters, but if a robot is not conscious, then it's basically just meat or a machine and it doesn't matter. So I think, at least, maybe how we think about this stuff is fundamentally wrong, but I think a lot of people who think about this stuff seriously, including people who think about, say, the moral treatment of animals and so on, come to the view that consciousness is ultimately kind of the line between systems that we have to take into account in thinking morally about how we act and systems for which we don't. And I think I've seen you write or talk about the demonstration of consciousness from a system like that, from a system like Alexa or a conversational agent, that what you would be looking for, at the very basic level, is for the system to have an awareness that I'm just a program, and yet, why do I experience this? Or not to have that experience, but to communicate that to you. So that's what us humans would sound like. If you all of a sudden woke up one day, like Kafka, right, in a body of a bug or something, but in a computer, and you all of a sudden realized you don't have a body and yet you were feeling what you were feeling, you would probably say those kinds of things. So do you think a system essentially becomes conscious by convincing us that it's conscious through the words that I just mentioned? So by being confused about the fact that, why am I having these experiences? So basically. I don't think this is what makes you conscious, but I do think being puzzled about consciousness is a very good sign that a system is conscious. So if I encountered a robot that actually seemed to be genuinely puzzled by its own mental states and saying, yeah, I have all these weird experiences and I don't see how to explain them, I know I'm just a set of silicon circuits, but I don't see how that would give you my consciousness, I would at least take that as some evidence that there's some consciousness going on there. I don't think a system needs to be puzzled about consciousness to be conscious. Many people aren't puzzled by their consciousness. Animals don't seem to be puzzled at all. I still think they're conscious. So I don't think that's a requirement on consciousness, but I do think if we're looking for signs of consciousness, say in AI systems, one of the things that will help convince me that an AI system is conscious is if it shows signs of introspectively recognizing something like consciousness and finding this philosophically puzzling in the way that we do. It's such an interesting thought, though, because a lot of people sort of would, at the shallow level, criticize the Turing test for language. It's essentially how I heard Dan Dennett criticize it, in this kind of way, which is that it really puts a lot of emphasis on lying. Yeah, and being able to imitate human beings. Yeah, there's this cartoon of the AI system studying for the Turing test. It's gotta read this book called Talk Like a Human. It's like, man, why do I have to waste my time learning how to imitate humans? Maybe the AI system is gonna be way beyond the hard problem of consciousness, and it's gonna be just like, why do I need to waste my time pretending that I recognize the hard problem of consciousness in order for people to recognize me as conscious?
Yeah, it just feels like, I guess the question is, do you think we can ever really create a test for consciousness? Because it feels like we're very human centric, and so the only way we would be convinced that something is conscious is basically the thing demonstrates the illusion of consciousness, that we can never really know whether it's conscious or not, and in fact, that almost feels like it doesn't matter then, or does it still matter to you that something is conscious or it demonstrates consciousness? You still see that fundamental distinction. I think to a lot of people, whether a system is conscious or not matters hugely for many things, like how we treat it, can it suffer, and so on, but still, that leaves open the question, how can we ever know? And it's true that it's awfully hard to see how we can know for sure whether a system is conscious. I suspect that sociologically, the thing that's gonna convince us that a system is conscious is, in part, things like social interaction, conversation, and so on, where they seem to be conscious, they talk about their conscious states or just talk about being happy or sad or finding things meaningful or being in pain. That will tend to convince us if we don't, if a system genuinely seems to be conscious, we don't treat it as such, eventually it's gonna seem like a strange form of racism or speciesism or somehow, not to acknowledge them as conscious. I truly believe that, by the way. I believe that there is going to be something akin to the Civil Rights Movement, but for robots. I think the moment you have a Roomba say, please don't kick me, that hurts, just say it. Yeah. I think that will fundamentally change the fabric of our society. I think you're probably right, although it's gonna be very tricky because, just say we've got the technology where these conscious beings can just be created and multiplied by the thousands by flicking a switch. The legal status is gonna be different, but ultimately their moral status ought to be the same, and yeah, the civil rights issue is gonna be a huge mess. So if one day somebody clones you, another very real possibility. In fact, I find the conversation between two copies of David Chalmers quite interesting. Very thought. Who is this idiot? He's not making any sense. So what, do you think he would be conscious? I do think he would be conscious. I do think in some sense, I'm not sure it would be me, there would be two different beings at this point. I think they'd both be conscious and they both have many of the same mental properties. I think they both in a way have the same moral status. It'd be wrong to hurt either of them or to kill them and so on. Still, there's some sense in which probably their legal status would have to be different. If I'm the original and that one's just a clone, then creating a clone of me, presumably the clone doesn't, for example, automatically own the stuff that I own or I've got a certain connect, the things that the people I interact with, my family, my partner and so on, I'm gonna somehow be connected to them in a way in which the clone isn't, so. Because you came slightly first? Yeah. Because a clone would argue that they have really as much of a connection. They have all the memories of that connection. Then a way you might say it's kind of unfair to discriminate against them, but say you've got an apartment that only one person can live in or a partner who only one person can be with. But why should it be you, the original? 
It's an interesting philosophical question, but you might say because I actually have this history, if I am the same person as the one that came before and the clone is not, then I have this history that the clone doesn't. Of course, there's also the question, isn't the clone the same person too? This is a question about personal identity. If I continue and I create a clone over there, I wanna say this one is me and this one is someone else. But you could take the view that a clone is equally me. Of course, in a movie like Star Trek, they have a teletransporter that basically creates clones all the time. They treat the clones as if they're the original person. Of course, they destroy the original body in Star Trek, so there's only one left around, and only very occasionally do things go wrong and you get two copies of Captain Kirk. But somehow our legal system at the very least is gonna have to sort out some of these issues, and maybe what's moral and what's legally acceptable are gonna come apart. What question would you ask a clone of yourself? Is there something useful you can find out from him about the fundamentals of consciousness even? I mean, kind of in principle, I know that if it's a perfect clone, it's gonna behave just like me. So I'm not sure I'm gonna be able to, I can discover whether it's a perfect clone by seeing whether it answers like me. But otherwise I know what I'm gonna find is a being which is just like me, except that it's just undergone this great shock of discovering that it's a clone. So just say you woke me up tomorrow and said, hey Dave, sorry to tell you this, but you're actually the clone, and you provided me really convincing evidence, showed me the film of my being cloned and then being brought in here and waking up. So you proved to me I'm a clone. Well, yeah, okay, I would find that shocking, and who knows how I would react to this. So maybe by talking to the clone, I'd find out something about my own psychology that I can't find out so easily, like how I'd react upon discovering that I'm a clone. I could certainly ask the clone if it's conscious and what its consciousness is like and so on, but I guess I kind of know if it's a perfect clone, it's gonna behave roughly like me. Of course, at the beginning, there'll be a question about whether a perfect clone is possible. So I may wanna ask it lots of questions to see if its consciousness, and the way it talks about its consciousness, and the way it reacts to things in general, is like mine. And that will occupy us for a while. So basic unit testing on the early models. So if it's a perfect clone, you say that it's gonna behave exactly like you. So that takes us to free will. Is there free will? Are we able to make decisions that are not predetermined from the initial conditions of the universe? You know, philosophers do this annoying thing of saying it depends what you mean. So in this case, yeah, it really depends on what you mean by free will. If you mean something which was not determined in advance, could never have been determined, then I don't know that we have free will. I mean, there's quantum mechanics, and who's to say if that opens up some room, but I'm not sure we have free will in that sense. But I'm also not sure that's the kind of free will that really matters. You know, what matters to us is being able to do what we want and to create our own futures. We've got this distinction between having our lives be under our control and under someone else's control.
We've got the sense of actions that we are responsible for versus ones that we're not. I think you can make those distinctions even in a deterministic universe. And this is what people call the compatibilist view of free will, where it's compatible with determinism. So I think for many purposes, the kind of free will that matters is something we can have in a deterministic universe. And I can't see any reason in principle why an AI system couldn't have free will of that kind. If you mean super duper free will, the ability to violate the laws of physics and doing things that in principle could not be predicted. I don't know, maybe no one has that kind of free will. What's the connection between the reality of free will and the experience of it, the subjective experience in your view? So how does consciousness connect to the reality and the experience of free will? It's certainly true that when we make decisions and when we choose and so on, we feel like we have an open future. Feel like I could do this, I could go into philosophy or I could go into math, I could go to a movie tonight, I could go to a restaurant. So we experience these things as if the future is open. And maybe we experience ourselves as exerting a kind of effect on the future that somehow picking out one path from many paths were previously open. And you might think that actually if we're in a deterministic universe, there's a sense of which objectively those paths weren't really open all along, but subjectively they were open. And that's, I think that's what really matters in making a decisions where our experience of making a decision is choosing a path for ourselves. I mean, in general, our introspective models of the mind, I think are generally very distorted representations of the mind. So it may well be that our experience of ourself in making a decision, our experience of what's going on doesn't terribly well mirror what's going on. I mean, maybe there are antecedents in the brain way before anything came into consciousness and so on. Those aren't represented in our introspective model. So in general, our experience of perception, so I experience a perceptual image of the external world. It's not a terribly good model of what's actually going on in my visual cortex and so on, which has all these layers and so on. It's just one little snapshot of one bit of that. So in general, introspective models are very over oversimplified. And it wouldn't be surprising if that was true of free will as well. This also incidentally can be applied to consciousness itself. There is this very interesting view that consciousness itself is an introspective illusion. In fact, we're not conscious, but the brain just has these introspective models of itself or oversimplifies everything and represents itself as having these special properties of consciousness. It's a really simple way to kind of keep track of itself and so on. And then on the illusionist view, yeah, that's just an illusion. I find this view, when I find it implausible, I do find it very attractive in some ways, because it's easy to tell some story about how the brain would create introspective models of its own consciousness, of its own free will as a way of simplifying itself. I mean, it's a similar way when we perceive the external world, we perceive it as having these colors that maybe it doesn't really have, but of course that's a really useful way of keeping tracks, of keeping track. Did you say that you find it not very plausible? 
Because I find it both plausible and attractive in some sense, because I mean, that kind of view is one that has the minimum amount of mystery around it. You can kind of understand that kind of view. Everything else says we don't understand so much of this picture. No, it is very attractive, I recently wrote an article about this kind of issue called the meta problem of consciousness. The hard problem is how does a brain give you consciousness? The meta problem is why are we puzzled by the hard problem of consciousness? Because being puzzled by it, that's ultimately a bit of behavior. We might be able to explain that bit of behavior as one of the easy problems, consciousness. So maybe there'll be some computational model that explains why we're puzzled by consciousness. The meta problem has come up with that model. And I've been thinking about that a lot lately. There's some interesting stories you can tell about why the right kind of computational system might develop these introspective models of itself that attributed itself, these special properties. So that meta problem is a research program for everyone. And then if you've got attraction to sort of simple views, desert landscapes and so on, then you can go all the way with what people call illusionism and say, in fact, consciousness itself is not real. What is real is just these introspective models we have that tell us that we're conscious. So the view is very simple, very attractive, very powerful. The trouble is, of course, it has to say that deep down, consciousness is not real. We're not actually experiencing right now. And it looks like it's just contradicting a fundamental datum of our existence. And this is why most people find this view crazy. Just as they find panpsychism crazy in one way, people find illusionism crazy in another way. But I mean, so yes, it has to deny this fundamental datum of our existence. Now, that makes the view sort of frankly unbelievable for most people. On the other hand, the view developed right might be able to explain why we find it unbelievable. Because these models are so deeply hardwired into our head. And they're all integrated. You can't escape the illusion. And it's a crazy possibility. Is it possible that the entirety of the universe, our planet, all the people in New York, all the organisms on our planet, including me here today, are not real in that sense? They're all part of an illusion inside of Dave Chalmers's head. I think all this could be a simulation. No, but not just a simulation. Because the simulation kind of is outside of you. A dream? What if it's all an illusion? Yes, a dream that you're experiencing. That's, it's all in your mind, right? Is that, can you take illusionism that far? Well, there's illusionism about the external world and illusionism about consciousness. And these might go in different. Illusionism about the external world kind of takes you back to Descartes. And yeah, could all this be produced by an evil demon? Descartes himself also had the dream argument. He said, how do you know you're not dreaming right now? How do you know this is not an amazing dream? And it's at least a possibility that yeah, this could be some super duper complex dream in the next universe up. I guess though, my attitude is that just as, when Descartes thought that if the evil demon was doing it, it's not real. A lot of people these days say if a simulation is doing it, it's not real. As I was saying before, I think even if it's a simulation, that doesn't stop this from being real. 
It just tells us what the world is made of. Likewise, if it's a dream, it could turn out that all this is like my dream created by my brain in the next universe up. My own view is that wouldn't stop this physical world from being real. It would turn out this cup at the most fundamental level was made of a bit of say my consciousness in the dreaming mind at the next level up. Maybe that would give you a kind of weird kind of panpsychism about reality, but it wouldn't show that the cup isn't real. It would just tell us it's ultimately made of processes in my dreaming mind. So I'd resist the idea that if the physical world is a dream, then it's an illusion. That's right. By the way, perhaps you have an interesting thought about it. Why is Descartes' demon or genius considered evil? Why couldn't it have been a benevolent one that had the same powers? Yeah, I mean, Descartes called it the malign genie, the evil genie or evil genius. Malign, I guess was the word. But yeah, it's an interesting question. I mean, a later philosopher, Berkeley, said, no, in fact, all this is done by God. God actually supplies you all of these perceptions and ideas and that's how physical reality is sustained. And interestingly, Berkeley's God is doing something that doesn't look so different from what Descartes' evil demon was doing. It's just that Descartes thought it was deception and Berkeley thought it was not. And I'm actually more sympathetic to Berkeley here. Yeah, this evil demon may be trying to deceive you, but I think, okay, well, the evil demon may just be working under a false philosophical theory. It thinks it's deceiving you; it's wrong. It's like the machines in the Matrix. They thought they were deceiving you that all this stuff is real. I think, no, if we're in a matrix, it's all still real. Yeah, the philosopher O.K. Bouwsma had a nice story about this about 50 years ago, about Descartes' evil demon, where he said this demon spends all its time trying to fool people, but fails because somehow all the demon ends up doing is constructing realities for people. So yeah, I think that maybe it's very natural to take this view that if we're in a simulation or evil demon scenario or something, then none of this is real. But I think it may be ultimately a philosophical mistake, especially if you take on board sort of the view of reality where what matters to reality is really its structure, something like its mathematical structure and so on, which seems to be the view that a lot of people take from contemporary physics. And it looks like you can find all that mathematical structure in a simulation, maybe even in a dream and so on. So as long as that structure is real, I would say that's enough for the physical world to be real. Yeah, the physical world may turn out to be somewhat more intangible than we had thought and have a surprising nature to it. We've already gotten very used to that from modern science. See, you've kind of alluded that you don't have to have consciousness for high levels of intelligence, but to create truly general intelligence systems, AGI systems at human level intelligence and perhaps superhuman level intelligence, you've talked about that you feel like that kind of thing might be very far away, but nevertheless, when we reach that point, do you think consciousness from an engineering perspective is needed or at least highly beneficial for creating an AGI system? Yeah, no one knows what consciousness is for functionally.
So right now there's no specific thing we can point to and say, you need consciousness for that. So my inclination is to believe that in principle AGI is possible. The very least I don't see why someone couldn't simulate a brain, ultimately have a computational system that produces all of our behavior. And if that's possible, I'm sure vastly many other computational systems of equal or greater sophistication are possible with all of our cognitive functions and more. My inclination is to think that once you've got all these cognitive functions, perception, attention, reasoning, introspection, language, emotion, and so on, it's very likely you'll have consciousness as well. So at least it's very hard for me to see how you'd have a system that had all those things while bypassing somehow conscious. So just naturally it's integrated quite naturally. There's a lot of overlap about the kind of function that required to achieve each of those things that's, so you can't disentangle them even when you're recreating. It seems to, at least in us, but we don't know what the causal role of consciousness in the physical world, what it does. I mean, just say it turns out consciousness does something very specific in the physical world like collapsing wave functions as on one common interpretation of quantum mechanics. Then ultimately we might find some place where it actually makes a difference and we could say, ah, here is where in collapsing wave functions it's driving the behavior of a system. And maybe it could even turn out that for AGI, you'd need something playing that. I mean, if you wanted to connect this to free will, some people think consciousness collapsing wave functions, that would be how the conscious mind exerts effect on the physical world and exerts its free will. And maybe it could turn out that any AGI that didn't utilize that mechanism would be limited in the kinds of functionality that it had. I don't myself find that plausible. I think probably that functionality could be simulated. But you can imagine once we had a very specific idea about the role of consciousness in the physical world, this would have some impact on the capacity of AGI's. And if it was a role that could not be duplicated elsewhere, then we'd have to find some way to either get consciousness in the system to play that role or to simulate it. If we can isolate a particular role to consciousness, of course, it seems like an incredibly difficult thing. Do you have worries about existential threats of conscious intelligent beings that are not us? So certainly, I'm sure you're worried about us from an existential threat perspective, but outside of us, AI systems. There's a couple of different kinds of existential threats here. One is an existential threat to consciousness generally. I mean, yes, I care about humans and the survival of humans and so on, but just say it turns out that eventually we're replaced by some artificial beings that aren't humans, but are somehow our successors. They still have good lives. They still do interesting and wonderful things with the universe. I don't think that's not so bad. That's just our successors. We were one stage in evolution. Something different, maybe better came next. If on the other hand, all of consciousness was wiped out, that would be a very serious moral disaster. One way that could happen is by all intelligent life being wiped out. 
And many people think that, yeah, once you get to humans and AIs and amazing sophistication where everyone has got the ability to create weapons that can destroy the whole universe just by pressing a button, then maybe it's inevitable all intelligent life will die out. That would certainly be a disaster. And we've got to think very hard about how to avoid that. But yeah, another interesting kind of disaster is that maybe intelligent life is not wiped out, but all consciousness is wiped out. So just say your thought, unlike what I was saying a moment ago, that there are two different kinds of intelligent systems, some which are conscious and some which are not. And just say it turns out that we create AGI with a high degree of intelligence, meaning high degree of sophistication and its behavior, but with no consciousness at all. That AGI could take over the world maybe, but then there'd be no consciousness in this world. This would be a world of zombies. Some people have called this the zombie apocalypse because it's an apocalypse for consciousness. Consciousness is gone. You've merely got this super intelligent, nonconscious robots. And I would say that's a moral disaster in the same way, in almost the same way that the world with no intelligent life is a moral disaster. All value and meaning may be gone from that world. So these are both threats to watch out for. Now, my own view is if you get super intelligence, you're almost certainly gonna bring consciousness with it. So I hope that's not gonna happen. But of course, I don't understand consciousness. No one understands consciousness. This is one reason for, this is one reason at least among many for thinking very seriously about consciousness and thinking about the kind of future we want to create in a world with humans and or AIs. How do you feel about the possibility if consciousness so naturally does come with AGI systems that we are just a step in the evolution? That we will be just something, a blimp on the record that'll be studied in books by the AGI systems centuries from now? I mean, I think I'd probably be okay with that, especially if somehow humans are continuous with AGI. I mean, I think something like this is inevitable. The very least humans are gonna be transformed. We're gonna be augmented by technology. It's already happening in all kinds of ways. We're gonna be transformed by technology where our brains are gonna be uploaded and computationally enhanced. And eventually that line between what's a human and what's an AI may be kind of hard to draw. How much does it matter, for example, that some future being a thousand years from now that somehow descended from us actually still has biology? I think it would be nice if you kind of point to its cognitive system, point to some parts that had some roots in us and trace a continuous line there. That would be selfishly nice for me to think that, okay, I'm connected to this thread line through the future of the world, but if it turns out, okay, there's a jump there. They found a better way to design cognitive systems. They designed a wholly new kind of thing. And the only line is some causal chain of designing and systems that design better systems. Is that so much worse? I don't know. We're still at least part of a causal chain of design. And yes, they're not humans, but still they're our successes. So, I mean, ultimately I think it's probably inevitable that something like that will happen. And at least we were part of the process. 
It'd be nice if they still cared enough about us to maybe to engage with our arguments. I'm really hoping that the AGI's are gonna solve all the problems of philosophy. They'll come back and read all this crappy work for the 20th and 21st century, hard problem of consciousness. And here is why they got it wrong. And so, and if that happened, then I'd really feel like I was part of at least an intellectual process over centuries. And that would be kind of cool. Well, I'm pretty sure they would clone or they would recreate David Chalmers and for the fun of it, sort of bring back other philosophers. Yeah, bring back Descartes. Descartes and just put them in a room and just watch. It'll be a Netflix of the future show where you bring philosophers from different human, 100% human philosophers from previous generations, put them in a room and see them. I am totally up for that. Simulators, AGI's of the future, if you're watching this podcast, do that. I would like to be recreated and ending out with Descartes. Where Descartes would be the first, if you could hang out as part of such a TV show with a philosopher that's no longer with us from long ago, who would you choose? Descartes would have to be right up there. Oh, actually a couple of months ago, I got to have a conversation with Descartes, an actor who's actually a philosopher came out on stage playing Descartes. I didn't know this was gonna happen. And I just after I gave a talk and told me about how my ideas were crap and all derived from him. And so we had a long argument. This was great. I would love to see what Descartes would think about AI, for example, and the modern neuroscience. And so I suspect not too much would surprise him, but yeah, William James, for a psychologist of consciousness, I think James was probably the richest. But, oh, there are Immanuel Kant. I never really understood what he was up to if I got to actually talk to him about some of this. Hey, there was Princess Elizabeth who talked with Descartes and who really got at the problems of how Descartes ideas of a nonphysical mind interacting with the physical body couldn't really work. She's been kind of, most philosophers think she's been proved right. So maybe put me in a room with Descartes and Princess Elizabeth and we can all argue it out. What kind of future? So we talked about zombies, a concerning future, but what kind of future excites you? What do you think if we look forward sort of, we're at the very early stages of understanding consciousness. And we're now at the early stages of being able to engineer complex, interesting systems that have degrees of intelligence. And maybe one day we'll have degrees of consciousness, maybe be able to upload brains, all those possibilities, virtual reality. Is there a particular aspect to this future world that just excites you? Well, I think there are lots of different aspects. I mean, frankly, I want it to hurry up and happen. It's like, yeah, we've had some progress lately in AI and VR, but in the grand scheme of things, it's still kind of slow. The changes are not yet transformative. And I'm in my fifties, I've only got so long left. I'd like to see really serious AI in my lifetime and really serious virtual worlds. Cause yeah, once people, I would like to be able to hang out in a virtual reality, which is richer than this reality to really get to inhabit fundamentally different kinds of spaces. Well, I would very much like to be able to upload my mind onto a computer. So maybe I don't have to die. 
If this is maybe gradually replaced my neurons with a Silicon chips and inhabit a computer. Selfishly, that would be wonderful. I suspect I'm not gonna quite get there in my lifetime, but once that's possible, then you've got the possibility of transforming your consciousness in remarkable ways, augmenting it, enhancing it. So let me ask then, if such a system is a possibility within your lifetime and you were given the opportunity to become immortal in this kind of way, would you choose to be immortal? Yes, I totally would. I know some people say they couldn't, it'd be awful to be immortal, be so boring or something. I don't see, I really don't see why this might be. I mean, even if it's just ordinary life that continues, ordinary life is not so bad. But furthermore, I kind of suspect that, if the universe is gonna go on forever or indefinitely, it's gonna continue to be interesting. I don't think your view was that we just have to get this one romantic point of interest now and afterwards it's all gonna be boring, super intelligent stasis. I guess my vision is more like, no, it's gonna continue to be infinitely interesting. Something like as you go up the set theoretic hierarchy, you go from the finite cardinals to Aleph zero and then through there to all the Aleph one and Aleph two and maybe the continuum and you keep taking power sets and in set theory, they've got these results that actually all this is fundamentally unpredictable. It doesn't follow any simple computational patterns. There's new levels of creativity as the set theoretic universe expands and expands. I guess that's my future. That's my vision of the future. That's my optimistic vision of the future of super intelligence. It will keep expanding and keep growing, but still being fundamentally unpredictable at many points. I mean, yes, this creates all kinds of worries like couldn't all be fragile and be destroyed at any point. So we're gonna need a solution to that problem. But if we get to stipulate that I'm immortal, well, I hope that I'm not just immortal and stuck in the single world forever, but I'm immortal and get to take part in this process of going through infinitely rich, created futures. Rich, unpredictable, exciting. Well, I think I speak for a lot of people in saying, I hope you do become immortal and there'll be that Netflix show, The Future, where you get to argue with Descartes, perhaps for all eternity. So David, it was an honor. Thank you so much for talking today. Thanks, it was a pleasure. Thanks for listening to this conversation and thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast, you'll get $10 and $10 will go to FIRST, an organization that inspires and educates young minds to become science and technology innovators of tomorrow. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, follow on Spotify, support it on Patreon, or simply connect with me on Twitter at Lex Friedman. And now let me leave you with some words from David Chalmers. Materialism is a beautiful and compelling view of the world, but to account for consciousness, we have to go beyond the resources it provides. Thank you for listening and hope to see you next time.
David Chalmers: The Hard Problem of Consciousness | Lex Fridman Podcast #69
The following is a conversation with Jim Keller, legendary microprocessor engineer who has worked at AMD, Apple, Tesla, and now Intel. He's known for his work on AMD K7, K8, K12, and Zen microarchitectures, Apple A4 and A5 processors, and coauthor of the specification for the x8664 instruction set and hypertransport interconnect. He's a brilliant first principles engineer and out of the box thinker, and just an interesting and fun human being to talk to. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, follow on Spotify, support it on Patreon, or simply connect with me on Twitter, at Lex Friedman, spelled F R I D M A N. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Broker services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called First, best known for their FIRST Robotics and Lego competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating at Charity Navigator, which means that donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10 and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now here's my conversation with Jim Keller. What are the differences and similarities between the human brain and a computer with the microprocessor at its core? Let's start with the philosophical question perhaps. Well, since people don't actually understand how human brains work, I think that's true. I think that's true. So it's hard to compare them. Computers are, you know, there's really two things. There's memory and there's computation, right? And to date, almost all computer architectures are global memory, which is a thing, right? And then computation where you pull data and you do relatively simple operations on it and write data back. So it's decoupled in modern computers. And you think in the human brain, everything's a mesh, a mess that's combined together? What people observe is there's, you know, some number of layers of neurons which have local and global connections and information is stored in some distributed fashion and people build things called neural networks in computers where the information is distributed in some kind of fashion. You know, there's a mathematics behind it. I don't know that the understanding of that is super deep. The computations we run on those are straightforward computations. I don't believe anybody has said a neuron does this computation. So to date, it's hard to compare them, I would say. So let's get into the basics before we zoom back out. How do you build a computer from scratch? What is a microprocessor? What is a microarchitecture? What's an instruction set architecture? 
Maybe even as far back as what is a transistor? So the special charm of computer engineering is there's a relatively good understanding of abstraction layers. So down at the bottom, you have atoms and atoms get put together in materials like silicon or dope silicon or metal and we build transistors. On top of that, we build logic gates, right? And then functional units, like an adder or a subtractor or an instruction parsing unit. And then we assemble those into processing elements. Modern computers are built out of probably 10 to 20 locally organic processing elements or coherent processing elements. And then that runs computer programs, right? So there's abstraction layers and then software, there's an instruction set you run and then there's assembly language C, C++, Java, JavaScript. There's abstraction layers, essentially from the atom to the data center, right? So when you build a computer, first there's a target, like what's it for? Like how fast does it have to be? Which today there's a whole bunch of metrics about what that is. And then in an organization of 1,000 people who build a computer, there's lots of different disciplines that you have to operate on. Does that make sense? And so... So there's a bunch of levels of abstraction in an organization like Intel and in your own vision, there's a lot of brilliance that comes in at every one of those layers. Some of it is science, some of it is engineering, some of it is art, what's the most, if you could pick favorites, what's the most important, your favorite layer on these layers of abstractions? Where does the magic enter this hierarchy? I don't really care. That's the fun, you know, I'm somewhat agnostic to that. So I would say for relatively long periods of time, instruction sets are stable. So the x86 instruction set, the ARM instruction set. What's an instruction set? So it says, how do you encode the basic operations? Load, store, multiply, add, subtract, conditional, branch. You know, there aren't that many interesting instructions. Look, if you look at a program and it runs, you know, 90% of the execution is on 25 opcodes, you know, 25 instructions. And those are stable, right? What does it mean, stable? Intel architecture's been around for 25 years. It works. It works. And that's because the basics, you know, are defined a long time ago, right? Now, the way an old computer ran is you fetched instructions and you executed them in order. Do the load, do the add, do the compare. The way a modern computer works is you fetch large numbers of instructions, say 500. And then you find the dependency graph between the instructions. And then you execute in independent units those little micrographs. So a modern computer, like people like to say, computers should be simple and clean. But it turns out the market for simple, clean, slow computers is zero, right? We don't sell any simple, clean computers. No, you can, how you build it can be clean, but the computer people want to buy, that's, say, in a phone or a data center, fetches a large number of instructions, computes the dependency graph, and then executes it in a way that gets the right answers. And optimizes that graph somehow. Yeah, they run deeply out of order. And then there's semantics around how memory ordering works and other things work. So the computer sort of has a bunch of bookkeeping tables that says what order should these operations finish in or appear to finish in? But to go fast, you have to fetch a lot of instructions and find all the parallelism. 
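To make the "fetch a window, build the dependency graph, issue the independent pieces" idea above concrete, here is a minimal sketch in Python. It is a toy illustration, not how any real out-of-order core is built; the register names and the three-operand instruction format are made up. It links each instruction in a small fetched window to the most recent earlier writer of each register it reads, then issues instructions in waves of mutually independent operations.

```python
# Minimal sketch of "found parallelism": build a dependency graph over a
# fetched window of instructions, then issue independent ones together.
# Instruction format and register names are invented for illustration.

window = [
    ("load", "r1", "a"),          # r1 = mem[a]
    ("load", "r2", "b"),          # r2 = mem[b]
    ("add",  "r3", "r1", "r2"),   # r3 = r1 + r2
    ("load", "r4", "c"),          # r4 = mem[c]
    ("mul",  "r5", "r3", "r4"),   # r5 = r3 * r4
    ("add",  "r6", "r1", "r4"),   # r6 = r1 + r4  (independent of the mul)
]

def build_deps(window):
    """For each instruction, record which earlier instructions produce its inputs."""
    last_writer = {}   # register -> index of most recent instruction writing it
    deps = []
    for i, (_op, dest, *srcs) in enumerate(window):
        deps.append({last_writer[s] for s in srcs if s in last_writer})
        last_writer[dest] = i
    return deps

def schedule(window):
    """Each 'cycle', issue every not-yet-issued instruction whose deps are done."""
    deps, done, cycles = build_deps(window), set(), []
    while len(done) < len(window):
        ready = [i for i in range(len(window))
                 if i not in done and deps[i] <= done]
        cycles.append(ready)
        done |= set(ready)
    return cycles

for cycle, issued in enumerate(schedule(window)):
    print(f"cycle {cycle}: issue {[window[i][0] for i in issued]}")
# On this toy window, the three loads issue together in cycle 0, the two
# independent adds in cycle 1, and the dependent multiply in cycle 2 --
# six sequential instructions collapse into three issue groups.
```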
Now, there's a second kind of computer, which we call GPUs today. And I call it the difference. There's found parallelism, like you have a program with a lot of dependent instructions. You fetch a bunch and then you go figure out the dependency graph and you issue instructions out of order. That's because you have one serial narrative to execute, which, in fact, can be done out of order. Did you call it a narrative? Yeah. Oh, wow. Yeah, so humans think of serial narrative. So read a book, right? There's a sentence after sentence after sentence, and there's paragraphs. Now, you could diagram that. Imagine you diagrammed it properly and you said, which sentences could be read in any order, any order without changing the meaning, right? That's a fascinating question to ask of a book, yeah. Yeah, you could do that, right? So some paragraphs could be reordered, some sentences can be reordered. You could say, he is tall and smart and X, right? And it doesn't matter the order of tall and smart. But if you say the tall man is wearing a red shirt, what colors, you can create dependencies, right? And so GPUs, on the other hand, run simple programs on pixels, but you're given a million of them. And the first order, the screen you're looking at doesn't care which order you do it in. So I call that given parallelism. Simple narratives around the large numbers of things where you can just say, it's parallel because you told me it was. So found parallelism where the narrative is sequential, but you discover like little pockets of parallelism versus. Turns out large pockets of parallelism. Large, so how hard is it to discover? Well, how hard is it? That's just transistor count, right? So once you crack the problem, you say, here's how you fetch 10 instructions at a time. Here's how you calculate the dependencies between them. Here's how you describe the dependencies. Here's, you know, these are pieces, right? So once you describe the dependencies, then it's just a graph. Sort of, it's an algorithm that finds, what is that? I'm sure there's a graph theoretical answer here that's solvable. In general, programs, modern programs that human beings write, how much found parallelism is there in them? What does 10X mean? So if you execute it in order, you would get what's called cycles per instruction, and it would be about, you know, three instructions, three cycles per instruction because of the latency of the operations and stuff. And in a modern computer, excuse it, but like 0.2, 0.25 cycles per instruction. So it's about, we today find 10X. And there's two things. One is the found parallelism in the narrative, right? And the other is the predictability of the narrative, right? So certain operations say, do a bunch of calculations, and if greater than one, do this, else do that. That decision is predicted in modern computers to high 90% accuracy. So branches happen a lot. So imagine you have a decision to make every six instructions, which is about the average, right? But you want to fetch 500 instructions, figure out the graph, and execute them all in parallel. That means you have, let's say, if you fetch 600 instructions and it's every six, you have to fetch, you have to predict 99 out of 100 branches correctly for that window to be effective. Okay, so parallelism, you can't parallelize branches. Or you can. No, you can predict. You can predict. What does predicted branch mean? What does predicted branch mean? So imagine you do a computation over and over. You're in a loop. 
So while n is greater than one, do. And you go through that loop a million times. So every time you look at the branch, you say, it's probably still greater than one. And you're saying you could do that accurately. Very accurately. Modern computers. My mind is blown. How the heck do you do that? Wait a minute. Well, you want to know? This is really sad. 20 years ago, you simply recorded which way the branch went last time and predicted the same thing. Right. Okay. What's the accuracy of that? 85%. So then somebody said, hey, let's keep a couple of bits and have a little counter, so when it predicts one way, we count up and then it pins. So say you have a three bit counter. So you count up and then you count down. And you can use the top bit as the sign bit so you have a signed two bit number. So if it's greater than one, you predict taken. And less than one, you predict not taken, right? Or less than zero, whatever the thing is. And that got us to 92%. Oh. Okay, no, it gets better. This branch depends on how you got there. So if you came down the code one way, you're talking about Bob and Jane, right? And then said, does Bob like Jane? It went one way. But if you're talking about Bob and Jill, does Bob like Jane? You go a different way. Right, so that's called history. So you take the history and a counter. That's cool, but that's not how anything works today. They use something that looks a little like a neural network. So in modern predictors, you take all the execution flows. And then you do basically deep pattern recognition of how the program is executing. And you do that multiple different ways. And you have something that chooses what the best result is. There's a little supercomputer inside the computer. That's trying to predict branching. That calculates which way branches go. So the effective window that it's worth finding graphs in gets bigger. Why was that gonna make me sad? Because that's amazing. It's amazingly complicated. Oh, well. Well, here's the funny thing. So to get to 85% took 1,000 bits. To get to 99% takes tens of megabits. So this is one of those, to get the result, to get from a window of say 50 instructions to 500, it took three orders of magnitude or four orders of magnitude more bits. Now if you get the prediction of a branch wrong, what happens then? You flush the pipe. You flush the pipe, so it's just the performance cost. But it gets even better. Yeah. So we're starting to look at stuff that says, so they executed down this path, and then you had two ways to go. But far away, there's something that doesn't matter which path you went. So you took the wrong path. You executed a bunch of stuff. Then you had the misprediction. You backed it up. You remembered all the results you already calculated. Some of those are just fine. Like if you read a book and you misunderstand a paragraph, your understanding of the next paragraph sometimes is invariant to that misunderstanding. Sometimes it depends on it. And you can kind of anticipate that invariance. Yeah, well, you can keep track of whether the data changed. And so when you come back through a piece of code, should you calculate it again or do the same thing? Okay, how much of this is art and how much of it is science? Because it sounds pretty complicated. Well, how do you describe a situation? So imagine you come to a point in the road where you have to make a decision, right? And you have a bunch of knowledge about which way to go. Maybe you have a map.
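Picking up the predictors described above, here is a minimal sketch comparing "predict whatever the branch did last time" with one common formulation of the two-bit saturating counter, on a made-up, loop-like branch trace, plus the rough window arithmetic from the "branch every six instructions" point. Real predictors also hash in branch history and, as Keller says, now look more like pattern recognizers; the accuracies printed here are for this toy trace only, not the 85%/92% figures for real programs.

```python
# Toy comparison of two classic branch predictors on a loop-like branch trace.
# Trace: a loop branch taken 9 times, then not taken once, repeated 100 times.
trace = ([True] * 9 + [False]) * 100   # True = taken, False = not taken

def last_time_predictor(trace):
    """1-bit scheme: predict whatever this branch did last time."""
    pred, correct = True, 0
    for taken in trace:
        correct += (pred == taken)
        pred = taken
    return correct / len(trace)

def two_bit_predictor(trace):
    """2-bit saturating counter: state 0..3, predict taken when counter >= 2."""
    counter, correct = 3, 0
    for taken in trace:
        correct += ((counter >= 2) == taken)
        counter = min(counter + 1, 3) if taken else max(counter - 1, 0)
    return correct / len(trace)

print(f"last-time : {last_time_predictor(trace):.1%}")   # ~80% on this toy trace
print(f"two-bit   : {two_bit_predictor(trace):.1%}")     # ~90% on this toy trace

# Rough window math: with a branch every ~6 instructions, the expected run of
# instructions before the first mispredict is roughly 6 / (1 - accuracy),
# which is why accuracy gains translate into a much bigger useful window.
for acc in (0.85, 0.92, 0.99):
    print(f"accuracy {acc:.0%} -> useful window ~{6 / (1 - acc):.0f} instructions")
```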
So you wanna go the shortest way, or do you wanna go the fastest way, or do you wanna take the nicest road? So there's some set of data. So imagine you're doing something complicated like building a computer. And there's hundreds of decision points, all with hundreds of possible ways to go. And the ways you pick interact in a complicated way. Right. And then you have to pick the right spot. Right, so that's. So that's art or science, I don't know. You avoided the question. You just described the Robert Frost problem of road less taken. I described the Robert Frost problem? That's what we do as computer designers. It's all poetry. Okay. Great. Yeah, I don't know how to describe that because some people are very good at making those intuitive leaps. It seems like just combinations of things. Some people are less good at it, but they're really good at evaluating the alternatives. Right, and everybody has a different way to do it. And some people can't make those leaps, but they're really good at analyzing it. So when you see computers are designed by teams of people who have very different skill sets. And a good team has lots of different kinds of people. I suspect you would describe some of them as artistic, but not very many. Unfortunately, or fortunately. Fortunately. Well, you know, computer design's hard. It's 99% perspiration. And the 1% inspiration is really important. But you still need the 99. Yeah, you gotta do a lot of work. And then there are interesting things to do at every level of that stack. So at the end of the day, if you run the same program multiple times, does it always produce the same result? Is there some room for fuzziness there? That's a math problem. So if you run a correct C program, the definition is every time you run it, you get the same answer. Yeah, well that's a math statement. But that's a language definitional statement. So for years when people did, when we first did 3D acceleration of graphics, you could run the same scene multiple times and get different answers. Right. Right, and then some people thought that was okay and some people thought it was a bad idea. And then when the HPC world used GPUs for calculations, they thought it was a really bad idea. Okay, now in modern AI stuff, people are looking at networks where the precision of the data is low enough that the data is somewhat noisy. And the observation is the input data is unbelievably noisy. So why should the calculation be not noisy? And people have experimented with algorithms that say can get faster answers by being noisy. Like as a network starts to converge, if you look at the computation graph, it starts out really wide and then it gets narrower. And you can say is that last little bit that important or should I start the graph on the next rev before we whittle it all the way down to the answer, right? So you can create algorithms that are noisy. Now if you're developing something and every time you run it, you get a different answer, it's really annoying. And so most people think even today, every time you run the program, you get the same answer. No, I know, but the question is that's the formal definition of a programming language. There is a definition of languages that don't get the same answer, but people who use those, you always want something because you get a bad answer and then you're wondering is it because of something in the algorithm or because of this? And so everybody wants a little switch that says no matter what, do it deterministically. 
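One concrete way to see the determinism tension in this exchange: floating-point addition is not associative, so a machine that combines partial sums in whatever order the parallel work happens to finish can give a slightly different answer on every run, which is exactly why people want the "do it deterministically" switch. A small illustrative sketch, where the shuffle stands in for nondeterministic completion order:

```python
import random

# Floating-point addition is not associative, so the order in which a
# parallel reduction combines partial sums changes the rounded result.
random.seed(0)
values = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(-8, 8)
          for _ in range(100_000)]

def reduce_in_order(xs):
    total = 0.0
    for x in xs:
        total += x
    return total

def reduce_shuffled(xs):
    """Stand-in for 'whatever order the parallel hardware finished in'."""
    xs = xs[:]
    random.shuffle(xs)
    return reduce_in_order(xs)

fixed = reduce_in_order(values)
for run in range(3):
    print(f"run {run}: shuffled sum differs by {reduce_shuffled(values) - fixed:.3e}")
# The differences are tiny and typically nonzero -- harmless for a noisy
# training job, maddening when you are trying to debug a regression, hence
# the demand for a deterministic mode.
```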
And it's really weird because almost everything going into modern calculations is noisy. So why do the answers have to be so clear? Right, so where do you stand? I design computers for people who run programs. So if somebody says I want a deterministic answer, like most people want that. Can you deliver a deterministic answer, I guess is the question. Like when you. Yeah, hopefully, sure. What people don't realize is you get a deterministic answer even though the execution flow is very undeterministic. So you run this program 100 times, it never runs the same way twice, ever. And the answer, it arrives at the same answer. But it gets the same answer every time. It's just amazing. Okay, you've achieved, in the eyes of many people, legend status as a chip art architect. What design creation are you most proud of? Perhaps because it was challenging, because of its impact, or because of the set of brilliant ideas that were involved in bringing it to life? I find that description odd. And I have two small children, and I promise you, they think it's hilarious. This question. Yeah. I do it for them. So I'm really interested in building computers. And I've worked with really, really smart people. I'm not unbelievably smart. I'm fascinated by how they go together, both as a thing to do and as an endeavor that people do. How people and computers go together? Yeah. Like how people think and build a computer. And I find sometimes that the best computer architects aren't that interested in people, or the best people managers aren't that good at designing computers. So the whole stack of human beings is fascinating. So the managers, the individual engineers. Yeah, yeah. Yeah, I said I realized after a lot of years of building computers, where you sort of build them out of transistors, logic gates, functional units, computational elements, that you could think of people the same way, so people are functional units. And then you could think of organizational design as a computer architecture problem. And then it was like, oh, that's super cool, because the people are all different, just like the computational elements are all different. And they like to do different things. And so I had a lot of fun reframing how I think about organizations. Just like with computers, we were saying execution paths, you can have a lot of different paths that end up at the same good destination. So what have you learned about the human abstractions from individual functional human units to the broader organization? What does it take to create something special? Well, most people don't think simple enough. All right, so the difference between a recipe and the understanding. There's probably a philosophical description of this. So imagine you're gonna make a loaf of bread. The recipe says get some flour, add some water, add some yeast, mix it up, let it rise, put it in a pan, put it in the oven. It's a recipe. Understanding bread, you can understand biology, supply chains, grain grinders, yeast, physics, thermodynamics, there's so many levels of understanding. And then when people build and design things, they frequently are executing some stack of recipes. And the problem with that is the recipes all have limited scope. Like if you have a really good recipe book for making bread, it won't tell you anything about how to make an omelet. But if you have a deep understanding of cooking, right, than bread, omelets, you know, sandwich, you know, there's a different way of viewing everything. 
And most people, when you get to be an expert at something, you know, you're hoping to achieve deeper understanding, not just a large set of recipes to go execute. And it's interesting to watch groups of people because executing recipes is unbelievably efficient if it's what you want to do. If it's not what you want to do, you're really stuck. And that difference is crucial. And everybody has a balance of, let's say, deeper understanding and recipes. And some people are really good at recognizing when the problem is to understand something deeply. Does that make sense? It totally makes sense. At every stage of development, is deep understanding on the team needed? Oh, this goes back to the art versus science question. Sure. If you constantly unpack everything for deeper understanding, you never get anything done. And if you don't unpack understanding when you need to, you'll do the wrong thing. And then at every juncture, like human beings are these really weird things because everything you tell them has a million possible outputs, right? And then they all interact in a hilarious way. Yeah, it's very interesting. And then having some intuition about what you tell them, what you do, when do you intervene, when do you not, it's complicated. Right, so. It's essentially computationally unsolvable. Yeah, it's an intractable problem, sure. Humans are a mess. But with deep understanding, do you mean also sort of fundamental questions of things like what is a computer? Or why, like the why questions, why are we even building this, like of purpose? Or do you mean more like going towards the fundamental limits of physics, sort of really getting into the core of the science? In terms of building a computer, think a little simpler. So common practice is you build a computer, and then when somebody says, I wanna make it 10% faster, you'll go in and say, all right, I need to make this buffer bigger, and maybe I'll add an add unit. Or I have this thing that's three instructions wide, I'm gonna make it four instructions wide. And what you see is each piece gets incrementally more complicated, right? And then at some point you hit this limit, like adding another feature or buffer doesn't seem to make it any faster. And then people will say, well, that's because it's a fundamental limit. And then somebody else will look at it and say, well, actually the way you divided the problem up and the way the different features are interacting is limiting you, and it has to be rethought, rewritten. So then you refactor it and rewrite it, and what people commonly find is the rewrite is not only faster, but half as complicated. From scratch? Yes. So how often in your career, or maybe more generally, have you seen that it's needed to just throw the whole thing out and start over? This is where I'm on one end of it, every three to five years. Which end are you on? Rewrite more often. Rewrite, and three to five years is? If you wanna really make a lot of progress on computer architecture, every five years you should do one from scratch. So where does the x86-64 standard come in? How often do you? I was the coauthor of that spec in 98. That's 20 years ago. Yeah, so that's still around. The instruction set itself has been extended quite a few times. And instruction sets are less interesting than the implementation underneath. There's been, on x86 architecture, Intel's designed a few, AMD designed a few very different architectures.
And I don't wanna go into too much of the detail about how often, but there's a tendency to rewrite it every 10 years, and it really should be every five. So you're saying you're an outlier in that sense. Rewrite more often. Rewrite more often. Well, and here's the problem. Isn't that scary? Yeah, of course. Well, scary to who? To everybody involved, because like you said, repeating the recipe is efficient. Companies wanna make money. No, individual engineers wanna succeed, so you wanna incrementally improve, increase the buffer from three to four. Well, this is where you get into the diminishing return curves. I think Steve Jobs said this, right? So every, you have a project, and you start here, and it goes up, and you have diminishing return. And to get to the next level, you have to do a new one, and the initial starting point will be lower than the old optimization point, but it'll get higher. So now you have two kinds of fear, short term disaster and long term disaster. And you're, you're haunted. So grown ups, right, like, you know, people with a quarter by quarter business objective are terrified about changing everything. And people who are trying to run a business or build a computer for a long term objective know that the short term limitations block them from the long term success. So if you look at leaders of companies that had really good long term success, every time they saw that they had to redo something, they did. And so somebody has to speak up. Or you do multiple projects in parallel, like you optimize the old one while you build a new one. But the marketing guys are always like, make promise me that the new computer is faster on every single thing. And the computer architect says, well, the new computer will be faster on the average, but there's a distribution of results and performance, and you'll have some outliers that are slower. And that's very hard, because they have one customer who cares about that one. So speaking of the long term, for over 50 years now, Moore's Law has served, for me and millions of others, as an inspiring beacon of what kind of amazing future brilliant engineers can build. Yep. I'm just making your kids laugh all of today. That was great. So first, in your eyes, what is Moore's Law, if you could define for people who don't know? Well, the simple statement was, from Gordon Moore, was double the number of transistors every two years. Something like that. And then my operational model is, we increase the performance of computers by two X every two or three years. And it's wiggled around substantially over time. And also, in how we deliver, performance has changed. But the foundational idea was two X to transistors every two years. The current cadence is something like, they call it a shrink factor, like 0.6 every two years, which is not 0.5. But that's referring strictly, again, to the original definition of just. A transistor count. A shrink factor's just getting them smaller and smaller and smaller. Well, it's for a constant chip area. If you make the transistors smaller by 0.6, then you get one over 0.6 more transistors. So can you linger on it a little longer? What's a broader, what do you think should be the broader definition of Moore's Law? When you mentioned how you think of performance, just broadly, what's a good way to think about Moore's Law? Well, first of all, I've been aware of Moore's Law for 30 years. In which sense? Well, I've been designing computers for 40. You're just watching it before your eyes kind of thing. 
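A quick back-of-envelope on the numbers in this exchange, treating the "shrink factor" the way Keller frames it, as the area each transistor occupies relative to the previous node at constant chip area. This is illustrative arithmetic only, not a real process roadmap:

```python
# Back-of-envelope Moore's Law arithmetic: treat the shrink factor as the
# area per transistor relative to the previous node, for a constant chip area.

def transistor_growth(area_shrink_per_node, years_per_node, years):
    nodes = years / years_per_node
    return (1.0 / area_shrink_per_node) ** nodes

# Classic Moore's Law cadence: 0.5x area every two years -> 2x transistors per node.
print(f"0.5 shrink, 10 years: {transistor_growth(0.5, 2, 10):.0f}x transistors")
# The ~0.6 cadence mentioned above: ~1.67x per node instead of 2x, so the
# compounding is slower but still large over a decade.
print(f"0.6 shrink, 10 years: {transistor_growth(0.6, 2, 10):.1f}x transistors")
```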
And somewhere around the time I became aware of it, I was also informed that Moore's Law was gonna die in 10 to 15 years. And then I thought that was true at first. But then after 10 years, it was gonna die in 10 to 15 years. And then at one point, it was gonna die in five years. And then it went back up to 10 years. And at some point, I decided not to worry about that particular prognostication for the rest of my life, which is fun. And then I joined Intel and everybody said Moore's Law is dead. And I thought that's sad, because it's the Moore's Law company. And it's not dead. And it's always been gonna die. And humans like these apocryphal kind of statements, like we'll run out of food, or we'll run out of air, or we'll run out of room, or we'll run out of something. Right, but it's still incredible that it's lived for as long as it has. And yes, there's many people who believe now that Moore's Law is dead. You know, they can join the last 50 years of people who had the same idea. Yeah, there's a long tradition. But why do you think, if you can try to understand it, why do you think it's not dead? Well, let's just think, people think Moore's Law is one thing, transistors get smaller. But actually, under the sheet, there's literally thousands of innovations. And almost all those innovations have their own diminishing return curves. So if you graph it, it looks like a cascade of diminishing return curves. I don't know what to call that. But the result is an exponential curve. Well, at least it has been. So, and we keep inventing new things. So if you're an expert in one of the things on a diminishing return curve, right, and you can see its plateau, you will probably tell people, well, this is done. Meanwhile, some other pile of people are doing something different. So that's just normal. So then there's the observation of how small could a switching device be? So a modern transistor is something like a thousand by a thousand by a thousand atoms, right? And you get quantum effects down around two to 10 atoms. So you can imagine the transistor as small as 10 by 10 by 10. So that's a million times smaller. And then the quantum computational people are working away at how to use quantum effects. So. A thousand by a thousand by a thousand. Atoms. That's a really clean way of putting it. Well, a fin, like a modern transistor, if you look at the fin, it's like 120 atoms wide, but we can make that thinner. And then there's a gate wrapped around it, and then there's spacing. There's a whole bunch of geometry. And a competent transistor designer could count the atoms in every single direction. Like there's techniques now to already put down atoms in a single atomic layer. And you can place atoms if you want to. It's just from a manufacturing process, if placing an atom takes 10 minutes and you need to put 10 to the 23rd atoms together to make a computer, it would take a long time. So the methods are both shrinking things and then coming up with effective ways to control what's happening. Manufacture stably and cheaply. Yeah. So the innovation stack's pretty broad. There's equipment, there's optics, there's chemistry, there's physics, there's material science, there's metallurgy, there's lots of ideas about when you put different materials together, how do they interact, are they stable, is it stable over temperature, like are they repeatable? There's like literally thousands of technologies involved. But just for the shrinking, you don't think we're quite yet close to the fundamental limits of physics?
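As an aside, two tiny calculations behind the claims just made: the thousand-cubed versus ten-cubed atom counts, and an entirely made-up toy model of how a cascade of individually plateauing improvements can still compound into roughly exponential growth. The 1.4x ceiling and three-year ramp below are arbitrary illustrative numbers:

```python
import math

# 1) Room left to shrink: a ~1000 x 1000 x 1000 atom transistor versus a
#    hypothetical 10 x 10 x 10 atom switch is a factor of a million in volume.
print((1000 ** 3) / (10 ** 3))          # 1,000,000

# 2) "A cascade of diminishing-return curves looks like an exponential":
#    model each innovation as a saturating improvement (up to ~1.4x) that
#    starts in a different year; the compounded effect of many of them keeps
#    growing even though each individual curve flattens out.
def innovation_gain(t, start, max_gain=1.4, ramp_years=3.0):
    if t <= start:
        return 1.0
    progress = 1.0 - math.exp(-(t - start) / ramp_years)   # saturating ramp
    return 1.0 + (max_gain - 1.0) * progress

for year in range(0, 21, 5):
    total = 1.0
    for start in range(0, year + 1):                        # one new idea per year
        total *= innovation_gain(year, start)
    print(f"year {year:2d}: ~{total:8.1f}x overall improvement")
# Each curve plateaus, but because new ones keep starting, the product keeps
# compounding -- one way to read why Moore's Law has outlived its obituaries.
```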
I did a talk on Moore's Law and I asked for a roadmap to a path of 100 and after two weeks, they said we only got to 50. 100 what, sorry? 100 X shrink. 100 X shrink? We only got to 50. And I said, why don't you give it another two weeks? Well, here's the thing about Moore's Law, right? So I believe that the next 10 or 20 years of shrinking is gonna happen, right? Now, as a computer designer, you have two stances. You think it's going to shrink, in which case you're designing and thinking about architecture in a way that you'll use more transistors. Or conversely, not be swamped by the complexity of all the transistors you get, right? You have to have a strategy, you know? So you're open to the possibility and waiting for the possibility of a whole new army of transistors ready to work. I'm expecting more transistors every two or three years by a number large enough that how you think about design, how you think about architecture has to change. Like, imagine you build buildings out of bricks, and every year the bricks are half the size, or every two years. Well, if you kept laying bricks the same way, so many bricks per person per day, the amount of time to build a building would go up exponentially, right? But if you said, I know that's coming, so now I'm gonna design equipment that moves bricks faster, uses them better, because maybe you're getting something out of the smaller bricks, more strength, thinner walls, you know, less material, efficiency out of that. So once you have a roadmap with what's gonna happen, transistors, we're gonna get more of them, then you design all this collateral around it to take advantage of it, and also to cope with it. Like, that's the thing people don't understand. It's like, if I didn't believe in Moore's Law, and then Moore's Law transistors showed up, my design teams would all drown. So what's the hardest part of this inflow of new transistors? I mean, even if you just look historically, throughout your career, what's the thing, what fundamentally changes when you add more transistors to the task of designing an architecture? Well, there's two constants, right? One is people don't get smarter. By the way, there's some science showing that we do get smarter because of nutrition or whatever. Sorry to bring that up. The Flynn effect. Yes. Yeah, I'm familiar with it. Nobody understands it, nobody knows if it's still going on. So that's a... Or whether it's real or not. But yeah, it's a... I sort of... Anyway, but not exponentially. I would believe for the most part, people aren't getting much smarter. The evidence doesn't support it, that's right. And then teams can't grow that much. Right. Right, so human beings, you know, we're really good in teams of 10, you know, up to teams of 100, they can know each other. Beyond that, you have to have organizational boundaries. So you're kind of, you have, those are pretty hard constraints, right? So then you have to divide and conquer, like as the designs get bigger, you have to divide it into pieces. You know, the power of abstraction layers is really high. We used to build computers out of transistors. Now we have a team that turns transistors into logic cells and another team that turns them into functional units, another one that turns them into computers, right? So we have abstraction layers in there and you have to think about when do you shift gears on that. We also use faster computers to build faster computers. So some algorithms run twice as fast on new computers, but a lot of algorithms are N squared.
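A tiny sketch of the scaling point being made here, which the next remark completes: if a design tool's algorithm scales as N squared in the number of transistors, simply running it on a faster computer doesn't keep pace with a bigger design. The function and numbers below are illustrative, not from any real CAD flow:

```python
# Sketch of "faster computers alone don't let you design bigger computers":
# if a CAD step scales as N**2 in transistor count, a 2x bigger design on the
# same machine takes ~4x longer, and even a 2x faster machine still falls behind.

def relative_runtime(design_growth, machine_speedup, exponent=2):
    """Tool runtime relative to today for an N**exponent algorithm."""
    return design_growth ** exponent / machine_speedup

print(relative_runtime(2, 1))               # 4.0 -- 2x design, same machine
print(relative_runtime(2, 2))               # 2.0 -- 2x design, 2x machine: still slower
print(relative_runtime(2, 2, exponent=1))   # 1.0 -- breaks even only if refactored to ~linear
```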
So, you know, a computer with twice as many transistors and it might take four times as long to run. So you have to refactor the software. Like simply using faster computers to build bigger computers doesn't work. So you have to think about all these things. So in terms of computing performance and the exciting possibility that more powerful computers bring, is shrinking the thing which you've been talking about, for you, one of the biggest exciting possibilities of advancement in performance? Or is there other directions that you're interested in, like in the direction of sort of enforcing given parallelism or like doing massive parallelism in terms of many, many CPUs, you know, stacking CPUs on top of each other, that kind of parallelism or any kind of parallelism? Well, think about it a different way. So old computers, you know, slow computers, you said A equal B plus C times D, pretty simple, right? And then we made faster computers with vector units and you can do proper equations and matrices, right? And then modern like AI computations or like convolutional neural networks, where you convolve one large data set against another. And so there's sort of this hierarchy of mathematics, you know, from simple equation to linear equations, to matrix equations, to deeper kind of computation. And the data sets are getting so big that people are thinking of data as a topology problem. You know, data is organized in some immense shape. And then the computation, which sort of wants to be, get data from immense shape and do some computation on it. So what computers have allowed people to do is have algorithms go much, much further. So that paper you reference, the Sutton paper, they talked about, you know, like when AI started, it was apply rule sets to something. That's a very simple computational situation. And then when they did first chess thing, they solved deep searches. So have a huge database of moves and results, deep search, but it's still just a search, right? Now we take large numbers of images and we use it to train these weight sets that we convolve across. It's a completely different kind of phenomena. We call that AI. Now they're doing the next generation. And if you look at it, they're going up this mathematical graph, right? And then computations, both computation and data sets support going up that graph. Yeah, the kind of computation that might, I mean, I would argue that all of it is still a search, right? Just like you said, a topology problem with data sets, you're searching the data sets for valuable data and also the actual optimization of neural networks is a kind of search for the... I don't know, if you had looked at the interlayers of finding a cat, it's not a search. It's a set of endless projections. So, you know, a projection, here's a shadow of this phone, right? And then you can have a shadow of that on the something and a shadow on that of something. And if you look in the layers, you'll see this layer actually describes pointy ears and round eyeness and fuzziness. But the computation to tease out the attributes is not search. Like the inference part might be search, but the training's not search. And then in deep networks, they look at layers and they don't even know it's represented. And yet, if you take the layers out, it doesn't work. So I don't think it's search. But you'd have to talk to a mathematician about what that actually is. Well, we could disagree, but it's just semantics, I think, it's not, but it's certainly not... 
I would say it's absolutely not semantics, but... Okay, all right, well, if you want to go there. So optimization to me is search, and we're trying to optimize the ability of a neural network to detect cat ears. And the difference between chess and the space, the incredibly multidimensional, 100,000 dimensional space that neural networks are trying to optimize over, is nothing like the chessboard database. So it's a totally different kind of thing. And okay, in that sense, you can say it loses the meaning. I can see how you might say, if you... The funny thing is, it's the difference between given search space and found search space. Right, exactly. Yeah, maybe that's a different way to describe it. That's a beautiful way to put it, okay. But you're saying, what's your sense in terms of the basic mathematical operations and the architectures, computer hardware that enables those operations? Do you see the CPUs of today still being a really core part of executing those mathematical operations? Yes. Well, the operations continue to be add, subtract, load, store, compare, and branch. It's remarkable. So it's interesting, the building blocks of computers are transistors, and under that, atoms. So you've got atoms, transistors, logic gates, computers, functional units of computers. The building blocks of mathematics at some level are things like adds and subtracts and multiplies, but the space mathematics can describe is, I think, essentially infinite. But the computers that run the algorithms are still doing the same things. Now, a given algorithm might say, I need sparse data, or I need 32 bit data, or I need, you know, a convolution operation that naturally takes eight bit data, multiplies it, and sums it up a certain way. So the data types in TensorFlow imply an optimization set. But when you go right down and look at the computers, it's AND and OR gates doing adds and multiplies. That hasn't changed much. Now, the quantum researchers think they're going to change that radically, and then there's people who think about analog computing, because you look in the brain and it seems to be more analogish. You know, maybe there's a way to do that more efficiently. But we have a million X on computation, and I don't know the relationship between computational, let's say, intensity and ability to hit mathematical abstractions. I don't know any way to describe that, but just like you saw in AI, you went from rule sets to simple search to complex search to, say, found search. Those are orders of magnitude more computation to do. And as we get the next two orders of magnitude, like a friend, Raja Koduri, said, every order of magnitude changes the computation. Fundamentally changes what the computation is doing. Yeah. Oh, you know the expression, a difference in quantity is a difference in kind? You know, the difference between ant and anthill, right? Or neuron and brain. There's this indefinable place where the quantity changes the quality, right? And we've seen that happen in mathematics multiple times, and, you know, my guess is it's going to keep happening. So your sense is, yeah, if you focus head down on shrinking the transistor... Well, it's not just head down, we're aware of the software stacks that are running in the computational loads, and we're kind of pondering, what do you do with a petabyte of memory that wants to be accessed in a sparse way and have, you know, the kind of calculations AI programmers want?
So there's a dialogue interaction, but when you go in the computer chip, you know, you find adders and subtractors and multipliers. So if you zoom out then, as you mentioned with Rich Sutton, the idea that most of the development in the last many decades in AI research came from just leveraging computation and just simple algorithms waiting for the computation to improve. Well, software guys have a thing that they call the problem of early optimization. So you write a big software stack, and if you start optimizing the first thing you write, the odds of that being the performance limiter is low. But when you get the whole thing working, can you make it 2x faster by optimizing the right things? Sure. While you're optimizing that, could you have written a new software stack, which would have been a better choice? Maybe. Now you have creative tension. So. But the whole time as you're doing the writing, that's the software we're talking about, the hardware underneath gets faster and faster. Well, this goes back to Moore's Law. If Moore's Law is going to continue, then your AI research should expect that to show up, and then you make a slightly different set of choices than if you say, we've hit the wall, nothing's going to happen, and from here, it's just us rewriting algorithms. That seems like a failed strategy for the last 30 years of Moore's Law's death. So can you just linger on it? I think you've answered it, but I'll just ask the same dumb question over and over. So why do you think Moore's Law is not going to die? Which is the most promising, exciting possibility of why it won't die in the next 5, 10 years? So is it the continued shrinking of the transistor, or is it another S curve that steps in and it totally sort of matches up? Shrinking the transistor is literally thousands of innovations. Right, so there's stacks of S curves in there. There's a whole bunch of S curves just kind of running their course and being reinvented and new things. The semiconductor fabricators and technologists have all announced what's called nanowires. So they took a fin, which had a gate around it, and turned that into little wires so you have better control of that, and they're smaller. And then from there, there are some obvious steps about how to shrink that. The metallurgy around wire stacks and stuff has very obvious abilities to shrink. And there's a whole combination of things there to do. Your sense is that we're going to get a lot of performance out of this innovation, just from that, shrinking. Yeah, like a factor of 100 is a lot. Yeah, I would say that's incredible. And it's totally unknown. It's only 10 or 15 years. Now, you're smarter, you might know, but to me it's totally unpredictable what that 100x would bring in terms of the nature of the computation that people would be doing. Yeah, are you familiar with Bell's law? So for a long time, it was mainframes, minis, workstation, PC, mobile. Moore's law drove faster, smaller computers. And then when we were thinking about Moore's law, Raja Koduri said, every 10x generates a new computation. So scalar, vector, matrix, topological computation. And if you go look at the industry trends, there were mainframes, and then minicomputers, and then PCs, and then the internet took off. And then we got mobile devices. And now we're building 5G wireless with one millisecond latency. And people are starting to think about the smart world where everything knows you, recognizes you. The transformations are going to be unpredictable.
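(Editor's note: a toy numerical sketch of the picture Keller paints — many individually saturating innovations, each its own diminishing-return S curve, staggered in time, whose combined effect looks roughly exponential. All curve shapes and numbers below are invented purely for illustration, not taken from any real process data.)

```python
import math

def logistic(t, t0, width=2.0):
    """One innovation: a saturating S-curve of benefit over time (years)."""
    return 1.0 / (1.0 + math.exp(-(t - t0) / width))

def combined_capability(t, num_innovations=20, spacing=2.0, gain=0.6):
    """Each innovation multiplies capability by up to (1 + gain), following its
    own S-curve; a new innovation arrives every `spacing` years."""
    cap = 1.0
    for k in range(num_innovations):
        cap *= 1.0 + gain * logistic(t, t0=k * spacing)
    return cap

# Printed values grow roughly geometrically even though every ingredient saturates.
for year in range(0, 31, 5):
    print(f"year {year:2d}: capability ~{combined_capability(year):10.1f}x")
```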
How does it make you feel that you're one of the key architects of this kind of future? So we're not talking about the architects of the high level people who build the Angry Bird apps, and Snapchat. Angry Bird apps. Who knows? Maybe that's the whole point of the universe. I'm going to take a stand at that, and the attention distracting nature of mobile phones. I'll take a stand. But anyway, in terms of the side effects of smartphones, or the attention distraction, which part? Well, who knows where this is all leading? It's changing so fast. My parents used to yell at my sisters for hiding in the closet with a wired phone with a dial on it. Stop talking to your friends all day. Now my wife yells at my kids for talking to their friends all day on text. It looks the same to me. It's always echoes of the same thing. But you are one of the key people architecting the hardware of this future. How does that make you feel? Do you feel responsible? Do you feel excited? So we're in a social context. So there's billions of people on this planet. There are literally millions of people working on technology. I feel lucky to be doing what I do and getting paid for it, and there's an interest in it. But there's so many things going on in parallel. The actions are so unpredictable. If I wasn't here, somebody else would do it. The vectors of all these different things are happening all the time. You know, there's a, I'm sure, some philosopher or metaphilosopher is wondering about how we transform our world. So you can't deny the fact that these tools are changing our world. That's right. Do you think it's changing for the better? I read this thing recently. It said the two disciplines with the highest GRE scores in college are physics and philosophy. And they're both sort of trying to answer the question, why is there anything? And the philosophers are on the kind of theological side, and the physicists are obviously on the material side. And there's 100 billion galaxies with 100 billion stars. It seems, well, repetitive at best. So you know, there's on our way to 10 billion people. I mean, it's hard to say what it's all for, if that's what you're asking. Yeah, I guess I am. Things do tend to significantly increase in complexity. And I'm curious about how computation, like our physical world inherently generates mathematics. It's kind of obvious, right? So we have x, y, z coordinates. You take a sphere, you make it bigger. You get a surface that grows by r squared. Like, it generally generates mathematics. And the mathematicians and the physicists have been having a lot of fun talking to each other for years. And computation has been, let's say, relatively pedestrian. Like, computation in terms of mathematics has been doing binary algebra, while those guys have been gallivanting through the other realms of possibility. Now recently, the computation lets you do mathematical computations that are sophisticated enough that nobody understands how the answers came out. Machine learning. Machine learning. It used to be you get data set, you guess at a function. The function is considered physics if it's predictive of new functions, new data sets. Modern, you can take a large data set with no intuition about what it is and use machine learning to find a pattern that has no function, right? And it can arrive at results that I don't know if they're completely mathematically describable. So computation has kind of done something interesting compared to a equal b plus c. 
There's something reminiscent of that step from the basic operations of addition to taking a step towards neural networks that's reminiscent of what life on Earth at its origins was doing. Do you think we're creating sort of the next step in our evolution in creating artificial intelligence systems that will? I don't know. I mean, there's so much in the universe already, it's hard to say. Where we stand in this whole thing. Are human beings working on additional abstraction layers and possibilities? Yeah, it appears so. Does that mean that human beings don't need dogs? You know, no. Like, there's so many things that are all simultaneously interesting and useful. Well, you've seen, throughout your career, you've seen greater and greater level abstractions built in artificial machines, right? Do you think, when you look at humans, do you think that the look of all life on Earth is a single organism building this thing, this machine with greater and greater levels of abstraction? Do you think humans are the peak, the top of the food chain in this long arc of history on Earth? Or do you think we're just somewhere in the middle? Are we the basic functional operations of a CPU? Are we the C++ program, the Python program, or the neural network? Like, somebody's, you know, people have calculated, like, how many operations does the brain do? Something, you know, I've seen the number 10 to the 18th a bunch of times, arrive different ways. So could you make a computer that did 10 to the 20th operations? Yes. Sure. Do you think? We're going to do that. Now, is there something magical about how brains compute things? I don't know. You know, my personal experience is interesting, because, you know, you think you know how you think, and then you have all these ideas, and you can't figure out how they happened. And if you meditate, you know, what you can be aware of is interesting. So I don't know if brains are magical or not. You know, the physical evidence says no. Lots of people's personal experience says yes. So what would be funny is if brains are magical, and yet we can make brains with more computation. You know, I don't know what to say about that. But do you think magic is an emergent phenomena? Could be. I have no explanation for it. Let me ask Jim Keller of what in your view is consciousness? With consciousness? Yeah, like what, you know, consciousness, love, things that are these deeply human things that seems to emerge from our brain, is that something that we'll be able to make encode in chips that get faster and faster and faster and faster? That's like a 10 hour conversation. Nobody really knows. Can you summarize it in a couple of sentences? Many people have observed that organisms run at lots of different levels, right? If you had two neurons, somebody said you'd have one sensory neuron and one motor neuron, right? So we move towards things and away from things. And we have physical integrity and safety or not, right? And then if you look at the animal kingdom, you can see brains that are a little more complicated. And at some point, there's a planning system. And then there's an emotional system that's happy about being safe or unhappy about being threatened. And then our brains have massive numbers of structures, like planning and movement and thinking and feeling and drives and emotions. And we seem to have multiple layers of thinking systems. And we have a dream system that nobody understands whatsoever, which I find completely hilarious. 
And you can think in a way that those systems are more independent. And you can observe the different parts of yourself can observe them. I don't know which one's magical. I don't know which one's not computational. So. Is it possible that it's all computation? Probably. Is there a limit to computation? I don't think so. Do you think the universe is a computer? It seems to be. It's a weird kind of computer. Because if it was a computer, like when they do calculations on how much calculation it takes to describe quantum effects, it's unbelievably high. So if it was a computer, wouldn't you have built it out of something that was easier to compute? That's a funny system. But then the simulation guys pointed out that the rules are kind of interesting. When you look really close, it's uncertain. And the speed of light says you can only look so far. And things can't be simultaneous, except for the odd entanglement problem where they seem to be. The rules are all kind of weird. And somebody said physics is like having 50 equations with 50 variables to define 50 variables. Physics itself has been a shit show for thousands of years. It seems odd when you get to the corners of everything. It's either uncomputable or undefinable or uncertain. It's almost like the designers of the simulation are trying to prevent us from understanding it perfectly. But also, the things that require calculations require so much calculation that our idea of the universe of a computer is absurd, because every single little bit of it takes all the computation in the universe to figure out. So that's a weird kind of computer. You say the simulation is running in a computer, which has, by definition, infinite computation. Not infinite. Oh, you mean if the universe is infinite? Yeah. Well, every little piece of our universe seems to take infinite computation to figure out. Not infinite, just a lot. Well, a lot. Some pretty big number. Compute this little teeny spot takes all the mass in the local one light year by one light year space. It's close enough to infinite. Well, it's a heck of a computer if it is one. I know. It's a weird description, because the simulation description seems to break when you look closely at it. But the rules of the universe seem to imply something's up. That seems a little arbitrary. The universe, the whole thing, the laws of physics, it just seems like, how did it come out to be the way it is? Well, lots of people talk about that. Like I said, the two smartest groups of humans are working on the same problem. From different aspects. And they're both complete failures. So that's kind of cool. They might succeed eventually. Well, after 2,000 years, the trend isn't good. Oh, 2,000 years is nothing in the span of the history of the universe. That's for sure. We have some time. But the next 1,000 years doesn't look good either. That's what everybody says at every stage. But with Moore's law, as you've just described, not being dead, the exponential growth of technology, the future seems pretty incredible. Well, it'll be interesting, that's for sure. That's right. So what are your thoughts on Ray Kurzweil's sense that exponential improvement in technology will continue indefinitely? Is that how you see Moore's law? Do you see Moore's law more broadly, in the sense that technology of all kinds has a way of stacking S curves on top of each other, where it'll be exponential, and then we'll see all kinds of... What does an exponential of a million mean? That's a pretty amazing number. 
And that's just for a local little piece of silicon. Now let's imagine you, say, decided to get 1,000 tons of silicon to collaborate in one computer at a million times the density. Now you're talking, I don't know, 10 to the 20th more computation power than our current, already unbelievably fast computers. Nobody knows what that's going to mean. The sci fi guys call it computronium, like when a local civilization turns the nearby star into a computer. I don't know if that's true, but... So just even when you shrink a transistor, the... That's only one dimension. The ripple effects of that. People tend to think about computers as a cost problem. So computers are made out of silicon and minor amounts of metals and this and that. None of those things cost any money. There's plenty of sand. You could just turn the beach and a little bit of ocean water into computers. So all the cost is in the equipment to do it. And the trend on equipment is once you figure out how to build the equipment, the trend of cost is zero. Elon said, first you figure out what configuration you want the atoms in, and then how to put them there. His great insight is people are how constrained. I have this thing, I know how it works, and then little tweaks to that will generate something, as opposed to what do I actually want, and then figure out how to build it. It's a very different mindset. And almost nobody has it, obviously. Well, let me ask on that topic, you were one of the key early people in the development of autopilot, at least in the hardware side, Elon Musk believes that autopilot and vehicle autonomy, if you just look at that problem, can follow this kind of exponential improvement. In terms of the how question that we're talking about, there's no reason why you can't. What are your thoughts on this particular space of vehicle autonomy, and your part of it and Elon Musk's and Tesla's vision for vehicle autonomy? Well, the computer you need to build is straightforward. And you could argue, well, does it need to be two times faster or five times or 10 times? But that's just a matter of time or price in the short run. So that's not a big deal. You don't have to be especially smart to drive a car. So it's not like a super hard problem. I mean, the big problem with safety is attention, which computers are really good at, not skills. Well, let me push back on one. You see, everything you said is correct, but we as humans tend to take for granted how incredible our vision system is. So you can drive a car with 20, 50 vision, and you can train a neural network to extract the distance of any object in the shape of any surface from a video and data. Yeah, but that's really simple. No, it's not simple. That's a simple data problem. It's not, it's not simple. It's because it's not just detecting objects, it's understanding the scene, and it's being able to do it in a way that doesn't make errors. So the beautiful thing about the human vision system and our entire brain around the whole thing is we're able to fill in the gaps. It's not just about perfectly detecting cars. It's inferring the occluded cars. It's trying to, it's understanding the physics. I think that's mostly a data problem. So you think what data would compute with improvement of computation with improvement in collection of data? Well, there is a, you know, when you're driving a car and somebody cuts you off, your brain has theories about why they did it. 
You know, they're a bad person, they're distracted, they're dumb, you know, you can listen to yourself, right? So, you know, if you think that narrative is important to be able to successfully drive a car, then current autopilot systems can't do it. But if cars are ballistic things with tracks and probabilistic changes of speed and direction, and roads are fixed and given, by the way, they don't change dynamically, right? You can map the world really thoroughly. You can place every object really thoroughly. Right, you can calculate trajectories of things really thoroughly, right? But everything you said about really thoroughly has a different degree of difficulty, so. And you could say at some point, computer autonomous systems will be way better at things that humans are lousy at. Like, they'll be better at attention, they'll always remember there was a pothole in the road that humans keep forgetting about, they'll remember that this set of roads has these weirdo lines on it that the computers figured out once, and especially if they get updates, so if somebody changes a given, like, the key to robots and stuff somebody said is to maximize the givens, right? Right. So having a robot pick up this bottle cap is way easier if you put a red dot on the top, because then you'll have to figure out, and if you wanna do a certain thing with it, maximize the givens is the thing. And autonomous systems are happily maximizing the givens. Like, humans, when you drive someplace new, you remember it, because you're processing it the whole time, and after the 50th time you drove to work, you get to work, you don't know how you got there, right? You're on autopilot, right? Autonomous cars are always on autopilot. But the cars have no theories about why they got cut off, or why they're in traffic. So they also never stop paying attention. Right, so I tend to believe you do have to have theories, meta models of other people, especially with pedestrian cyclists, but also with other cars. So everything you said is actually essential to driving. Driving is a lot more complicated than people realize, I think, so to push back slightly, but to... So to cut into traffic, right? Yep. You can't just wait for a gap, you have to be somewhat aggressive. You'll be surprised how simple a calculation for that is. I may be on that particular point, but there's, maybe I actually have to push back. I would be surprised. You know what, yeah, I'll just say where I stand. I would be very surprised, but I think you might be surprised how complicated it is. I tell people, progress disappoints in the short run, and surprises in the long run. It's very possible, yeah. I suspect in 10 years it'll be just taken for granted. Yeah, probably. But you're probably right, not look like... It's gonna be a $50 solution that nobody cares about. It's like GPSes, like, wow, GPSes. We have satellites in space that tell you where your location is. It was a really big deal, now everything has a GPS in it. Yeah, that's true, but I do think that systems that involve human behavior are more complicated than we give them credit for. So we can do incredible things with technology that don't involve humans, but when you... I think humans are less complicated than people. You know, frequently ascribed. Maybe I feel... We tend to operate out of large numbers of patterns and just keep doing it over and over. But I can't trust you because you're a human. That's something a human would say. 
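(Editor's note: a minimal sketch of the "cars as ballistic things" framing Keller uses above — extrapolate each tracked object forward under a constant-velocity assumption and check the predicted paths for conflicts. Real autonomy stacks use much richer motion models; the class, numbers, and scenario here are hypothetical.)

```python
from dataclasses import dataclass

@dataclass
class Track:
    x: float   # position, meters
    y: float
    vx: float  # velocity, meters per second
    vy: float

def predict(track: Track, dt: float) -> Track:
    """Constant-velocity ('ballistic') extrapolation dt seconds ahead."""
    return Track(track.x + track.vx * dt, track.y + track.vy * dt,
                 track.vx, track.vy)

def min_gap(a: Track, b: Track, horizon: float = 3.0, step: float = 0.1) -> float:
    """Closest approach between two constant-velocity tracks over a horizon."""
    gap = float("inf")
    t = 0.0
    while t <= horizon:
        pa, pb = predict(a, t), predict(b, t)
        gap = min(gap, ((pa.x - pb.x) ** 2 + (pa.y - pb.y) ** 2) ** 0.5)
        t += step
    return gap

# Hypothetical example: our car drifting toward a lane while another car closes from behind.
ego = Track(x=0.0, y=0.0, vx=20.0, vy=0.5)
other = Track(x=-15.0, y=3.5, vx=25.0, vy=0.0)
print(f"closest approach over 3 s: {min_gap(ego, other):.1f} m")
```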
But my hope is on the point you've made is, even if, no matter who's right, I'm hoping that there's a lot of things that humans aren't good at that machines are definitely good at, like you said, attention and things like that. Well, they'll be so much better that the overall picture of safety and autonomy will be, obviously cars will be safer, even if they're not as good at understanding. I'm a big believer in safety. I mean, there are already the current safety systems, like cruise control that doesn't let you run into people and lane keeping. There are so many features that you just look at the parade of accidents and knocking off like 80% of them is super doable. Just to linger on the autopilot team and the efforts there, it seems to be that there's a very intense scrutiny by the media and the public in terms of safety, the pressure, the bar put before autonomous vehicles. What are your, sort of as a person there working on the hardware and trying to build a system that builds a safe vehicle and so on, what was your sense about that pressure? Is it unfair? Is it expected of new technology? Yeah, it seems reasonable. I was interested, I talked to both American and European regulators, and I was worried that the regulations would write into the rules technology solutions, like modern brake systems imply hydraulic brakes. So if you read the regulations, to meet the letter of the law for brakes, it sort of has to be hydraulic, right? And the regulator said they're interested in the use cases, like a head on crash, an offset crash, don't hit pedestrians, don't run into people, don't leave the road, don't run a red light or a stoplight. They were very much into the scenarios. And they had all the data about which scenarios injured or killed the most people. And for the most part, those conversations were like, what's the right thing to do to take the next step? Now, Elon's very interested also in the benefits of autonomous driving or freeing people's time and attention, as well as safety. And I think that's also an interesting thing, but building autonomous systems so they're safe and safer than people seemed, since the goal is to be 10X safer than people, having the bar to be safer than people and scrutinizing accidents seems philosophically correct. So I think that's a good thing. What are, is different than the things you worked at, Intel, AMD, Apple, with autopilot chip design and hardware design, what are interesting or challenging aspects of building this specialized kind of computing system in the automotive space? I mean, there's two tricks to building like an automotive computer. One is the software team, the machine learning team is developing algorithms that are changing fast. So as you're building the accelerator, you have this, you know, worry or intuition that the algorithms will change enough that the accelerator will be the wrong one, right? And there's the generic thing, which is, if you build a really good general purpose computer, say its performance is one, and then GPU guys will deliver about 5X to performance for the same amount of silicon, because instead of discovering parallelism, you're given parallelism. And then special accelerators get another two to 5X on top of a GPU, because you say, I know the math is always eight bit integers into 32 bit accumulators, and the operations are the subset of mathematical possibilities. So AI accelerators have a claimed performance benefit over GPUs because in the narrow math space, you're nailing the algorithm. 
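(Editor's note: a tiny model of the "eight bit integers into 32 bit accumulators" contract Keller mentions — the fixed, narrow datatype that lets an accelerator hard-wire its multiply-accumulate path instead of staying general-purpose like a GPU. This is an illustrative NumPy sketch, not any particular chip's datapath.)

```python
import numpy as np

def int8_dot(a: np.ndarray, b: np.ndarray) -> np.int32:
    """Dot product of two int8 vectors accumulated in int32 — the kind of fixed
    contract a convolution accelerator can bake into silicon."""
    assert a.dtype == np.int8 and b.dtype == np.int8
    # Widen each 8-bit product to 32 bits before summing so nothing overflows:
    # |a*b| <= 128*128, and thousands of such terms still fit comfortably in int32.
    return np.sum(a.astype(np.int32) * b.astype(np.int32), dtype=np.int32)

rng = np.random.default_rng(0)
x = rng.integers(-128, 128, size=1024, dtype=np.int8)   # e.g. quantized activations
w = rng.integers(-128, 128, size=1024, dtype=np.int8)   # e.g. quantized weights
print(int8_dot(x, w))
```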
Now, you still try to make it programmable, but the AI field is changing really fast. So there's, you know, there's a little creative tension there of, I want the acceleration afforded by specialization without being over specialized, so that the new algorithm is so much more effective that you'd have been better off on a GPU. So there's a tension there. To build a good computer for an application like automotive, there's all kinds of sensor inputs and safety processors and a bunch of stuff. So one of Elon's goals is to make it super affordable, so every car gets an autopilot computer. So some of the recent startups you look at, they have a server in the trunk, because they're saying, I'm gonna build this autopilot computer that replaces the driver, so their cost budget's 10 or $20,000. And Elon's constraint was, I'm gonna put one in every car, whether people buy autonomous driving or not. So the cost constraint he had in mind was great, right? And to hit that, you had to think about the system design. That's complicated, and it's fun. You know, it's like, it's craftsman's work. Like, you know, a violin maker, right? You can say Stradivarius is this incredible thing, the musicians are incredible. But the guy making the violin, you know, picked wood and sanded it, and then he cut it, you know, and he glued it, you know, and he waited for the right day so that when he put the finish on it, it didn't, you know, do something dumb. That's craftsman's work, right? You may be a genius craftsman because you have the best techniques and you discover a new one, but most engineering is craftsman's work. And humans really like to do that. You know the expression? Smart humans. No, everybody. All humans. I don't know. I used to, I dug ditches when I was in college. I got really good at it. Satisfying. Yeah. So. Digging ditches is also craftsman's work. Yeah, of course. So there's an expression called complex mastery behavior. So when you're learning something, that's fine, because you're learning something. When you do something and it's relatively simple, it's not that satisfying. But if the steps that you have to do are complicated and you're good at them, it's satisfying to do them. And then if you're intrigued by it all, as you're doing them, you sometimes learn new things that let you raise your game. But craftsman's work is good. And engineers, like, engineering is complicated enough that you have to learn a lot of skills. And then a lot of what you do is then craftsman's work, which is fun. Autonomous driving, building a very resource constrained computer, so a computer that has to be cheap enough to put in every single car, that essentially boils down to craftsman's work. It's engineering, it's innovation. Yeah, you know, there's thoughtful decisions and problems to solve and trade offs to make. Do you need 10 camera ports or eight? Are you building for the current car or the next one? You know, how do you do the safety stuff? You know, there's a whole bunch of details. But it's fun. It's not like I'm building a new type of neural network, which has a new mathematics and a new computer to work. You know, that's like, there's more invention than that. But the reduction to practice, once you pick the architecture, you look inside, and what do you see? Adders and multipliers and memories and, you know, the basics. So computers is always this weird set of abstraction layers of ideas and thinking, where the reduction to practice is transistors and wires and, you know, pretty basic stuff.
And that's an interesting phenomenon. By the way, like factory work, lots of people think factory work is rote assembly stuff. I've been on the assembly line. The people who work there really like it. It's a really great job. It's really complicated. Putting cars together is hard, right? And the car is moving and the parts are moving, and sometimes the parts are damaged, and you have to coordinate putting all the stuff together, and people are good at it. They're good at it. And I remember one day I went to work and the line was shut down for some reason, and some of the guys sitting around were really bummed, because they had reorganized a bunch of stuff and they were gonna hit a new record for the number of cars built that day. And they were all gung ho to do it. And these were big, tough buggers. And, you know, but what they did was complicated, and you couldn't do it. Yeah, and I mean... Well, after a while you could, but you'd have to work your way up, because, you know, putting the brights, what's called the brights, the trim on a car, on a moving assembly line, where it has to be attached in 25 places in a minute and a half, is unbelievably complicated. And human beings can do it, they're really good at it. I think that's harder than driving a car, by the way. Putting together, working at a... Working on a factory. Two smart people can disagree. Yay. I think driving a car... We'll get you in the factory someday and then we'll see how you do. No, not for us humans, driving a car is easy. I'm saying building a machine that drives a car is not easy. No, okay. Okay. Driving a car is easy for humans because we've been evolving for billions of years. To drive cars. Yeah, I noticed that. The Paleolithic cars were super cool. No, now you join the rest of the internet in mocking me. Okay. I wasn't mocking, I was just... Yeah, yeah. Intrigued by your anthropology. Yeah, it's... I'll have to go dig into that. There's some inaccuracies there, yes. Okay, but in general, what have you learned in terms of thinking about passion, craftsmanship, tension, chaos... Jesus, the whole mess of it. What have you learned, what have you taken away from your time working with Elon Musk, working at Tesla, which is known to be a place of chaos, innovation, craftsmanship, and all of those things? I really liked the way he thought. You think you have an understanding about what first principles of something is, and then you talk to Elon about it, and you didn't scratch the surface. He has a deep belief that no matter what you do, it's a local maximum, right? And I had a friend, he invented a better electric motor, and it was a lot better than what we were using. And one day he came by and he said, I'm a little disappointed, because this is really great, and you didn't seem that impressed. And I said, when the super intelligent aliens come, are they going to be looking for you? Like, where is he? The guy who built the motor. Yeah. Probably not. You know, like, but doing interesting work that's both innovative and, let's say, craftsman's work on the current thing is really satisfying, and it's good. And that's cool. And then Elon was good at taking everything apart, and like, what's the deep first principle? Oh, no, what's really, no, what's really? You know, that ability to look at it without assumptions and 'how' constraints is super wild. You know, he built a rocket ship, and an electric car, and, you know, everything. And that's super fun, and he's into it, too.
Like, when SpaceX first landed two rockets, at Tesla we had a video projector in the big room, and like 500 people came down, and when they landed, everybody cheered, and some people cried. It was so cool. All right, but how did you do that? Well, it was super hard. And then people say, well, it's chaotic. Really? To get out of all your assumptions, you think that's not gonna be unbelievably painful? And is Elon tough? Yeah, probably. Do people look back on it and say, boy, I'm really happy I had that experience, to go take apart that many layers of assumptions? Sometimes super fun, sometimes painful. So it could be emotionally and intellectually painful, that whole process of just stripping away assumptions. Yeah, imagine 99% of your thought process is protecting your self conception, and 98% of that's wrong. Now, you got the math right? How do you think you're feeling when you get back into that one bit that's useful, and now you're open, and you have the ability to do something different? I don't know if I got the math right. It might be 99.9, but it ain't 50. Imagining even the 50% is hard enough. Now, for a long time, I've suspected you could get better. Like, you can think better, you can think more clearly, you can take things apart. And there's lots of examples of that, people who do that. And Elon is an example of that, you are an example. I don't know if I am, I'm fun to talk to. Certainly. I've learned a lot of stuff. Well, here's the other thing, I joke, like, I read books, and people think, oh, you read books. Well, no, I've read a couple of books a week for 55 years. Well, maybe 50, because I didn't learn to read until I was eight or something. And it turns out, when people write books, they often take 20 years of their life where they passionately did something, reduce it to 200 pages. That's kind of fun. And then you go online, and you can find out who wrote the best books and who liked what, you know, that's kind of wild. So there's this wild selection process, and then you can read it, and for the most part, understand it. And then you can go apply it. Like, I went to one company, I thought, I haven't managed much before. So I read 20 management books, and I started talking to people, and basically, compared to all the VPs running around, I'd read 19 more management books than anybody else. It wasn't even that hard. And half the stuff worked, like, first time. It wasn't even rocket science. But at the core of that is questioning the assumptions, or sort of entering the thinking, first principles thinking, sort of looking at the reality of the situation, and using that knowledge, applying that knowledge. So that's... So I would say my brain has this idea that you can question first assumptions. But I can go days at a time and forget that, and you have to kind of, like, circle back to that observation. Because it is emotionally challenging. Well, it's hard to just keep it front and center, because you operate on so many levels all the time, and getting this done takes priority, or being happy takes priority, or screwing around takes priority. Like, how you go through life is complicated. And then you remember, oh yeah, I could really think first principles. Oh shit, that's tiring. But you do for a while, and that's kind of cool. So just as a last question in your sense, from the big picture, from the first principles, do you think, you kind of answered it already, but do you think autonomous driving is something we can solve on a timeline of years?
So one, two, three, five, 10 years, as opposed to a century? Yeah, definitely. Just to linger on it a little longer, where's the confidence coming from? Is it the fundamentals of the problem, the fundamentals of building the hardware and the software? As a computational problem, understanding ballistics, roles, topography, it seems pretty solvable. And you can see this, like speech recognition, for a long time people are doing frequency and domain analysis, and all kinds of stuff, and that didn't work at all, right? And then they did deep learning about it, and it worked great. And it took multiple iterations. And autonomous driving is way past the frequency analysis point. Use radar, don't run into things. And the data gathering's going up, and the computation's going up, and the algorithm understanding's going up, and there's a whole bunch of problems getting solved like that. The data side is really powerful, but I disagree with both you and Elon. I'll tell Elon once again, as I did before, that when you add human beings into the picture, it's no longer a ballistics problem. It's something more complicated, but I could be very well proven wrong. Cars are highly damped in terms of rate of change. Like the steering system's really slow compared to a computer. The acceleration of the acceleration's really slow. Yeah, on a certain timescale, on a ballistics timescale, but human behavior, I don't know. I shouldn't say. Human beings are really slow too. Weirdly, we operate half a second behind reality. Nobody really understands that one either. It's pretty funny. Yeah, yeah. We very well could be surprised, and I think with the rate of improvement in all aspects on both the compute and the software and the hardware, there's gonna be pleasant surprises all over the place. Speaking of unpleasant surprises, many people have worries about a singularity in the development of AI. Forgive me for such questions. Yeah. When AI improves the exponential and reaches a point of superhuman level general intelligence, beyond the point, there's no looking back. Do you share this worry of existential threats from artificial intelligence, from computers becoming superhuman level intelligent? No, not really. We already have a very stratified society, and then if you look at the whole animal kingdom of capabilities and abilities and interests, and smart people have their niche, and normal people have their niche, and craftsmen have their niche, and animals have their niche. I suspect that the domains of interest for things that are astronomically different, like the whole something got 10 times smarter than us and wanted to track us all down because what? We like to have coffee at Starbucks? Like, it doesn't seem plausible. No, is there an existential problem that how do you live in a world where there's something way smarter than you, and you based your kind of self esteem on being the smartest local person? Well, there's what, 0.1% of the population who thinks that? Because the rest of the population's been dealing with it since they were born. So the breadth of possible experience that can be interesting is really big. And, you know, superintelligence seems likely, although we still don't know if we're magical, but I suspect we're not. And it seems likely that it'll create possibilities that are interesting for us, and its interests will be interesting for that, for whatever it is. 
It's not obvious why its interests would somehow want to fight over some square foot of dirt, or, you know, whatever the usual fears are about. So you don't think it'll inherit some of the darker aspects of human nature? Depends on how you think reality's constructed. So, for whatever reason, human beings are in, let's say, creative tension and opposition with both our good and bad forces. Like, there's lots of philosophical understanding of that. I don't know why that would be different. So you think the evil is necessary for the good? I mean, the tension. I don't know about evil, but, like, we live in a competitive world where your good is somebody else's evil. You know, there's the malignant part of it, but that seems to be self limiting, although occasionally it's super horrible. But yes, there's a debate over ideas, and some people have different beliefs, and that debate itself is a process. So they're arriving at something. Yeah, and why wouldn't that continue? Yeah. But you don't think that whole process will leave humans behind in a way that's painful? Emotionally painful, yes. For the 0.1%, they'll be... Why isn't it already painful for a large percentage of the population? And it is. I mean, society does have a lot of stress in it, about the 1%, and about this, and about that, but, you know, everybody has a lot of stress in their life about what they find satisfying, and, you know, know yourself seems to be the proper dictum, and pursue something that makes your life meaningful seems proper, and there's so many avenues on that. Like, there's so much unexplored space at every single level, you know. I'm somewhat of, my nephew called me a jaded optimist. And, you know, so it's... There's a beautiful tension in that label. But if you were to look back at your life, and could relive a moment, a set of moments, because they were the happiest times of your life, outside of family, what would that be? I don't want to relive any moments. I like that. I like that situation where you have some amount of optimism and then the anxiety of the unknown. So you love the unknown, the mystery of it. I don't know about the mystery. It sure gets your blood pumping. What do you think is the meaning of this whole thing? Of life, on this pale blue dot? It seems to be what it does. Like, the universe, for whatever reason, makes atoms, which make us, and we do stuff. And we figure out things, and we explore things, and... That's just what it is. It's not just... Yeah, it is. Jim, I don't think there's a better place to end it. It's been a huge honor. Well, that was super fun. Thank you so much for talking today. All right, great. Thanks for listening to this conversation, and thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast. You'll get $10, and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to become future leaders and innovators. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, follow on Spotify, support it on Patreon, or simply connect with me on Twitter. And now, let me leave you with some words of wisdom from Gordon Moore: If everything you try works, you aren't trying hard enough. Thank you for listening, and hope to see you next time.
Jim Keller: Moore's Law, Microprocessors, and First Principles | Lex Fridman Podcast #70
The following is a conversation with Vladimir Vapnik, part two, the second time we spoke on the podcast. He's the coinventor of support vector machines, support vector clustering, VC theory, and many foundational ideas in statistical learning. He was born in the Soviet Union, worked at the Institute of Control Sciences in Moscow, then, in the US, worked at AT&T, NEC Labs, Facebook AI Research, and now is a professor at Columbia University. His work has been cited over 200,000 times. The first time we spoke on the podcast was just over a year ago, one of the early episodes. This time we spoke after a lecture he gave titled Complete Statistical Theory of Learning, as part of the MIT series of lectures on deep learning and AI that I organized. I'll release the video of the lecture in the next few days. This podcast and lecture are independent from each other, so you don't need one to understand the other. The lecture is quite technical and math heavy, so if you do watch both, I recommend listening to this podcast first, since the podcast is probably a bit more accessible. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LexPodcast. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. Since Cash App allows you to send and receive money digitally, peer to peer, security in all digital transactions is very important. Let me mention the PCI Data Security Standard, PCI DSS level one, that Cash App is compliant with. I'm a big fan of standards for safety and security, and PCI DSS is a good example of that, where a bunch of competitors got together and agreed that there needs to be a global standard around the security of transactions. Now we just need to do the same for autonomous vehicles and AI systems in general. So, again, if you get Cash App from the App Store or Google Play and use the code LexPodcast, you get $10, and Cash App will also donate $10 to FIRST, one of my favorite organizations that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Vladimir Vapnik. You and I talked about Alan Turing yesterday a little bit, and that he, as the father of artificial intelligence, may have instilled in our field an ethic of engineering and not science, seeking more to build intelligence rather than to understand it. What do you think is the difference between these two paths of engineering intelligence and the science of intelligence? It's a completely different story. Engineering is an imitation of human activity. You have to make a device which behaves as humans behave, has all the functions of humans. It doesn't matter how you do it. But to understand what intelligence is about, it's quite a different problem. So I think, I believe that it's somehow related to the predicates we talked about yesterday, because look at Vladimir Propp's idea.
He just found 31 here, predicates, he called it units, which can explain human behavior, at least in Russian tales. You look at Russian tales and derive from that. And then people realize that it's more wide than in Russian tales. It is in TV, in movie serials and so on and so on. So you're talking about Vladimir Propp, who in 1928 published a book, Morphology of the Folktale, describing 31 predicates that have this kind of sequential structure that a lot of the stories, narratives follow in Russian folklore and in other contexts. We'll talk about it. I'd like to talk about predicates in a focused way, but let me, if you allow me to stay zoomed out on our friend, Alan Turing, and, you know, he inspired a generation with the imitation game. Yes. Do you think if we can linger on that a little bit longer, do you think we can learn, do you think learning to imitate intelligence can get us closer to the science, to understanding intelligence? So why do you think imitation is so far from understanding? I think that it is different between you have different goals. So your goal is to create something, something useful. Yeah. And that is great. And you can see how much things was done and I believe that it will be done even more, it's self driving cars and also the business, it is great. And it was inspired by Turing's vision. But understanding is very difficult. It's more or less philosophical category. What means understand the world? I believe in scheme which starts from Plato, that there exists world of ideas. I believe that intelligence, it is world of ideas, but it is world of pure ideas. And when you combine them with reality things, it creates, as in my case, invariants, which is very specific. And that's, I believe, the combination of ideas in way to constructing invariants. Constructing invariant is intelligence. But first of all, predicate, if you know, predicate and hopefully then not too much predicate exists. For example, 31 predicate for human behavior, it is not a lot. Vladimir Propp used 31, you can even call them predicate, 31 predicates to describe stories, narratives. Do you think human behavior, how much of human behavior, how much of our world, our universe, all the things that matter in our existence can be summarized in predicates of the kind that Propp was working with? I think that we have a lot of form of behavior, but I think that predicate is much less because even in this example, which I gave you yesterday, you saw that predicate can be, one predicate can construct many different invariants depending on your data. They're applying to different data and they give different invariants. So, but pure ideas, maybe not so much. Not so many. I don't know about that, but my guess, I hope that's why challenge about digit recognition, how much you need. I think we'll talk about computer vision and 2D images a little bit in your challenge. That's exactly about intelligence. That's exactly, that's exactly about, no, that hopes to be exactly about the spirit of intelligence in the simplest possible way. Yeah, absolutely you should start the simplest way, otherwise you will not be able to do it. Well, there's an open question whether starting at the MNIST digit recognition is a step towards intelligence or it's an entirely different thing. I think that to beat records using say 100, 200 times less examples, you need intelligence. You need intelligence. So let's, because you use this term and it would be nice, I'd like to ask simple, maybe even dumb questions. 
Let's start with a predicate. In terms of terms and how you think about it, what is a predicate? I don't know. I have a feeling formally they exist, but I believe that predicate for 2D images, one of them is symmetry. Hold on a second. Sorry. Sorry, sorry to interrupt and pull you back. At the simplest level, we're not even, we're not being profound currently. A predicate is a statement of something that is true. Yes. Do you think of predicates as somehow probabilistic in nature or is this binary? This is truly constraints of logical statements about the world. In my definition, the simplest predicate is function. Function, and you can use this function to make inner product that is predicate. What's the input and what's the output of the function? Input is X, something which is input in reality. Say if you consider digit recognition, it pixel space input, but it is function which in pixel space, but it can be any function from pixel space and you choose, and I believe that there are several functions which is important for understanding of images. One of them is symmetry. It's not so simple construction as I described with the derivative, with all this stuff, but another, I believe, I don't know how many, is how well structurized is picture. Structurized? Yeah. What do you mean by structurized? It is formal definition. Say something heavy on the left corner, not so heavy in the middle and so on. You describe in general concept of what you assume. Concepts, some kind of universal concepts. Yeah, but I don't know how to formalize this. Do you? So this is the thing. There's a million ways we can talk about this. I'll keep bringing it up, but we humans have such concepts when we look at digits, but it's hard to put them, just like you're saying now, it's hard to put them into words. You know, that is example, when critics in music, trying to describe music, they use predicate and not too many predicate, but in different combination, but they have some special words for describing music and the same should be for images, but maybe there are critics who understand essence of what this image is about. Do you think there exists critics who can summarize the essence of images, human beings? I hope so, yes, but that... Explicitly state them on paper. The fundamental question I'm asking is, do you think there exists a small set of predicates that will summarize images? It feels to our mind, like it does, that the concept of what makes a two and a three and a four... No, no, no, it's not on this level. It should not describe two, three, four. It describes some construction, which allow you to create invariance. And invariance, sorry to stick on this, but terminology. Invariance, it is property of your image. Say, I can say, looking on my image, it is more or less symmetric. Looking on my image, it is more or less symmetric, and I can give you value of symmetry, say, level of symmetry, using this function which I gave yesterday. And you can describe that your image has these characteristics exactly in the way how musical critics describe music. So, but this is invariant applied to specific data, to specific music, to something. I strongly believe in this plot ideas that there exists world of predicate and world of reality, and predicate and reality is somehow connected, and you have to know that. Let's talk about Plato a little bit. So you draw a line from Plato, to Hegel, to Wigner, to today. So Plato has forms, the theory of forms. 
So there's a world of ideas and a world of things, as you talk about, and there's a connection. And presumably the world of ideas is very small, and the world of things is arbitrarily big, but they're all what Plato calls them like, it's a shadow. The real world is a shadow from the world of forms. Yeah, you have projection of a world of ideas. Yeah, very poetic. In reality, you can realize this projection using these invariants because it is projection for own specific examples, which create specific features of specific objects. So the essence of intelligence is while only being able to observe the world of things, try to come up with a world of ideas. Exactly. Like in this music story, intelligent musical critics knows all these words and have a feeling about what they mean. I feel like that's a contradiction, intelligent music critics. But I think music is to be enjoyed in all its forms. The notion of critic, like a food critic. No, I don't want touch emotion. That's an interesting question. Does emotion... There's certain elements of the human psychology, of the human experience, which seem to almost contradict intelligence and reason. Like emotion, like fear, like love, all of those things, are those not connected in any way to the space of ideas? That I don't know. I just want to be concentrate on very simple story, on digit recognition. So you don't think you have to love and fear death in order to recognize digits? I don't know. Because it's so complicated. It involves a lot of stuff which I never considered. But I know about digit recognition. And I know that for digit recognition, to get records from small number of observations, you need predicate. But not special predicate for this problem. But universal predicate, which understand world of images. Of visual information. Visual, yes. But on the first step, they understand, say, world of handwritten digits, or characters, or something simple. So like you said, symmetry is an interesting one. No, that's what I think one of the predicate is related to symmetry. The level of symmetry. Okay, degree of symmetry. So you think symmetry at the bottom is a universal notion, and there's degrees of a single kind of symmetry, or is there many kinds of symmetries? Many kinds of symmetries. There is a symmetry, antisymmetry, say, letter S. So it has vertical antisymmetry. And it could be diagonal symmetry, vertical symmetry. So when you cut vertically the letter S... Yeah, then the upper part and lower part in different directions. Inverted, along the Y axis. But that's just like one example of symmetry, right? Isn't there like... Right, but there is a degree of symmetry. If you play all this iterative stuff to do tangent distance, whatever I describe, you can have a degree of symmetry. And that is what describing reason of image. It is the same as you will describe this image. Think about digit S, it has antisymmetry. Digit three is symmetric. More or less, look for symmetry. Do you think such concepts like symmetry, predicates like symmetry, is it a hierarchical set of concepts? Or are these independent, distinct predicates that we want to discover as some set of... No, there is an idea of symmetry. And you can, this idea of symmetry, make very general. Like degree of symmetry. If degree of symmetry can be zero, no symmetry at all. Or degree of symmetry, say, more or less symmetrical. But you have one of these descriptions. And symmetry can be different. 
As I told, horizontal, vertical, diagonal, and antisymmetry is also concept of symmetry. What about shape in general? I mean, symmetry is a fascinating notion, but... No, no, I'm talking about digit. I would like to concentrate on all I would like to know, predicate for digit recognition. Yes, but symmetry is not enough for digit recognition, right? It is not necessarily for digit recognition. It helps to create invariant, which you can use when you will have examples for digit recognition. You have regular problem of digit recognition. You have examples of the first class or second class. Plus, you know that there exists concept of symmetry. And you apply, when you're looking for decision rule, you will apply concept of symmetry, of this level of symmetry, which you estimate from... So let's talk. Everything comes from weak convergence. What is convergence? What is weak convergence? What is strong convergence? I'm sorry, I'm gonna do this to you. What are we converging from and to? You're converging, you would like to have a function. The function which, say, indicator function, which indicate your digit five, for example. A classification task. Let's talk only about classification. So classification means you will say whether this is a five or not, or say which of the 10 digits it is. Right, right. I would like to have these functions. Then, I have some examples. I can consider property of these examples. Say, symmetry. And I can measure level of symmetry for every digit. And then I can take average from my training data. And I will consider only functions of conditional probability, which I'm looking for my decision rule. Which applying to digits will give me the same average as I observe on training data. So, actually, this is different level of description of what you want. You want not just, you show not one digit. You show, this predicate, show general property of all digits which you have in mind. If you have in mind digit three, it gives you property of digit three. And you select as admissible set of function, only function, which keeps this property. You will not consider other functions. So, you immediately looking for smaller subset of function. That's what you mean by admissible functions. Admissible function, exactly. Which is still a pretty large, for the number three, is a large. It is pretty large, but if you have one predicate. But according to, there is a strong and weak convergence. Strong convergence is convergence in function. You're looking for the function on one function, and you're looking for another function. And square difference from them should be small. If you take difference in any points, make a square, make an integral, and it should be small. That is convergence in function. Suppose you have some function, any function. So, I would say, I say that some function converge to this function. If integral from square difference between them is small. That's the definition of strong convergence. That definition of strong convergence. Two functions, the integral, the difference, is small. Yeah, it is convergence in functions. Yeah. But you have different convergence in functionals. You take any function, you take some function, phi, and take inner product, this function, this f function. f0 function, which you want to find. And that gives you some value. So, you say that set of functions converge in inner product to this function, if this value of inner product converge to value f0. That is for one phi. 
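In symbols, the two notions being contrasted here can be sketched roughly as follows, with f_0 the target function, f_n a sequence of candidate functions, and phi one fixed test function; the notation is a reconstruction of what is said above, not Vapnik's own formulas.

    Strong convergence (convergence in function):
        \int \big( f_n(x) - f_0(x) \big)^2 \, dx \;\to\; 0

    Convergence of the inner product, for one fixed phi (convergence in functional):
        \int \phi(x)\, f_n(x) \, dx \;\to\; \int \phi(x)\, f_0(x) \, dx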
But weak convergence requires that it converge for any function of Hilbert space. If it converge for any function of Hilbert space, then you will say that this is weak convergence. You can think that when you take integral, that is integral property of function. For example, if you will take sine or cosine, it is coefficient of, say, Fourier expansion. So, if it converge for all coefficients of Fourier expansion, so under some condition, it converge to function you're looking for. But weak convergence means any property. Convergence not point wise, but integral property of function. So, weak convergence means integral property of functions. When I'm talking about predicate, I would like to formulate which integral properties I would like to have for convergence. So, and if I will take one predicated function, which I measure property, if I will use one predicate and say, I will consider only function which give me the same value as this predicate, I selecting set of functions from functions which is admissible in the sense that function which I'm looking for in this set of functions because I checking in training data, it gives the same. Yeah, so it always has to be connected to the training data in terms of? Yeah, but property, you can know independent on training data. And this guy, prop, says that there is formal property, 31 property. A fairy tale, a Russian fairy tale. But Russian fairy tale is not so interesting. More interesting that people apply this to movies, to theater, to different things. And the same works, they're universal. Well, so I would argue that there's a little bit of a difference between the kinds of things that were applied to which are essentially stories and digit recognition. It is the same story. You're saying digits, there's a story within the digit. Yeah. And so but my point is why I hope that it possible to beat record using not 60,000, but say 100 times less. Because instead, you will give predicates. And you will select your decision not from wide set of functions, but from set of functions which keeps this predicates. But predicate is not related just to digit recognition. Right. Like in Plato's case. Do you think it's possible to automatically discover the predicates? So you basically said that the essence of intelligence is the discovery of good predicates. Yeah. Now, the natural question is that's what Einstein was good at doing in physics. Can we make machines do these kinds of discovery of good predicates? Or is this ultimately a human endeavor? That I don't know. I don't think that machine can do. Because according to theory about weak convergence, any function from Hilbert space can be predicated. So you have infinite number of predicate in upper. And before, you don't know which predicate is good and which. But whatever prop show and why people call it breakthrough, that there is not too many predicate which cover most of situation happened in the world. Right. So there's a sea of predicates. And most of the only a small amount are useful for the kinds of things that happen in the world. I think that I would say only small part of predicate very useful. Useful all of them. Only very few are what we should let's call them good predicates. Very good predicates. Very good predicates. So can we linger on it? What's your intuition? Why is it hard for a machine to discover good predicates? Even in my talk described how to do predicate. How to find new predicate. I'm not sure that it is very good. What did you propose in your talk? No. 
In my talk, I gave example for diabetes. Diabetes, yeah. When we achieve some percent. So then we're looking for area where some sort of predicate, which I formulate, does not keep invariant. So if it doesn't keep, I retrain my data. I select only function which keeps this invariant. And when I did it, I improved my performance. I can look for this predicate. I know technically how to do that. And you can, of course, do it using machine. But I'm not sure that we will construct the smartest predicate. But this is the, allow me to linger on it. Because that's the essence. That's the challenge. That is artificial, that's the human level intelligence that we seek, is the discovery of these good predicates. You've talked about deep learning as a way to, the predicates they use and the functions are mediocre. You can find better ones. Let's talk about deep learning. Sure, let's do it. I know only Yann LeCun's convolutional network. And what else? I don't know. And it's a very simple convolution. There's not much else to know. Two pixels, left and right. I can do it like that with one predicate. Convolution is a single predicate. It's single. It's single predicate. Yes, but that's it. You know exactly. You take the derivative for translation and predicate. This should be kept. So that's a single predicate. But humans discovered that one. Or at least. Not it. That is a risk. Not too many predicates. And that is big story, because Yann did it 25 years ago and nothing so clear was added to deep network. And then I don't understand why we should talk about deep network instead of talking about piecewise linear functions which keep this predicate. Well, a counter argument is that maybe the amount of predicates necessary to solve general intelligence, say in the space of images, doing efficient recognition of handwritten digits, is very small. And so we shouldn't be so obsessed about finding. We'll find other good predicates like convolution, for example. There have been other advancements, like if you look at the work with attention, there's attention mechanisms, especially used in natural language, focusing the network's ability to learn which part of the input to look at. The thing is, there's other things besides predicates that are important for the actual engineering mechanism of showing how much you can really do given these predicates. I mean, that's essentially the work of deep learning, is constructing architectures that are able, given the training data, to converge towards a function that can generalize well. It's an engineering problem. Yeah, I understand. But let's talk not on emotional level, but on a mathematical level. You have set of piecewise linear functions. It is all possible neural networks. It's just piecewise linear functions. It's many, many pieces. Large number of piecewise linear functions. Exactly. Very large. Very large. Almost feels like too large. It's still simpler than, say, convolution, than reproducing kernel Hilbert space, which is a Hilbert space of functions. What's Hilbert space? It's space with infinite number of coordinates, say, or function for expansion, something like that. So it's much richer. And when I'm talking about closed form solution, I'm talking about this set of function, not piecewise linear set, which is particular case of it, a small part. So neural networks is a small part of the space of functions you're talking about. Say, small set of functions. Let me take that. But it is fine. It is fine.
I don't want to discuss the small or big. You take advantage. So you have some set of functions. So now, when you're trying to create architecture, you would like to create admissible set of functions, which all your tricks to use not all functions, but some subset of this set of functions. Say, when you're introducing convolutional net, it is way to make this subset useful for you. But from my point of view, convolutional, it is something you want to keep some invariants, say, translation invariants. But now, if you understand this and you cannot explain on the level of ideas what neural network does, you should agree that it is much better to have a set of functions. And they say, this set of functions should be admissible. It must keep this invariant, this invariant, and that invariant. You know that as soon as you incorporate new invariant set of function, because smaller and smaller and smaller. But all the invariants are specified by you, the human. Yeah, but what I hope that there is a standard predicate, like PROPSHOW, that's what I want to find for digit recognition. If we start, it is completely new area, what is intelligence about on the level, starting from Plato's idea, what is world of ideas. And I believe that is not too many. But it is amusing that mathematicians doing something, a neural network in general function, but people from literature, from art, they use this all the time. That's right. Invariants saying, it is great how people describe music. We should learn from that. And something on this level. But so why Vladimir Propp, who was just theoretical, who studied theoretical literature, he found that. You know what? Let me throw that right back at you, because there's a little bit of a, that's less mathematical and more emotional, philosophical, Vladimir Propp. I mean, he wasn't doing math. No. And you just said another emotional statement, which is you believe that this Plato world of ideas is small. I hope. I hope. Do you, what's your intuition, though? If we can linger on it. You know, it is not just small or big. I know exactly. Then when I introducing some predicate, I decrease set of functions. But my goal to decrease set of function much. By as much as possible. By as much as possible. Good predicate, which does this, then I should choose next predicate, which decrease set as much as possible. So set of good predicate, it is such that they decrease this amount of admissible function. So if each good predicate significantly reduces the set of admissible functions, that there naturally should not be that many good predicates. No, but if you reduce very well the VC dimension of the function, of admissible set of function, it's small. And you need not too much training data to do well. And VC dimension, by the way, is some measure of capacity of this set of functions. Right. Roughly speaking, how many function in this set. So you're decreasing, decreasing. And it makes easy for you to find function you're looking for. But the most important part, to create good admissible set of functions. And it probably, there are many ways. But the good predicates such that they can do that. So for this duck, you should know a little bit about duck. Because what are the three fundamental laws of ducks? Looks like a duck, swims like a duck, and quacks like a duck. You should know something about ducks to be able to. Not necessarily. Looks like, say, horse. It's also good. So it's not, it generalizes from ducks. And talk like, and make sound like horse or something. 
And run like horse, and moves like horse. It is general, it is general predicate that this applied to duck. But for duck, you can say, play chess like duck. You cannot say play chess like duck. Why not? So you're saying you can, but that would not be a good. No, you will not reduce a lot of functions. You would not do, yeah, you would not reduce the set of functions. So you can, the story is formal story, mathematical story. Is that you can use any function you want as a predicate. But some of them are good, some of them are not, because some of them reduce a lot of functions to admissible set, some of them not. But the question is, and I'll probably keep asking this question, but how do we find such, what's your intuition? Handwritten recognition. How do we find the answer to your challenge? Yeah, I understand it like that. I understand what. What defined? What it means, a new predicate. Yeah. Like guy who understands music can say this word, which he described when he listened to music. He understands music. He uses not too many different, oh, you can do like Propp. You can make collection. What he is talking about music, about this, about that. It's not too many different situations he described. Because we mentioned Vladimir Propp a bunch. Let me just mention, there's a sequence of 31 structural notions that are common in stories. And I think. You call it units. Units. And I think they resonate. I mean, it starts, just to give an example, absention, a member of the hero's community or family leaves the security of the home environment. Then it goes to the interdiction, a forbidding edict or command is passed upon the hero. Don't go there. Don't do this. The hero is warned against some action. Then step three, violation of interdiction. Break the rules, break out on your own. Then reconnaissance. The villain makes an effort to attain knowledge needed to fulfill their plan, and so on. It goes on like this, ends in a wedding, number 31. Happily ever after. No, he just gave description of all situations. He understands this world. Of folktales. Yeah, not folktales, but stories. And these stories not in just folktales. These stories in detective serials as well. And probably in our lives. We probably live. Read this. And then they wrote that this predicate is good for different situation. For movie, for theater. By the way, there's also criticism, right? There's another way to interpret narratives, from Claude Lévi-Strauss. I don't know. I am not in this business. No, I know, it's theoretical literature, but it's looking at paradigms behind things. It's always the discussion, yeah. But at least there is units. It's not too many units that can describe. But this guy probably gives another units. Or another way of... Exactly, another set of units. Another set of predicates. It doesn't matter how. But they exist. Probably. My question is, whether given those units, whether without our human brains to interpret these units, they would still hold as much power as they have. Meaning, are those units enough when we give them to an alien species? Let me ask you. Do you understand digit images? No, I don't understand. No, no, no. When you can recognize these digit images, it means that you understand. Yes, exactly. You understand characters, you understand... No, no, no, no. It's the imitation versus understanding question, because I don't understand the mechanism by which I understand. No, no, no. I'm not talking about, I'm talking about predicates.
You understand that it involves symmetry, maybe structure, maybe something else. I cannot formulate. I just was able to find symmetries, degree of symmetries. That's really good. So this is a good line. I feel like I understand the basic elements of what makes a good hand recognition system my own. Like symmetry connects with me. It seems like that's a very powerful predicate. My question is, is there a lot more going on that we're not able to introspect? Maybe I need to be able to understand a huge amount in the world of ideas, thousands of predicates, millions of predicates in order to do hand recognition. I don't think so. So both your hope and your intuition are such that very few predicates are enough. You're using digits, you're using examples as well. Theory says that if you will use all possible functions from Hilbert space, all possible predicate, you don't need training data. You just will have admissible set of function which contain one function. Yes. So the trade off is when you're not using all predicates, you're only using a few good predicates you need to have some training data. Yes, exactly. The more good predicates you have, the less training data you need. Exactly. That is intelligent. Still, okay, I'm gonna keep asking the same dumb question, handwritten recognition to solve the challenge. You kind of propose a challenge that says we should be able to get state of the art MNIST error rates by using very few, 60, maybe fewer examples per digit. What kind of predicates do you think it will look like? That is the challenge. So people who will solve this problem, they will answer. Do you think they'll be able to answer it in a human explainable way? They just need to write function, that's it. But so can that function be written, I guess, by an automated reasoning system? Whether we're talking about a neural network learning a particular function or another mechanism? No, I'm not against neural network. I'm against admissible set of function which create neural network. You did it by hand. You don't do it by invariance, by predicate, by reason. But neural networks can then reverse, do the reverse step of helping you find a function that just, the task of a neural network is to find a disentangled representation, for example, that they call, is to find that one predicate function that's really capture some kind of essence. One, not the entire essence, but one very useful essence of this particular visual space. Do you think that's possible? Listen, I'm grasping, hoping there's an automated way to find good predicates, right? So the question is what are the mechanisms of finding good predicates, ideas that you think we should pursue? A young grad student listening right now. I gave example. So find situation where predicate which you're suggesting don't create invariant. It's like in physics. Find situation where existing theory cannot explain it. Find situation where the existing theory can't explain it. So you're finding contradictions. Find contradiction, and then remove this contradiction. But in my case, what means contradiction, you find function which, if you will use this function, you're not keeping invariants. This is really the process of discovering contradictions. Yeah. It is like in physics. Find situation where you have contradiction for one of the property, for one of the predicate. Then include this predicate, making invariants, and solve again this problem. Now you don't have contradiction. 
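As a minimal sketch of the loop just described, here is one possible rendering in Python: compute a "degree of symmetry" predicate for each image, take its average on the training examples of a class, and check whether a candidate decision rule keeps that invariant on the images it assigns to the class; predicates whose invariants are violated get added to the constraints before retraining. The particular formulas, the second toy predicate, the tolerance, and the random stand-in data are assumptions made only for illustration, not Vapnik's actual procedure.

    import numpy as np

    def degree_of_symmetry(img):
        # One possible "degree of symmetry" predicate for a 2D image:
        # compare the image with its left-right mirror and map the
        # discrepancy into [0, 1], where 1 means perfectly symmetric.
        mirrored = np.fliplr(img)
        diff = np.abs(img - mirrored).sum()
        norm = 2.0 * np.abs(img).sum() + 1e-12
        return 1.0 - min(1.0, diff / norm)

    def top_heaviness(img):
        # Another toy predicate: fraction of "ink" in the upper half.
        half = img.shape[0] // 2
        return img[:half].sum() / (img.sum() + 1e-12)

    def invariant_kept(predicate, train_imgs, accepted_imgs, tol=0.05):
        # The invariant: images the candidate rule assigns to the class
        # should show the same average predicate value as the training
        # examples of that class, up to a tolerance.
        return abs(np.mean([predicate(x) for x in train_imgs]) -
                   np.mean([predicate(x) for x in accepted_imgs])) <= tol

    # Toy data: random arrays standing in for digits of one class and for
    # the images a current candidate decision rule assigns to that class.
    rng = np.random.default_rng(0)
    train = [rng.random((28, 28)) for _ in range(60)]
    accepted_by_rule = [rng.random((28, 28)) for _ in range(100)]

    # Look for predicates whose invariant the current rule violates, and
    # add them to the active constraints.
    candidates = {"symmetry": degree_of_symmetry, "top_heaviness": top_heaviness}
    active = [name for name, p in candidates.items()
              if not invariant_kept(p, train, accepted_by_rule)]
    print("predicates to enforce before retraining:", active)

After retraining under the newly enforced invariants, the same check can be repeated until no candidate predicate is violated.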
But it is not the best way, probably, I don't know, to looking for predicate. That's just one way, okay. That, no, no, it is brute force way. The brute force way. What about the ideas of what, big umbrella term of symbolic AI? There's what in the 80s with expert systems, sort of logic reasoning based systems. Is there hope there to find some, through sort of deductive reasoning, to find good predicates? I don't think so. I think that just logic is not enough. It's kind of a compelling notion, though. You know, that when smart people sit in a room and reason through things, it seems compelling. And making our machines do the same is also compelling. So, everything is very simple. When you have infinite number of predicate, you can choose the function you want. You have invariants and you can choose the function you want. But you have to have not too many invariants to solve the problem. So, and have from infinite number of function to select finite number and hopefully small number of functions, which is good enough to extract small set of admissible functions. So, they will be admissible, it's for sure, because every function just decrease set of function and leaving it admissible. But it will be small. But why do you think logic based systems don't, can't help, intuition, not? Because you should know reality. You should know life. This guy like Propp, he knows something. And he tried to put in invariant his understanding. That's the human, yeah, but see, you're putting too much value into Vladimir Propp knowing something. No, it is, in the story, what means you know life? What it means? You know common sense. No, no, you know something. Common sense, it is some rules. You think so? Common sense is simply rules? Common sense is every, it's mortality, it's fear of death, it's love, it's spirituality, it's happiness and sadness. All of it is tied up into understanding gravity, which is what we think of as common sense. I don't really need to discuss so wide. I want to discuss, understand digit recognition. Anytime I bring up love and death, you bring it back to digit recognition, I like it. No, you know, it is durable because there is a challenge. Yeah. Which I see how to solve it. If I will have a student concentrate on this work, I will suggest something to solve. You mean handwritten record? Yeah, it's a beautifully simple, elegant, and yet. I think that I know invariants which will solve this. You do? I think so, yes. But it is not universal, it is maybe, I want some universal invariants which are good not only for digit recognition, for image understanding. So let me ask, how hard do you think is 2D image understanding? So if we, we can kind of intuit handwritten recognition. How big of a step, leap, journey is it from that? If I gave you good, if I solved your challenge for handwritten recognition, how long would my journey then be from that to understanding more general, natural images? Immediately, you will understand this as soon as you will make a record. Because it is not for free. As soon as you will create several invariants which will help you to get the same performance that the best neural net did using 100, there might be more than 100 times less examples, you have to have something smart to do that. And you're saying? That is invariant, it is predicate. Because you should put some idea how to do that. But okay, let me just pause. Maybe it's a trivial point, maybe not. But handwritten recognition feels like a 2D, two dimensional problem. 
And it seems like how much complicated is the fact that most images are projection of a three dimensional world onto a 2D plane. It feels like for a three dimensional world, we need to start understanding common sense in order to understand an image. It's no longer visual shape and symmetry. It's having to start to understand concepts of, understand life. Yeah, you're talking that there are different invariant, different predicate, yeah. And potentially much larger number. You know, maybe, but let's start from simple. Yeah, but you said that it would be immediate. No, you know, I cannot think about things which I don't understand. This I understand, but I'm sure that I don't understand everything there. Yeah, that's the difference. Do as simple as possible, but not simpler. And that is exact case. With handwritten. With handwritten. Yeah, but that's the difference between you and I. I welcome and enjoy thinking about things I completely don't understand. Because to me, it's a natural extension, without having solved handwritten recognition, to wonder how difficult is the next step of understanding 2D, 3D images. Because ultimately, while the science of intelligence is fascinating, it's also fascinating to see how that maps to the engineering of intelligence. And recognizing handwritten digits is not, doesn't help you, it might, it may not help you with the problem of general intelligence. We don't know. It'll help you a little bit. We don't know how much. It's unclear. It's unclear. Yeah. It might very much. But I would like to make a remark. Yes. I start not from very primitive problem, make a challenge problem. I start with very general problem, with Plato. So you understand, and it comes from Plato to digit recognition. So you basically took Plato and the world of forms and ideas and mapped and projected it into the clearest, simplest formulation of that big world. You know, I would say that I did not understand Plato until recently, and until I consider the convergence and then predicate, and then, oh, this is what Plato told. So. Can you linger on that? Like why, how do you think about this world of ideas and world of things in Plato? No, it is metaphor. It is. It's a metaphor, for sure. Yeah. It's a compelling, it's a poetic and a beautiful metaphor. Yeah, yeah, yeah. But what, can you? But it is a way how you should try to understand how to take ideas into the world. So from my point of view, it is very clear, but it is a line. All the time, people looking for that. Say, Plato, then Hegel, whatever is reasonable, it exists, whatever exists, it is reasonable. I don't know what he have in mind, reasonable. Right, these philosophers again, their words. No, no, no, no, no, no, no. It is the next step, of Wigner. That mathematics understands something of reality. It is the same Plato line. And then it comes suddenly to Vladimir Propp. Look, 31 ideas, 31 units, and this covers everything. There's abstractions, ideas that represent our world. Our world, and we should always try to reach into that. Yeah, but you should make a projection on reality. But understanding is, it is abstract ideas. You have in your mind several abstract ideas which you can apply to reality. And reality in this case, so if you look at machine learning, is data. This example, data. Data. Okay, let me put this on you because I'm an emotional creature. I'm not a mathematical creature like you. I find compelling the idea, forget the space, the sea of functions. There's also a sea of data in the world.
And I find compelling that there might be, like you said, teacher, small examples of data that are most useful for discovering good, whether it's predicates or good functions, that the selection of data may be a powerful journey, a useful, you know, coming up with a mechanism for selecting good data might be useful too. Do you find this idea of finding the right data set interesting at all? Or do you kind of take the data set as a given? I think that it is, you know, my theme is very simple. You have huge set of functions. If you will apply, and you have not too many data, if you pick up function which describes this data, you will do not very well. You will. Like randomly pick up. Yeah, you will overfit. Yeah, it will be overfitting. So you should decrease set of function from which you're picking up one. So you should go somehow to admissible set of function. And this, what about weak conversions? So, but from another point of view, to make admissible set of function, you need just a DG, just function which you will take in inner product, which you will measure property of your function. And that is how it works. No, I get it, I get it, I understand it, but do you, the reality is. But let's think about examples. You have huge set of function, and you have several examples. If you just trying to keep, take function which satisfies these examples, you still will overfit. You need decrease, you need admissible set of function. Absolutely, but what, say you have more data than functions. So sort of consider the, I mean, maybe not more data than functions, because that's impossible. But what, I was trying to be poetic for a second. I mean, you have a huge amount of data, a huge amount of examples. But amount of function can be even bigger. It can get bigger, I understand. Everything is. There's always a bigger boat. Full Hilbert space. I got you, but okay. But you don't find the world of data to be an interesting optimization space. Like the optimization should be in the space of functions. Creating admissible set of functions. Admissible set of functions. No, you know, even from the classical business theory, from structure risk minimization, you should organize function in the way that they will be useful for you. Right. And that is admissible set. The way you're thinking about useful is you're given a small set of examples. Useful small, small set of function which contain function I'm looking for. Yeah, but looking for based on the empirical set of small examples. Yeah, but that is another story. I don't touch it. Because I believe that this small examples is not too small. Say 60 per class. Law of large numbers works. I don't need uniform law. The story is that in statistics there are two law. Law of large numbers and uniform law of large numbers. So I want to be in situation where I use law of large numbers but not uniform law of large numbers. Right, so 60 is law of large, it's large enough. I hope, no, it still need some evaluations, some bonds. But the idea is the following that if you trust that say this average gives you something close to expectations so you can talk about that, about this predicate. And that is basis of human intelligence. Good predicates is the, the discovery of good predicates is the basis of human intelligence. It is discoverer of your understanding world. Of your methodology of understanding world. Because you have several function which you will apply to reality. Can you say that again? So you're... You have several functions predicate. But they're abstract. 
Yes. Then you will apply them to reality, to your data. And you will create in this way predicate. Which is useful for your task. But predicate are not related specifically to your task. To this your task. It is abstract functions. Which being applying, applied to... Many tasks that you might be interested in. It might be many tasks, I don't know. Or... Different tasks. Well they should be many tasks, right? I believe like, like in prop case. It was for fairytales, but it's happened everywhere. Okay, so we talked about images a little bit. But, can we talk about Noam Chomsky for a second? No, I believe I... I don't know him very well. Personally, well... Not personally, I don't know. His ideas. His ideas. Well let me just say, do you think language, human language, is essential to expressing ideas? As Noam Chomsky believes. So like, language is at the core of our formation of predicates. The human language. For me, language and all the story of language is very complicated. I don't understand this. And I am not... I thought about... Nobody does. I am not ready to work on that. Because it's so huge. It is not for me, and I believe not for our century. The 21st century. Not for 21st century. You should learn something, a lot of stuff, from simple task like digit recognition. So you think, okay, you think digital recognition, 2D image, how would you more abstractly define digit recognition? It's 2D image, symbol recognition, essentially. I mean, I'm trying to get a sense, sort of thinking about it now, having worked with MNIST forever, how small of a subset is this of the general vision recognition problem and the general intelligence problem? Is it... Yeah. Is it a giant subset? Is it not? And how far away is language? You know, let me refer to Einstein. Take the simplest problem, as simple as possible, but not simpler. And this is challenge, this simple problem. But it's simple by idea, but not simple to get it. When you will do this, you will find some predicate, which helps it a bit. Well, yeah, I mean, with Einstein, you can, you look at general relativity, but that doesn't help you with quantum mechanics. That's another story. You don't have any universal instrument. Yes, so I'm trying to wonder which space we're in, whether handwritten recognition is like general relativity, and then language is like quantum mechanics. So you're still gonna have to do a lot of mess to universalize it. But I'm trying to see, so what's your intuition why handwritten recognition is easier than language? Just, I think a lot of people would agree with that, but if you could elucidate sort of the intuition of why. I don't know, no, I don't think in this direction. I just think in directions that this is problem, which if we will solve it well, we will create some abstract understanding of images. Maybe not all images. I would like to talk to guys who doing in real images in Columbia University. What kind of images, unreal? Real images. Real images. Yeah, what they're ready, is there a predicate, what can be predicate? I still symmetry will play role in real life images, in any real life images, 2D images. Let's talk about 2D images. Because that's what we know. A neural network was created for 2D images. So the people I know in vision science, for example, the people who study human vision, that they usually go to the world of symbols and like handwritten recognition, but not really, it's other kinds of symbols to study our visual perception system. 
As far as I know, not much predicate type of thinking is understood about our vision system. They did not think in this direction. They don't, yeah, but how do you even begin to think in that direction? That's a, I would like to discuss with them. Yeah. Because if we will be able to show that this is what is working, and the theoretical scheme, it's not so bad. So the unfortunate, so if we compare to language, language is like letters, finite set of letters, and a finite set of ways you can put together those letters. So it feels more amenable to kind of analysis. With natural images, there is so many pixels. No, no, no, letter, language is much, much more complicated. It's involved a lot of different stuff. It's not just understanding of very simple class of tasks. I would like to see list of tasks with language involved. Yes, so there's a lot of nice benchmarks now in natural language processing, from the very trivial, like understanding the elements of a sentence, to question answering, to much more complicated, where you talk about open domain dialogue. The natural question is, with handwritten recognition, is really the first step of understanding visual information. Right. But even our records show that we go in the wrong direction, because we need 60,000 digits. So even this first step, so forget about talking about the full journey, this first step should be taken in the right direction. No, no, wrong direction, because 60,000 is unacceptable. No, I'm saying it should be taken in the right direction, because 60,000 is not acceptable. If you can talk, it's great, we have half percent of error. And hopefully the step from doing hand recognition using very few examples, the step towards what babies do when they crawl and understand their physical environment. I know you don't know about babies. If you will do from very small examples, you will find principles which are different from what we're using now. And so it's more or less clear. That means that you will use weak convergence, not just strong convergence. Do you think these principles will naturally be human interpretable? Oh, yeah. So like when we'll be able to explain them and have a nice presentation to show what those principles are, or are they very, going to be very kind of abstract kinds of functions? For example, I talked yesterday about symmetry. Yes. And I gave very simple examples. The same will be like that. You gave like a predicate of a basic for? For symmetries. Yes, for different symmetries and you have for? Degree of symmetries, that is important. Not just symmetry exists or doesn't exist, but degree of symmetry. Yeah, for handwritten recognition. No, it's not for handwritten, it's for any images. But I would like to apply it to handwritten. Right, in theory it's more general, okay, okay. So a lot of the things we've been talking about falls, we've been talking about philosophy a little bit, but also about mathematics and statistics. A lot of it falls into this idea, a universal idea of statistical theory of learning. What is the most beautiful and sort of powerful or essential idea you've come across, even just for yourself personally, in the world of statistics or statistical theory of learning? Probably uniform convergence, which we did with Alexey Chervonenkis. Can you describe uniform convergence? You have law of large numbers. So for any function, expectation of function, average of function converges to expectation. But if you have set of functions, for any function it is true.
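A rough rendering of the distinction being drawn, not in Vapnik's exact notation: for one fixed function f and l independent samples, the ordinary law of large numbers says

    \frac{1}{l} \sum_{i=1}^{l} f(x_i) \;\to\; \mathbb{E}\,[\, f(x) \,] \quad \text{as } l \to \infty,

while the uniform law studied by Vapnik and Chervonenkis asks that the deviation vanish simultaneously over a whole set F of functions,

    \sup_{f \in F} \left| \frac{1}{l} \sum_{i=1}^{l} f(x_i) - \mathbb{E}\,[\, f(x) \,] \right| \;\to\; 0.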
But it should converge simultaneously for all set of functions. And for learning, you need uniform convergence. Just convergence is not enough. Because when you pick up one which gives minimum, you can pick up one function which does not converge and it will give you the best answer for this function. So you need uniform convergence to guarantee learning. So learning does not rely on the trivial law of large numbers, it relies on the uniform law. But idea of convergence exists in statistics for a long time. But it is interesting that, as I think about myself, how stupid I was 50 years, I did not see weak convergence. I worked on strong convergence. But now I think that most powerful is weak convergence. Because it makes admissible set of functions. And even in all proverbs, when people try to understand recognition, about the duck law, looks like a duck and so on, they use weak convergence. People in language, they understand this. But when we're trying to create artificial intelligence, we went in a different way. We just consider strong convergence arguments. So reducing the set of admissible functions, you think there should be effort put into understanding the properties of weak convergence? You know, in classical mathematics, in Hilbert space, there are only two ways, two forms of convergence, strong and weak. Now we can use both. That means that we did everything. And it so happened that when we use Hilbert space, which is very rich space, space of continuous functions which have integrable square. So we can apply weak and strong convergence for learning and have closed form solution. So it is computationally simple. For me, it is sign that it is right way. Because you don't need any heuristic here, just do whatever you want. But now the only thing that is left is this concept of what is predicate, but it is not statistics. By the way, I like the fact that you think that heuristics are a mess that should be removed from the system. So closed form solution is the ultimate goal. No, it so happened that when you're using right instrument, you have closed form solution. Do you think intelligence, human level intelligence, when we create it, will have something like a closed form solution? You know, now I'm looking on bounds, which I gave, bounds for convergence. And when I'm looking for bounds, I'm thinking what the most appropriate kernel for this bound would be. So we know that in, say, all our business, we use radial basis functions. But looking on the bound, I think that I start to understand that maybe we need to make corrections to radial basis function to be closer, to work better for these bounds. So I'm again trying to understand what type of kernel has best approximation, best fit to this bound. Sure, so there's a lot of interesting work that could be done in discovering better functions than radial basis functions for bounds you find. It still comes from, you're looking to math and trying to understand what. From your own mind, looking at the, I don't know. Then I'm trying to understand what will be good for that. Yeah, but to me, there's still a beauty. Again, maybe I'm a descendant of Alan Turing to heuristics. To me, ultimately, intelligence will be a mess of heuristics. And that's the engineering answer, I guess. Absolutely. When you're doing, say, self driving cars, the great guy who will do this. It doesn't matter what theory behind that. Who has a better feeling how to apply it. But by the way, it is the same story about predicates.
Because you cannot create rule for, situation is much more than you have rule for that. But maybe you can have more abstract rule, then it will be less literal. It is the same story about ideas and ideas applied to specific cases. But still you should reach. You cannot avoid this. Yes, of course. But you should still reach for the ideas to understand the science. Okay, let me kind of ask, do you think neural networks or functions can be made to reason? So what do you think, we've been talking about intelligence, but this idea of reasoning, there's an element of sequentially disassembling, interpreting the images. So when you think of handwritten recognition, we kind of think that there'll be a single, there's an input and output. There's not a recurrence. What do you think about sort of the idea of recurrence, of going back to memory and thinking through this sort of sequentially mangling the different representations over and over until you arrive at a conclusion? Or is ultimately all that can be wrapped up into a function? No, you're suggesting that let us use this type of algorithm. When I started thinking, I first of all, starting to understand what I want. Can I write down what I want? And then I'm trying to formalize. And when I do that, I think I have to solve this problem. And till now I did not see a situation where you need recurrence. But do you observe human beings? Yeah. You try to, it's the imitation question, right? It seems that human beings reason this kind of sequentially sort of, does that inspire in you a thought that we need to add that into our intelligence systems? You're saying, okay, I mean, you've kind of answered saying until now I haven't seen a need for it. And so because of that, you don't see a reason to think about it. You know, most of things I don't understand. Reasoning in human, it is for me too complicated. For me, the most difficult part is to ask questions, good questions, how it works, how people asking questions, I don't know this. You said that machine learning is not only about technical things, speaking of questions, but it's also about philosophy. So what role does philosophy play in machine learning? We talked about Plato, but generally thinking in this philosophical way, does it have, how does philosophy and math fit together in your mind? First ideas and then their implementation. It's like predicate, like say admissible set of functions. It comes together, everything. Because the first iteration of theory was done 50 years ago. I told that, this is theory. So everything's there, if you have data you can, and your set of function has not big capacity. So low VC dimension, you can do that. You can make structural risk minimization, control capacity. But we were not able to make admissible set of function good. Now when we suddenly realize that we did not use another idea of convergence, which we can, everything comes together. But those are mathematical notions. Philosophy plays a role of simply saying that we should be swimming in the space of ideas. Let's talk what is philosophy. Philosophy means understanding of life. So understanding of life, say people like Plato, they understand on very high abstract level of life. So, and whatever I am doing, just implementation of my understanding of life. But every new step, it is very difficult. For example, to find this idea that we need weak convergence was not simple for me. So that required thinking about life a little bit. Hard to trace, but there was some thought process.
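For reference, the capacity control mentioned here is usually expressed through a bound of roughly the following form; this is one standard statement of the Vapnik-Chervonenkis result, quoted from memory, and the constants differ between formulations. With probability at least 1 - eta, for every function in a set of VC dimension h,

    R(f) \;\le\; R_{emp}(f) \;+\; \sqrt{ \frac{ h \left( \ln \frac{2l}{h} + 1 \right) - \ln \frac{\eta}{4} }{ l } },

where l is the number of training examples, R_emp the empirical risk, and R the expected risk. The smaller the admissible set, the smaller h, and the less training data is needed for the two risks to stay close.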
I'm working, I'm thinking about the same problem for 50 years or more, and again, and again, and again. I'm trying to be honest and that is very important. Not to be very enthusiastic, but concentrate on whatever we was not able to achieve, for example. And understand why. And now I understand that because I believe in math, I believe in Wigner's idea. But now when I see that there are only two ways of convergence and we're using both, that means that we must do as well as people doing. But now, exactly in philosophy and what we know about predicate, how we understand life, can we describe as a predicate. I thought about that, and that is more or less obvious, level of symmetry. But next, I have a feeling, it's something about structures. But I don't know how to formulate, how to measure this measure of structure and all this stuff. And the guy who will solve this challenge problem, then when we will be looking how he did it, probably just only symmetry is not enough. But something like symmetry will be there. Structure will be there. Oh yeah, absolutely. Symmetry will be there and level of symmetry will be there. And level of symmetry, antisymmetry, diagonal, vertical. And I even don't know how you can use in different direction idea of symmetry, it's very general. But it will be there. I think that people are very sensitive to idea of symmetry. But there are several ideas like symmetry. That I would like to learn. But you cannot learn just thinking about that. You should do challenging problems and then analyze them, why it was able to solve them. And then you will see. Very simple things, it's not easy to find. But even with talking about this every time. I was surprised, I tried to understand. These people describe in language strong convergence mechanism for learning. I did not see, I don't know. But weak convergence, this duck story and stories like that, when you will explain to kid, you will use weak convergence argument. It looks like it, it does like it does that. But when you try to formalize, you're just ignoring this. Why, why 50 years from start of machine learning? And that's the role of philosophy, thinking about life. I think that maybe, I don't know. Maybe this is theory also, we should blame for that, because empirical risk minimization and all this stuff. And if you read now textbooks, they are just about bounds, about empirical risk minimization. They don't looking for another problem like admissible set. But on the topic of life, perhaps we, you could talk in Russian for a little bit. What's your favorite memory from childhood? What's your favorite memory from childhood? Oh, music. How about, can you try to answer in Russian? Music? It was very cool when... What kind of music? Classic music. What's your favorite? Well, different composers. At first, it was Vivaldi, I was surprised that it was possible. And then when I understood Bach, I was absolutely shocked. By the way, from him I think that there is a predicate, like a structure. In Bach? Well, of course. Because you can just feel the structure. And I don't think that different elements of life are very much divided, in the sense of predicates. Everywhere structure, in painting structure, in human relations structure. Here's how to find these high level predicates, it's... In Bach and in life, everything is connected. Now that we're talking about Bach, let's switch back to English, because I like Beethoven and Chopin, so... Well, Chopin, it's another amusing story.
But Bach, if we talk about predicates, Bach probably has the most sort of well defined predicates that underlie it. It is very interesting to read what critics are writing about Bach, which words they're using. They're trying to describe predicates. And then Chopin, it is very different vocabulary, very different predicates. And I think that if you will make collection of that, so maybe from this you can describe predicate for digit recognition as well. From Bach and Chopin. No, no, no, not from Bach and Chopin. From the critic interpretation of the music, yeah. When they're trying to explain you music, what they use. As they use, they describe high level ideas, Plato's ideas, what is behind this music. That's brilliant. So art is not self explanatory in some sense. So you have to try to convert it into ideas. It is ill posed problem. When you go from ideas to the representation, it is easy way. But when you're trying to go back, it is ill posed problem. But nevertheless, I believe that when you're looking from that, even from art, you will be able to find predicates for digit recognition. That's such a fascinating and powerful notion. Do you ponder your own mortality? Do you think about it? Do you fear it? Do you draw insight from it? About mortality, no, yeah. Are you afraid of death? Not too much, not too much. It is pity that I will not be able to do something which I think I have a feeling to do that. For example, I will be very happy to work with guys, theoreticians from music, to write this collection of descriptions, how they describe music, how they use that predicate, and from art as well. Then take what is in common and try to understand predicate which is absolute for everything. And then use that for visual recognition and see if there is a connection. Yeah, exactly. Ah, there's still time. We got time. Ha ha ha ha. Yeah. We got time. It takes years and years and years. Yes, yeah, it's a long way. Well, see, you've got the patient mathematician's mind. I think it could be done very quickly and very beautifully. I think it's a really elegant idea. Yeah, but also. Some of many. Yeah, you know, the most time, it is not to make this collection, it is to understand what is the common, to think about that once again and again and again. Again and again and again, but I think sometimes, especially just when you say this idea now, even just putting together the collection and looking at the different sets of data, language, trying to interpret music, criticize music, and images, I think there'll be sparks of ideas that'll come. Of course, again and again, you'll come up with better ideas, but even just that notion is a beautiful notion. I even have some example. Yes, so I have friend who was specialist in Russian poetry. She is professor of Russian poetry. She did not write poems, but she knows a lot of stuff. She made books, several books, and one of them is a collection of Russian poetry. She has images of Russian poetry. She collected all images of Russian poetry. And I asked her to do the following. You have MNIST digit recognition, and we got 100 digits, or maybe less than 100. I don't remember, maybe 50 digits. And try, from poetical point of view, to describe every image which she sees, using only words of images of Russian poetry. And she did it. And then we tried to, I call it learning using privileged information. I call it privileged information. You have two languages. One language is just image of digit, and another language, poetic description of this image. And this is privileged information.
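Vapnik's published algorithm for this setting, SVM+, is more involved than anything that fits here. As a loose, minimal sketch of the general idea that a second "language" available only during training can help, here is a distillation-style version in Python: a teacher model sees the privileged representation, and a student model, which only ever sees the ordinary features, learns from the teacher's soft outputs. The synthetic data, the feature split, and the use of scikit-learn's LogisticRegression and Ridge are illustrative assumptions, not the actual method used with the poetry descriptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression, Ridge

    rng = np.random.default_rng(0)
    n = 200
    x = rng.normal(size=(n, 10))                              # ordinary features, always available
    x_star = np.hstack([x[:, :2], rng.normal(size=(n, 3))])   # privileged features, training only
    y = (x[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)  # labels

    # Teacher: trained with access to the privileged representation.
    teacher = LogisticRegression().fit(x_star, y)
    soft_labels = teacher.predict_proba(x_star)[:, 1]

    # Student: sees only the ordinary features and regresses on the
    # teacher's soft outputs instead of the hard labels alone.
    student = Ridge(alpha=1.0).fit(x, soft_labels)

    # At test time only the ordinary features exist.
    x_test = rng.normal(size=(5, 10))
    print((student.predict(x_test) > 0.5).astype(int))

The point of the sketch is only the asymmetry: the privileged representation influences what the student learns, yet at test time the student is evaluated on the ordinary features alone.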
And there is an algorithm when you're working using privileged information, you're doing better. Much better, so. So there's something there. Something there. And there is, in NEC, she unfortunately died, the collection of digits and poetic descriptions of these digits. Yeah. So there's something there in that poetic description. But I think that there are abstract ideas on the Plato level of ideas. Yeah, that they're there. That could be discovered. And music seems to be a good entry point. But as soon as we start with this challenge problem. The challenge problem. Listen. It immediately connected to all this stuff. Especially with your talk and this podcast, and I'll do whatever I can to advertise it. It's such a clean, beautiful Einstein like formulation of the challenge before us. Right. Let me ask another absurd question. We talked about mortality. We talked about philosophy of life. What do you think is the meaning of life? What's the predicate for mysterious existence here on earth? I don't know. It's very interesting how we have, in Russia, I don't know if you know the Strugatsky brothers. They were writing fiction. They're thinking about human, what's going on. And they have idea that there are developing two types of people, common people and very smart people. They just started. And these two branches of people will go in different directions very soon. So that's what they're thinking about that. So the purpose of life is to create two paths. Two paths. Of human societies. Yes. Simple people and more complicated people. Which do you like best? The simple people or the complicated ones? I don't know, that is just his fantasy, but you know, every week we have guy who is just a writer and also a theorist of literature. And he explains how he understands literature and human relationships. How he sees life. And I understood that I'm just a small kid comparing to him. He's very smart guy in understanding life. He knows this predicate. He knows big blocks of life. I am amazed every time when I listen to him. And he is just talking about literature. And I think that I was surprised. So the managers in big companies, most of them are guys who studied English language and English literature. So why? Because they understand life. They understand models. And among them, maybe many talented critics just analyzing this. And this is big science like property. This is blocks. That's very smart. It amazes me that you are and continue to be humbled by the brilliance of others. I'm very modest about myself. I see so smart guys around. Well, let me be immodest for you. You're one of the greatest mathematicians, statisticians of our time. It's truly an honor. Thank you for talking again. And let's talk. It is not. I know my limits. Let's talk again when your challenge is taken on and solved by a grad student. Especially when they use it. It happens. Maybe music will be involved. Vladimir, thank you so much. It's been an honor. Thank you very much. Thanks for listening to this conversation with Vladimir Vapnik. And thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast. You'll get $10 and $10 will go to FIRST, an organization that inspires and educates young minds to become science and technology innovators of tomorrow. If you enjoy this podcast, subscribe on YouTube, give us five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman. And now, let me leave you with some words from Vladimir Vapnik.
When solving a problem of interest, do not solve a more general problem as an intermediate step. Thank you for listening. I hope to see you next time.
Vladimir Vapnik: Predicates, Invariants, and the Essence of Intelligence | Lex Fridman Podcast #71
The following is a conversation with Scott Aaronson, a professor at UT Austin, director of its Quantum Information Center, and previously a professor at MIT. His research interests center around the capabilities and limits of quantum computers and computational complexity theory more generally. He is an excellent writer and one of my favorite communicators of computer science in the world. We only had about an hour and a half for this conversation, so I decided to focus on quantum computing. But I can see us talking again in the future on this podcast at some point about computational complexity theory and all the complexity classes that Scott catalogs in his amazing Complexity Zoo Wiki. As a quick aside, based on questions and comments I've received, my goal with these conversations is to try to be in the background without ego and do three things. One, let the guests shine and try to discover together the most beautiful insights in their work and in their mind. Two, try to play devil's advocate just enough to provide a creative tension in exploring ideas through conversation. And three, to ask very basic questions about terminology, about concepts, about ideas. Many of the topics we talk about in the podcast I've been studying for years as a grad student, as a researcher, and generally as a curious human who loves to read. But frankly, I see myself in these conversations as the main character from one of my favorite novels by Dostoevsky called The Idiot. I enjoy playing dumb. Clearly, it comes naturally. But the basic questions don't come from my ignorance of the subject but from an instinct that the fundamentals are simple. And if we linger on them from almost a naive perspective, we can draw an insightful thread from computer science to neuroscience to physics to philosophy and to artificial intelligence. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. Quick summary of the ads. Two supporters today. First, get Cash App and use the code LEX PODCAST. Second, listen to the Tech Meme Ride Home podcast for tech news. Search Ride Home, two words, in your podcast app. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEX PODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar. Broker services are provided by Cash App Investing, a subsidiary of Square, a member SIPC. Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of fractional orders is an algorithmic marvel. So big props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up to the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier. So again, if you get Cash App from the App Store or Google Play and use the code LEX PODCAST, you'll get $10 and Cash App will also donate $10 to FIRST, one of my favorite organizations that is helping to advance robotics and STEM education for young people around the world.
This episode is also supported by the Tech Meme Ride Home Podcast. It's a technology podcast I've been listening to for a while and really enjoying. It goes straight to the point, gives you the tech news you need to know and provides minimal but essential context. It's released every day by 5 p.m. Eastern and is only about 15 to 20 minutes long. For fun, I like building apps on smartphones, mostly on Android, so I'm always a little curious about new flagship phones that come out. I saw that Samsung announced the new Galaxy S20 and of course, right away, Tech Meme Ride Home has a new episode that summarizes all that I needed to know about this new device. They've also started to do weekend bonus episodes with interviews of people like AOL founder Steve Case on investing and Gary Marcus on AI, who I've also interviewed on this podcast. You can find the Tech Meme Ride Home Podcast if you search your podcast app for Ride Home, two words. Then subscribe, enjoy, and keep up to date with the latest tech news. And now, here's my conversation with Scott Aaronson. I sometimes get criticism from a listener here and there that while having a conversation with a world class mathematician, physicist, neurobiologist, aerospace engineer, or a theoretical computer scientist like yourself, I waste time by asking philosophical questions about free will, consciousness, mortality, love, nature of truth, super intelligence, whether time travel is possible, whether space time is emergent or fundamental, even the crazier questions like whether aliens exist, what their language might look like, what their math might look like, whether math is invented or discovered, and of course, whether we live in a simulation or not. So I try. Out with it. Out with it. I try to dance back and forth from the deep technical to the philosophical, so I've done that quite a bit. So you're a world class computer scientist, and yet you've written about this very point, that philosophy is important for experts in any technical discipline, though they somehow seem to avoid this. So I thought it'd be really interesting to talk to you about this point. Why should we computer scientists, mathematicians, physicists care about philosophy, do you think? Well, I would reframe the question a little bit. I mean, philosophy almost by definition is the subject that's concerned with the biggest questions that you could possibly ask, right? So the ones you mentioned, right? Are we living in a simulation? Are we alone in the universe? How should we even think about such questions? Is the future determined, and what do we even mean by it being determined? Why are we alive at the time we are and not at some other time? And when you sort of contemplate the enormity of those questions, I think you could ask, well, then why be concerned with anything else, right? Why not spend your whole life on those questions? I think in some sense, that is the right way to phrase the question. And actually, what we learned, I mean, throughout history, but really starting with the scientific revolution with Galileo and so on, is that there is a good reason to focus on narrower questions, more technical, mathematical or empirical questions. And that is that you can actually make progress on them, and you can actually often answer them. And sometimes they actually tell you something about the philosophical questions that sort of maybe motivated your curiosity as a child.
They don't necessarily resolve the philosophical questions, but sometimes they reframe your whole understanding of them, right? And so for me, philosophy is just the thing that you have in the background from the very beginning that you want to, these are sort of the reasons why you went into intellectual life in the first place, at least the reasons why I did, right? But math and science are tools that we have for actually making progress. And hopefully even changing our understanding of these philosophical questions, sometimes even more than philosophy itself does. Why do you think computer scientists avoid these questions? We'll run away from them a little bit, at least in a technical scientific discourse. Well, I'm not sure if they do so more than any other scientists do. I mean, Alan Turing was famously interested, and one of his two most famous papers was in the philosophy journal Mind. It was the one where he proposed the Turing test. He took Wittgenstein's course at Cambridge, argued with him. I just recently learned that little bit and it's actually fascinating. I was trying to look for resources in trying to understand where the sources of disagreement and debates between Wittgenstein and Turing were. That's interesting that these two minds have somehow met in the arc of history. Yeah, well, the transcript of the course, which was in 1939, right, is one of the more fascinating documents that I've ever read because Wittgenstein is trying to say, well, all of these formal systems are just complete irrelevancies, right? If a formal system is irrelevant, who cares? Why does that matter in real life, right? And Turing is saying, well, look, if you use an inconsistent formal system to design a bridge, the bridge may collapse, right? And so Turing, in some sense, is thinking decades ahead, you know, I think, of where Wittgenstein is, to where the formal systems are actually going to be used in computers, right, to actually do things in the world. You know, and it's interesting that Turing actually dropped the course halfway through. Why? Because he had to go to Bletchley Park and work on something of more immediate importance. That's fascinating. Take a step from philosophy to actual, like the biggest possible step to actual engineering with actual real impact. Yeah, and I would say more generally, right, a lot of scientists are interested in philosophy, but they're also busy, right? And they have a lot on their plate, and there are a lot of sort of very concrete questions that are already not answered, but look like they might be answerable, right? And so then you could say, well, then why break your brain over these metaphysically unanswerable questions when there were all of these answerable ones instead? So I think, you know, for me, I enjoy talking about philosophy. I even go to philosophy conferences sometimes, such as the FQXI conferences. I enjoy interacting with philosophers. I would not want to be a professional philosopher because I like being in a field where I feel like, you know, if I get too confused about the sort of eternal questions, then I can actually make progress on something. Can you maybe linger on that for just a little longer? What do you think is the difference? So the corollary of the criticism that I mentioned previously is: why ask philosophical questions of a mathematician? If you want to ask philosophical questions, then invite a real philosopher on and ask them.
So what's the difference between the way a computer scientist or mathematician ponders a philosophical question and a philosopher ponders a philosophical question? Well, I mean, a lot of it just depends on the individual, right? It's hard to make generalizations about entire fields, but, you know, I think if we tried to, if we tried to stereotype, you know, we would say that scientists very often will be less careful in their use of words. You know, I mean, philosophers are really experts in sort of, you know, like when I talk to them, they will just pounce if I, you know, use the wrong phrase for something. Experts is a very nice word. You could say sticklers. Sticklers, yeah, yeah, yeah, or, you know, they will sort of interrogate my word choices, let's say, to a much greater extent than scientists would, right? And scientists, you know, will often, if you ask them about a philosophical problem, like the hard problem of consciousness or free will or whatever, they will try to relate it back to, you know, recent research, you know, research about neurobiology or, you know, the best of all is research that they personally are involved with, right? And, you know, of course they will want to talk about that, you know, and it is what they will think of, you know, and of course you could have an argument that maybe, you know, it's all interesting as it goes, but maybe none of it touches the philosophical question, right? But, you know, but maybe, you know, a science, you know, at least it, as I said, it does tell us concrete things. And, you know, even if like a deep dive into neurobiology will not answer the hard problem of consciousness, you know, maybe it can take us about as far as we can get toward, you know, expanding our minds about it, you know, toward thinking about it in a different way. Well, I mean, I think neurobiology can do that, but, you know, with these profound philosophical questions, I mean, also art and literature do that, right? They're all different ways of trying to approach these questions that, you know, we don't, for which we don't even know really what an answer would look like, but, and yet somehow we can't help, but keep returning to the questions. And you have a kind of mathematical, beautiful mathematical way of discussing this with the idea of Q prime. Oh, right. You write that usually the only way to make progress on the big questions, like the philosophical questions we're talking about now is to pick off smaller sub questions. Ideally sub questions that you can attack using math, empirical observation, or both. You define the idea of a Q prime. So given an unanswerable philosophical riddle Q, replace it with a merely, in quotes, scientific or mathematical question Q prime, which captures part of what people have wanted to know when they first asked Q. Then with luck, one solves Q prime. So you described some examples of such Q prime sub questions in your long essay titled, Why Philosophers Should Care About Computational Complexity. So you catalog the various Q primes on which you think theoretical computer science has made progress. Can you mention a few favorites, if any pop to mind, or do you remember some? Well, yeah. So, I mean, I would say some of the most famous examples in history of that sort of replacement were, I mean, to go back to Alan Turing, right? What he did in his computing machinery and intelligence paper was exactly, he explicitly started with the question, can machines think? 
And then he said, sorry, I think that question is too meaningless, but here's a different question. Could you program a computer so that you couldn't tell the difference between it and a human, right? And yeah. So in the very first few sentences, he in fact just formulates the Q prime question. He does precisely that. Or we could look at Gödel, right? Where you had these philosophers arguing for centuries about the limits of mathematical reasoning, right? The limits of formal systems. And then by the early 20th century, logicians, starting with Frege, Russell, and then most spectacularly Gödel, managed to reframe those questions as, look, we have these formal systems. They have these definite rules. Are there questions that we can phrase within the rules of these systems that are not provable within the rules of the systems? And can we prove that fact, right? And so that would be another example. You know, I had this essay called The Ghost in the Quantum Turing Machine. That was one of the crazier things I've written, but I tried to do something, or to advocate doing something similar there for free will, where instead of talking about is free will real, where we get hung up on the meaning of, what exactly do we mean by freedom? And can you have, can you be, or do we mean compatibilist free will, libertarian free will? What do these things mean? You know, I suggested just asking the question, how well in principle, consistently with the laws of physics, could a person's behavior be predicted? You know, without, so let's say, destroying the person's brain, you know, taking it apart in the process of trying to predict them. And, you know, and that actually, asking that question gets you into all sorts of meaty and interesting issues, you know, issues of, what is the computational substrate of the brain? You know, or can you understand the brain, you know, just at the sort of level of the neurons, you know, at sort of the abstraction of a neural network, or do you need to go deeper to the, you know, molecular level and ultimately even to the quantum level? Right, and of course, that would put limits on predictability if you did. So you need to reduce, you need to reduce the mind to a computational device, like formalize it so then you can make predictions about what, you know, whether you could predict the behavior of the system. Well, if you were trying to predict a person, yeah, then presumably, you would need some model of their brain, right? And now the question becomes one of, how accurate can such a model become? Can you make a model that will be accurate enough to really seriously threaten people's sense of free will? You know, not just metaphysically, but like really, I have written in this envelope what you were going to say next. Is accuracy the right term here? So it's also a level of abstraction has to be right. So if you're accurate at the, somehow at the quantum level, that may not be convincing to us at the human level. Well, right, but the question is what accuracy at the sort of level of the underlying mechanisms do you need in order to predict the behavior, right? At the end of the day, the test is just, can you, you know, foresee what the person is going to do? Right, I am, you know, and in discussions of free will, you know, it seems like both sides wanna, you know, very quickly dismiss that question as irrelevant. Well, to me, it's totally relevant. 
Okay, because, you know, if someone says, oh, well, you know, a Laplace demon that knew the complete state of the universe, you know, could predict everything you're going to do, therefore you don't have free will. You know, it doesn't trouble me that much because, well, you know, I've never met such a demon, right? You know, and we, you know, we even have some reasons to think, you know, maybe, you know, it could not exist as part of our world, you know, it's only an abstraction, a thought experiment. On the other hand, if someone said, well, you know, I have this brain scanning machine, you know, you step into it and then, you know, every paper that you will ever write, it will write, you know, every thought that you will have, you know, even right now about the machine itself, it will foresee. You know, well, if you can actually demonstrate that, then I think, you know, that sort of threatens my internal sense of having free will in a much more visceral way. You know, but now you notice that we're asking a much more empirical question. We're asking, is such a machine possible or isn't it? We're asking, if it's not possible, then what in the laws of physics or what about the behavior of the brain, you know, prevents it from existing? So if you could philosophize a little bit within this empirical question, where do you think would enter the, by which mechanism would enter the possibility that we can't predict the outcome? So there would be something that would be akin to a free will. Yeah, well, you could say the sort of obvious possibility, which was, you know, recognized by Eddington and many others about as soon as quantum mechanics was discovered in the 1920s, was that if, you know, let's say a sodium ion channel, you know, in the brain, right? You know, its behavior is chaotic, right? It's sort of, it's governed by these Hodgkin-Huxley equations in neuroscience, right? Which are differential equations that have a stochastic component, right? Now, where does, you know, and this ultimately governs, let's say whether a neuron will fire or not fire, right? So that's the basic chemical process or electrical process by which signals are sent in the brain. Exactly, exactly. And, you know, and so you could ask, well, where does the randomness in the process, you know, that neuroscientists, or what neuroscientists would treat as randomness, where does it come from? You know, ultimately it's thermal noise, right? Where does thermal noise come from? But ultimately, you know, there were some quantum mechanical events at the molecular level that are getting sort of chaotically amplified by, you know, a sort of butterfly effect. And so, you know, even if you knew the complete quantum state of someone's brain, you know, at best you could predict the probabilities that they would do one thing or do another thing, right? I think that part is actually relatively uncontroversial, right? The controversial question is whether any of it matters for the sort of philosophical questions that we care about. Because you could say, if all it's doing is just injecting some randomness into an otherwise completely mechanistic process, well, then who cares, right? And more concretely, if you could build a machine that, you know, could just calculate even just the probabilities of all of the possible things that you would do, right? And, you know, of all the things that it said you had a 10% chance of doing, you did exactly a 10th of them, you know, and so on and so on.
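For readers who want a concrete picture of the stochastic firing Scott describes, here is a toy sketch in Python. It is a noisy leaky integrate-and-fire neuron, not the actual Hodgkin-Huxley equations, and all constants are made up for illustration; the point is only that an added noise term turns a single deterministic firing time into a distribution over firing times.

```python
import numpy as np

# Toy illustration (not the real Hodgkin-Huxley model): a leaky
# integrate-and-fire neuron driven by a constant input plus noise.
# The noise term stands in for the "thermal noise" mentioned above,
# and it makes the firing time a random variable.
rng = np.random.default_rng(0)

def time_to_first_spike(noise_scale, dt=0.1, v_thresh=1.0, leak=0.05, drive=0.06):
    v = 0.0  # membrane potential, arbitrary units
    for step in range(10_000):
        dv = (-leak * v + drive) * dt + noise_scale * np.sqrt(dt) * rng.normal()
        v += dv
        if v >= v_thresh:
            return step * dt  # time of first spike
    return None  # never fired in this window

# With noise, repeated runs give different firing times, so the best
# possible prediction is a probability distribution over outcomes.
print([time_to_first_spike(noise_scale=0.2) for _ in range(5)])
```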
And that somehow also takes away the feeling of free will. Exactly. I mean, to me, it seems essentially just as bad as if the machine deterministically predicted you. It seems, you know, hardly different from that. So then, but a more subtle question is could you even learn enough about someone's brain to do that, okay? Because, you know, another central fact about quantum mechanics is that making a measurement on a quantum state is an inherently destructive operation. Okay, so, you know, if I want to measure the, you know, position of a particle, right? It was, well, before I measured, it had a superposition over many different positions. As soon as I measure, I localize it, right? So now I know the position, but I've also fundamentally changed the state. And so you could say, well, maybe in trying to build a model of someone's brain that was accurate enough to actually, you know, make, let's say, even well calibrated probabilistic predictions of their future behavior, maybe you would have to make measurements that were just so accurate that you would just fundamentally alter their brain, okay? Or maybe not, maybe you only, you know, it would suffice to just make some nanorobots that just measured some sort of much larger scale, you know, macroscopic behavior, like, you know, what is this neuron doing? What is that neuron doing? Maybe that would be enough. See, but now, you know, what I claim is that we're now asking a question, you know, in which, you know, it is possible to envision what progress on it would look like. Yeah, but just as you said, that question may be slightly detached from the philosophical question in the sense if consciousness somehow has a role to the experience of free will. Because ultimately, when we're talking about free will, we're also talking about not just the predictability of our actions, but somehow the experience of that predictability. Yeah, well, I mean, a lot of philosophical questions ultimately, like, feedback to the hard problem of consciousness, you know, and as much as you can try to sort of talk around it or not, right? And, you know, and there is a reason why people try to talk around it, which is that, you know, Democritus talked about the hard problem of consciousness, you know, in 400 BC in terms that would be totally recognizable to us today, right? And it's really not clear if there's been progress since or what progress could possibly consist of. Is there a Q prime type of subquestion that could help us get at consciousness? It's something about consciousness. Well, I mean, well, I mean, there is the whole question of, you know, of AI, right? Of, you know, can you build a human level or superhuman level AI? And, you know, can it work in a completely different substrate from the brain? I mean, you know, and of course, that was Alan Turing's point. And, you know, and even if that was done, it's, you know, maybe people would still argue about the hard problem of consciousness, right? And yet, you know, my claim is a little different. My claim is that in a world where, you know, there were, you know, human level AIs or we'd been even overtaken by such AIs, the entire discussion of the hard problem of consciousness would have a different character, right? It would take place in different terms in such a world, even if we hadn't answered the question. And my claim about free will would be similar, right? 
That if this prediction machine that I was talking about could actually be built, well, now the entire discussion of the, you know, of free will is sort of transformed by that, you know, even if in some sense the metaphysical question hasn't been answered. Yeah, exactly, it transforms it fundamentally because say that machine does tell you that it can predict perfectly and yet there is this deep experience of free will and then that changes the question completely. And it starts actually getting to the question of the AGI, the Turing questions of the demonstration of free will, the demonstration of intelligence, the demonstration of consciousness, does that equal consciousness, intelligence and free will? But see, Lex, if every time I was contemplating a decision, you know, this machine had printed out an envelope, you know, where I could open it and see that it knew my decision, I think that actually would change my subjective experience of making decisions, right? I mean, it would. Does knowledge change your subjective experience? Well, you know, I mean, the knowledge that this machine had predicted everything I would do, I mean, it might drive me completely insane, right? But at any rate, it would change my experience to act, you know, to not just discuss such a machine as a thought experiment, but to actually see it. Yeah. I mean, you know, you could say at that point, you know, you could say, you know, why not simply call this machine a second instantiation of me and be done with it, right? What, you know, why even privilege the original me over this perfect duplicate that exists in the machine? Yeah, or there could be a religious experience with it too. It's kind of what God throughout the generations is supposed to have. That God kind of represents that perfect machine, is able to, I guess, actually, well, I don't even know what are the religious interpretations of free will. So if God knows perfectly everything in religion, in the various religions, where does free will fit into that? Do you know? That has been one of the big things that theologians have argued about for thousands of years. Yeah. You know, I am not a theologian, so maybe I shouldn't go there. So there's not a clear answer in a book like... I mean, this is, you know, the Calvinists debated this, the, you know, this has been, you know, I mean, different religious movements have taken different positions on that question, but that is how they think about it. You know, meanwhile, you know, a large part of sort of what animates, you know, theoretical computer science, you could say is, you know, we're asking sort of, what are the ultimate limits of, you know, what you can know or, you know, calculate or figure out by, you know, entities that you can actually build in the physical world, right? And if I were trying to explain it to a theologian, maybe I would say, you know, we are studying, you know, to what extent, you know, gods can be made manifest in the physical world. I'm not sure my colleagues would like that. So let's talk about quantum computers for a second. Yeah, sure, sure. As you've said, quantum computing, at least in the 1990s, was a profound story at the intersection of computer science, physics, engineering, math, and philosophy. So there's this broad and deep aspect to quantum computing that represents more than just the quantum computer. But can we start at the very basics? What is quantum computing?
Yeah, so it's a proposal for a new type of computation, or let's say a new way to harness nature to do computation that is based on the principles of quantum mechanics. Okay, now the principles of quantum mechanics have been in place since 1926. You know, they haven't changed. You know, what's new is, you know, how we wanna use them. Okay, so what does quantum mechanics say about the world? You know, the physicists, I think, over the generations, you know, convinced people that that is an unbelievably complicated question and, you know, just give up on trying to understand it. I can let you in, not being a physicist, I can let you in on a secret, which is that it becomes a lot simpler if you do what we do in quantum information theory and sort of take the physics out of it. So the way that we think about quantum mechanics is sort of as a generalization of the rules of probability themselves. So, you know, you might say there was a 30% chance that it was going to snow today or something. You would never say that there was a negative 30% chance, right, that would be nonsense. Much less would you say that there was, you know, an I% chance, you know, square root of minus 1% chance. Now, the central discovery that sort of quantum mechanics made is that fundamentally the world is described by, or, you know, the sort of, let's say the possibilities for, you know, what a system could be doing are described using numbers called amplitudes, okay, which are like probabilities in some ways, but they are not probabilities. They can be positive. For one thing, they can be positive or negative. In fact, they can even be complex numbers. Okay, and if you've heard of a quantum superposition, this just means some state of affairs where you assign an amplitude, one of these complex numbers, to every possible configuration that you could see a system in on measuring it. So for example, you might say that an electron has some amplitude for being here and some other amplitude for being there, right? Now, if you look to see where it is, you will localize it, right? You will sort of force the amplitudes to be converted into probabilities. That happens by taking their squared absolute value, okay, and then, you know, you can say either the electron will be here or it will be there. And, you know, knowing the amplitudes, you can predict at least the probabilities that you'll see each possible outcome, okay? But while a system is isolated from the whole rest of the universe, the rest of its environment, the amplitudes can change in time by rules that are different from the normal rules of probability and that are, you know, alien to our everyday experience. So anytime anyone ever tells you anything about the weirdness of the quantum world, you know, or assuming that they're not lying to you, right, they are telling you, you know, yet another consequence of nature being described by these amplitudes. So most famously, what amplitudes can do is that they can interfere with each other, okay? So in the famous double slit experiment, what happens is that you shoot a particle, like an electron, let's say, at a screen with two slits in it, and you find that there are, you know, on a second screen, now there are certain places where that electron will never end up, you know, after it passes through the first screen. And yet, if I close off one of the slits, then the electron can appear in that place, okay? 
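A minimal numpy sketch of the amplitude rule just described: a qubit is a pair of complex amplitudes, and measurement probabilities come from their squared absolute values. The particular complex numbers here are made up for illustration.

```python
import numpy as np

# A qubit state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measuring converts amplitudes into
# probabilities by taking squared absolute values.
alpha = (1 + 1j) / 2        # amplitude for outcome 0 (a complex number)
beta = 1 / np.sqrt(2)       # amplitude for outcome 1

probs = np.array([abs(alpha) ** 2, abs(beta) ** 2])
print(probs)          # [0.5, 0.5]: amplitudes can be complex, probabilities cannot
print(probs.sum())    # 1.0, as required of a valid state

# Simulate repeated measurements of freshly prepared copies of this state.
rng = np.random.default_rng(1)
print(rng.choice([0, 1], size=10, p=probs))
```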
So by decreasing the number of paths that the electron could take to get somewhere, you can increase the chance that it gets there, okay? Now, how is that possible? Well, it's because, you know, as we would say now, the electron has a superposition state, okay? It has some amplitude for reaching this point by going through the first slit. It has some other amplitude for reaching it by going through the second slit. But now, if one amplitude is positive and the other one is negative, then, you know, I have to add them all up, right? I have to add the amplitudes for every path that the electron could have taken to reach this point. And those amplitudes, if they're pointing in different directions, they can cancel each other out. That would mean the total amplitude is zero and the thing never happens at all. I close off one of the possibilities, then the amplitude is positive or it's negative, and now the thing can happen. Okay, so that is sort of the one trick of quantum mechanics. And now I can tell you what a quantum computer is. Okay, a quantum computer is a computer that tries to exploit, you know, exactly these phenomena, superposition, amplitudes, and interference, in order to solve certain problems much faster than we know how to solve them otherwise. So the basic building block of a quantum computer is what we call a quantum bit or a qubit. That just means a bit that has some amplitude for being zero and some other amplitude for being one. So it's a superposition of zero and one states, right? But now the key point is that if I've got, let's say, a thousand qubits, the rules of quantum mechanics are completely unequivocal that I do not just need one ampli... You know, I don't just need amplitudes for each qubit separately. Okay, in general, I need an amplitude for every possible setting of all thousand of those bits, okay? So that what that means is two to the one thousand power amplitudes. Okay, if I had to write those down, or let's say in the memory of a conventional computer, if I had to write down two to the one thousand complex numbers, that would not fit within the entire observable universe. Okay, and yet, you know, quantum mechanics is unequivocal that if these qubits can all interact with each other, and in some sense, I need two to the one thousand parameters, you know, amplitudes to describe what is going on. Now, you know, now I can tell, you know, where all the popular articles, you know, about quantum computing go off the rails is that they say, you know, they sort of say what I just said, and then they say, oh, so the way a quantum computer works is just by trying every possible answer in parallel. You know, that sounds too good to be true, and unfortunately, it kind of is too good to be true. The problem is I could make a superposition over every possible answer to my problem, you know, even if there are two to the one thousand of them, right? I can easily do that. The trouble is for a computer to be useful, you've got to, at some point, you've got to look at it and see an output, right? And if I just measure a superposition over every possible answer, then the rules of quantum mechanics tell me that all I'll see will be a random answer. You know, if I just wanted a random answer, well, I could have picked one myself with a lot less trouble, right? 
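Both points in that explanation can be illustrated in a few lines: amplitudes for different paths are added before squaring, so they can cancel, and describing n qubits in general takes 2 to the n amplitudes. The path amplitudes below are made up for illustration, not a model of a real double-slit setup.

```python
import numpy as np

# Two paths to the same point on the screen, each with its own amplitude.
# Probabilities come from the squared magnitude of the *sum* of amplitudes,
# so paths can cancel (destructive) or reinforce (constructive).
path1 = 0.5 + 0.0j
path2 = -0.5 + 0.0j   # points the opposite way

both_open = abs(path1 + path2) ** 2   # 0.0: the electron never lands here
one_open = abs(path1) ** 2            # 0.25: close a slit and it can
print(both_open, one_open)

# And the reason a quantum state is expensive to write down classically:
# n qubits require 2**n amplitudes in general.
for n in [10, 50, 1000]:
    print(n, "qubits ->", 2 ** n, "amplitudes")
```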
So the entire trick with quantum computing, with every algorithm for a quantum computer, is that you try to choreograph a pattern of interference of amplitudes, and you try to do it so that for each wrong answer, some of the paths leading to that wrong answer have positive amplitudes and others have negative amplitudes. So on the whole, they cancel each other out, okay? Whereas all the paths leading to the right answer should reinforce each other, you know, should have amplitudes pointing the same direction. So the design of algorithms in the space is the choreography of the interferences. Precisely. That's precisely what it is. Can we take a brief step back? And you mentioned information. Yes. So in which part of this beautiful picture that you've painted is information contained? Oh, well, information is at the core of everything that we've been talking about, right? I mean, the bit is, you know, the basic unit of information since, you know, Claude Shannon's paper in 1948. You know, and, you know, of course, you know, people had the concept even before that, you know, he popularized the name, right? But I mean... But a bit is zero or one. That's right. So that's a basic element of information. That's right. And what we would say is that the basic unit of quantum information is the qubit, is, you know, the object, any object that can be maintained in this, or manipulated, in a superposition of zero and one states. Now, you know, sometimes people ask, well, but what is a qubit physically, right? And there are all these different, you know, proposals that are being pursued in parallel for how you implement qubits. There is, you know, superconducting quantum computing that was in the news recently because of Google's quantum supremacy experiment, right? Where you would have some little coils where a current can flow through them in two different energy states, one representing a zero, another representing a one. And if you cool these coils to just slightly above absolute zero, like a hundredth of a degree, then they superconduct. And then the current can actually be in a superposition of the two different states. So that's one kind of qubit. Another kind would be, you know, just an individual atomic nucleus, right? It has a spin. It could be spinning clockwise. It could be spinning counterclockwise, or it could be in a superposition of the two spin states. That is another qubit. But see, just like in the classical world, right? You could be a virtuoso programmer without having any idea of what a transistor is, right? Or how the bits are physically represented inside the machine, even that the machine uses electricity, right? You just care about the logic. It's sort of the same with quantum computing, right? Qubits could be realized by many, many different quantum systems. And yet all of those systems will lead to the same logic, you know, the logic of qubits and how, you know, how you measure them, how you change them over time. And so, you know, the subject of, you know, how qubits behave and what you can do with qubits, that is quantum information. So just to linger on that. Sure. So the physical design implementation of a qubit does not interfere with the, that next level of abstraction that you can program over it. So it truly is, the idea of it is, okay. Well, to be honest with you, today they do interfere with each other. That's because all the quantum computers we can build today are very noisy, right? And so sort of the, you know, the qubits are very far from perfect. 
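The simplest textbook instance of the "choreography" Scott describes a moment earlier, and of the qubit logic he mentions here, is applying a Hadamard gate twice: after the second gate, the two paths leading to outcome 1 carry opposite-sign amplitudes and cancel, while the paths to 0 reinforce. A minimal state-vector sketch:

```python
import numpy as np

# Apply the Hadamard gate twice to a qubit that starts in |0>. After one
# Hadamard the qubit is in an equal superposition; after the second, the
# two paths leading to |1> have opposite signs and cancel, while the
# paths to |0> reinforce.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

state = np.array([1.0, 0.0])      # |0>
after_one = H @ state             # [0.707, 0.707]: equal amplitudes
after_two = H @ after_one         # [1.0, 0.0]: the |1> amplitude cancelled

print(after_one, np.abs(after_one) ** 2)
print(after_two, np.abs(after_two) ** 2)
```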
And so the lower level sort of does affect the higher levels. And we sort of have to think about all of them at once. Okay, but eventually where we hope to get is to what are called error corrected quantum computers, where the qubits really do behave like perfect abstract qubits for as long as we want them to. And in that future, you know, a future that we can already sort of prove theorems about or think about today. But in that future, the logic of it really does become decoupled from the hardware. So if noise is currently like the biggest problem for quantum computing, and then the dream is error correcting quantum computers, can you just maybe describe what does it mean for there to be noise in the system? Absolutely, so yeah, so the problem is even a little more specific than noise. So the fundamental problem, if you're trying to actually build a quantum computer, you know, of any appreciable size, is something called decoherence. Okay, and this was recognized from the very beginning, you know, when people first started thinking about this in the 1990s. Now, what decoherence means is sort of the unwanted interaction between, you know, your qubits, you know, the state of your quantum computer and the external environment. Okay, and why is that such a problem? Well, I talked before about how, you know, when you measure a quantum system, so let's say if I measure a qubit that's in a superposition of zero and one states to ask it, you know, are you zero or are you one? Well, now I force it to make up its mind, right? And now, probabilistically, it chooses one or the other and now, you know, it's no longer a superposition, there's no longer amplitudes, there's just, there's some probability that I get a zero and there's some that I get a one. And now, the trouble is that it doesn't have to be me who's looking, okay? Or in fact, it doesn't have to be any conscious entity. Any kind of interaction with the external world that leaks out the information about whether this qubit was a zero or a one, sort of that causes the zerowness or the oneness of the qubit to be recorded in, you know, the radiation in the room, in the molecules of the air, in the wires that are connected to my device, any of that, as soon as the information leaks out, it is as if that qubit has been measured, okay? It is, you know, the state has now collapsed. You know, another way to say it is that it's become entangled with its environment, okay? But, you know, from the perspective of someone who's just looking at this qubit, it is as though it has lost its quantum state. And so, what this means is that if I want to do a quantum computation, I have to keep the qubits sort of fanatically well isolated from their environment. But then at the same time, they can't be perfectly isolated because I need to tell them what to do. I need to make them interact with each other, for one thing, and not only that, but in a precisely choreographed way, okay? And, you know, that is such a staggering problem, right? How do I isolate these qubits from the whole universe but then also tell them exactly what to do? I mean, you know, there were distinguished physicists and computer scientists in the 90s who said, this is fundamentally impossible, you know? The laws of physics will just never let you control qubits to the degree of accuracy that you're talking about. Now, what changed the views of most of us was a profound discovery in the mid to late 90s which was called the theory of quantum error correction and quantum fault tolerance, okay? 
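One way to picture the decoherence Scott just described is with a density matrix: the off-diagonal entries are what carry the interference, and once the environment records which state the qubit is in, they are effectively gone. This is only the cartoon version, a toy numpy sketch rather than a model of any real device.

```python
import numpy as np

# A pure superposition (|0> + |1>)/sqrt(2) has off-diagonal terms in its
# density matrix, and those are what make interference possible. If the
# environment perfectly records whether the qubit is 0 or 1, tracing the
# environment out leaves only the diagonal: a 50/50 classical mixture,
# with no interference left to exploit.
plus = np.array([1, 1]) / np.sqrt(2)
pure = np.outer(plus, plus.conj())
print(pure)           # off-diagonals are 0.5: coherence intact

decohered = np.diag(np.diag(pure))
print(decohered)      # off-diagonals gone: looks like a classical coin flip
```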
And the upshot of that theory is that if I want to build a reliable quantum computer and scale it up to, you know, an arbitrary number of as many qubits as I want, you know, and doing as much on them as I want, I do not actually have to get the qubits perfectly isolated from their environment. It is enough to get them really, really, really well isolated, okay? And even if every qubit is sort of leaking, you know, its state into the environment at some rate, as long as that rate is low enough, okay, I can sort of encode the information that I care about in very clever ways across the collective states of multiple qubits, okay? In such a way that even if, you know, a small percentage of my qubits leak, well, I'm constantly monitoring them to see if that leak happened. I can detect it and I can correct it. I can recover the information I care about from the remaining qubits, okay? And so, you know, you can build a reliable quantum computer even out of unreliable parts, right? Now, in some sense, you know, that discovery is what set the engineering agenda for quantum computing research from the 1990s until the present, okay? The goal has been, you know, engineer qubits that are not perfectly reliable but reliable enough that you can then use these error correcting codes to have them simulate qubits that are even more reliable than they are, right? The error correction becomes a net win rather than a net loss, right? And then once you reach that sort of crossover point, then, you know, your simulated qubits could in turn simulate qubits that are even more reliable and so on until you've just, you know, effectively, you have arbitrarily reliable qubits. So long story short, we are not at that breakeven point yet. We're a hell of a lot closer than we were when people started doing this in the 90s, like orders of magnitude closer. But the key ingredient there is the more qubits, the better because... Ah, well, the more qubits, the larger the computation you can do, right? I mean, qubits are what constitute the memory of your quantum computer, right? But also for the, sorry, for the error correcting mechanism. Ah, yes. So the way I would say it is that error correction imposes an overhead in the number of qubits. And that is actually one of the biggest practical problems with building a scalable quantum computer. If you look at the error correcting codes, at least the ones that we know about today, and you look at, you know, what would it take to actually use a quantum computer to, you know, hack your credit card number, which is, you know, maybe, you know, the most famous application people talk about, right? Let's say to factor huge numbers and thereby break the RSA cryptosystem. Well, what that would take would be thousands of, several thousand logical qubits. But now with the known error correcting codes, each of those logical qubits would need to be encoded itself using thousands of physical qubits. So at that point, you're talking about millions of physical qubits. And in some sense, that is the reason why quantum computers are not breaking cryptography already. It's because of these immense overheads involved. So that overhead is additive or multiplicative? Well, it's multiplicative. I mean, it's like you take the number of logical qubits that you need in your abstract quantum circuit, you multiply it by a thousand or so. So, you know, there's a lot of work on, you know, inventing better, trying to invent better error correcting codes. Okay, that is the situation right now. 
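The multiplicative overhead described here is easy to see with the rough numbers from the conversation; these are illustrative figures only, not a precise resource estimate for breaking RSA.

```python
# Back-of-the-envelope version of the overhead described above, using the
# rough numbers mentioned in the conversation.
logical_qubits_needed = 4_000     # "several thousand logical qubits"
physical_per_logical = 1_000      # "each encoded using thousands of physical qubits"

physical_qubits_needed = logical_qubits_needed * physical_per_logical
print(physical_qubits_needed)     # 4,000,000: "millions of physical qubits"
```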
In the meantime, we are now in what the physicist John Preskill called the noisy intermediate scale quantum or NISQ era. And this is the era, you can think of it as sort of like the vacuum, you know, we're now entering the very early vacuum tube era of quantum computers. The quantum computer analog of the transistor has not been invented yet, right? That would be like true error correction, right? Where, you know, we are not or something else that would achieve the same effect, right? We are not there yet. But where we are now, let's say as of a few months ago, you know, as of Google's announcement of quantum supremacy, you know, we are now finally at the point where even with a non error corrected quantum computer, with, you know, these noisy devices, we can do something that is hard for classical computers to simulate, okay? So we can eke out some advantage. Now, will we in this noisy era be able to do something beyond what a classical computer can do that is also useful to someone? That we still don't know. People are going to be racing over the next decade to try to do that. By people, I mean, Google, IBM, you know, a bunch of startup companies. And research labs. Yeah, and research labs and governments. And yeah. You just mentioned a million things. Well, I'll backtrack for a second. Yeah, sure, sure. So we're in these vacuum tube days. Yeah, just entering them. And just entering, wow. Okay, so how do we escape the vacuum? So how do we get to, how do we get to where we are now with the CPU? Is this a fundamental engineering challenge? Are there breakthroughs that are needed on the physics side, on the computer science side? Or is it a financial issue where much larger just sheer investment and excitement is needed? So, you know, those are excellent questions. My guess might, well, no, no. My guess would be all of the above. I mean, my guess, you know, I mean, you could say fundamentally it is an engineering issue, right? The theory has been in place since the 90s. You know, at least, you know, this is what, you know, error correction would look like. You know, we do not have the hardware that is at that level. But at the same time, you know, so you could just, you know, try to power through, you know, maybe even like, you know, if someone spent a trillion dollars on some quantum computing Manhattan project, right? Then conceivably they could just, you know, build an error corrected quantum computer as it was envisioned back in the 90s, right? I think the more plausible thing to happen is that there will be further theoretical breakthroughs and there will be further insights that will cut down the cost of doing this. So let's take a brief step to the philosophical. I just recently talked to Jim Keller who's sort of like the famed architect in the microprocessor world. And he's been told for decades, every year that the Moore's law is going to die this year. And he tries to argue that the Moore's law is still alive and well, and it'll be alive for quite a long time to come. How long? How long did he say? Well, the main point is it's still alive, but he thinks there's still a thousand X improvement just on shrinking the transistor that's possible. Whatever. The point is that the exponential growth we see is actually a huge number of these S curves, just constant breakthroughs.
At the philosophical level, why do you think we as descendants of apes were able to just keep coming up with these new breakthroughs on the CPU side? Is this something unique to this particular endeavor, or will it be possible to replicate in the quantum computer space? Okay. All right. There was a lot there, but to break off something, I mean, I think we are in an extremely special period of human history, right? I mean, it is, you could say, obviously special in many ways, right? There are way more people alive than there have been and the whole future of the planet is in question in a way that it hasn't been for the rest of human history. But in particular, we are in the era where we finally figured out how to build universal machines, you could say, the things that we call computers, machines that you program to simulate the behavior of whatever machine you want. And once you've sort of crossed this threshold of universality, you've built, you could say, Turing, you've instantiated Turing machines in the physical world. Well, then the main questions are ones of numbers. They are ones of how much memory can you access? How fast does it run? How many parallel processors? At least until quantum computing. Quantum computing is the one thing that changes what I just said, right? But as long as it's classical computing, then it's all questions of numbers. And you could say at a theoretical level, the computers that we have today are the same as the ones in the 50s. They're just millions of times faster and with millions of times more memory. And I think there's been an immense economic pressure to get more and more transistors, get them smaller and smaller, add more and more cores. And in some sense, a huge fraction of all of the technological progress that there is in all of civilization has gotten concentrated just more narrowly into just those problems, right? And so it has been one of the biggest success stories in the history of technology, right? There's, I mean, it is, I am as amazed by it as anyone else is, right? But at the same time, we also know that it, and I really do mean we know that it cannot continue indefinitely, okay? Because you will reach fundamental limits on how small you can possibly make a processor. And if you want a real proof that would justify my use of the word, we know that Moore's law has to end. I mean, ultimately you will reach the limits imposed by quantum gravity. If you tried to build a computer that operated at 10 to the 43 Hertz, so, 10 to the 43 operations per second, that computer would use so much energy that it would simply collapse into a black hole, okay? So in reality, we're going to reach the limits long before that, but that is a sufficient proof. That there's a limit. Yes, yes. But it would be interesting to try to understand the mechanism, the economic pressure that you said, just like the Cold War was a pressure on getting us, getting us, because my us is both the Soviet Union and the United States, but getting us, the two countries to get to hurry up, to get to space, to the moon, there seems to be that same kind of economic pressure that somehow created a chain of engineering breakthroughs that resulted in the Moore's law. And it'd be nice to replicate. Yeah, well, I mean, some people are sort of, get depressed about the fact that technological progress may seem to have slowed down in many, many realms outside of computing, right? And there was this whole thing of we wanted flying cars and we only got Twitter instead, right?
Yeah, good old Peter Thiel, yeah. Yeah, yeah, yeah, right, right, right. So then jumping to another really interesting topic that you mentioned, so Google announced with their work in the paper in Nature with quantum supremacy. Yes. Can you describe, again, back to the basic, what is perhaps not so basic, what is quantum supremacy? Absolutely, so quantum supremacy is a term that was coined by, again, by John Preskill in 2012. Not everyone likes the name, but it sort of stuck. We don't, we sort of haven't found a better alternative. It's technically quantum computational supremacy. Yeah, yeah, supremacy, that's right, that's right. But the basic idea is actually one that goes all the way back to the beginnings of quantum computing when Richard Feynman and David Deutsch, people like that, were talking about it in the early 80s. And quantum supremacy just refers to sort of the point in history when you can first use a quantum computer to do some well defined task much faster than any known algorithm running on any of the classical computers that are available, okay? So notice that I did not say a useful task, okay? It could be something completely artificial, but it's important that the task be well defined. So in other words, it is something that has right and wrong answers that are knowable independently of this device, right? And we can then run the device, see if it gets the right answer or not. Can you clarify a small point? You said much faster than a classical implementation. What about sort of what about the space with where the class, there's no, there's not, it doesn't even exist, a classical algorithm to show the power? So maybe I should clarify. Everything that a quantum computer can do, a classical computer can also eventually do, okay? And the reason why we know that is that a classical computer could always, you know, if it had no limits of time and memory, it could always just store the entire quantum state, you know, of your, you know, of the quantum, store a list of all the amplitudes, you know, in the state of the quantum computer, and then just, you know, do some linear algebra to just update that state, right? And so anything that quantum computers can do can also be done by classical computers, albeit exponentially slower in some cases. So quantum computers don't go into some magical place outside of Alan Turing's definition of computation. Precisely. They do not solve the halting problem. They cannot solve anything that is uncomputable in Alan Turing's sense. What we think they do change is what is efficiently computable, okay? And, you know, since the 1960s, you know, the word efficiently, you know, as well has been a central word in computer science, but it's sort of a code word for something technical, which is basically with polynomial scaling, you know, that as you get to larger and larger inputs, you would like an algorithm that uses an amount of time that scales only like the size of the input raised to some power and not exponentially with the size of the input, right? Yeah, so I do hope we get to talk again because one of the many topics that there's probably several hours worth of conversation on is complexity, which we probably won't even get a chance to touch today, but you briefly mentioned it, but let's maybe try to continue. So you said the definition of quantum supremacy is basically achieving a place where much faster on a formal, that quantum computer is much faster on a formal well defined problem that is or isn't useful. 
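The brute-force classical simulation Scott refers to, storing all 2 to the n amplitudes and updating them with linear algebra whenever a gate is applied, can be sketched in a few lines of numpy. The gate-application helper below is a generic textbook-style construction, not any particular library's API; the point is that this approach always works in principle, but its memory and time grow like 2 to the n.

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector."""
    psi = state.reshape([2] * n)            # one axis per qubit
    psi = np.moveaxis(psi, target, 0)       # bring the target qubit to the front
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(2 ** n)

n = 20                                      # 2**20 ~ a million amplitudes: easy
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                              # all qubits start in |0>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):
    state = apply_single_qubit_gate(state, H, q, n)

print(state[:4])   # uniform superposition: every amplitude is 1/sqrt(2**n)
# At n = 50, this array alone would need 2**50 complex numbers (petabytes),
# which is why the scaling, not the hardware, is the real obstacle.
```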
Yeah, yeah, yeah, right, right. And I would say that we really want three things, right? We want, first of all, the quantum computer to be much faster just in the literal sense of like number of seconds, you know, it's a solving this, you know, well defined, you know, problem. Secondly, we want it to be sort of, you know, for a problem where we really believe that a quantum computer has better scaling behavior, right? So it's not just an incidental, you know, matter of hardware, but it's that, you know, as you went to larger and larger inputs, you know, the classical scaling would be exponential and the scaling for the quantum algorithm would only be polynomial. And then thirdly, we want the first thing, the actual observed speed up to only be explainable in terms of the scaling behavior, right? So, you know, I want, you know, a real world, you know, a real problem to get solved, let's say by a quantum computer with 50 qubits or so, and for no one to be able to explain that in any way other than, well, you know, this computer involved a quantum state with two to the 50th power amplitudes. And, you know, a classical simulation, at least any that we know today, would require keeping track of two to the 50th numbers. And this is the reason why it was faster. So the intuition is that then if you demonstrate on 50 qubits, then once you get to 100 qubits, then it'll be even much more faster. Precisely, precisely. Yeah, and, you know, and quantum supremacy does not require error correction, right? We don't, you know, we don't have, you could say, true scalability yet or true, you know, error correction yet. But you could say quantum supremacy is already enough by itself to refute the skeptics who said a quantum computer will never outperform a classical computer for anything. But one, how do you demonstrate quantum supremacy? And two, what's up with these news articles I'm reading that Google did so? Yeah, all right, well, great, great questions, because now you get into actually, you know, a lot of the work that I've, you know, I and my students have been doing for the last decade, which was precisely about how do you demonstrate quantum supremacy using technologies that, you know, we thought would be available in the near future. And so one of the main things that we realized around 2011, and this was me and my student, Alex Arkhipov at MIT at the time, and independently of some others, including Bremner, Joseph, and Shepherd, okay? And the realization that we came to was that if you just want to prove that a quantum computer is faster, you know, and not do something useful with it, then there are huge advantages to sort of switching your attention from problems like factoring numbers that have a single right answer to what we call sampling problems. So these are problems where the goal is just to output a sample from some probability distribution, let's say over strings of 50 bits, right? So there are, you know, many, many, many possible valid outputs. You know, your computer will probably never even produce the same output twice, you know, if it's running as, even, you know, assuming it's running perfectly, okay? But the key is that some outputs are supposed to be likelier than other ones. So, sorry, to clarify, is there a set of outputs that are valid and set they're not, or is it more that the distribution of a particular kind of output is more, is like there's a specific distribution of a particular kind of output? Yeah, there's a specific distribution that you're trying to hit, right? 
Or, you know, that you're trying to sample from. Now, there are a lot of questions about this, you know, how do you do that, right? Now, how you do it, you know, it turns out that with a quantum computer, even with the noisy quantum computers that we have now, that we have today, what you can do is basically just apply a randomly chosen sequence of operations, right? So we, you know, in some of the, you know, that part is almost trivial, right? We just sort of get the qubits to interact in some random way, although a sort of precisely specified random way so we can repeat the exact same random sequence of interactions again and get another sample from that same distribution. And what this does is it basically, well, it creates a lot of garbage, but, you know, very specific garbage, right? So, you know, of all of the, so we're gonna talk about Google's device that were 53 qubits there, okay? And so there were two to the 53 power possible outputs. Now, for some of those outputs, you know, there was a little bit more destructive interference in their amplitude, okay? So their amplitudes were a little bit smaller. And for others, there was a little more constructive interference. You know, the amplitudes were a little bit more aligned with each other, you know, and so those were a little bit likelier, okay? All of the outputs are exponentially unlikely, but some are, let's say, two times or three times, you know, unlikelier than others, okay? And so you can define, you know, this sequence of operations that gives rise to this probability distribution. Okay, now the next question would be, well, how do you, you know, even if you're sampling from it, how do you verify that, right? How do you know? And so my students and I, and also the people at Google were doing the experiment, came up with statistical tests that you can apply to the outputs in order to try to verify, you know, what is, you know, that at least that some hard problem is being solved. The test that Google ended up using was something that they called the linear cross entropy benchmark, okay? And it's basically, you know, so the drawback of this test is that it requires, like, it requires you to do a two to the 53 time calculation with your classical computer, okay? So it's very expensive to do the test on a classical computer. The good news is... How big of a number is two to the 53? It's about nine quadrillion, okay? That doesn't help. Well, you know, it's, you want it in like scientific notation. No, no, no, what I mean is... Yeah, it is just... It's impossible to run on a... Yeah, so we will come back to that. It is just barely possible to run, we think, on the largest supercomputer that currently exists on Earth, which is called Summit at Oak Ridge National Lab, okay? Great, this is exciting. That's the short answer. So ironically, for this type of experiment, we don't want 100 qubits, okay? Because with 100 qubits, even if it works, we don't know how to verify the results, okay? So we want, you know, a number of qubits that is enough that, you know, the biggest classical computers on Earth will have to sweat, you know, and we'll just barely, you know, be able to keep up with the quantum computer, you know, using much more time, but they will still be able to do it in order that we can verify the results. Which is where the 53 comes from for the number of qubits? Basically, well, I mean, that's also, that's sort of, you know, I mean, that's sort of where they are now in terms of scaling, you know? 
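As a quick back of the envelope check on the numbers quoted here (the 16 bytes per amplitude figure assumes double precision complex numbers, which is an assumption, not something stated in the conversation):

```python
n = 53
outputs = 2 ** n
print(outputs)                      # 9007199254740992, about nine quadrillion
bytes_needed = outputs * 16         # ~16 bytes per double-precision complex amplitude
print(bytes_needed / 1e15, "PB")    # roughly 144 petabytes to hold the full state vector
```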
And then, you know, soon, you know, that point will be passed. And then when you get to larger numbers of qubits, then, you know, these types of sampling experiments will no longer be so interesting because we won't even be able to verify the results and we'll have to switch to other types of computation. So with the sampling thing, you know, so the test that Google applied with this linear cross entropy benchmark was basically just take the samples that were generated, which are, you know, a very small subset of all the possible samples that there are. But for those, you calculate with your classical computer the probabilities that they should have been output. And you see, are those probabilities like larger than the mean? You know, so is the quantum computer biased toward outputting the strings that it's, you know, that you want it to be biased toward? Okay, and then finally, we come to a very crucial question, which is supposing that it does that. Well, how do we know that a classical computer could not have quickly done the same thing, right? How do we know that, you know, this couldn't have been spoofed by a classical computer, right? And so, well, the first answer is we don't know for sure because, you know, this takes us into questions of complexity theory. You know, I mean, questions of the magnitude of the P versus NP question and things like that, right? You know, we don't know how to rule out definitively that there could be fast classical algorithms for, you know, even simulating quantum mechanics and for, you know, simulating experiments like these, but we can give some evidence against that possibility. And that was sort of the, you know, the main thrust of a lot of the work that my colleagues and I did, you know, over the last decade, which is then sort of in around 2015 or so, what led to Google deciding to do this experiment. So is the kind of evidence here, first of all, the hard P equals NP problem that you mentioned and the kind of evidence that you were looking at, is that something you come to on a sheet of paper or is this something, are these empirical experiments? It's math for the most part. I mean, you know, it's also, you know, we have a bunch of methods that are known for simulating quantum circuits or quantum computations with classical computers. And so we have to try them all out and make sure that, you know, they don't work, you know, make sure that they have exponential scaling on, you know, these problems and not just theoretically, but with the actual range of parameters that are actually, you know, arising in Google's experiment. Okay, so there is an empirical component to it, right? But now on the theoretical side, you know, basically what we know how to do in theoretical computer science and computational complexity is, you know, we don't know how to prove that most of the problems we care about are hard, but we know how to pass the blame to someone else, okay? We know how to say, well, look, you know, I can't prove that this problem is hard, but if it is easy, then all these other things that, you know, you probably were much more confident or were hard, then those would be easy as well, okay? So we can give what are called reductions. This has been the basic strategy in, you know, NP completeness, right, in all of theoretical computer science and cryptography since the 1970s, really. And so we were able to give some reduction evidence for the hardness of simulating these sampling experiments, these sampling based quantum supremacy experiments. 
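As a rough sketch of the statistic described above: one common form of the linear cross entropy benchmark averages the classically computed ideal probabilities of the strings the device actually output. The exact normalization and the inputs below are illustrative, not necessarily what Google used.

```python
import numpy as np

def linear_xeb(ideal_probs, samples, n_qubits):
    # ideal_probs[x]: probability of bitstring x under a perfect, classically
    # simulated run of the circuit; samples: bitstrings observed on the device.
    # Roughly, the score is near 1 for an ideal device and near 0 for uniform noise.
    return (2 ** n_qubits) * np.mean([ideal_probs[s] for s in samples]) - 1

# Tiny hypothetical example with 2 qubits: the device is biased toward '11',
# the string the ideal distribution also favors, so the score comes out positive.
ideal = {'00': 0.05, '01': 0.15, '10': 0.20, '11': 0.60}
observed = ['11', '11', '10', '11', '01', '11']
print(linear_xeb(ideal, observed, n_qubits=2))
```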
So reduction evidence is not as satisfactory as it should be. One of the biggest open problems in this area is to make it better. But, you know, we can do something. You know, certainly we can say that, you know, if there is a fast classical algorithm to spoof these experiments, then it has to be very, very unlike any of the algorithms that we know. Which is kind of in the same kind of space of reasoning that people say P not equals NP. Yeah, it's in the same spirit. Okay, so Andrew Yang, a very intelligent presidential candidate with a lot of interesting ideas in all kinds of technological fields, tweeted that because of quantum computing, no code is uncrackable. Is he wrong or right? He was premature, let's say. So, well, okay, wrong. Look, I'm actually, you know, I'm a fan of Andrew Yang. I like his ideas. I like his candidacy. I think that, you know, he may be ahead of his time with, you know, the universal basic income and so forth. And he may also be ahead of his time in that tweet that you referenced. So regarding using quantum computers to break cryptography, so the situation is this, okay? So the famous discovery of Peter Shor, you know, 26 years ago that really started quantum computing, you know, as an autonomous field was that if you built a full scalable quantum computer, then you could use it to efficiently find the prime factors of huge numbers and calculate discrete logarithms and solve a few other problems that are very, very special in character, right? They're not NP complete problems. We're pretty sure they're not, okay? But it so happens that most of the public key cryptography that we currently use to protect the internet is based on the belief that these problems are hard. Okay, what Shor showed is that once you get scalable quantum computers, then that's no longer true, okay? But now, you know, before people panic, there are two important points to understand here. Okay, the first is that quantum supremacy, the milestone that Google just achieved, is very, very far from the kind of scalable quantum computer that would be needed to actually threaten public key cryptography. Okay, so, you know, we touched on this earlier, right? But Google's device has 53 physical qubits, right? To threaten cryptography, you're talking, you know, with any of the known error correction methods, you're talking millions of physical qubits. Because error correction would be required to threaten cryptography. Yes, yes, yes, it certainly would, right? And, you know, how much, you know, how great will the overhead be from the error correction? That we don't know yet. But with the known codes, you're talking millions of physical qubits and of a much higher quality than any that we have now, okay? So, you know, I don't think that that is, you know, coming soon, although people who have secrets that, you know, need to stay secret for 20 years, you know, are already worried about this, you know, for the good reason that, you know, we presume that intelligence agencies are already scooping up data, you know, in the hope that eventually they'll be able to decode it once quantum computers become available, okay? So this brings me to the second point I wanted to make, which is that there are other public key cryptosystems that are known that we don't know how to break even with quantum computers, okay? And so there's a whole field devoted to this now, which is called post quantum cryptography, okay? And so there is already, so we have some good candidates now.
The best known being what are called lattice based cryptosystems. And there is already some push to try to migrate to these cryptosystems. So NIST in the US is holding a competition to create standards for post quantum cryptography, which will be the first step in trying to get every web browser and every router to upgrade, you know, and use, you know, something like SSL that would be based on, you know, what we think is quantum secure cryptography. But, you know, this will be a long process. But, you know, it is something that people are already starting to do. And so, you know, Shor's algorithm is sort of a dramatic discovery. You know, it could be a big deal for whatever intelligence agency first gets a scalable quantum computer, at least certainly if no one else knows that they have it, right? But eventually we think that we could migrate the internet to the post quantum cryptography and then we'd be more or less back where we started. Okay, so this is sort of not the application of quantum computing that I think is really gonna change the world in a sustainable way, right? By the way, the biggest practical application of quantum computing that we know about by far, I think, is simply the simulation of quantum mechanics itself. In order to, you know, learn about chemical reactions, you know, design maybe new chemical processes, new materials, new drugs, new solar cells, new superconductors, all kinds of things like that. What's the size of a quantum computer that would be able to simulate the, you know, quantum mechanical systems themselves that would be impactful for the real world for the kind of chemical reactions and that kind of work? What scale are we talking about? Now you're asking a very, very current question, a very big question. People are going to be racing over the next decade to try to do useful quantum simulations even with, you know, 100 or 200 qubit quantum computers of the sort that we expect to be able to build over the next decade. Okay, so that might be, you know, the first application of quantum computing that we're able to realize, you know, or maybe it will prove to be too difficult and maybe even that will require fault tolerance or, you know, will require error correction. So there's an aggressive race to come up with the one case study, kind of like Peter Shor with the idea that would just capture the world's imagination of like, look, we can actually do something very useful here. Right, but I think, you know, within the next decade, the best shot we have is certainly not, you know, using Shor's algorithm to break cryptography, you know, just because it requires, you know, too much in the way of error correction. The best shot we have is to do some quantum simulation that tells the material scientists or chemists or nuclear physicists, you know, something that is useful to them and that they didn't already know, you know, and you might only need one or two successes in order to change some, you know, billion dollar industries, right? Like, you know, the way that people make fertilizer right now is still based on the Haber Bosch process from a century ago. And it is some many body quantum mechanics problem that no one really understands, right? If you could design a better way to make fertilizer, right? That's, you know, billions of dollars right there. So those are sort of the applications that people are going to be aggressively racing toward over the next decade.
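As an aside on Shor's algorithm mentioned a moment ago, here is a toy sketch of its classical half: once you know the period r of a^x mod N, the factors of N fall out of a gcd. The period is found below by brute force, which is exactly the exponentially slow step that a scalable quantum computer would replace. The numbers are toy values chosen only for illustration.

```python
from math import gcd

N, a = 15, 7              # toy modulus and base; gcd(a, N) must be 1
r = 1
while pow(a, r, N) != 1:  # brute-force period finding: the exponentially slow
    r += 1                # classical step that Shor's quantum subroutine replaces

if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
    p = gcd(pow(a, r // 2) - 1, N)
    q = gcd(pow(a, r // 2) + 1, N)
    print(p, q)           # prints 3 5, the prime factors of 15
```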
Now, I don't know if they're gonna realize it or not, but, you know, they certainly at least have a shot. So it's gonna be a very, very interesting next decade. But just to clarify, what's your intuition? If a breakthrough like that comes with, is it possible for that breakthrough to be on 50 to 100 qubits or is scale a fundamental thing like 500, 1000 plus qubits? Yeah, so I can tell you what the current studies are saying. You know, I think probably better to rely on that than on my intuition. But, you know, there was a group at Microsoft had a study a few years ago that said even with only about 100 qubits, you know, you could already learn something new about the chemical reaction that makes fertilizer, for example. The trouble is they're talking about 100 qubits and about a million layers of quantum gates. Okay, so basically they're talking about 100 nearly perfect qubits. So the logical qubits, as you mentioned before. Yeah, exactly, 100 logical qubits. And now, you know, the hard part for the next decade is gonna be, well, what can we do with 100 to 200 noisy qubits? Yeah, is there error correction breakthroughs that might come without the need to do thousands or millions of physical qubits? Yeah, so people are gonna be pushing simultaneously on a bunch of different directions. One direction, of course, is just making the qubits better, right? And, you know, there is tremendous progress there. I mean, you know, the fidelity is like the accuracy of the qubits has improved by several orders of magnitude, you know, in the last decade or two. Okay, the second thing is designing better error, you know, let's say lower overhead error correcting codes and even short of doing the full recursive error correction. You know, there are these error mitigation strategies that you can use, you know, that may, you know, allow you to eke out a useful speed up in the near term. And then the third thing is just taking the quantum algorithms for simulating quantum chemistry or materials and making them more efficient. You know, and those algorithms are already dramatically more efficient than they were, let's say, five years ago. And so when, you know, I quoted these estimates like, you know, circuit depth of one million. And so, you know, I hope that because people will care enough that these numbers are gonna come down. So you're one of the world class researchers in this space. There's a few groups like you mentioned, Google and IBM working at this. There's other research labs, but you put also, you have an amazing blog. You just, you put a lot, you paid me to say it. You put a lot of effort sort of to communicating the science of this and communicating, exposing some of the BS and sort of the natural, just like in the AI space, the natural charlatanism, if that's a word in this, in the quantum mechanics in general, but quantum computers and so on. Can you give some notes about people or ideas that people like me or listeners in general from outside the field should be cautious of when they're taking in news headings that Google achieved quantum supremacy? So what should we look out for? Where's the charlatans in the space? Where's the BS? Yeah, so good question. Unfortunately, quantum computing is a little bit like cryptocurrency or deep learning. Like there is a core of something that is genuinely revolutionary and exciting. And because of that core, it attracts this sort of vast penumbra of people making just utterly ridiculous claims. 
And so with quantum computing, I mean, I would say that the main way that people go astray is by not focusing on sort of the question of, are you getting a speed up over a classical computer or not? And so people have like dismissed quantum supremacy because it's not useful, right? Or it's not itself, let's say, obviously useful for anything. Okay, but ironically, these are some of the same people who will go and say, well, we care about useful applications. We care about solving traffic routing and financial optimization and all these things. And that sounds really good, but their entire spiel is sort of counting on nobody asking the question, yes, but how well could a classical computer do the same thing, right? I really mean the entire thing is they say, well, a quantum computer can do this, a quantum computer can do that. A quantum computer can do that, right? And they just avoid the question, are you getting a speed up over a classical computer or not? And if so, how do you know? Have you really thought carefully about classical algorithms to solve the same problem, right? And a lot of the application areas that the companies and investors are most excited about that the popular press is most excited about where quantum computers have been things like machine learning, AI, optimization, okay? And the problem with that is that since the very beginning, even if you have a perfect fault tolerant, scalable quantum computer, we have known of only modest speed ups that you can get for these problems, okay? So there is a famous quantum algorithm called Grover's algorithm, okay? And what it can do is it can solve many, many of the problems that arise in AI, machine learning, optimization, including NP complete problems, okay? But it can solve them in about the square root of the number of steps that a classical computer would need for the same problems, okay? Now a square root speed up is important, it's impressive. It is not an exponential speed up, okay? So it is not the kind of game changer that let's say Shor's algorithm for factoring is, or for that matter that simulation of quantum mechanics is, okay, it is a more modest speed up. And let's say roughly, in theory, it could roughly double the size of the optimization problems that you could handle, right? And so because people found that I guess too boring or too unimpressive, they've gone on to like invent all of these heuristic algorithms where because no one really understands them, you can just project your hopes onto them, right? That, well, maybe it gets an exponential speed up. You can't prove that it doesn't, and the burden is on you to prove that it doesn't get a speed up, right? And so they've done an immense amount of that kind of thing. And a really worrying amount of the case for building a quantum computer has come to rest on this stuff that those of us in this field know perfectly well is on extremely shaky foundations. So the fundamental question is, show that there's a speed up over the classical. Absolutely. And in this space that you're referring to, which is actually really interesting, it's the area that a lot of people excited about is machine learning. So your sense is, do you think it will, I know that there's a lot of smoke currently, but do you think there actually eventually might be breakthroughs where you do get exponential speed ups in the machine learning space? Absolutely, there might be. I mean, I think we know of modest speed ups that you can get for these problems. 
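As an aside, the square root behavior of Grover's algorithm mentioned above is easy to see in a toy state vector simulation: finding one marked item among N takes on the order of the square root of N iterations, versus roughly N classical guesses. The qubit count and marked index below are arbitrary illustration values.

```python
import numpy as np

n = 10                                    # qubits
N = 2 ** n                                # size of the search space
marked = 123                              # index of the single marked item (arbitrary)

state = np.full(N, 1 / np.sqrt(N))        # uniform superposition over all N items
iterations = int(np.pi / 4 * np.sqrt(N))  # ~25 iterations, versus ~N/2 classical guesses

for _ in range(iterations):
    state[marked] *= -1                   # oracle: flip the sign of the marked amplitude
    state = 2 * state.mean() - state      # diffusion: reflect every amplitude about the mean

print(iterations, round(state[marked] ** 2, 3))   # success probability close to 1
```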
I think, you know, whether you can get bigger speed ups is one of the biggest questions for quantum computing theory, you know, for people like me to be thinking about. Now, you know, we had actually recently a really, you know, a super exciting candidate for an exponential quantum speed up for a machine learning problem that people really care about. This is basically the Netflix problem, the problem of recommending products to users given some sparse data about their preferences. Kerenidis and Prakash in 2016 had an algorithm for sampling recommendations that was exponentially faster than any known classical algorithm, right? And so, you know, a lot of people were excited. I was excited about it. I had an 18 year old undergrad by the name of Ewin Tang, and she was obviously brilliant. She was looking for a project. I gave her as a project, can you prove that this speed up is real? Can you prove that, you know, any classical algorithm would need to access exponentially more data, right? And, you know, this was a case where if that was true, this was not like a P versus NP type of question, right? This might well have been provable, but she worked on it for a year. She couldn't do it. Eventually she figured out why she couldn't do it. And the reason was that that was false. There is a classical algorithm with a similar performance to the quantum algorithm. So Ewin succeeded in dequantizing that machine learning algorithm. And then in the last couple of years, building on Ewin's breakthrough, a bunch of the other quantum machine learning algorithms that were proposed have now also been dequantized. Yeah. Okay, and so I would say, yeah. That's a kind of important backwards step. Yes. Like a forward step for science, but a step for quantum machine learning that precedes the big next forward step. Right, right, right. If it's possible. Right, now some people will say, well, you know, there's a silver lining in this cloud. They say, well, thinking about quantum computing has led to the discovery of potentially useful new classical algorithms. That's true. And so, you know, so you get these spinoff applications, but if you want a quantum speed up, you really have to think carefully about that. You know, Ewin's work was a perfect illustration of why. Right, and I think that, you know, the challenge, you know, the field is now open, right? Find a better example, find, you know, where quantum computers are going to deliver big gains for machine learning. You know, I am, you know, not only do I ardently support, you know, people thinking about that, I'm trying to think about it myself and have my students and postdocs think about it, but we should not pretend that those speed ups are already established. And the problem comes when so many of the companies and, you know, and journalists in this space are pretending that. Like all good things, like life itself, this conversation must soon come to an end. Let me ask the most absurdly philosophical last question. What is the meaning of life? What gives your life fulfillment, purpose, happiness, and yeah, meaning? I would say, you know, number one, trying to discover new things about the world and share them and, you know, communicate and learn what other people have discovered. You know, number two, you know, my friends, my family, my kids, my students, you know, just the people around me. Number three, you know, trying, you know, when I can to, you know, make the world better in some small ways.
And, you know, it's depressing that I can't do more and that, you know, the world is, you know, facing crises over, you know, the climate and over, you know, sort of resurgent authoritarianism and all these other things, but, you know, trying to stand against the things that I find horrible when I can. Let me ask you one more absurd question. What makes you smile? Well, yeah, I guess your question just did. I don't know. I thought I tried that absurd one on you. Well, it was a huge honor to talk to you. We'll probably talk to you for many more hours, Scott. Thank you so much. Well, thank you. Thank you. It was great. Thank you for listening to this conversation with Scott Aaronson. And thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast, you'll get $10 and $10 will go to FIRST, an organization that inspires and educates young minds to become science and technology innovators of tomorrow. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Friedman. Now, let me leave you with some words from a funny and insightful blog post Scott wrote over 10 years ago on the ever present Malthusianisms in our daily lives. Quote, again and again, I've undergone the humbling experience of first lamenting how badly something sucks, then only much later having the crucial insight that it's not sucking wouldn't have been a Nash equilibrium. Thank you for listening. I hope to see you next time.
Scott Aaronson: Quantum Computing | Lex Fridman Podcast #72
The following is a conversation with Andrew Ng, one of the most impactful educators, researchers, innovators, and leaders in artificial intelligence and technology space in general. He cofounded Coursera and Google Brain, launched Deep Learning AI, Landing AI, and the AI Fund, and was the chief scientist at Baidu. As a Stanford professor and with Coursera and Deep Learning AI, he has helped educate and inspire millions of students, including me. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Broker services are provided by Cash App Investing, a subsidiary of Square, a member SIPC. Since Cash App allows you to buy Bitcoin, let me mention that cryptocurrency in the context of the history of money is fascinating. I recommend Ascent of Money as a great book on this history. Debits and credits on ledgers started over 30,000 years ago. The US dollar was created over 200 years ago, and Bitcoin, the first decentralized cryptocurrency, released just over 10 years ago. So given that history, cryptocurrency is still very much in its early days of development, but it's still aiming to and just might redefine the nature of money. So again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, one of my favorite organizations that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Andrew Ng. The courses you taught on machine learning at Stanford and later on Coursera that you cofounded have educated and inspired millions of people. So let me ask you, what people or ideas inspired you to get into computer science and machine learning when you were young? When did you first fall in love with the field, is another way to put it. Growing up in Hong Kong and Singapore, I started learning to code when I was five or six years old. At that time, I was learning the basic programming language, and they would take these books and they'll tell you, type this program into your computer, so type that program to my computer. And as a result of all that typing, I would get to play these very simple shoot them up games that I had implemented on my little computer. So I thought it was fascinating as a young kid that I could write this code. I was really just copying code from a book into my computer to then play these cool little video games. Another moment for me was when I was a teenager and my father, who's a doctor, was reading about expert systems and about neural networks. So he got me to read some of these books, and I thought it was really cool. You could write a computer that started to exhibit intelligence. Then I remember doing an internship while I was in high school, this was in Singapore, where I remember doing a lot of photocopying and as an office assistant. And the highlight of my job was when I got to use the shredder. 
So the teenager me, remote thinking, boy, this is a lot of photocopying. If only we could write software, build a robot, something to automate this, maybe I could do something else. So I think a lot of my work since then has centered on the theme of automation. Even the way I think about machine learning today, we're very good at writing learning algorithms that can automate things that people can do. Or even launching the first MOOCs, Mass Open Online Courses, that later led to Coursera. I was trying to automate what could be automatable in how I was teaching on campus. Process of education, trying to automate parts of that to make it more, sort of to have more impact from a single teacher, a single educator. Yeah, I felt, you know, teaching at Stanford, teaching machine learning to about 400 students a year at the time. And I found myself filming the exact same video every year, telling the same jokes in the same room. And I thought, why am I doing this? Why don't we just take last year's video? And then I can spend my time building a deeper relationship with students. So that process of thinking through how to do that, that led to the first MOOCs that we launched. And then you have more time to write new jokes. Are there favorite memories from your early days at Stanford, teaching thousands of people in person and then millions of people online? You know, teaching online, what not many people know was that a lot of those videos were shot between the hours of 10 p.m. and 3 a.m. A lot of times, we were launching the first MOOCs at Stanford. We had already announced the course, about 100,000 people signed up. We just started to write the code and we had not yet actually filmed the videos. So a lot of pressure, 100,000 people waiting for us to produce the content. So many Fridays, Saturdays, I would go out, have dinner with my friends, and then I would think, OK, do you want to go home now? Or do you want to go to the office to film videos? And the thought of being able to help 100,000 people potentially learn machine learning, fortunately, that made me think, OK, I want to go to my office, go to my tiny little recording studio. I would adjust my Logitech webcam, adjust my Wacom tablet, make sure my lapel mic was on, and then I would start recording often until 2 a.m. or 3 a.m. I think unfortunately, that doesn't show that it was recorded that late at night, but it was really inspiring the thought that we could create content to help so many people learn about machine learning. How did that feel? The fact that you're probably somewhat alone, maybe a couple of friends recording with a Logitech webcam and kind of going home alone at 1 or 2 a.m. at night and knowing that that's going to reach sort of thousands of people, eventually millions of people, what's that feeling like? I mean, is there a feeling of just satisfaction of pushing through? I think it's humbling. And I wasn't thinking about what I was feeling. I think one thing that I'm proud to say we got right from the early days was I told my whole team back then that the number one priority is to do what's best for learners, do what's best for students. And so when I went to the recording studio, the only thing on my mind was what can I say? How can I design my slides? What I need to draw right to make these concepts as clear as possible for learners? I think I've seen sometimes instructors is tempting to, hey, let's talk about my work. Maybe if I teach you about my research, someone will cite my papers a couple more times. 
And I think one of the things we got right, launching the first few MOOCs and later building Coursera, was putting in place that bedrock principle of let's just do what's best for learners and forget about everything else. And I think that that is a guiding principle turned out to be really important to the rise of the MOOC movement. And the kind of learner you imagined in your mind is as broad as possible, as global as possible. So really try to reach as many people interested in machine learning and AI as possible. I really want to help anyone that had an interest in machine learning to break into the field. And I think sometimes I've actually had people ask me, hey, why are you spending so much time explaining gradient descent? And my answer was, if I look at what I think the learner needs and what benefit from, I felt that having that a good understanding of the foundations coming back to the basics would put them in a better stead to then build on a long term career. So try to consistently make decisions on that principle. So one of the things you actually revealed to the narrow AI community at the time and to the world is that the amount of people who are actually interested in AI is much larger than we imagined. By you teaching the class and how popular it became, it showed that, wow, this isn't just a small community of sort of people who go to NeurIPS and it's much bigger. It's developers, it's people from all over the world. I mean, I'm Russian, so everybody in Russia is really interested. There's a huge number of programmers who are interested in machine learning, India, China, South America, everywhere. There's just millions of people who are interested in machine learning. So how big do you get a sense that the number of people is that are interested from your perspective? I think the number has grown over time. I think it's one of those things that maybe it feels like it came out of nowhere, but it's an insight that building it, it took years. It's one of those overnight successes that took years to get there. My first foray into this type of online education was when we were filming my Stanford class and sticking the videos on YouTube and some other things. We had uploaded the horrors and so on, but it's basically the one hour, 15 minute video that we put on YouTube. And then we had four or five other versions of websites that I had built, most of which you would never have heard of because they reached small audiences, but that allowed me to iterate, allowed my team and me to iterate, to learn what are the ideas that work and what doesn't. For example, one of the features I was really excited about and really proud of was build this website where multiple people could be logged into the website at the same time. So today, if you go to a website, if you are logged in and then I want to log in, you need to log out because it's the same browser, the same computer. But I thought, well, what if two people say you and me were watching a video together in front of a computer? What if a website could have you type your name and password, have me type my name and password, and then now the computer knows both of us are watching together and it gives both of us credit for anything we do as a group. Influencers feature rolled it out in a high school in San Francisco. We had about 20 something users. Where's the teacher there? Sacred Heart Cathedral Prep, the teacher is great. I mean, guess what? Zero people use this feature. 
It turns out people studying online, they want to watch the videos by themselves. So you can play back, pause at your own speed rather than in groups. So that was one example of a tiny lesson learned out of many that allowed us to hone into the set of features. It sounds like a brilliant feature. So I guess the lesson to take from that is there's something that looks amazing on paper and then nobody uses it. It doesn't actually have the impact that you think it might have. And so, yeah, I saw that you really went through a lot of different features and a lot of ideas to arrive at Coursera, the final kind of powerful thing that showed the world that MOOCs can educate millions. And I think with the whole machine learning movement as well, I think it didn't come out of nowhere. Instead, what happened was as more people learn about machine learning, they will tell their friends and their friends will see how it's applicable to their work. And then the community kept on growing. And I think we're still growing. I don't know in the future what percentage of all developers will be AI developers. I could easily see it being north of 50%, right? Because so many AI developers broadly construed, not just people doing the machine learning modeling, but the people building infrastructure, data pipelines, all the software surrounding the core machine learning model maybe is even bigger. I feel like today almost every software engineer has some understanding of the cloud. Not all, but maybe this is my microcontroller developer that doesn't need to deal with the cloud. But I feel like the vast majority of software engineers today are sort of having an appreciation of the cloud. I think in the future, maybe we'll approach nearly 100% of all developers being in some way an AI developer or at least having an appreciation of machine learning. And my hope is that there's this kind of effect that there's people who are not really interested in being a programmer or being into software engineering, like biologists, chemists, and physicists, even mechanical engineers, all these disciplines that are now more and more sitting on large data sets. And here they didn't think they're interested in programming until they have this data set and they realize there's this set of machine learning tools that allow you to use the data set. So they actually become, they learn to program and they become new programmers. So like the, not just because you've mentioned a larger percentage of developers become machine learning people. So it seems like more and more the kinds of people who are becoming developers is also growing significantly. Yeah, I think once upon a time, only a small part of humanity was literate, could read and write. And maybe you thought, maybe not everyone needs to learn to read and write. You just go listen to a few monks read to you and maybe that was enough. Or maybe you just need a few handful of authors to write the bestsellers and no one else needs to write. But what we found was that by giving as many people, in some countries, almost everyone, basic literacy, it dramatically enhanced human to human communications. And we can now write for an audience of one, such as if I send you an email or you send me an email. I think in computing, we're still in that phase where so few people know how to code that the coders mostly have to code for relatively large audiences. 
But if everyone, or most people became developers at some level, similar to how most people in developed economies are somewhat literate, I would love to see the owners of a mom and pop store be able to write a little bit of code to customize the TV display for their special this week. And I think it will enhance human to computer communications, which is becoming more and more important today as well. So you think it's possible that machine learning becomes kind of similar to literacy, where like you said, the owners of a mom and pop shop, is basically everybody in all walks of life would have some degree of programming capability? I could see society getting there. There's one other interesting thing. If I go talk to the mom and pop store, if I talk to a lot of people in their daily professions, I previously didn't have a good story for why they should learn to code. We could give them some reasons. But what I found with the rise of machine learning and data science is that I think the number of people with a concrete use for data science in their daily lives, in their jobs, may be even larger than the number of people who have concrete use for software engineering. For example, if you run a small mom and pop store, I think if you can analyze the data about your sales, your customers, I think there's actually real value there, maybe even more than traditional software engineering. So I find that for a lot of my friends in various professions, be it recruiters or accountants or people that work in the factories, which I deal with more and more these days, I feel if they were data scientists at some level, they could immediately use that in their work. So I think that data science and machine learning may be an even easier entree into the developer world for a lot of people than the software engineering. That's interesting. And I agree with that, but that's beautifully put. But we live in a world where most courses and talks have slides, PowerPoint, keynote, and yet you famously often still use a marker and a whiteboard. The simplicity of that is compelling, and for me at least, fun to watch. So let me ask, why do you like using a marker and whiteboard, even on the biggest of stages? I think it depends on the concepts you want to explain. For mathematical concepts, it's nice to build up the equation one piece at a time, and the whiteboard marker or the pen and stylus is a very easy way to build up the equation, to build up a complex concept one piece at a time while you're talking about it, and sometimes that enhances understandability. The downside of writing is that it's slow, and so if you want a long sentence, it's very hard to write that. So I think there are pros and cons, and sometimes I use slides, and sometimes I use a whiteboard or a stylus. The slowness of a whiteboard is also its upside, because it forces you to reduce everything to the basics. Some of your talks involve the whiteboard. I mean, you go very slowly, and you really focus on the most simple principles, and that's a beautiful, that enforces a kind of a minimalism of ideas that I think is surprising at least for me is great for education. Like a great talk, I think, is not one that has a lot of content. A great talk is one that just clearly says a few simple ideas, and I think the whiteboard somehow enforces that. Peter Abbeel, who's now one of the top roboticists and reinforcement learning experts in the world, was your first PhD student. 
So I bring him up just because I kind of imagine this must have been an interesting time in your life, and do you have any favorite memories of working with Peter, since he was your first student in those uncertain times, especially before deep learning really sort of blew up? Any favorite memories from those times? Yeah, I was really fortunate to have had Peter Abbeel as my first PhD student, and I think even my long term professional success builds on early foundations or early work that Peter was so critical to. So I was really grateful to him for working with me. What not a lot of people know is just how hard research was, and still is. Peter's PhD thesis was using reinforcement learning to fly helicopters. And so, even today, the website heli.stanford.edu, heli.stanford.edu is still up. You can watch videos of us using reinforcement learning to make a helicopter fly upside down, fly loops and rolls, so it's cool. It's one of the most incredible robotics videos ever, so people should watch it. Oh yeah, thank you. It's inspiring. That's from like 2008 or seven or six, like that range. Yeah, something like that. Yeah, so it was over 10 years old. That was really inspiring to a lot of people, yeah. What not many people see is how hard it was. So Peter and Adam Coates and Morgan Quigley and I were working on various versions of the helicopter, and a lot of things did not work. For example, it turns out one of the hardest problems we had was when the helicopter's flying around upside down, doing stunts, how do you figure out the position? How do you localize the helicopter? So we wanted to try all sorts of things. Having one GPS unit doesn't work because you're flying upside down, the GPS unit's facing down, so you can't see the satellites. So we experimented trying to have two GPS units, one facing up, one facing down. So if you flip over, that didn't work because the downward facing one couldn't synchronize if you're flipping quickly. Morgan Quigley was exploring this crazy, complicated configuration of specialized hardware to interpret GPS signals, looking at FPGAs, completely insane. Spent about a year working on that, didn't work. So I remember Peter, great guy, him and me, sitting down in my office looking at some of the latest things we had tried that didn't work and saying, darn it, what now? Because we tried so many things and it just didn't work. In the end, what we did, and Adam Coates was crucial to this, was put cameras on the ground and use cameras on the ground to localize the helicopter. And that solved the localization problem so that we could then focus on the reinforcement learning and inverse reinforcement learning techniques to actually make the helicopter fly. And I'm reminded, when I was doing this work at Stanford, around that time, there was a lot of reinforcement learning theoretical papers, but not a lot of practical applications. So the autonomous helicopter work for flying helicopters was one of the few practical applications of reinforcement learning at the time, which caused it to become pretty well known. I feel like we might have almost come full circle with today. There's so much buzz, so much hype, so much excitement about reinforcement learning. But again, we're hunting for more applications of all of these great ideas that David Kuhnke has come up with. What was the drive sort of in the face of the fact that most people are doing theoretical work?
What motivates you, in the uncertainty and the challenges, to get the helicopter, sort of to do the applied work, to get the actual system to work? Yeah, in the face of fear, uncertainty, sort of the setbacks that you mentioned for localization. I like stuff that works. In the physical world. So like, it's back to the shredder. You know, I like theory, but when I work on theory myself, and this is personal taste, I'm not saying anyone else should do what I do. But when I work on theory, I personally enjoy it more if I feel that the work I do will influence people, have positive impact, or help someone. I remember many years ago, I was speaking with a mathematics professor, and I kind of just asked him, hey, why do you do what you do? And then he said, he had stars in his eyes when he answered. And this mathematician, not from Stanford, different university, he said, I do what I do because it helps me to discover truth and beauty in the universe. He had stars in his eyes when he said that. And I thought, that's great. I don't want to do that. I think it's great that someone does that, fully support the people that do it, a lot of respect for people that do that. But I am more motivated when I can see a line to how the work that my teams and I are doing helps people. The world needs all sorts of people. I'm just one type. I don't think everyone should do things the same way as I do. But when I delve into either theory or practice, if I personally have conviction that here's a pathway to help people, I find that more satisfying to have that conviction. That's your path. You were a proponent of deep learning before it gained widespread acceptance. What did you see in this field that gave you confidence? What was your thinking process like in that first decade of the, I don't know what that's called, 2000s, the aughts? Yeah, I can tell you the thing we got wrong and the thing we got right. The thing we really got wrong was the importance of, the early importance of unsupervised learning. So in the early days of Google Brain, we put a lot of effort into unsupervised learning rather than supervised learning. And there was this argument, I think it was around 2005, after NeurIPS, at that time called NIPS, had ended. And Jeff Hinton and I were sitting in the cafeteria outside the conference. We had lunch, we were just chatting. And Jeff pulled up this napkin. He started sketching this argument on a napkin. It was very compelling, so I'll repeat it. The human brain has about a hundred trillion, so 10 to the 14, synaptic connections. You will live for about 10 to the nine seconds. That's 30 years. You actually live for two by 10 to the nine, maybe three by 10 to the nine seconds. So just let's say 10 to the nine. So if each synaptic connection, each weight in your brain's neural network, has just a one bit parameter, that's 10 to the 14 bits you need to learn in up to 10 to the nine seconds of your life. So via this simple argument, which has a lot of problems, it's very simplified, that's 10 to the five bits per second you need to learn in your life. And I have a one year old daughter. I am not pointing out 10 to the five bits per second of labels to her. And I think I'm a very loving parent, but I'm just not gonna do that. So from this very crude, definitely problematic argument, there's just no way that most of what we know is through supervised learning.
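As a quick check of the napkin arithmetic above, using the order of magnitude numbers quoted in the conversation:

```python
synapses = 1e14            # ~10^14 synaptic connections, one bit parameter each
lifetime_seconds = 1e9     # ~30 years is on the order of 10^9 seconds
print(synapses / lifetime_seconds)   # 1e5 bits per second of learning required,
                                     # far more than any plausible stream of labels
```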
But where you get so many bits of information is from sucking in images, audio, those experiences in the world. And so that argument, and there are a lot of known flaws with this argument, we shouldn't go into all of them, really convinced me that there's a lot of power to unsupervised learning. So that was the part that we actually maybe got wrong. I still think unsupervised learning is really important, but in the early days, 10, 15 years ago, a lot of us thought that was the path forward. Oh, so you're saying that that perhaps was the wrong intuition for the time. For the time, that was the part we got wrong. The part we got right was the importance of scale. So Adam Coates, another wonderful person, fortunate to have worked with him, he was in my group at Stanford at the time, and Adam had run these experiments at Stanford showing that the bigger we train a learning algorithm, the better its performance. And it was based on that. There was a graph that Adam generated where the X axis was the size of the model, the Y axis was accuracy, and the lines were going up and to the right. So the bigger you make this thing, the better its performance; accuracy is the vertical axis. So it's really based on that chart that Adam generated that gave me the conviction that if we could scale these models way bigger than what we could on the few CPUs we had at Stanford, we could get even better results. And it was really based on that one figure that Adam generated that gave me the conviction to go with Sebastian Thrun to pitch starting a project at Google, which became the Google Brain project. The Brain, you go find a Google Brain. And there the intuition was scale will bring performance for the system. So we should chase a larger and larger scale. And I think people don't realize how groundbreaking it is. It's simple, but it's a groundbreaking idea that bigger data sets will result in better performance. It was controversial at the time. Some of my well meaning friends, senior people in the machine learning community, I won't name, but some of whom we know, my well meaning friends came and were trying to give me friendly advice, like, hey, Andrew, why are you doing this? This is crazy. It's in the neural network architecture, look at these architectures people are building. You just want to go for scale? Like this is a bad career move. So my well meaning friends, some of them were trying to talk me out of it. But I find that if you want to make a breakthrough, you sometimes have to have conviction and do something before it's popular, since that lets you have a bigger impact. Let me ask you just a small tangent on that topic. I find myself arguing with people saying that greater scale, especially in the context of active learning, so very carefully selecting the data set, but growing the scale of the data set, is going to lead to even further breakthroughs in deep learning. And there's currently pushback at that idea, that larger data sets are no longer as important, so you want to increase the efficiency of learning, you want to make better learning mechanisms. And I personally believe that bigger data sets will still, with the same learning methods we have now, result in better performance. What's your intuition at this time on this dual side? Do we need to come up with better architectures for learning or can we just get bigger, better data sets that will improve performance? I think both are important and it's also problem dependent.
So for a few data sets, we may be approaching the Bayes error rate, or approaching or surpassing human level performance, and then there's that theoretical ceiling that we will never surpass, the Bayes error rate. But then I think there are plenty of problems where we're still quite far from either human level performance or from the Bayes error rate, and bigger data sets with neural networks, without further algorithmic innovation, will be sufficient to take us further. But on the flip side, if we look at the recent breakthroughs using transformer networks or language models, it was a combination of novel architecture, but also scale had a lot to do with it. If we look at what happened with GPT-2 and BERT, I think scale was a large part of the story. Yeah, what's not often talked about is the scale of the data set it was trained on and the quality of the data set, because there's some, so it was like Reddit threads that were upvoted highly. So there's already some weak supervision on a very large data set that people don't often talk about, right? I find that today we have maturing processes for managing code, things like Git, right? Version control. It took us a long time to evolve the good processes. I remember when my friends and I were emailing each other C++ files in email, but then we had, was it CVS, Subversion, Git? Maybe something else in the future. We're much less mature in terms of tools for managing data, thinking about how to clean data and how to deal with very hard, messy data problems. I think there's a lot of innovation there to be had still. I love the idea that you were versioning through email. I'll give you one example. When we work with manufacturing companies, it's not at all uncommon for there to be multiple labels that disagree with each other, right? And so we do work in visual inspection. We will take, say, a plastic part and show it to one inspector and the inspector, sometimes very opinionated, they'll go, clearly, that's a defect. This scratch, unacceptable. Gotta reject this part. Take the same part to a different inspector, also very opinionated. Clearly, the scratch is small. It's fine. Don't throw it away. You're gonna make us, you know. And then sometimes you take the same plastic part, show it to the same inspector in the afternoon as opposed to in the morning, and very opinionated, in the morning, they say, clearly, it's okay. In the afternoon, equally confident, clearly, this is a defect. And so what is an AI team supposed to do if sometimes even one person doesn't agree with himself or herself in the span of a day? So I think these are the types of very practical, very messy data problems that my teams wrestle with. In the case of large consumer internet companies where you have a billion users, you have a lot of data. You don't worry about it. Just take the average. It kind of works. But in the case of other industry settings, we don't have big data. It's just small data, very small data sets, maybe around 100 defective parts or 100 examples of a defect. If you have only 100 examples, these little labeling errors matter; if 10 of your 100 labels are wrong, that's 10% of your data set, and that has a big impact. So how do you clean this up? What are you supposed to do? This is an example of the types of things that my teams, this is a Landing AI example, are wrestling with to deal with small data, which comes up all the time once you're outside consumer internet. Yeah, that's fascinating.
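As an aside, one simple way a team might handle the inspector disagreements described above is to collect several labels per part, take a majority vote, and flag the splits for re review. This is only a sketch with made up labels, not a description of Landing AI's actual process.

```python
from collections import Counter

# Hypothetical labels from three inspectors for the same five parts.
labels = [
    ["ok", "ok", "defect"],
    ["defect", "defect", "defect"],
    ["ok", "defect", "defect"],
    ["ok", "ok", "ok"],
    ["defect", "ok", "defect"],
]

for part_id, votes in enumerate(labels):
    winner, count = Counter(votes).most_common(1)[0]   # majority label and its vote count
    flag = " (disagreement, send back for re-review)" if count < len(votes) else ""
    print(part_id, winner + flag)
```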
So then you invest more effort and time in thinking about the actual labeling process. What are the labels? What are the how are disagreements resolved and all those kinds of like pragmatic real world problems. That's a fascinating space. Yeah, I find that actually when I'm teaching at Stanford, I increasingly encourage students at Stanford to try to find their own project for the end of term project, rather than just downloading someone else's nicely clean data set. It's actually much harder if you need to go and define your own problem and find your own data set, rather than you go to one of the several good websites, very good websites with clean scoped data sets that you could just work on. You're now running three efforts, the AI Fund, Landing AI, and deeplearning.ai. As you've said, the AI Fund is involved in creating new companies from scratch. Landing AI is involved in helping already established companies do AI and deeplearning.ai is for education of everyone else or of individuals interested in getting into the field and excelling in it. So let's perhaps talk about each of these areas. First, deeplearning.ai. How, the basic question, how does a person interested in deep learning get started in the field? Deep learning.ai is working to create courses to help people break into AI. So my machine learning course that I taught through Stanford is one of the most popular courses on Coursera. To this day, it's probably one of the courses, sort of, if I asked somebody, how did you get into machine learning or how did you fall in love with machine learning or would get you interested, it always goes back to Andrew Ng at some point. I see, yeah, I'm sure. You've influenced, the amount of people you've influenced is ridiculous. So for that, I'm sure I speak for a lot of people say big thank you. No, yeah, thank you. I was once reading a news article, I think it was tech review and I'm gonna mess up the statistic, but I remember reading an article that said something like one third of all programmers are self taught. I may have the number one third, around me was two thirds, but when I read that article, I thought this doesn't make sense. Everyone is self taught. So, cause you teach yourself. I don't teach people. That's well put. Yeah, so how does one get started in deep learning and where does deeplearning.ai fit into that? So the deep learning specialization offered by deeplearning.ai is I think it was Coursera's top specialization. It might still be. So it's a very popular way for people to take that specialization to learn about everything from neural networks to how to tune in your network to what is a ConvNet to what is a RNN or a sequence model or what is an attention model. And so the deep learning specialization steps everyone through those algorithms so you deeply understand it and can implement it and use it for whatever application. From the very beginning. So what would you say are the prerequisites for somebody to take the deep learning specialization in terms of maybe math or programming background? Yeah, need to understand basic programming since there are programming exercises in Python and the math prereq is quite basic. So no calculus is needed. If you know calculus is great, you get better intuitions but deliberately try to teach that specialization without requiring calculus. So I think high school math would be sufficient. If you know how to multiply two matrices, I think that's great. So a little basic linear algebra is great. 
Basic linear algebra, even very, very basic linear algebra, and some programming. I think that people that have done the machine learning course will find the deep learning specialization a bit easier, but it's also possible to jump into the deep learning specialization directly, but it will be a little bit harder, since we tend to go faster over concepts like how does gradient descent work and what is an objective function, which are covered more slowly in the machine learning course. Could you briefly mention some of the key concepts in deep learning that students should learn, that you envision them learning in the first few months, in the first year or so? So if you take the deep learning specialization, you learn the foundations of what is a neural network, how do you build up a neural network from a single logistic unit, to a stack of layers, to different activation functions. You learn how to train the neural networks. One thing I'm very proud of in that specialization is we go through a lot of practical know-how of how to actually make these things work. So what are the differences between different optimization algorithms? What do you do if the algorithm overfits, or how do you tell if the algorithm is overfitting? When do you collect more data? When should you not bother to collect more data? I find that even today, unfortunately, there are engineers that will spend six months trying to pursue a particular direction, such as collecting more data, because we heard more data is valuable, but sometimes you could run some tests and could have figured out six months earlier that for this particular problem, collecting more data isn't going to cut it. So just don't spend six months collecting more data. Spend your time modifying the architecture or trying something else. So we go through a lot of the practical know-how, so that when you take the deep learning specialization, you have those skills to be very efficient in how you build these networks. So dive right in to play with the network, to train it, to do the inference on a particular data set, to build intuition about it, without building it up too big, to where you spend, like you said, six months learning, building up your big project without building any intuition of a small aspect of the data that could already tell you everything you need to know about that data. Yes, and also the systematic frameworks of thinking for how to go about building practical machine learning. Maybe to make an analogy, when we learn to code, we have to learn the syntax of some programming language, right? Be it Python or C++ or Octave or whatever. But the equally important, or maybe even more important, part of coding is to understand how to string together these lines of code into coherent things. So when should you put something into a function? When should you not? How do you think about abstraction? So those frameworks are what makes a programmer efficient, even more than understanding the syntax. I remember when I was an undergrad at Carnegie Mellon, one of my friends would debug their code by first trying to compile it, and then, it was C++ code, and then for every line with a syntax error, they wanted to get rid of the syntax errors as quickly as possible. So how do you do that? Well, they would delete every single line of code with a syntax error. So, really efficient for getting rid of syntax errors, but a horrible debugging strategy. So I think we learn how to debug.
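As a rough illustration of the kind of quick test mentioned above for deciding whether collecting more data is worth it, one can compare training error and dev error against a human-level baseline. The numbers here are entirely hypothetical:

```python
# Sketch: a crude bias/variance check before spending six months collecting data.
# All three error numbers are hypothetical; in practice they come from your own runs.
human_error = 0.02   # proxy for the best achievable (Bayes-like) error on this task
train_error = 0.10   # error on the training set
dev_error   = 0.11   # error on a held-out dev set

avoidable_bias = train_error - human_error   # the model is not even fitting the data
variance       = dev_error - train_error     # the model fits but does not generalize

if avoidable_bias > variance:
    print("Mostly bias: more data probably won't cut it; try a bigger model or train longer.")
else:
    print("Mostly variance: more data or more regularization is likely to help.")
```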
And I think in machine learning, the way you debug a machine learning program is very different than the way you do binary search or whatever, or use a debugger and trace through the code, in traditional software engineering. So it's an evolving discipline, but I find that the people that are really good at debugging machine learning algorithms are easily 10x, maybe 100x, faster at getting something to work. And the basic process of debugging is, so the bug in this case, why isn't this thing learning, improving, sort of going into the questions of overfitting and all those kinds of things? That's the logical space that the debugging is happening in with neural networks. Yeah, often the question is, why doesn't it work yet? Or can I expect it to eventually work? And what are the things I could try? Change the architecture? More data? More regularization? Different optimization algorithm? Different types of data? So to answer those questions systematically, so that you don't spend six months heading down a blind alley before someone comes and says, why did you spend six months doing this? What concepts in deep learning do you think students struggle the most with? Or sort of, what is the biggest challenge for them, where once they get over that hill, it hooks them and it inspires them and they really get it? Similar to learning mathematics, I think one of the challenges of deep learning is that there are a lot of concepts that build on top of each other. If you ask me what's hard about mathematics, I have a hard time pinpointing one thing. Is it addition, subtraction? Is it a carry? Is it multiplication? There's just a lot of stuff. I think one of the challenges of learning math and of learning certain technical fields is that there are a lot of concepts, and if you miss a concept, then you're kind of missing the prerequisite for something that comes later. So in the deep learning specialization, we try to break down the concepts to maximize the odds of each component being understandable. So when you move on to the more advanced thing, when we learn ConvNets, hopefully you have enough intuitions from the earlier sections to then understand why we structure ConvNets in a certain way, and then, eventually, why we build RNNs and LSTMs or attention models in a certain way, building on top of the earlier concepts. Actually, I'm curious, you do a lot of teaching as well. Do you have a favorite, this is the hard concept, moment in your teaching? Well, I don't think anyone's ever turned the interview on me. I'm glad you're the first. I think that's a really good question. Yeah, it's really hard to capture the moment when they struggle. I think you put it really eloquently. I do think there are moments that are like aha moments that really inspire people. I think for some reason, reinforcement learning, especially deep reinforcement learning, is a really great way to really inspire people and get them to see what neural networks can do. Even though neural networks really are just a part of the deep RL framework, it's a really nice way to paint the entirety of the picture of a neural network being able to learn from scratch, knowing nothing, and explore the world and pick up lessons. I find that a lot of the aha moments happen when you use deep RL to teach people about neural networks, which is counterintuitive. I find like a lot of the inspiration, sort of the fire in people's passion, in people's eyes, comes from the RL world. Do you find reinforcement learning to be a useful part of the teaching process or no?
I still teach reinforcement learning in one of my Stanford classes, and my PhD thesis was on reinforcement learning, so I clearly love the field. I find that if I'm trying to teach students the most useful techniques for them to use today, I end up shrinking the amount of time I talk about reinforcement learning. It's not what's working today. Now, our world changes so fast, maybe this will be totally different in a couple of years, but I think we need a couple more things for reinforcement learning to get there. One of my teams is looking at reinforcement learning for some robotic control tasks, so I see the applications, but if you look at it as a percentage of all of the impact of the types of things we do, it's, at least today, outside of playing video games and a few other games, quite small in scope. Actually, at NeurIPS, a bunch of us were standing around saying, hey, what's your best example of an actual deployed reinforcement learning application? And this was among, like, senior machine learning researchers, right? And again, there are some emerging ones, but there are not that many great examples. I think you're absolutely right. The sad thing is there hasn't been a big, impactful, real world application of reinforcement learning. I think its biggest impact to me has been in the toy domain, in the game domain, in the small example. That's what I mean for educational purposes. It seems to be a fun thing to explore neural networks with. But I think from your perspective, and I think that might be the best perspective, is if you're trying to educate with a simple example in order to illustrate how this can actually be grown to scale and have a real world impact, then perhaps focusing on the fundamentals of supervised learning in the context of a simple data set, even like an MNIST data set, is the right way, is the right path to take. The amount of fun I've seen people have with reinforcement learning has been great, but not in the applied impact on the real world setting. So it's a trade off, how much impact you want to have versus how much fun you want to have. Yeah, that's really cool. And I feel like the world actually needs all sorts. Even within machine learning, I feel like deep learning is so exciting, but an AI team shouldn't just use deep learning. I find that my teams use a portfolio of tools. And maybe that's not the exciting thing to say, but some days we use a neural net, some days we use a PCA. Actually, the other day I was sitting down with my team looking at PCA residuals, trying to figure out what's going on with PCA applied to a manufacturing problem. And some days we use a probabilistic graphical model, some days we use a knowledge graph, which is one of the things that has tremendous industry impact, but the amount of chatter about knowledge graphs in academia is really thin compared to the actual real world impact. So I think reinforcement learning should be in that portfolio, and then it's about balancing how much we teach all of these things. And the world should have diverse skills. It'd be sad if everyone just learned one narrow thing. Yeah, the diverse skills help you discover the right tool for the job. What is the most beautiful, surprising, or inspiring idea in deep learning to you? Something that captivated your imagination. Is it the scale, the performance that could be achieved with scale? Or are there other ideas?
I think that if my only job was being an academic researcher, if I had an unlimited budget and didn't have to worry about short term impact and could only focus on long term impact, I'd probably spend all my time doing research on unsupervised learning. I still think unsupervised learning is a beautiful idea. At both this past NeurIPS and ICML, I was attending workshops or listening to various talks about self supervised learning, which is one vertical segment, maybe, of unsupervised learning that I'm excited about. Maybe just to summarize the idea, I guess you know the idea, or I can describe it briefly. No, please. So here's an example of self supervised learning. Let's say we grab a lot of unlabeled images off the internet. So with infinite amounts of this type of data, I'm going to take each image and rotate it by a random multiple of 90 degrees, and then I'm going to train a supervised neural network to predict what was the original orientation. So was it rotated 90 degrees, 180 degrees, 270 degrees, or zero degrees? So you can generate an infinite amount of labeled data, because you rotated the image, so you know what's the ground truth label. And so various researchers have found that by taking unlabeled data and making up labeled data sets and training a large neural network on these tasks, you can then take the hidden layer representation and transfer it to a different task very powerfully. Learning word embeddings, where we take a sentence, delete a word, and predict the missing word, which is one of the ways we learn word embeddings, is another example. And I think there's now this portfolio of techniques for generating these made up tasks. Another one, called jigsaw, would be if you take an image, cut it up into a three by three grid, so like a three by three puzzle with nine pieces, jumble up the nine pieces, and have a neural network predict which of the nine factorial possible permutations it came from. So many groups, including OpenAI, Pieter Abbeel has been doing some work on this too, Facebook, Google Brain, I think DeepMind, oh actually, Aaron van den Oord has great work on the CPC objective, so many teams are doing exciting work, and I think this is a way to generate infinite labeled data, and I find this a very exciting piece of unsupervised learning. So long term, you think that's going to unlock a lot of power in machine learning systems, this kind of unsupervised learning? I don't think it's the whole enchilada, I think it's just a piece of it, and I think this one piece, unsupervised, self supervised learning, is starting to get traction. We're very close to it being useful. Well, word embeddings are really useful. I think we're getting closer and closer to just having a significant real world impact, maybe in computer vision and video. But I think this concept, and I think there'll be other concepts around it, you know, other unsupervised learning things that I've worked on and been excited about. I was really excited about sparse coding and ICA, slow feature analysis. I think all of these are ideas that various of us were working on about a decade ago, before we all got distracted by how well supervised learning was doing. So we would return to the fundamentals of representation learning that really started this movement of deep learning. I think there's a lot more work that one could explore around this theme of ideas, and other ideas, to come up with better algorithms.
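For concreteness, here is a minimal sketch of the rotation pretext task described above, written in PyTorch. The random images and the tiny network are placeholders; a real setup would use genuinely unlabeled images and a proper ConvNet, then transfer the learned features to a downstream task.

```python
# Sketch of the rotation-prediction pretext task: manufacture labels by rotating
# unlabeled images, then train a network to predict which rotation was applied.
import torch
import torch.nn as nn

def make_rotation_batch(images):
    """Rotate each image by a random multiple of 90 degrees; that multiple is the label."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                           for img, k in zip(images, labels)])
    return rotated, labels

unlabeled = torch.rand(32, 1, 28, 28)     # placeholder for real unlabeled images

net = nn.Sequential(                      # tiny stand-in for a real ConvNet
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 4),                     # four classes: 0, 90, 180, 270 degrees
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                   # pretext-task training loop
    x, y = make_rotation_batch(unlabeled)
    loss = loss_fn(net(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# After this pretraining, the hidden-layer features (everything before the final
# Linear layer) can be transferred to a task with only a little labeled data.
```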
So if we could return to maybe talk quickly about the specifics of deep learning.ai the deep learning specialization perhaps how long does it take to complete the course would you say? The official length of the deep learning specialization is I think 16 weeks so about four months but it's go at your own pace. So if you subscribe to the deep learning specialization there are people that finished it in less than a month by working more intensely and studying more intensely so it really depends on on the individual. When we created the deep learning specialization we wanted to make it very accessible and very affordable. And with you know Coursera and deep learning.ai education mission one of the things that's really important to me is that if there's someone for whom paying anything is a financial hardship then just apply for financial aid and get it for free. If you were to recommend a daily schedule for people in learning whether it's through the deep learning.ai specialization or just learning in the world of deep learning what would you recommend? How do they go about day to day sort of specific advice about learning about their journey in the world of deep learning machine learning? I think getting the habit of learning is key and that means regularity. So for example we send out a weekly newsletter the batch every Wednesday so people know it's coming Wednesday you can spend a little bit of time on Wednesday catching up on the latest news catching up on the latest news through the batch on Wednesday and for myself I've picked up a habit of spending some time every Saturday and every Sunday reading or studying and so I don't wake up on the Saturday and have to make a decision do I feel like reading or studying today or not it's just what I do and the fact is a habit makes it easier. So I think if someone can get into that habit it's like you know just like we brush our teeth every morning I don't think about it if I thought about it it's a little bit annoying to have to spend two minutes doing that but it's a habit that it takes no cognitive load but this would be so much harder if we have to make a decision every morning and actually that's the reason why I wear the same thing every day as well it's just one less decision I just get up and wear my blue shirt so but I think if you can get that habit that consistency of studying then it actually feels easier. So yeah it's kind of amazing in my own life like I play guitar every day for I force myself to at least for five minutes play guitar it's just it's a ridiculously short period of time but because I've gotten into that habit it's incredible what you can accomplish in a period of a year or two years you can become you know exceptionally good at certain aspects of a thing by just doing it every day for a very short period of time it's kind of a miracle that that's how it works it adds up over time. Yeah and I think this is often not about the bursts of sustained efforts and the all nighters because you could only do that a limited number of times it's the sustained effort over a long time I think you know reading two research papers is a nice thing to do but the power is not reading two research papers it's reading two research papers a week for a year then you read a hundred papers and you actually learn a lot when you read a hundred papers. 
So, regularity, and making learning a habit. Do you have other general study tips, particularly for deep learning, that people should have in their process of learning? Are there some kind of recommendations or tips you have as they learn? One thing I still do, when I'm trying to study something really deeply, is take handwritten notes. It varies. I know there are a lot of people that take the deep learning courses during a commute or something, where it may be more awkward to take notes, so I know it may not work for everyone. But when I'm taking courses on Coursera, and I still take some every now and then, the most recent one I took was a course on clinical trials, because I was interested in that, I got out my little Moleskine notebook, and while I was sitting at my desk I was just taking down notes on what the instructor was saying. And that act, we know that that act of taking notes, preferably handwritten notes, increases retention. So as you're sort of watching the video, just kind of pausing maybe, and then taking the basic insights down on paper. Yeah, so there have been a few studies. If you search online, you'll find some of these studies, that taking handwritten notes, because handwriting is slower, as we're saying just now, causes you to recode the knowledge in your own words more, and that process of recoding promotes long term retention. This is as opposed to typing, which is fine. Again, typing is better than nothing, or taking a class and not taking notes is better than not taking any class at all. But comparing handwritten notes and typing, you can usually type faster, for a lot of people, than you can handwrite notes. And so when people type, they're more likely to just transcribe verbatim what they heard, and that reduces the amount of recoding, and that actually results in less long term retention.
I don't know what the psychological effect there is but so true there's something fundamentally different about writing hand handwriting I wonder what that is I wonder if it is as simple as just the time it takes to write it slower yeah and because you can't write as many words you have to take whatever they said and summarize it into fewer words and that summarization process requires deeper processing of the meaning which then results in better retention that's fascinating oh and I think because of Coursera I spent so much time studying pedagogy this is actually one of my passions I really love learning how to more efficiently help others learn you know one of the things I do both when creating videos or when we write the batch is I try to think is one minute spent of us going to be a more efficient learning experience than one minute spent anywhere else and we really try to you know make it time efficient for the learners because you know everyone's busy so when when we're editing I often tell my teams every word needs to fight for its life and if you can delete a word let's just delete it and not wait let's not waste the learning time let's not waste the learning time oh that's so it's so amazing that you think that way because there is millions of people that are impacted by your teaching and sort of that one minute spent has a ripple effect right through years of time which is it's just fascinating to think about how does one make a career out of an interest in deep learning do you have advice for people we just talked about sort of the beginning early steps but if you want to make it an entire life's journey or at least a journey of a decade or two how do you how do you do it so most important thing is to get started right and and I think in the early parts of a career coursework um like the deep learning specialization or it's a very efficient way to master this material so because you know instructors uh be it me or someone else or you know Lawrence Maroney teaches our TensorFlow specialization or other things we're working on spend effort to try to make it time efficient for you to learn a new concept so coursework is actually a very efficient way for people to learn concepts and the beginning parts of breaking into a new field in fact one thing I see at Stanford some of my PhD students want to jump in the research right away and I actually tend to say look in your first couple years of PhD and spend time taking courses because it lays a foundation it's fine if you're less productive in your first couple years you'll be better off in the long term beyond a certain point there's materials that doesn't exist in courses because it's too cutting edge the course hasn't been created yet there's some practical experience that we're not yet that good as teaching in a course and I think after exhausting the efficient coursework then most people need to go on to either ideally work on projects and then maybe also continue their learning by reading blog posts and research papers and things like that doing projects is really important and again I think it's important to start small and just do something today you read about deep learning feels like oh all these people doing such exciting things what if I'm not building a neural network that changes the world then what's the point? 
Well the point is sometimes building that tiny neural network you know be it MNIST or upgrade to a fashion MNIST to whatever so doing your own fun hobby project that's how you gain the skills to let you do bigger and bigger projects I find this to be true at the individual level and also at the organizational level for a company to become good at machine learning sometimes the right thing to do is not to tackle the giant project is instead to do the small project that lets the organization learn and then build out from there but this is true both for individuals and for companies taking the first step and then taking small steps is the key should students pursue a PhD do you think you can do so much that's one of the fascinating things in machine learning you can have so much impact without ever getting a PhD so what are your thoughts should people go to grad school should people get a PhD? I think that there are multiple good options of which doing a PhD could be one of them I think that if someone's admitted to a top PhD program you know at MIT, Stanford, top schools I think that's a very good experience or if someone gets a job at a top organization at the top AI team I think that's also a very good experience there are some things you still need a PhD to do if someone's aspiration is to be a professor you know at the top academic university you just need a PhD to do that but if it goes to you know start a company, build a company do great technical work I think a PhD is a good experience but I would look at the different options available to someone you know where are the places where you can get a job where are the places to get a PhD program and kind of weigh the pros and cons of those So just to linger on that for a little bit longer what final dreams and goals do you think people should have so what options should they explore so you can work in industry so for a large company like Google, Facebook, Baidu all these large sort of companies that already have huge teams of machine learning engineers you can also do with an industry sort of more research groups that kind of like Google Research, Google Brain then you can also do like we said a professor in academia and what else oh you can build your own company you can do a startup is there anything that stands out between those options or are they all beautiful different journeys that people should consider I think the thing that affects your experience more is less are you in this company versus that company or academia versus industry I think the thing that affects your experience most is who are the people you're interacting with in a daily basis so even if you look at some of the large companies the experience of individuals in different teams is very different and what matters most is not the logo above the door when you walk into the giant building every day what matters the most is who are the 10 people who are the 30 people you interact with every day so I actually tend to advise people if you get a job from a company ask who is your manager who are your peers who are you actually going to talk to we're all social creatures we tend to become more like the people around us and if you're working with great people you will learn faster or if you get admitted if you get a job at a great company or a great university maybe the logo you walk in is great but you're actually stuck on some team doing really work that doesn't excite you and then that's actually a really bad experience so this is true both for universities and for large 
companies for small companies you can kind of figure out who you'll be working with quite quickly and I tend to advise people if a company refuses to tell you who you will work with someone say oh join us the rotation system will figure it out I think that that's a worrying answer because it because it means you may not get sent to you may not actually get to a team with great peers and great people to work with it's actually a really profound advice that we kind of sometimes sweep we don't consider too rigorously or carefully the people around you are really often especially when you accomplish great things it seems the great things are accomplished because of the people around you so that's a it's not about the the where whether you learn this thing or that thing or like you said the logo that hangs up top it's the people that's a fascinating and it's such a hard search process of finding just like finding the right friends and somebody to get married with and that kind of thing it's a very hard search it's a people search problem yeah but I think when someone interviews you know at a university or the research lab or the large corporation it's good to insist on just asking who are the people who is my manager and if you refuse to tell me I'm gonna think well maybe that's because you don't have a good answer it may not be someone I like and if you don't particularly connect if something feels off with the people then don't stick to it you know that's a really important signal to consider yeah yeah and actually I actually in my standard class CS230 as well as an ACM talk I think I gave like a hour long talk on career advice including on the job search process and then some of these so you can find those videos online awesome and I'll point them I'll point people to them beautiful so the AI fund helps AI startups get off the ground or perhaps you can elaborate on all the fun things it's involved with what's your advice and how does one build a successful AI startup you know in Silicon Valley a lot of startup failures come from building other products that no one wanted so when you know cool technology but who's going to use it so I think I tend to be very outcome driven and customer obsessed ultimately we don't get to vote if we succeed or fail it's only the customer that they're the only one that gets a thumbs up or thumbs down vote in the long term in the short term you know there are various people that get various votes but in the long term that's what really matters so as you build the startup you have to constantly ask the question will the customer give a thumbs up on this I think so I think startups that are very customer focused customer obsessed deeply understand the customer and are oriented to serve the customer are more likely to succeed with the provisional I think all of us should only do things that we think create social good and moves the world forward so I personally don't want to build addictive digital products just to sell a lot of ads or you know there are things that could be lucrative that I won't do but if we can find ways to serve people in meaningful ways I think those can be great things to do either in the academic setting or in a corporate setting or a startup setting so can you give me the idea of why you started the AI fund I remember when I was leading the AI group at Baidu I had two jobs two parts of my job one was to build an AI engine to support the existing businesses and that was running just ran just performed by itself there was a second part of my 
job at the time which was to try to systematically initiate new lines of businesses using the company's AI capabilities so you know the self driving car team came out of my group the smart speaker team similar to what is Amazon Echo Alexa in the US but we actually announced it before Amazon did so Baidu wasn't following Amazon that came out of my group and I found that to be actually the most fun part of my job so what I wanted to do was to build AI fund as a startup studio to systematically create new startups from scratch with all the things we can now do with AI I think the ability to build new teams to go after this rich space of opportunities is a very important way to very important mechanism to get these projects done that I think will move the world forward so I've been fortunate to build a few teams that had a meaningful positive impact and I felt that we might be able to do this in a more systematic repeatable way so a startup studio is a relatively new concept there are maybe dozens of startup studios you know right now but I feel like all of us many teams are still trying to figure out how do you systematically build companies with a high success rate so I think even a lot of my you know venture capital friends are seem to be more and more building companies rather than investing in companies but I find a fascinating thing to do to figure out the mechanisms by which we could systematically build successful teams, successful businesses in areas that we find meaningful so a startup studio is something is a place and a mechanism for startups to go from zero to success to try to develop a blueprint it's actually a place for us to build startups from scratch so we often bring in founders and work with them or maybe even have existing ideas that we match founders with and then this launches you know hopefully into successful companies so how close are you to figuring out a way to automate the process of starting from scratch and building a successful AI startup yeah I think we've been constantly improving and iterating on our processes how we do that so things like you know how many customer calls do we need to make in order to get customer validation how do we make sure this technology can be built quite a lot of our businesses need cutting edge machine learning algorithms so you know kind of algorithms have developed in the last one or two years and even if it works in a research paper it turns out taking the production is really hard there are a lot of issues for making these things work in the real life that are not widely addressed in academia so how do we validate that this is actually doable how do you build a team get the specialized domain knowledge be it in education or health care whatever sector we're focusing on so I think we've actually getting we've been getting much better at giving the entrepreneurs a high success rate but I think we're still I think the whole world is still in the early phases of figuring this out but do you think there is some aspects of that process that are transferable from one startup to another to another to another yeah very much so you know starting from scratch you know starting a company to most entrepreneurs is a really lonely thing and I've seen so many entrepreneurs not know how to make certain decisions like when do you need to how do you do B2B sales right if you don't know that it's really hard or how do you market this efficiently other than you know buying ads which is really expensive are there more efficient tactics for that or 
for a machine learning project you know basic decisions can change the course of whether machine learning product works or not and so there are so many hundreds of decisions that entrepreneurs need to make and making a mistake and a couple key decisions can have a huge impact on the fate of the company so I think a startup studio provides a support structure that makes starting a company much less of a lonely experience and also when facing with these key decisions like trying to hire your first uh the VP of engineering what's a good selection criteria how do you solve should I hire this person or not by helping by having a ecosystem around the entrepreneurs the founders to help I think we help them at the key moments and hopefully significantly make them more enjoyable and then higher success rate so there's somebody to brainstorm with in these very difficult decision points and also to help them recognize what they may not even realize is a key decision point that's that's the first and probably the most important part yeah actually I can say one other thing um you know I think building companies is one thing but I feel like it's really important that we build companies that move the world forward for example within the AI Fund team there was once an idea for a new company that if it had succeeded would have resulted in people watching a lot more videos in a certain narrow vertical type of video um I looked at it the business case was fine the revenue case was fine but I looked and just said I don't want to do this like you know I don't actually just want to have a lot more people watch this type of video wasn't educational it's an educational baby and so and so I I I I code the idea on the basis that I didn't think it would actually help people so um whether building companies or working enterprises or doing personal projects I think um it's up to each of us to figure out what's the difference we want to make in the world With landing AI you help already established companies grow their AI and machine learning efforts how does a large company integrate machine learning into their efforts? 
AI is a general purpose technology, and I think it will transform every industry. Our community has already transformed, to a large extent, the software internet sector. Most software internet companies outside the top, right, five or six, or three or four, already have reasonable machine learning capabilities or are getting there. There's still room for improvement. But when I look outside the software internet sector, everything from manufacturing, agriculture, healthcare, logistics, transportation, there are so many opportunities that very few people are working on. So I think the next wave for AI is for us to also transform all of those other industries. There was a McKinsey study estimating 13 trillion dollars of global economic growth. US GDP is 19 trillion dollars, so 13 trillion is a big number. Or PwC estimates 16 trillion dollars. So whatever the number is, it's large. But the interesting thing to me was that a lot of that impact will be outside the software internet sector. So we need more teams to work with these companies to help them adopt AI, and I think this is one thing that will, you know, help drive global economic growth and make humanity more powerful. And like you said, the impact is there. So what are the best industries, the biggest industries, where AI can help, perhaps outside the software tech sector? Frankly, I think it's all of them. Some of the ones I'm spending a lot of time on are manufacturing, agriculture, and looking into healthcare. For example, in manufacturing we do a lot of work in visual inspection, where today there are people standing around using the human eye to check if, you know, this plastic part or this smartphone or this thing has a scratch or a dent or something in it. We can use a camera to take a picture, use an algorithm, deep learning and other things, to check if it's defective or not, and thus help factories improve yield and improve quality and improve throughput. It turns out the practical problems we run into are very different than the ones you might read about in most research papers. The data sets are really small, so we face small data problems. You know, the factories keep on changing the environment, so it works well on your test set, but guess what, something changes in the factory, the lights go on or off. Recently there was a factory in which a bird flew through the factory and pooped on something, and so that changed stuff. And so improving our algorithm's robustness to all the changes that happen in the factory, I find that we run into a lot of practical problems that are not as widely discussed in academia, and it's really fun kind of being on the cutting edge, solving these problems before maybe many people are even aware that there is a problem there. And that's such a fascinating space, you're absolutely right. But what is the first step that a company should take? It's just a scary leap into this new world of going from the human eye inspecting to digitizing that process, having a camera, having an algorithm. What's the first step, like, what's the early journey that you recommend, that you see these companies taking? I published a document called the AI Transformation Playbook, that's online and taught briefly in the AI for Everyone course on Coursera, about the long term journey that companies should take. But the first step is actually to start small. I've seen a lot more companies fail by starting too big than by starting too small. Take even Google. You know, most people don't realize how hard it was and how controversial it was in the early days. So when I started Google Brain, it was controversial. You know,
people thought deep learning, neural nets, we tried it, it didn't work, why would you want to do deep learning? So my first internal customer within Google was the Google speech team, which is not the most lucrative project in Google, not the most important, it's not web search or advertising, but by starting small, my team helped the speech team build a more accurate speech recognition system, and this caused their peers, other teams, to start to have more faith in deep learning. My second internal customer was the Google Maps team, where we used computer vision to read house numbers from, basically, Street View images, to more accurately locate houses within Google Maps, so improving the quality of the geodata. And it was only after those two successes that I then started a more serious conversation with the Google Ads team. And so there's a ripple effect, that you showed that it works in these cases, and then it just propagates through the entire company, that this thing has a lot of value and use for us. I think the early small scale projects, it helps the teams gain faith, but also helps the teams learn what these technologies do. I still remember our first GPU server, it was a server under some guy's desk, and, you know, that taught us early important lessons about how do you have multiple users share a set of GPUs, which was really not obvious at the time, but those early lessons were important. We learned a lot from that first GPU server that later helped the teams think through how to scale it up to much larger deployments. Are there concrete challenges that companies face that you see as important for them to solve? I think building and deploying machine learning systems is hard. There's a huge gulf between something that works in a Jupyter notebook on your laptop versus something that runs in a production deployment setting in a factory or an agricultural plant or whatever. So I see a lot of people get something to work on their laptop and say, wow, look what I've done, and that's great, that's hard, that's a very important first step, but a lot of teams underestimate the rest of the steps needed. So, for example, I've heard this exact same conversation between a lot of machine learning people and business people. The machine learning person says, look, my algorithm does well on the test set, and it's a clean test set, I didn't peek at it. And the business person says, thank you very much, but your algorithm sucks, it doesn't work. And the machine learning person says, no, wait, I did well on the test set. And I think there is a gulf between what it takes to do well on a test set on your hard drive versus what it takes to work well in a deployment setting. Some common problems: robustness and generalization. You deploy something in the factory, maybe they chop down a tree outside the factory, so the tree no longer covers the window and the lighting is different, so the test set changes. And in machine learning, and especially in academia, we don't know how to deal with test set distributions that are dramatically different than the training set distribution. You know, there's research on stuff like domain adaptation, transfer learning, you know, there are people working on it, but we're really not good at this. So how do you actually get this to work, because your test set distribution is going to change?
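As a toy illustration of watching for that kind of distribution change after deployment, a team might track a simple statistic of the incoming images, say average brightness, and compare it against the training set. Everything below, including the three-sigma threshold, is made up for the sketch:

```python
# Sketch: a crude drift check between training data and what the deployed camera sees now.
import numpy as np

rng = np.random.default_rng(0)
train_brightness = rng.normal(0.55, 0.05, size=1000)    # stand-in for training-set statistics
live_brightness = rng.normal(0.40, 0.05, size=200)      # stand-in for recent production inputs

shift = abs(live_brightness.mean() - train_brightness.mean()) / train_brightness.std()
if shift > 3.0:   # arbitrary threshold; a real system would tune and monitor this per feature
    print(f"Warning: input distribution drifted by {shift:.1f} sigma; re-evaluate the model.")
```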
And I think also, if you look at the number of lines of code in the software system, the machine learning model is maybe five percent or even fewer relative to the entire software system you need to build. So how do you get all that work done and make it reliable and systematic? So good software engineering work is fundamental here to building a successful machine learning system. Yes, and the software system needs to interface with people's workflows. So machine learning is automation on steroids. If we take one task out of many tasks that are done in the factory, so the factory does lots of things, one task is visual inspection, if we automate that one task, it can be really valuable, but you may need to redesign a lot of other tasks around that one task. For example, say the machine learning algorithm says this is defective, what are you supposed to do? Do you throw it away? Do you get a human to double check? Do you want to rework it or fix it? So you need to redesign a lot of tasks around that thing you've now automated. So planning for the change management, and making sure that the software you write is consistent with the new workflow, and you take the time to explain to people what needs to happen. So I think what Landing AI has become good at, and I think we learned by making missteps and, you know, painful experiences, what we've become good at is working with our partners to think through all the things beyond just the machine learning model running in the Jupyter notebook, but to build the entire system, manage the change process, and figure out how to deploy this in a way that has an actual impact. The processes that the large software tech companies use for deploying don't work for a lot of other scenarios. For example, when I was leading large speech teams, if the speech recognition system goes down, what happens? Well, alarms go off, and then someone like me would say, hey, you 20 engineers, please fix this. But if you have a system go down in the factory, there are not 20 machine learning engineers sitting around that you can page on duty and have them fix it. So how do you deal with the maintenance, or the DevOps, or the MLOps, or the other aspects of this? So these are concepts that I think Landing AI and a few other teams are on the cutting edge of, but we don't even have systematic terminology yet to describe some of the stuff we do, because I think we're inventing it on the fly. So you mentioned some people are interested in discovering mathematical beauty and truth in the universe, and you're interested in having a big positive impact in the world. So let me ask, the two are not inconsistent. No, they go together. I'm only half joking, because you're probably interested a little bit in both. But let me ask a romanticized question. So much of the work, your work and our discussion today, has been on applied AI, maybe you can even call it narrow AI, where the goal is to create systems that automate some specific process that adds a lot of value to the world. But there's another branch of AI, starting with Alan Turing, that kind of dreams of creating human level or superhuman level intelligence. Is this something you dream of as well? Do you think we human beings will ever build a human level intelligence or superhuman level intelligence system? I would love to get to AGI, and I think humanity will, but whether it takes 100 years or 500 or 5,000, I find hard to estimate. Do you have, some folks have, worries about the different trajectories that path would take, even existential threats of an AGI system. Do you have such concerns, whether in the short term or the long term?
I do worry about the long term fate of humanity. I do wonder as well. I do worry about overpopulation on the planet Mars, just not today. I think there will be a day when, maybe someday in the future, Mars will be polluted, there are all these children dying, and someone will look back at this video and say, Andrew, how was Andrew so heartless? He didn't care about all these children dying on the planet Mars. And I apologize to the future viewer, I do care about the children, but I just don't know how to productively work on that today. Your picture will be in the dictionary for the people who are ignorant about the overpopulation on Mars. Yes. So it's a long term problem. Is there something in the short term we should be thinking about, in terms of aligning the values of our AI systems with the values of us humans? Sort of something that Stuart Russell and other folks are thinking about: as these systems develop more and more, we want to make sure that they represent the better angels of our nature, the ethics, the values of our society. You know, if you take self driving cars, the biggest problem with self driving cars is not that there's some trolley dilemma, and you teach this, so you know, how many times, when you were driving your car, did you face this moral dilemma of who do I crash into? So I think self driving cars will run into that problem roughly as often as we do when we drive our cars. The biggest problem with self driving cars is when there's a big white truck across the road, and what you should do is brake and not crash into it, and the self driving car fails and it crashes into it. So I think we need to solve that problem first. I think the problem with some of these discussions about AGI, you know, alignment, the paperclip problem, is that it is a huge distraction from the much harder problems that we actually need to address today. These are not the hard problems we need to address today. I think bias is a huge issue. I worry about wealth inequality. AI and the internet are causing an acceleration of concentration of power, because we can now centralize data and use AI to process it. And so, industry after industry, we've affected every industry. The internet industry has a lot of winner take most or winner take all dynamics, but we're now infecting all these other industries, so we're also giving these other industries winner take most or winner take all flavors. So look at what Uber and Lyft did to the taxi industry. So we're doing this type of thing to a lot of industries. And so we're creating tremendous wealth, but how do we make sure that the wealth is fairly shared? I think that, and then how do we help people whose jobs are displaced? You know, I think education is part of it, there may be even more that we need to do than education. I think bias is a serious issue. There are adverse uses of AI, like deepfakes being used for various nefarious purposes. So I worry about some teams, maybe accidentally, and I hope not deliberately, making a lot of noise about problems in the distant future, rather than focusing on some of the much harder problems. Yeah, those can overshadow the problems that we have already today, which are exceptionally challenging, like those you said, and even the silly ones, but the ones that have a huge impact, huge impact, like the lighting variation outside of your factory window. That ultimately is what makes the difference between, like you said, the Jupyter notebook and something that actually transforms an entire industry, potentially. Yeah. And I think when some
companies or a regulator comes to you and says look your product is messing things up fixing it may have a revenue impact well it's much more fun to talk to them about how you promise not to wipe out humanity and to face the actually really hard problems we face so your life has been a great journey from teaching to research to entrepreneurship two questions one are there regrets moments that if you went back you would do differently and two are there moments you're especially proud of moments that made you truly happy you know I've made so many mistakes it feels like every time I discover something I go why didn't I think of this you know five years earlier or even 10 years earlier and as recently and then sometimes I read a book and I go I wish I read this book 10 years ago my life would have been so different although that happened recently and then I was thinking if only I read this book when we're starting up Coursera I could have been so much better but I discovered the book had not yet been written we're starting Coursera so that made me feel better so that made me feel better but I find that the process of discovery we keep on finding out things that seem so obvious in hindsight but it always takes us so much longer than than I wish to to figure it out so on the second question are there moments in your life that if you look back that you're especially proud of or you're especially happy what would be the that filled you with happiness and fulfillment well two answers one does my daughter know of her yes of course because I know how much time I spent with her I just can't spend enough time with her congratulations by the way thank you and then second is helping other people I think to me I think the meaning of life is helping others achieve whatever are their dreams and then also to try to move the world forward making humanity more powerful as a whole so the times that I felt most happy most proud was when I felt someone else allowed me the good fortune of helping them a little bit on the path to their dreams I think there's no better way to end it than talking about happiness and the meaning of life so Andrew it's a huge honor me and millions of people thank you for all the work you've done thank you for talking today thank you so much thanks thanks for listening to this conversation with Andrew Ng and thank you to our presenting sponsor Cash App download it use code LEX podcast you'll get ten dollars and ten dollars will go to FIRST an organization that inspires and educates young minds to become science and technology innovators of tomorrow if you enjoy this podcast subscribe on YouTube give it five stars on Apple podcast support it on Patreon or simply connect with me on Twitter at LEX Freedman and now let me leave you with some words of wisdom from Andrew Ng ask yourself if what you're working on succeeds beyond your wildest dreams would you have significantly helped other people? if not then keep searching for something else to work on otherwise you're not living up to your full potential thank you for listening and hope to see you next time
Andrew Ng: Deep Learning, Education, and Real-World AI | Lex Fridman Podcast #73
The following is a conversation with Michael I. Jordan, a professor at Berkeley and one of the most influential people in the history of machine learning, statistics, and artificial intelligence. He has been cited over 170,000 times and he has mentored many of the world class researchers defining the field of AI today, including Andrew Ng, Zoubin Ghahramani, Ben Taskar, and Yoshua Bengio. All this, to me, is as impressive as the over 32,000 points and the six NBA championships of the Michael J. Jordan of basketball fame. There's a nonzero probability that I talked to the other Michael Jordan, given my connection to and love of the Chicago Bulls of the 90s, but if I had to pick one, I'm going with the Michael Jordan of statistics and computer science, or, as Yann LeCun calls him, the Miles Davis of machine learning. In his blog post titled Artificial Intelligence, the Revolution Hasn't Happened Yet, Michael argues for broadening the scope of the artificial intelligence field. In many ways, the underlying spirit of this podcast is the same: to see artificial intelligence as a deeply human endeavor, to not only engineer algorithms and robots, but to understand and empower human beings at all levels of abstraction, from the individual to our civilization as a whole. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEX PODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of the fractional orders is, to me, an algorithmic marvel. Great props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up to the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier. So once again, if you get Cash App from the App Store or Google Play and use the code LEX PODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, one of my favorite organizations that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Michael I. Jordan. Given that you're one of the greats in the field of AI, machine learning, computer science, and so on, you're trivially called the Michael Jordan of machine learning, although, as you know, you were born first, so technically MJ is the Michael I. Jordan of basketball. But anyway, my favorite is Yann LeCun calling you the Miles Davis of machine learning, because, as he says, you reinvent yourself periodically and sometimes leave fans scratching their heads after you change direction. So can you put, first, your historian hat on and give a history of computer science and AI as you saw it, as you experienced it, including the four generations of AI successes that I've seen you talk about? Sure. Yeah, first of all, I much prefer Yann's metaphor. Miles Davis was a real explorer in jazz and he had a coherent story.
So I think I have one, but it's not just the one you lived, it's the one you think about later. What the historian does is they look back and they revisit. I think what's happening right now is not AI, that was an intellectual aspiration that's still alive today as an aspiration. But I think this is akin to the development of chemical engineering from chemistry or electrical engineering from electromagnetism. So if you go back to the 30s or 40s, there wasn't yet chemical engineering. There was chemistry, there was fluid flow, there was mechanics and so on. But people pretty clearly viewed it as an interesting goal to try to build factories that make chemical products and do it viably, safely, make good ones, do it at scale. So people started to try to do that, of course, and some factories worked, some didn't, some were not viable, some exploded, but in parallel, developed a whole field called chemical engineering. Chemical engineering is a field, no bones about it, it has theoretical aspects to it, it has practical aspects. It's not just engineering, quote unquote, it's the real thing, real concepts are needed. Same thing with electrical engineering. There was Maxwell's equations, which in some sense were everything you know about electromagnetism, but you needed to figure out how to build circuits, how to build modules, how to put them together, how to bring electricity from one point to another safely and so on and so forth. So a whole field developed called electrical engineering. I think that's what's happening right now, is that we have a proto field, which is statistics, more of the theoretical side of it, the algorithmic side of computer science, that was enough to start to build things, but what things? Systems that bring value to human beings and use human data and mix in human decisions. The engineering side of that is all ad hoc. That's what's emerging. In fact, if you wanna call machine learning a field, I think that's what it is, that it's a proto form of engineering based on statistical and computational ideas of previous generations. But do you think there's something deeper about AI in its dreams and aspirations as compared to chemical engineering and electrical engineering? Well the dreams and aspirations maybe, but those are 500 years from now. I think that that's like the Greeks sitting there and saying, it would be neat to get to the moon someday. I think we have no clue how the brain does computation. We're just clueless. We're even worse than the Greeks on most anything interesting scientifically of our era. Can you linger on that just for a moment, because you stand not completely unique, but a little bit unique in the clarity of that. Can you elaborate your intuition of, like, where we stand in our understanding of the human brain? And a lot of people say, you know, scientists say we're not very far in understanding the human brain, but you're like, you're saying we're in the dark here. Well, I know I'm not unique. I don't even think I'm unique in the clarity, but if you talk to real neuroscientists that really study real synapses or real neurons, they agree, they agree. It's a hundreds-of-years task and they're building it up slowly and surely. What the signal is there is not clear. We think we have all of our metaphors. We think it's electrical, maybe it's chemical, it's a whole soup, it's ions and proteins and it's a cell. And that's even around like a single synapse. If you look at an electron micrograph of a single synapse, it's a city of its own.
And that's one little thing on a dendritic tree, which is an extremely complicated electrochemical thing. And it's doing these spikes and voltages are flying around and then proteins are taking that and taking it down into the DNA and who knows what. So it is the problem of the next few centuries. It is fantastic. But we have our metaphors about it. Is it an economic device? Is it like the immune system or is it like a layered set of, you know, arithmetic computations? We have all these metaphors and they're fun. But that's not real science per se. There is neuroscience. That's not neuroscience. All right. That's like the Greeks speculating about how to get to the moon, fun, right? And I think that I like to say this fairly strongly because I think a lot of young people think we're on the verge, because a lot of people who don't talk about it clearly let it be understood that, yes, we kind of, this is brain inspired, we're kind of close, you know, breakthroughs are on the horizon. And that's unscrupulous people sometimes who need money for their labs. I shouldn't say unscrupulous, but people will oversell: I need money for my lab, I'm studying computational neuroscience, I'm going to oversell it. And so there's been too much of that. So I'll step into the gray area between metaphor and engineering with, I'm not sure if you're familiar with brain computer interfaces. So a company, like, Elon Musk has Neuralink, that's working on putting electrodes into the brain and trying to be able to read, both read and send electrical signals. Just as you said, even the basic mechanism of communication in the brain is not something we understand. But do you hope without understanding the fundamental principles of how the brain works, we'll be able to do something interesting at that gray area of metaphor? It's not my area. So I hope in the sense, like anybody else hopes for some interesting things to happen from research, I would expect more something like Alzheimer's will get figured out from modern neuroscience. There's a lot of human suffering based on brain disease and we throw things like lithium at the brain, it kind of works, no one has a clue why. That's not quite true, but mostly we don't know. And that's even just about the biochemistry of the brain and how it leads to mood swings and so on. How thought emerges from that, we're really, really completely dim. So you might want to hook up electrodes and try to do some signal processing on that and try to find patterns, fine, by all means, go for it. It's just not scientific at this point. So it's like kind of sitting in a satellite and watching the emissions from a city and trying to infer things about the microeconomy, even though you don't have microeconomic concepts. It's really that kind of thing. And so yes, can you find some signals that do something interesting or useful? Can you control a cursor or mouse with your brain? Yeah, absolutely, and then I can imagine business models based on that and even medical applications of that. But from there to understanding algorithms that allow us to really tie in deeply from the brain to computer, I just, no, I don't agree with Elon Musk. I don't think that's even, that's not for our generations, not even for the century. So just in hopes of getting you to dream, you've mentioned Kolmogorov and Turing might pop up, do you think that there might be breakthroughs that will get you to sit back in five, 10 years and say, wow?
Oh, I'm sure there will be, but I don't think that there'll be demos that impress me. I don't think that having a computer call a restaurant and pretend to be a human is a breakthrough. Right. And people, you know, some people present it as such. It's imitating human intelligence. It's even putting coughs in the thing to make it a bit of a PR stunt. And so, fine, the world runs on those things too. And I don't want to diminish all the hard work and engineering that goes behind things like that and the ultimate value to the human race. But that's not scientific understanding. And I know the people that work on these things, they are after scientific understanding. In the meantime, they've got to kind of, you know, the trains got to run and they got mouths to feed and they got things to do and there's nothing wrong with all that. I would call that though, just engineering. And I want to distinguish that from an engineering field, like electrical engineering and chemical engineering that originally emerged, that had real principles, where you really know what you're doing and you have a little scientific understanding, maybe not even complete. So it became more predictable and it really gave value to human life because it was understood. And so we don't want to muddle too much these waters of, you know, what we're able to do versus what we really can't do in a way that's going to impress the next generation. So I don't need to be wowed, but I think that when someone comes along in 20 years, a younger person who's absorbed all the technology, for them to be wowed, I think they have to be more deeply impressed. A young Kolmogorov would not be wowed by some of the stunts that you see right now coming from the big companies. The demos. But do you think the breakthroughs from a Kolmogorov would be, and give this question a chance, do you think they'll be in the scientific, fundamental principles arena, or do you think it's possible to have fundamental breakthroughs in engineering? Meaning, you know, I would say some of the things that Elon Musk is working on with SpaceX and then others, sort of trying to revolutionize the fundamentals of engineering, of manufacturing, of saying, here's a problem we know how to do a demo of, and actually taking it to scale. Yeah. So there's going to be all kinds of breakthroughs. I just don't like that terminology. I'm a scientist and I work on things day in and day out and things move along and eventually you say, wow, something happened, but I don't like that language very much. Also I don't like to prize theoretical breakthroughs over practical ones. I tend to be more of a theoretician and I think there's lots to do in that arena right now. And so I wouldn't point to the Kolmogorovs, I might point to the Edisons of the era, and maybe Musk is a bit more like that. But you know, Musk, God bless him, also will say things about AI that he knows very little about and he leads people astray when he talks about things he doesn't know anything about. Trying to program a computer to understand natural language, to be involved in a dialogue we're having right now, that ain't going to happen in our lifetime. You could fake it, you can mimic, sort of take old sentences that humans use and retread them, but the deep understanding of language, no, it's not going to happen. And so from that, I hope you can perceive that the deeper, yet deeper kinds of aspects of intelligence are not going to happen. Now will there be breakthroughs?
No, I think that Google was a breakthrough, I think Amazon is a breakthrough, you know, I think Uber is a breakthrough, you know, that bring value to human beings at scale in new, brand new ways based on data flows and so on. A lot of these things are slightly broken because there's not kind of an engineering field that takes economic value in context of data and, you know, planetary scale and worries about all the externalities, the privacy, you know, we don't have that field so we don't think these things through very well. I see that as emerging and that will be, you know, looking back from 100 years, that will be what constituted the breakthrough of this era, just like electrical engineering was a breakthrough in the early part of the last century and chemical engineering was a breakthrough. So the scale, the markets that you talk about and we'll get to will be seen as sort of the breakthrough, and we're in the very early days of really doing interesting stuff there and we'll get to that, but just taking a quick step back, can you, kind of throwing off the historian hat — I mean, you briefly said that the history of AI kind of mimics the history of chemical engineering, but... I keep saying machine learning. You keep wanting to say AI, just to let you know, I don't, you know, I resist that. I don't think this is about AI. AI really was John McCarthy, almost as a philosopher, saying, wouldn't it be cool if we could put thought in a computer? If we could mimic the human capability to think, or put intelligence, in some sense, into a computer. That's an interesting philosophical question and he wanted to make it more than philosophy. He wanted to actually write down a logical formula and algorithms that would do that. And that is a perfectly valid, reasonable thing to do. That's not what's happening in this era. So the reason I keep saying AI actually, and I'd love to hear what you think about it. Machine learning has a very particular set of methods and tools. Maybe your version of it does; mine doesn't, it's very, very open. It does optimization, it does sampling, it does... So systems that learn is what machine learning is. Systems that learn and make decisions. And make decisions. So it's not just pattern recognition and, you know, finding patterns, it's all about making decisions in the real world and having closed feedback loops. So something like symbolic AI, expert systems, reasoning systems, knowledge based representation, all of those kinds of things, search, does that neighborhood fit into what you think of as machine learning? So I don't even like the word machine learning, I think that the field you're talking about is all about making large collections of decisions under uncertainty by large collections of entities. Right? And there are principles for that, at that scale. You don't have to say the principles are for a single entity that's making decisions, single agent or single human. It really immediately goes to the network of decisions. Is there a good word for that or no? No, there's no good words for any of this. That's kind of part of the problem. So we can continue the conversation to use AI for all that. I just want to kind of raise the flag here that this is not about — we don't know what intelligence is, real intelligence. We don't know much about abstraction and reasoning at the level of humans. We don't have a clue. We're not trying to build that because we don't have a clue. Eventually it may emerge.
They'll make, I don't know if there'll be breakthroughs, but eventually we'll start to get glimmers of that. It's not what's happening right now. Okay. We're taking data. We're trying to make good decisions based on that. We're trying to scale. We're trying to economically viably, we're trying to build markets. We're trying to keep value at that scale and aspects of this will look intelligent. Computers were so dumb before, they will seem more intelligent. We will use that buzzword of intelligence so we can use it in that sense. So machine learning, you can scope it narrowly as just learning from data and pattern recognition. But when I talk about these topics, maybe data science is another word you could throw in the mix, it really is important that the decisions are as part of it. It's consequential decisions in the real world. Am I going to have a medical operation? Am I going to drive down the street? Things where there's scarcity, things that impact other human beings or other environments and so on. How do I do that based on data? How do I do that adaptively? How do I use computers to help those kinds of things go forward? Whatever you want to call that. So let's call it AI. Let's agree to call it AI, but let's not say that the goal of that is intelligence. The goal of that is really good working systems at planetary scale that we've never seen before. So reclaim the word AI from the Dartmouth conference from many decades ago of the dream of humans. I don't want to reclaim it. I want a new word. I think it was a bad choice. I mean, if you read one of my little things, the history was basically that McCarthy needed a new name because cybernetics already existed and he didn't like, no one really liked Norbert Wiener. Norbert Wiener was kind of an island to himself and he felt that he had encompassed all this and in some sense he did. You look at the language of cybernetics, it was everything we're talking about. It was control theory and signal processing and some notions of intelligence and closed feedback loops and data. It was all there. It's just not a word that lived on partly because of the maybe the personalities. But McCarthy needed a new word to say, I'm different from you. I'm not part of your show. I got my own. Invented this word and again, thinking forward about the movies that would be made about it, it was a great choice. But thinking forward about creating a sober academic and real world discipline, it was a terrible choice because it led to promises that are not true that we understand. We understand artificial perhaps, but we don't understand intelligence. It's a small tangent because you're one of the great personalities of machine learning, whatever the heck you call the field. Do you think science progresses by personalities or by the fundamental principles and theories and research that's outside of personalities? Both. And I wouldn't say there should be one kind of personality. I have mine and I have my preferences and I have a kind of network around me that feeds me and some of them agree with me and some of them disagree, but all kinds of personalities are needed. Right now, I think the personality that it's a little too exuberant, a little bit too ready to promise the moon is a little bit too much in ascendance. And I do think that there's some good to that. It certainly attracts lots of young people to our field, but a lot of those people come in with strong misconceptions and they have to then unlearn those and then find something to do. 
And so I think there's just got to be some multiple voices and I wasn't hearing enough of the more sober voice. So as a continuation of a fun tangent and speaking of vibrant personalities, what would you say is the most interesting disagreement you have with Yann LeCun? So Yann's an old friend and I just say that I don't think we disagree about very much really. He and I both kind of have a let's build it kind of mentality and does it work kind of mentality and kind of concrete. We both speak French and we speak French more together and we have a lot in common. And so if one wanted to highlight a disagreement, it's not really a fundamental one. I think it's just kind of what we're emphasizing. Yann has emphasized pattern recognition and has emphasized prediction. And it's interesting to try to take that as far as you can. If you could do perfect prediction, what would that give you kind of as a thought experiment? And I think that's way too limited. We cannot do perfect prediction. We will never have the data sets that allow me to figure out what you're about ready to do, what question you're going to ask next. I have no clue. I will never know such things. Moreover, most of us find ourselves during the day in all kinds of situations we had no anticipation of that are kind of very, very novel in various ways. And in that moment, we want to think through what we want. And also there's going to be market forces acting on us. I'd like to go down that street, but now it's full because there's a crane in the street. I got it. I got to think about that. I got to think about what I might really want here. And I got to sort of think about how much it costs me to do this action versus this action. I got to think about the risks involved. A lot of our current pattern recognition and prediction systems don't do any risk evaluations. They have no error bars, right? I got to think about other people's decisions around me. I got to think about a collection of my decisions, even just thinking about like a medical treatment, you know, I'm not going to take the prediction of a neural net about my health, about something consequential. I'm not about ready to have a heart attack because some number is over 0.7. Even if you had all the data in the world that's ever been collected about heart attacks, better than any doctor ever had, I'm not going to trust the output of that neural net to predict my heart attack. I'm going to want to ask what if questions around that. I'm going to want to look at some other possible data I didn't have, causal things. I'm going to want to have a dialogue with a doctor about things we didn't think about when he gathered the data. You know, I could go on and on. I hope you can see. And I think that if you say prediction is everything, then you're missing all of this stuff. And so prediction plus decision making is everything, but both of them are equally important. And so the field has emphasized prediction; Yann, rightly so, has seen how powerful that is. But at the cost of people not being aware that decision making is where the rubber really hits the road, where human lives are at stake, where risks are being taken, where you got to gather more data. You got to think about the error bars. You got to think about the consequences of your decisions on others. You got to think about the economy around your decisions, blah, blah, blah, blah. I'm not the only one working on those, but we're a smaller tribe.
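To make that distinction concrete, here is a minimal, purely illustrative sketch — not anyone's actual system or method — of the difference between a bare point prediction compared against a threshold and a decision that folds in the uncertainty around that prediction and asymmetric costs. Every number, threshold, and cost below is an assumption made up for the example.

```python
import statistics

# Toy "model ensemble": each member gives a slightly different estimate of
# the probability of an adverse event (the heart-attack example above).
# In reality these might come from resampled data or different models;
# here they are invented numbers.
ensemble_estimates = [0.62, 0.71, 0.55, 0.74, 0.68, 0.60, 0.77, 0.65]

point_prediction = statistics.mean(ensemble_estimates)
error_bar = statistics.stdev(ensemble_estimates)  # crude uncertainty measure

# A "prediction-only" system stops here and compares against a fixed threshold.
prediction_only_decision = point_prediction > 0.7

# A decision-making view weighs uncertainty and asymmetric costs:
# missing a real event is assumed to be far more costly than an
# unnecessary follow-up intervention. Both costs are made up.
COST_FALSE_NEGATIVE = 100.0
COST_FALSE_POSITIVE = 5.0

# Expected cost of each action, averaging over the ensemble's disagreement.
expected_cost_if_act = statistics.mean(
    (1 - p) * COST_FALSE_POSITIVE for p in ensemble_estimates
)
expected_cost_if_wait = statistics.mean(
    p * COST_FALSE_NEGATIVE for p in ensemble_estimates
)

print(f"point prediction     : {point_prediction:.2f} +/- {error_bar:.2f}")
print(f"prediction-only rule : {'act' if prediction_only_decision else 'wait'}")
print(f"expected cost (act)  : {expected_cost_if_act:.1f}")
print(f"expected cost (wait) : {expected_cost_if_wait:.1f}")
print(f"decision rule        : {'act' if expected_cost_if_act < expected_cost_if_wait else 'wait'}")
```

The point of the sketch is not the arithmetic; it is that the decision rule needs inputs — error bars, costs, consequences — that a bare pattern recognizer never sees, which is exactly the gap being described here.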
And right now we're not the ones that people talk about the most. But you know, if you go out in the real world and industry, you know, at Amazon, I'd say half the people there are working on decision making and the other half are doing, you know, the pattern recognition. It's important. And the words of pattern recognition and prediction, I think the distinction there, not to linger on words, but the distinction there is more a constrained, sort of in-the-lab data set, versus decision making, which is talking about consequential decisions in the real world, under the messiness and the uncertainty of the real world. And just the whole of it, the whole mess of it that actually touches human beings at scale. And the forces, that's the distinction. It helps add those, that perspective, that broader perspective. You're right. I totally agree. On the other hand, if you're a real prediction person, of course, you want it to be in the real world. You want to predict real world events. I'm just saying that's not possible with just data sets. That it has to be in the context of, you know, strategic things that someone's doing, data they might gather, things they could have gathered, the reasoning process around data. It's not just taking data and making predictions based on the data. So one of the things that you're working on, I'm sure there's others working on it, but I don't often hear it talked about, especially in the clarity that you talk about it, and I think it's both the most exciting and the most concerning area of AI in terms of decision making. So you've talked about AI systems that help make decisions that scale in a distributed way, millions, billions of decisions, sort of markets of decisions. Can you, as a starting point, sort of give an example of a system that you think about when you're thinking about these kinds of systems? Yeah, so first of all, you're absolutely getting into some territory which will be beyond my expertise. And there are lots of things that are going to be very not obvious to think about. Just like, again, I like to think about history a little bit, but think about — put yourself back in the sixties. There was kind of a banking system that wasn't computerized really. There was database theory emerging and database people had to think about how do I actually not just move data around, but actual money, and have it be, you know, valid, and have transactions at ATMs happen that are actually, you know, all valid and so on and so forth. So that's the kind of issues you get into when you start to get serious about sorts of things like this. I like to think about, as kind of almost a thought experiment to help me think, something simpler, which is the music market. And because there is, to first order, there is no music market in the world right now and in our country, for sure. There are things called record companies and they make money and they prop up a few really good musicians and make them superstars and they all make huge amounts of money. But there's a long tail of huge numbers of people that make lots and lots of really good music that is actually listened to by more people than the famous people. They are not in a market. They cannot have a career. They do not make money. The creators. The creators, the so-called influencers or whatever — that diminishes who they are. So there are people who make extremely good music, especially in the hip hop or Latin world these days. They do it on their laptop.
That's what they do on the weekend and they have another job during the week and they put it up on SoundCloud or other sites. Eventually it gets streamed. It now gets turned into bits. It's not economically valuable. The information is lost. It gets put up there. People stream it. You walk around in a big city, you see people with headphones, especially young kids listening to music all the time. If you look at the data, very little of the music they are listening to is the famous people's music and none of it's old music. It's all the latest stuff. But the people who made that latest stuff are like some 16 year old somewhere who will never make a career out of this, who will never make money. Of course there will be a few counter examples. The record companies are incentivized to pick out a few and highlight them. Long story short, there's a missing market there. There is not a consumer producer relationship at the level of the actual creative acts. The pipelines and Spotify's of the world that take this stuff and stream it along, they make money off of subscriptions or advertising and those things. They're making the money. All right. And then they will offer bits and pieces of it to a few people, again to highlight them — they simulate a market. Anyway, a real market would be, if you're a creator of music, that you actually are somebody who's good enough that people want to listen to you, you should have the data available to you. There should be a dashboard showing a map of the United States. So, in the last week, here's all the places your songs were listened to. It should be transparent, vettable, so that if someone down in Providence sees that you're being listened to 10,000 times in Providence, that they know that's real data. You know it's real data. They will have you come give a show down there. They will broadcast to the people who've been listening to you that you're coming. If you do this right, you could go down there and make $20,000. You do that three times a year, you start to have a career. So in this sense, AI creates jobs. It's not about taking away human jobs. It's creating new jobs because it creates a new market. Once you've created a market, you've now connected up producers and consumers. The person who's making the music can say to someone who comes to their shows a lot, hey, I'll play at your daughter's wedding for $10,000. You'll say 8,000. They'll say 9,000. Then again, you can now get an income up to $100,000. You're not going to be a millionaire. And now even think about really the value of music is in these personal connections, even so much so that a young kid wants to wear a t-shirt with their favorite musician's signature on it. So if they listen to the music on the internet, the internet should be able to provide them with a button that they push and the merchandise arrives the next day. We can do that. And now why should we do that? Well, because the kid who bought the shirt will be happy, but more the person who made the music will get the money. There's no advertising needed. So you can create markets between producers and consumers, take a 5% cut. Your company will be perfectly sound. It'll go forward into the future and it will create new markets and that raises human happiness. Now this seems like, well, this is easy, just create this dashboard, kind of create some connections and all that. But if you think about Uber or whatever, you think about the challenges in the real world of doing things like this, and there are actually new principles going to be needed.
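A toy sketch of the kind of transparent, city-level listening dashboard described above: the stream-event records, field names, and the digest step standing in for "vettable" are all assumptions invented for illustration, not how any streaming service actually exposes its data.

```python
from collections import Counter
import hashlib
import json

# Invented stream-event log; in reality this would come from the platform.
stream_events = [
    {"artist": "artist_123", "track": "night drive", "city": "Providence"},
    {"artist": "artist_123", "track": "night drive", "city": "Providence"},
    {"artist": "artist_123", "track": "tidal", "city": "Boston"},
    {"artist": "artist_123", "track": "night drive", "city": "Mumbai"},
]

def weekly_dashboard(events, artist_id):
    """Aggregate last week's plays by city for one artist."""
    plays_by_city = Counter(
        e["city"] for e in events if e["artist"] == artist_id
    )
    report = {"artist": artist_id, "plays_by_city": dict(plays_by_city)}
    # A stand-in for making the numbers auditable: attach a digest that a
    # third party (a promoter in Providence, say) could compare against the
    # platform's published records. Real verifiability would need far more.
    payload = json.dumps(report, sort_keys=True).encode()
    report["digest"] = hashlib.sha256(payload).hexdigest()
    return report

print(weekly_dashboard(stream_events, "artist_123"))
```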
You're trying to create a new kind of two way market at a different scale than has ever been done before. There's going to be unwanted aspects of the market. There'll be bad people. The data will get used in the wrong ways, it'll fail in some ways, it won't deliver what it should. You have to think that through. Just like anyone who ran a big auction or ran a big matching service in economics will think these things through. And so that maybe doesn't get at all the huge issues that can arise when you start to create markets, but it starts to, at least for me, solidify my thoughts and allow me to move forward in my own thinking. Yeah. So I talked to the head of research at Spotify actually, and I think their long-term goal, they've said, is to have at least one million creators make a comfortable living putting their music on Spotify. So I think you articulate a really nice vision of the world, of the digital, the cyberspace of markets. What do you think companies like Spotify or YouTube or Netflix can do to create such markets? Is it an AI problem? Is it an interface problem, of interface design? Is it some other kind of, is it an economics problem? Who should they hire to solve these problems? Well, part of it's not just top down. So the Silicon Valley has this attitude that they know how to do it. They will create the system just like Google did with the search box that will be so good that they'll just, everyone will adopt that. It's everything you said, but really, I think, what's missing is that kind of culture. So it's literally that 16 year old who's able to create the songs. You don't create that as a Silicon Valley entity. You don't hire them per se. You have to create an ecosystem in which they are wanted and that they belong. And so you have to have some cultural credibility to do things like this. Netflix, to their credit, wanted some of that credibility and they created shows, content. They call it content. It's such a terrible word, but it's culture. And so with movies, you can kind of go give a large sum of money to somebody graduating from the USC film school. It's a whole thing of its own, but it's kind of like a rich white people's thing to do. And American culture has not been so much about rich white people. It's been about all the immigrants, all the Africans who came and brought that culture and those rhythms to this world and created this whole new thing. American culture. And so companies can't artificially create that. They can't just say, hey, we're here. We're going to buy it up. You've got to partner. And so anyway, not to denigrate, these companies are all trying and they should, and I'm sure they're asking these questions and some of them are even making an effort. But it is partly about respecting the culture. As a technology person, you've got to blend your technology with cultural meaning. How much of a role do you think the algorithm, so machine learning, has in connecting the consumer to the creator, sort of the recommender system aspect of this? Yeah. It's a great question. I think pretty high. There's no magic in the algorithms, but a good recommender system is way better than a bad recommender system. And recommender systems were a billion dollar industry even 10, 20 years ago. And it continues to be extremely important going forward.
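As a bare-bones illustration of the "no magic in the algorithms" point — the "people who bought this also bought that" style of recommendation that comes up in the Amazon example just below — here is a sketch using invented purchase histories. A real system would need scale, weighting, recency, and much more; this only shows the basic co-occurrence idea.

```python
from collections import defaultdict
from itertools import combinations

# Invented purchase histories, one set of items per user.
purchases = {
    "user_a": {"statistics text", "optimization text", "coffee grinder"},
    "user_b": {"statistics text", "optimization text"},
    "user_c": {"statistics text", "probability text"},
    "user_d": {"coffee grinder", "kettle"},
}

# Count how often pairs of items are bought by the same person.
co_counts = defaultdict(int)
for items in purchases.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, k=3):
    """Return the items most often co-purchased with `item`."""
    scored = [(other, n) for (i, other), n in co_counts.items() if i == item]
    return [other for other, _ in sorted(scored, key=lambda x: -x[1])[:k]]

print(recommend("statistics text"))
# e.g. ['optimization text', ...] -- learning indirectly from one's peers
```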
What's your favorite recommender system, just so we can put something concrete in front of us? Well, just historically, I was one of the people who, when I first went to Amazon, didn't like Amazon, because they put the book people out of business, the libraries — the local booksellers went out of business. I've come to accept that there probably are more books being sold now and more people reading them than ever before. And then local book stores are coming back. So that's how economics sometimes work. You go up and you go down. But anyway, when I finally started going there and I bought a few books, I was really pleased to see another few books being recommended to me that I never would have thought of. And I bought a bunch of them. So they obviously had a good business model. But I learned things and I still to this day kind of browse using that service. And I think lots of people get a lot out of that; that is a good aspect of a recommendation system. I'm learning from my peers in an indirect way. And their algorithms are not meant to impose what we learn; it really is trying to find out what's in the data. It doesn't work so well for other kinds of entities, but that's just the complexity of human life. Like shirts, I'm not going to get recommendations on shirts, but that's interesting. If you try to recommend restaurants, it's hard. It's hard to do it at scale. But a blend of recommendation systems with other economic ideas, matchings and so on, is really, really still very open research wise. And there's new companies that are going to emerge that do that well. What do you think about going into the messy, difficult land of, say, politics and things like that, that YouTube and Twitter have to deal with in terms of recommendation systems? Being able to suggest — I think Facebook just launched Facebook News — so recommending the kind of news that is most likely to be interesting to you. Do you think this is AI solvable, again, whatever term we want to use, do you think it's a solvable problem for machines or is it a deeply human problem that's unsolvable? So I don't even think about it at that level. I think that what's broken with some of these companies, it's all monetization by advertising. At least Facebook — I want to critique them — they didn't really try to connect a producer and a consumer in an economic way, right? No one wants to pay for anything. And so they all, you know, starting with Google and Facebook, they went back to the playbook of, you know, the television companies back in the day. No one wanted to pay for this signal. They would pay for the TV box, but not for the signal, at least back in the day. And so advertising kind of filled that gap and advertising was new and interesting and it somehow didn't quite take over our lives, right? Fast forward, Google provides a service that people don't want to pay for. And so somewhat surprisingly in the nineties, they ended up making huge amounts of money; they cornered the advertising market. It didn't seem like that was going to happen, at least to me. These little things on the right hand side of the screen just did not seem all that economically interesting, but the companies had maybe no other choice. The TV market was going away and billboards and so on. So they got it. And I think that, sadly, Google was just doing so well with that, making so much money, that they didn't think much more about, wait a minute, is there a producer consumer relationship to be set up here?
Not just a market to be created between us and the advertisers — is there an actual market between the producer and the consumer? There are the producers: the person who created that video clip, the person that made that website, the person who could make more such things, the person who could adjust it as a function of demand, and the person on the other side who's asking for different kinds of things, you know? So you see glimmers of that now — there's influencers and there's kind of a little glimmering of a market — but it should have been done 20 years ago. It should have been thought about. It should have been created in parallel with the advertising ecosystem. And then Facebook inherited that. And I think they also didn't think very much about that. So fast forward and now they are making huge amounts of money off of advertising. And the news thing and all these clicks are just feeding the advertising. It's all connected up to the advertiser. So you want more people to click on certain things because that money flows to you, Facebook. You're very much incentivized to do that. And when you start to find it's breaking, people are telling you, well, we're getting into some troubles. You try to adjust it with your smart AI algorithms, right? And figure out what are bad clicks. So maybe it shouldn't be click through rate, it should be something else. I find that pretty much hopeless. It does get into all the complexity of human life and you can try to fix it. You should, but you could also fix the whole business model. And the business model question is really: are there some human producers and consumers out there? Is there some economic value to be liberated by connecting them directly? Is it such that it's so valuable that people will be able to pay for it? All right. And micropayments, like small payments? Micro, but it doesn't even have to be micro. So I like the example, suppose I'm going, next week I'm going to India. Never been to India before. Right? I have a couple of days in Mumbai, I have no idea what to do there. Right? And I could go on the web right now and search. It's going to be kind of hopeless. I'm not going to find it, you know, I'll have lots of advertisers in my face. Right? What I really want to do is broadcast to the world that I am going to Mumbai and have someone on the other side of a market look at me, and there's a recommendation system there. So I'm not looking at all possible people coming to Mumbai. They're looking at the people who are relevant to them. So someone in my age group, someone who kind of knows me on some level, I give up a little privacy by that, but I'm happy, because what I'm going to get back is this person can make a little video for me, or they're going to write a little two page paper on here's the cool things that you want to do in Mumbai this week especially, right? I'm going to look at that. I'm not going to pay a micro payment. I'm going to pay, you know, a hundred dollars or whatever for that. It's real value. It's like journalism. Um, and it's not a subscription, it's that I'm going to pay that person in that moment. The company's going to take 5% of that. And that person has now got it. It's a gig economy, if you will, but, you know, done right. You know, thinking a little bit about what's behind YouTube, there were actually people who could make more of those things. If they were connected to a market, they would make more of those things independently. You don't have to tell them what to do. You don't have to incentivize them any other way.
Um, and so, yeah, these companies, I don't think, have thought long and hard about that. So I do distinguish Facebook on the one side, who have just not thought about these things at all, I think, thinking that AI will fix everything, and Amazon, who think about them all the time because they were already out in the real world. They were delivering packages to people's doors. They were, they were worried about a market. They were worried about sellers, and, you know, they worry, and some things they do are great. Some things maybe not so great, but you know, they're in that business model. And then I'd say Google sort of hovers somewhere in between. I don't, I don't think for a long, long time they got it. I think they probably see that YouTube is more pregnant with possibility than they might've thought and that they're probably heading that direction. Um, but, you know, Silicon Valley has been dominated by the Google Facebook kind of mentality and the subscription and advertising, and that's the core problem, right? The fake news actually rides on top of that because it means that you're monetizing with click-through rate, and that is the core problem. You got to remove that. So advertisement, if we're going to linger on that, I mean, that's an interesting thesis. I don't know if everyone really deeply thinks about that. So you're right. The thought is the advertising model is the only thing we have, the only thing we'll ever have. We have to fix it, we have to build algorithms that, despite that business model, you know, find the better angels of our nature and do good by society and by the individual. But you think we can slowly, you think, first of all, there's a difference between should and could. So you're saying we should slowly move away from the advertising model and have a direct connection between the consumer and the creator. The question I also have is, can we, because the advertising model is so successful now in terms of just making a huge amount of money and therefore being able to build a big company that has really smart people working there that create a good service. Do you think it's possible? And just to clarify, you think we should move away? Well, I think we should. Yeah. But 'we' is the, you know, me. So society. Yeah. Well, the companies, I mean, so first of all, full disclosure, I'm doing a day a week at Amazon because I kind of want to learn more about how they do things. So, you know, I'm not speaking for Amazon in any way, but, you know, I did go there because I actually believe they get a little bit of this, or are trying to create these markets. And they don't really use — advertising is not a crucial part of it. Well, that's a good question. So it has become, not crucial, but more and more present if you go to the Amazon website. And, you know, without revealing too many deep secrets about Amazon, I can tell you that, you know, a lot of people in the company question this and there's a huge questioning going on. You do not want a world where there's zero advertising. That actually is a bad world. Okay. So here's a way to think about it. You're a company that, like Amazon, is trying to bring products to customers, right? And the customer, at any given moment — you want to buy a vacuum cleaner, say — you want to know what's available for me. And, you know, it's not going to be that obvious. You have to do a little bit of work at it. The recommendation system will sort of help, right?
But now suppose this other person over here has just made the world's best vacuum cleaner, you know, they spent a huge amount of energy. They had a great idea. They made a great vacuum cleaner. They know they really did it. They nailed it. It's an MIT, you know, whiz kid that made a great new vacuum cleaner, right? It's not going to be in the recommendation system. No one will know about it. The algorithms will not find it and AI will not fix that. Okay. At all. Right. How do you allow that vacuum cleaner to start to get in front of people, to be sold? Well, advertising. And here, what advertising is, is a signal that you believe in your product enough that you're willing to pay some real money for it. And to me as a consumer, I look at that signal. I say, well, first of all, I know these are not just the cheap little ads we have right now, where I know that, you know, these are super cheap, you know, pennies. If I see an ad where, actually, I know the company is only doing a few of these and, you know, real money is kind of flowing, I may pay more attention to it. And I actually might want that, because I see, hey, that guy spent money on his vacuum cleaner. Maybe there's something good there. So I will look at it. And so that's part of the overall information flow in a good market. So advertising has a role, but the problem is of course that that signal is now completely gone, because it's just, you know, dominated by these tiny little things that add up to big money for the company, you know? So I think it will just, I think it will change, because societies just don't, you know, stick with things that annoy a lot of people, and advertising currently annoys people more than it provides information. And I think that a Google probably is smart enough to figure out that this is a dead end, this is a bad model, even though it's a huge amount of money, and they'll have to figure out how to pull away from it slowly. And I'm sure the CEO there will figure it out, but they need to do it. And they need to. So if you reduce advertising, not to zero, but you reduce it, and at the same time you bring up producer-consumer, actual real value being delivered. So real money is being paid, and they take a 5% cut. That 5% could start to get big enough to cancel out the lost revenue from the poor kind of advertising. And I think that a good company will do that, will realize that. And Facebook, you know, again, God bless them. They bring, you know, grandmothers, they bring children's pictures into grandmothers' lives. It's fantastic. But they need to think of a new business model and that's the core problem there. Until they start to connect producer and consumer, I think they will just continue to make money and then buy the next social network company and then buy the next one, and the innovation level will not be high and the health issues will not go away. So I apologize that we kind of returned to words, I don't think the exact terms matter, but in sort of defense of advertisement, don't you think the kind of direct connection between consumer and creator/producer is what advertisement strives to do, right? So, that is, the best advertisement is literally — now Facebook is listening to our conversation and heard that you're going to India and will be able to actually start automatically making these connections for you and start giving you this offer.
So like, I apologize if it's just a matter of terms, but just to draw a distinction, is it possible to make advertisements just better and better and better algorithmically, to where it actually becomes a connection, almost a direct connection? That's a good question. So let me comment on that. First of all, what we just talked about, I was defending advertising. Okay. So I was defending it as a way to get signals into a market that don't come any other way, especially algorithmically. It's a sign that someone spent money on it, it's a sign they think it's valuable. And if I think that someone else thinks it's valuable, and if I trust other people, I might be willing to listen. I don't trust Facebook, though, who's an intermediary in this. I don't think they care about me. Okay. I don't think they do. And I find it creepy that they know I'm going to India next week because of our conversation. Why do you think that is? So what, could you just put your PR hat on? Why do you think you find Facebook creepy and not trust them, as does the majority of the population? So, out of the Silicon Valley companies, I saw, like, not an approval rating, but there's a ranking of how much people trust companies, and Facebook is in the gutter. In the gutter, including people inside of Facebook. So what do you attribute that to? Because when I... Come on, you don't find it creepy that — right now we're talking, and I might walk out on the street right now, and some unknown person who I don't know kind of comes up to me and says, I hear you're going to India? I mean, that's not even Facebook. That's just, I want transparency in human society. I want to have, if you know something about me, there's actually some reason you know something about me. Something that, if I look at it later and kind of audit it, I approve of. You know something about me because you care in some way. There's a caring relationship even, or an economic one or something. Not just that you're someone who could exploit it in ways I don't know about or care about or I'm troubled by or whatever. We're in a world right now where that happens way too much, and Facebook knows things about a lot of people and could exploit it and does exploit it at times. I think most people do find that creepy. It's not done for them. Facebook is not doing it because they care about them in a real sense. And they shouldn't. They should not be a big brother caring about us. That is not the role of a company like that. Why not? Wait, not the big brother part, but the caring, the trusting. I mean, don't those companies, just to linger on it, because a lot of companies have a lot of information about us. I would argue that there's companies like Microsoft that have more information about us than Facebook does, and yet we trust Microsoft more. Well, Microsoft is pivoting. Microsoft, you know, under Satya Nadella has decided this is really important. We don't want to do creepy things. We really want people to trust us, to actually only use information in ways that they really would approve of, that we don't decide, right? And I'm just kind of adding that the health of a market is that when I connect someone who produces with a consumer, it's not just a random producer or consumer, it's people who see each other. They don't have to like each other, but they sense that if they transact, some happiness will go up on both sides. If a company helps me to do that in moments of my choosing, then fine.
So, and also think about the difference between, you know, browsing versus buying, right? There are moments in my life I just want to buy, you know, a gadget or something. I need something for that moment. I need some ammonia for my house or something because I got a problem with a spill. I want to just go in. I don't want to be advertised at that moment. I don't want to be led down various paths, you know, that's annoying. I want to just go and have it be extremely easy to do what I want. Other moments I might say, no, it's like today I'm going to the shopping mall. I want to walk around and see things and see people and be exposed to stuff. So I want control over that though. I don't want the company's algorithms to decide for me, right? I think that's the thing. There's a total loss of control if Facebook thinks they should take the control from us of deciding when we want to have certain kinds of information, when we don't, what information that is, how much it relates to what they know about us that we didn't really want them to know about us. I don't want them to be helping me in that way. I don't want them to be helping me by deciding that they have control over what I want and when. I totally agree. Facebook, by the way, I have this optimistic thing where I think Facebook has the kind of personal information about us that could create a beautiful thing. So I'm really optimistic of what Facebook could do. It's not what it's doing, but what it could do. So I don't see that. I think that optimism is misplaced, because you have to have a business model behind these things. Creating a beautiful thing is really, let's be clear, about something that people would value. And I don't think they have that business model and I don't think they will suddenly discover it by what, you know, a long hot shower. I disagree. I disagree in terms of — you can discover a lot of amazing things in a shower. So, I didn't say that. I said they won't come to it, they won't do it. But in the shower, I think a lot of other people will discover it. I think that this guy — so I should also, full disclosure, there's a company called United Masters, which I'm on their board, and they've created this music market, and they have a hundred thousand artists now signed on, and they've done things like gone to the NBA, and the music you find behind NBA clips right now is their music, right? That's a company that had the right business model in mind from the get go, right? Executed on that. And from day one, there was value brought — so here you have a kid who made some songs, and suddenly their songs are on the NBA website, right? That's real economic value to people. And so, you know, so you and I differ on the optimism of being able to sort of change the direction of the Titanic, right? So I, yeah, I'm older than you, so I've seen some Titanics crash. Got it. But just to elaborate, cause I totally agree with you and I just want to know how difficult you think this problem is. So for example, I want to read some news, and there's a lot of times in the day where something makes me either smile or think in a way where I, like, consciously think this really gave me value. Like, I sometimes listen to The Daily podcast from the New York Times — way better than the New York Times themselves, by the way, for people listening. There's, like, real journalism happening for some reason in the podcast space.
It doesn't make sense to me, but often I listen to it for 20 minutes and I would be willing to pay for that, like $5, $10 for that experience. And how difficult — that's kind of what you're getting at — is that little transaction. How difficult is it to create a frictionless system like Uber has, for example, for other things? What's your intuition there? So I, first of all, I pay little bits of money to, you know, certain things — there's something called Quartz that does financial things. I like Medium as a site, I don't pay there, but I would. You had a great post on Medium. I would have loved to pay you a dollar, and not others. I wouldn't have wanted it per se, because there should be also sites where that's not actually the goal. The goal is to actually have a broadcast channel that I monetize in some other way if I chose to. I mean, I could — now people know about it, I could — I'm not doing it, but that's fine with me. Also the musicians who are making all this music, I don't think the right model is that you pay a little subscription fee to them, right? Because people can copy the bits too easily, and that's just not where the value is. The value is that a connection was made between real human beings, then you can follow up on that. All right. And create yet more value. So no, I think there's a lot of open questions here, hot open questions, but also, yeah, I do want good recommendation systems that recommend cool stuff to me. But it's pretty hard, right? I don't like them to recommend stuff just based on my browsing history. I don't like it based on stuff they know about me, quote unquote. What's unknown about me is the most interesting. So this is the, this is the really interesting question. We may disagree, maybe not. I think that I love recommender systems and I want to give them everything about me in a way that I trust. Yeah. But you, but you don't, because, so for example, this morning I clicked on a, you know, I was pretty sleepy this morning. I clicked on a story about the Queen of England. Yes. Right. I do not give a damn about the Queen of England. I really do not. But it was clickbait. It kind of looked funny and I had to say, what the heck are they talking about? I don't want to have my life, you know, heading that direction. Now that's in my browsing history. The system — any reasonable system — will think that I care about the Queen of England. That's browsing history. Right. But you're saying all the traces, all the digital exhaust or whatever — that's been kind of the model: if you collect all this stuff, you're going to figure all of us out. Well, if you're trying to figure out like kind of one person like Trump or something, maybe you could figure him out. But if you're trying to figure out, you know, 500 million people, you know, no way, no way. You think so? No, I do. I think so. I think we are, humans are just amazingly rich and complicated. Every one of us has our little quirks, every one of us has our little things that could intrigue us that we don't even know will intrigue us. And there's no sign of it in our past, but by God, there it comes and, you know, you fall in love with it. And I don't want a company trying to figure that out for me and anticipate that. I want them to provide a forum, a market, a place that I kind of go to, and by hook or by crook, this happens, you know — I'm walking down the street and I hear some Chilean music being played and I never knew I liked Chilean music, but wow.
So there is that side, and I want them to provide a limited, but, you know, interesting place to go. Right. And so don't try to use your AI to kind of, you know, figure me out and then put me in a world where you figured me out, you know, no — create huge spaces for human beings where our creativity and our style will be enriched and come forward, and there'll be a lot more transparency. I won't have people randomly, anonymously putting comments up, especially based on stuff they know about me — facts that, you know... We are so broken right now. You know, especially if you're a celebrity, but, you know, it's about anybody — anonymous people are hurting lots and lots of people right now. That's part of this thing of Silicon Valley thinking that, you know, you just collect all this information and use it in a great way. So no, I'm not, I'm not a pessimist, I'm very much an optimist by nature, but I think that's just been the wrong path for the whole technology to take. Be more limited, create, let humans rise up. Don't try to replace them. That's the AI mantra. Don't try to anticipate them. Don't try to predict them, because you're not going to be able to do those things. You're going to make things worse. Okay. So right now, just give this a chance. Right now, the recommender systems are the creepy people in the shadows watching your every move. So they're looking at traces of you. They're not directly interacting with you; sort of, your close friends and family, the way they know you is by having conversations, by actually having interactions back and forth. Do you think there's a place for recommender systems sort of to step in — cause you just emphasized the value of human-to-human connection, but, yeah, just give it a chance, AI-human connection. Is there a role for an AI system to have conversations with you, to try to figure out what kind of music you like, not by just watching what you're listening to, but actually having a conversation, natural language or otherwise? Yeah, no, so I'm not against it. I just wanted to push back against the — maybe you were saying you have optimism for Facebook. So there I think it's misplaced, but, but I think that, distributing — yeah, no, so good for you. Go for it. That's a hard spot to be in. Yeah, no, good. Human interaction, like in our daily lives — the context around me in my own home is something that I don't want some big company to know about at all, but I would be more than happy to have technology help me with it. Which kind of technology? Well, you know, just Alexa — Amazon's Alexa — done right. And I think Alexa is a research platform right now more than anything else. But Alexa done right, you know, could do things like, I leave the water running in my garden and I say, hey Alexa, the water's running in my garden, and even have Alexa figure out that that means, when my wife comes home, that she should be told about that. That's a little bit of reasoning. I would call that AI — by any kind of stretch, it's a little bit of reasoning — and it actually kind of would make my life a little easier and better. And you know, I wouldn't call this a wow moment, but I kind of think that it overall raises human happiness up to have that kind of thing. But not when you're lonely — Alexa knowing loneliness? No, no, I don't want Alexa to feel intrusive. And I don't want just the designer of the system to kind of work all this out.
I really want to have a lot of control and I want transparency and control. And if a company can stand up and give me that in the context of new technology, I think they're good. First of all, be way more successful than our current generation. And like I said, I was mentioning Microsoft, I really think they're, they're pivoting to kind of be the trusted old uncle, but you know, I think that they get that this is a way to go, that if you let people find technology, empowers them to have more control and have and have control, not just over privacy, but over this rich set of interactions, that that people are going to like that a lot more. And that's, that's the right business model going forward. What does control over privacy look like? Do you think you should be able to just view all the data that? No, it's much more than that. I mean, first of all, it should be an individual decision. Some people don't want privacy. They want their whole life out there. Other people's want it. Privacy is not a zero one. It's not a legal thing. It's not just about which data is available, which is not. I like to recall to people that, you know, a couple hundred years ago, everyone, there was not really big cities, everyone lived in on the countryside and villages and villages. Everybody knew everything about you. Very, you didn't have any privacy. Is that bad? Are we better off now? Well, you know, arguably no, because what did you get for that loss of certain kinds of privacy? Well, people help each other if they, because they know everything about you. They know something's bad's happening, they will help you with that. Right. And now you live in a big city, no one knows about that. You get no help. So it kind of depends the answer. I want certain people who I trust and there should be relationships. I should kind of manage all those, but who knows what about me? I should have some agency there. It shouldn't, I shouldn't be a drift in a sea of technology where I have no agency. I don't want to go reading things and checking boxes. So I don't know how to do that. And I'm not a privacy researcher per se. I just, I recognize the vast complexity of this. It's not just technology. It's not just legal scholars meeting technologists. There's gotta be kind of a whole layers around it. And so I, when I alluded to this emerging engineering field, this is a big part of it. When electrical engineering came, I'm not one around at the time, but you just didn't plug electricity into walls and all kinds of work. You don't have to have like underwriters laboratory that reassured you that that plug's not going to burn up your house and that that machine will do this and that and everything. There'll be whole people who can install things. There'll be people who can watch the installers. There'll be a whole layers, you know, an onion of these kinds of things. And for things as deep and interesting as privacy, which is as least as interesting as electricity, that's going to take decades to kind of work out, but it's going to require a lot of new structures that we don't have right now. So it's kind of hard to talk about it. And you're saying there's a lot of money to be made if you get it right. So something you should look at. A lot of money to be made in all these things that provide human services and people recognize them as useful parts of their lives. So yeah. So yeah, the dialogue sometimes goes from the exuberant technologists to the no technology is good, kind of. 
And that's, you know, in our public discourse, you know, and as far as you see too much of this kind of thing and the sober discussions in the middle, which are the challenge he wants to have or where we need to be having our conversations. And you know, there's just not actually, there's not many forum fora for those. You know, there's, that's, that's kind of what I would look for. Maybe I could go and I could read a comment section of something and it would actually be this kind of dialogue going back and forth. You don't see much of this, right? Which is why actually there's a resurgence of podcasts out of all, because people are really hungry for conversation, but there's technology is not helping much. So comment sections of anything, including YouTube is not hurting and not helping. Yeah. And you think technically speaking, it's possible to help. I don't know the answers, but it's a, it's a, it's a less anonymity, a little more locality, you know, worlds that you kind of enter in and you trust the people there in those worlds so that when you start having a discussion, you know, not only is that people are not going to hurt you, but it's not going to be a total waste of your time because there's a lot of wasting of time that, you know, a lot of us, I pulled out of Facebook early on cause it was clearly going to waste a lot of my time even though there was some value. And so, yeah, worlds that are somehow you enter in and you know what you're getting and it's kind of appeals to you and you might, new things might happen, but you kind of have some, some trust in that world. And there's some deep, interesting, complex psychological aspects around anonymity, how that changes human behavior that's quite dark. Quite dark. Yeah. I think a lot of us are, especially those of us who really loved the advent of technology. I love social networks when they came out. I was just, I didn't see any negatives there at all. But then I started seeing comment sections. I think it was maybe, you know, with the CNN or something. And I started to go, wow, this, this darkness I just did not know about and, and our technology is now amplifying it. So sorry for the big philosophical question, but on that topic, do you think human beings, cause you've also, out of all things, had a foot in psychology too, the, do you think human beings are fundamentally good? Like all of us have good intent that could be mind or is it depending on context and environment, everybody could be evil. So my answer is fundamentally good. But fundamentally limited. All of us have very, you know, blinkers on. We don't see the other person's pain that easily. We don't see the other person's point of view that easily. We're very much in our own head, in our own world. And on my good days, I think the technology could open us up to, you know, more perspectives and more less blinkered and more understanding, you know, a lot of wars in human history happened because of just ignorance. They didn't, they, they thought the other person was doing this while their person wasn't doing this. And we have a huge amounts of that. But in my lifetime, I've not seen technology really help in that way yet. And I do, I do, I do believe in that, but you know, no, I think fundamentally humans are good. The people suffer, people have grievances because you have grudges and those things cause them to do things they probably wouldn't want. They regret it often. 
So no, I, I think it's a, you know, part of the progress of technology is to indeed allow it to be a little easier to be the real good person you actually are. Well, but do you think individual human life or society could be modeled as an optimization problem? Not the way I think typically, I mean, that's, you're talking about one of the most complex phenomenon in the whole, you know, in all of which the individual human life or society as a whole. Both, both. I mean, individual human life is amazingly complex. And so you know, optimization is kind of just one branch of mathematics that talks about certain kinds of things. And it just feels way too limited for the complexity of such things. What properties of optimization problems do you think, so do you think most interesting problems that could be solved through optimization, what kind of properties does that surface have non convexity, convexity, linearity, all those kinds of things, saddle points? Well, so optimization is just one piece of mathematics. You know, there's like, you just, even in our era, we're aware that say sampling is coming up, examples of something coming up with a distribution. What's optimization? What's sampling? Well, they, you can, if you're a kind of a certain kind of mathematician, you can try to blend them and make them seem to be sort of the same thing. But optimization is roughly speaking, trying to find a point that, a single point that is the optimum of a criterion function of some kind. And sampling is trying to, from that same surface, treat that as a distribution or density and find points that have high density. So I want the entire distribution in a sampling paradigm and I want the, you know, the single point, that's the best point in the optimization paradigm. Now if you were optimizing in the space of probability measures, the output of that could be a whole probability distribution. So you can start to make these things the same. But in mathematics, if you go too high up that kind of abstraction hierarchy, you start to lose the, you know, the ability to do the interesting theorems. So you kind of don't try that. You don't try to overly over abstract. So as a small tangent, what kind of worldview do you find more appealing? One that is deterministic or stochastic? Well, that's easy. I mean, I'm a statistician. You know, the world is highly stochastic. I don't know what's going to happen in the next five minutes, right? Because what you're going to ask, what we're going to do, what I'll say. Due to the uncertainty. Due to the... Massive uncertainty. Yeah. You know, massive uncertainty. And so the best I can do is have come rough sense or probability distribution on things and somehow use that in my reasoning about what to do now. So how does the distributed at scale when you have multi agent systems look like? So optimization can optimize sort of, it makes a lot more sense, sort of at least from my from robotics perspective, for a single robot, for a single agent, trying to optimize some objective function. When you start to enter the real world, this game theoretic concept starts popping up. That's how do you see optimization in this? Because you've talked about markets in a scale. What does that look like? Do you see it as optimization? Do you see it as sampling? Do you see like, how should you mark? These all blend together. And a system designer thinking about how to build an incentivized system will have a blend of all these things. 
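To make the optimization-versus-sampling contrast above concrete, here is a minimal sketch in Python (the one-dimensional criterion and all numbers are illustrative assumptions, not anything from the conversation): the optimization view returns the single best point of a criterion, while the sampling view treats the same surface as an unnormalized density and returns a whole cloud of points concentrated where that density is high.

```python
import numpy as np

# An illustrative one-dimensional criterion, also usable as an unnormalized
# log-density.  Peaked at x = 2.
def f(x):
    return -(x - 2.0) ** 2

xs = np.linspace(-5.0, 10.0, 3001)

# Optimization view: return the single point where the criterion is largest.
x_star = xs[np.argmax(f(xs))]

# Sampling view: treat exp(f) as an unnormalized density and draw a whole
# cloud of points that concentrate where the density is high
# (crude grid-based sampling, just for illustration).
density = np.exp(f(xs))
density /= density.sum()
rng = np.random.default_rng(0)
samples = rng.choice(xs, size=5000, p=density)

print("optimum (one point):", x_star)                  # ~2.0
print("samples (a distribution): mean %.2f, std %.2f"
      % (samples.mean(), samples.std()))
```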
So, you know, a particle in a potential well is optimizing a functional called a Lagrangian, right? The particle doesn't know that. There's no algorithm running that does that. It just happens. And so it's a description mathematically of something that helps us understand as analysts what's happening, right? And so the same thing will happen when we talk about, you know, mixtures of humans and computers and markets and so on and so forth, there'll be certain principles that allow us to understand what's happening, whether or not the actual algorithms are being used by any sense is not clear. Now at some point, I may have set up a multi agent or market kind of system. And I'm now thinking about an individual agent in that system. And they're asked to do some task and they're incentivized in some way, they get certain signals and they have some utility. What they will do at that point is they just won't know the answer, they may have to optimize to find an answer. Okay, so an artist could be embedded inside of an overall market. You know, and game theory is very, very broad. It is often studied very narrowly for certain kinds of problems. But it's roughly speaking, this is just the, I don't know what you're going to do. So I kind of anticipate that a little bit, and you anticipate what I'm anticipating. And we kind of go back and forth in our own minds. We run kind of thought experiments. You've talked about this interesting point in terms of game theory, you know, most optimization problems really hate saddle points, maybe you can describe what saddle points are. But I've heard you kind of mentioned that there's a there's a branch of optimization that you could try to explicitly look for saddle points as a good thing. Oh, not optimization. That's just game theory that that so there's all kinds of different equilibria in game theory. And some of them are highly explanatory behavior. They're not attempting to be algorithmic. They're just trying to say, if you happen to be at this equilibrium, you would see certain kind of behavior. And we see that in real life. That's what an economist wants to do, especially behavioral economists in continuous differential game theory, you're in continuous spaces, a some of the simplest equilibria are saddle points and Nash equilibrium as a saddle point. It's a special kind of saddle point. So classically, in game theory, you were trying to find Nash equilibria and an algorithmic game theory, you're trying to find algorithms that would find them. And so you're trying to find saddle points. I mean, so that's literally what you're trying to do. But you know, any economist knows that Nash equilibria have their limitations. They are definitely not that explanatory in many situations. They're not what you really want. There's other kind of equilibria. And there's names associated with these because they came from history with certain people working on them, but there will be new ones emerging. So you know, one example is a Stackelberg equilibrium. So you know, Nash, you and I are both playing this game against each other or for each other, maybe it's cooperative, and we're both going to think it through and then we're going to decide and we're going to do our thing simultaneously. You know, in a Stackelberg, no, I'm going to be the first mover. I'm going to make a move. You're going to look at my move and then you're going to make yours. Now since I know you're going to look at my move, I anticipate what you're going to do. 
And so I don't do something stupid, but then I know that you are also anticipating me. So we're kind of going back and forth in this way, but there is then a first mover thing. And so those are different equilibria, right? And so just mathematically, yeah, these things have certain topologies and certain shapes. Algorithmically or dynamically, how do you move towards them? How do you move away from things? You know, so some of these questions have answers, they've been studied, others do not. And especially if it becomes stochastic, especially if there's large numbers of decentralized things, there's just, you know, young people get in this field who kind of think it's all done because we have, you know, TensorFlow. Well, no, these are all open problems and they're really important and interesting. And it's about strategic settings. How do I collect data? Suppose I don't know what you're going to do because I don't know you very well, right? Well, I got to collect data about you. So maybe I want to push you into a part of the space where I don't know much about you so I can get data. And then later I'll realize that you'll never go there because of the way the game is set up. You know, that's all part of the overall data analysis context. Even the game of poker is a fascinating space. Whenever there's any uncertainty, a lack of information, it's a super exciting space. Just to linger on optimization for a second. So when we look at deep learning, it's essentially minimization of a complicated loss function. So is there something insightful or hopeful that you see in the kinds of function surfaces that the loss functions of deep learning in the real world are trying to optimize over? Is there something interesting, or is it just the usual kind of problems of optimization? I think from an optimization point of view, that surface, first of all, it's pretty smooth. And secondly, if it's overparameterized, there's kind of lots of paths down to reasonable optima. And so kind of getting downhill to an optimum is viewed as not as hard as you might've expected in high dimensions. The fact that some optima tend to be really good ones and others not so good, and you tend to find the good ones, is sort of still in need of explanation. Yeah. But the particular surfaces are coming from the particular generation of neural nets. I kind of suspect those will change in 10 years. It will not be exactly those surfaces. There'll be some others, and optimization theory will help contribute to why those other surfaces, or why other algorithms, work. Layers of arithmetic operations with a little bit of nonlinearity, that didn't come from neuroscience per se. I mean, maybe in the minds of some of the people working on it, they were thinking about brains, but these were arithmetic circuits in all kinds of fields, computer science, control theory and so on. And that layers of these could transform things in certain ways, and that if it's smooth, maybe you could find parameter values, is a sort of big discovery, that it's able to work at this scale. But I don't think that we're stuck with that, and we're certainly not stuck with that because we're understanding the brain. So in terms of the algorithm side, sort of gradient descent, do you think we're stuck with gradient descent and variants of it?
What variants do you find interesting, or do you think there'll be something else invented that is able to walk all over these optimization spaces in more interesting ways? So there's a co design of the surface, or the architecture, and the algorithm. So if you just ask, if we stay with the kind of architectures that we have now, and not just neural nets, but you know, phase retrieval architectures or matrix completion architectures and so on. You know, I think we've kind of come to a place where, yeah, stochastic gradient algorithms are dominant and there are versions that are a little better than others. They have more guarantees, they're more robust and so on. And there's ongoing research to kind of figure out which is the best algorithm for which situation. But I think that that'll start to co evolve, that that'll put pressure on the actual architecture. And so we shouldn't do it in this particular way, we should do it in a different way, because this other algorithm is now available if you do it in a different way. So I can't really anticipate that co evolution process, but you know, gradients are amazing mathematical objects. A lot of people who start to study them more deeply mathematically are kind of shocked about what they are and what they can do. Think about it this way, suppose that I tell you if you move along the x axis, you go uphill in some objective by three units, whereas if you move along the y axis, you go uphill by seven units, right? Now I'm going to only allow you to move a certain unit distance, right? What are you going to do? Well, most people will say, I'm going to go along the y axis, I'm getting the biggest bang for my buck, you know, and my buck is only one unit, so I'm going to put all of it in the y axis, right? And why should I even take any of my strength, my step size, and put any of it in the x axis, because I'm getting less bang for my buck? That seems like a completely clear argument, and it's wrong, because the gradient direction is not to go along the y axis, it's to take a little bit of the x axis. And to understand that, you have to know some math, and so even a trivial so called operator like the gradient is not trivial, and so, you know, exploiting its properties is still very important. Now we know that just plain gradient descent has got all kinds of problems, it gets stuck in many ways and it doesn't have, you know, good dimension dependence and so on. So my own line of work recently has been about what kinds of stochasticity, how can we get dimension dependence, how can we do the theory of that, and we've come up with pretty favorable results with certain kinds of stochasticity. We have sufficient conditions generally. We know if you do this, we will give you a good guarantee. We don't have necessary conditions that it must be done a certain way in general. So stochasticity, how much randomness to inject into the walking along the gradient? And what kind of randomness? Why is randomness good in this process? Why is stochasticity good? Yeah, so I can give you simple answers, but in some sense again, it's kind of amazing. Stochasticity just, you know, particular features of a surface that could have hurt you if you were doing one thing deterministically won't hurt you, because by chance, there's very little chance that you would get hurt. So here stochasticity just kind of saves you from some of the particular features of surfaces.
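The three-units-versus-seven-units argument above can be checked with a few lines of arithmetic. A minimal sketch, assuming a locally linear objective whose partial derivatives are 3 along x and 7 along y (the numbers are just the ones used in the example): under a unit-length step, the normalized gradient direction gains more than putting the entire step on the y axis.

```python
import numpy as np

# Local gradient of the objective: +3 per unit along x, +7 per unit along y.
g = np.array([3.0, 7.0])

# Two candidate unit-length steps.
step_y_only = np.array([0.0, 1.0])      # put the whole "buck" on the y axis
step_gradient = g / np.linalg.norm(g)   # a little bit of x, mostly y

# First-order gain in the objective is the dot product with the gradient.
print("gain, y axis only       :", g @ step_y_only)     # 7.0
print("gain, gradient direction:", g @ step_gradient)   # sqrt(58) ~ 7.62
```

Under the unit-step constraint the gradient direction gains sqrt(3^2 + 7^2), about 7.62, rather than 7, which is exactly the "take a little bit of the x axis" point.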
In fact, if you think about surfaces that are discontinuous in our first derivative, like an absolute value function, you will go down and hit that point where there's nondifferentiability. And if you're running a deterministic algorithm at that point, you can really do something bad. Whereas stochasticity just means it's pretty unlikely that's going to happen, that you're going to hit that point. So it's again, nontrivial to analyze but especially in higher dimensions, also stochasticity, our intuition isn't very good about it but it has properties that kind of are very appealing in high dimensions for a lot of large number of reasons. So it's all part of the mathematics to kind of, that's what's fun to work in the field is that you get to try to understand this mathematics. But long story short, you know, partly empirically, it was discovered stochastic gradient is very effective and theory kind of followed, I'd say, that but I don't see that we're getting clearly out of that. What's the most beautiful, mysterious, a profound idea to you in optimization? I don't know the most. But let me just say that Nesterov's work on Nesterov acceleration to me is pretty surprising and pretty deep. Can you elaborate? Well Nesterov acceleration is just that, suppose that we are going to use gradients to move around in a space. For the reasons I've alluded to, they're nice directions to move. And suppose that I tell you that you're only allowed to use gradients, you're not going to be allowed to use this local person that can only sense kind of the change in the surface. But I'm going to give you kind of a computer that's able to store all your previous gradients. And so you start to learn some something about the surface. And I'm going to restrict you to maybe move in the direction of like a linear span of all the gradients. So you can't kind of just move in some arbitrary direction, right? So now we have a well defined mathematical complexity model. There's certain classes of algorithms that can do that and others that can't. And we can ask for certain kinds of surfaces, how fast can you get down to the optimum? So there's answers to these. So for a smooth convex function, there's an answer, which is one over the number of steps squared. You will be within a ball of that size after k steps. Gradient descent in particular has a slower rate, it's one over k. So you could ask, is gradient descent actually, even though we know it's a good algorithm, is it the best algorithm? And the answer is no. Well, not clear yet, because one over k squared is a lower bound. That's probably the best you can do. Gradient is one over k, but is there something better? And so I think as a surprise to most, Nesterov discovered a new algorithm that has got two pieces to it. It's two gradients and puts those together in a certain kind of obscure way. And the thing doesn't even move downhill all the time. It sometimes goes back uphill. And if you're a physicist, that kind of makes some sense. You're building up some momentum and that is kind of the right intuition, but that intuition is not enough to understand kind of how to do it and why it works. But it does. It achieves one over k squared and it has a mathematical structure and it's still kind of to this day, a lot of us are writing papers and trying to explore that and understand it. So there are lots of cool ideas and optimization, but just kind of using gradients, I think is number one that goes back, you know, 150 years. 
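To make the contrast just described concrete, here is a minimal sketch of plain gradient descent versus Nesterov's accelerated method on a simple smooth convex quadratic (the particular matrix, step size, and iteration count are illustrative choices, not anything discussed above): both methods use only gradients, but the accelerated variant adds a momentum-like extrapolation, is allowed to move uphill occasionally, and converges at the faster one-over-k-squared rate.

```python
import numpy as np

# Ill-conditioned smooth convex quadratic: f(x) = 0.5 * x^T A x, minimized at 0.
A = np.diag([1.0, 100.0])
L = 100.0                                  # Lipschitz constant of the gradient
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

x0 = np.array([1.0, 1.0])
k_max = 200

# Plain gradient descent with step size 1/L: roughly a 1/k rate on this class.
x = x0.copy()
for _ in range(k_max):
    x = x - grad(x) / L
gd_value = f(x)

# Nesterov's accelerated gradient: take the gradient at an extrapolated point;
# the iterates are not monotone (they can briefly move uphill in f),
# but the rate improves to roughly 1/k^2.
x, x_prev = x0.copy(), x0.copy()
for k in range(1, k_max + 1):
    y = x + (k - 1) / (k + 2) * (x - x_prev)   # momentum-like extrapolation
    x_prev = x
    x = y - grad(y) / L
nag_value = f(x)

print("plain gradient descent f(x_k):", gd_value)
print("Nesterov accelerated   f(x_k):", nag_value)   # noticeably smaller here
```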
And then Nesterov, I think, has made a major contribution with this idea. So like you said, gradients themselves are in some sense mysterious. They're not as trivial as... Not as trivial. Coordinate descent is more of a trivial one. You just pick one of the coordinates. That's how we think. That's how our human minds think. And gradients are not that easy for our human mind to grapple with. An absurd question, but what is statistics? So here it's a little bit, it's somewhere between math and science and technology. It's somewhere in that convex hull. So it's a set of principles that allow you to make inferences that have got some reason to be believed, and also principles that allow you to make decisions where you can have some reason to believe you're not going to make errors. So all of that requires some assumptions about what do you mean by an error? What do you mean by the probabilities? But after you start making some of those assumptions, you're led to conclusions that, yes, I can guarantee that if you do this in this way, your probability of making an error will be small. Your probability of continuing to not make errors over time will be small. And the probability that you found something that's real will be high. So decision making is a big part of that. Decision making is a big part. Yeah. So statistics, the short history is that it goes back as a formal discipline 250 years or so. It was called inverse probability, because around that era, probability was developed sort of especially to explain gambling situations. Of course, interesting. So you would say, well, given the state of nature is this, there's a certain roulette board that has a certain mechanism, and what kind of outcomes do I expect to see? And especially if I do things long amounts of time, what outcomes will I see? And the physicists started to pay attention to this. And then people said, well, let's turn the problem around. What if I saw certain outcomes, could I infer what the underlying mechanism was? That's an inverse problem. And in fact, for quite a while, statistics was called inverse probability. That was the name of the field. And I believe that it was Laplace, who was working in Napoleon's government, who needed to do a census of France, learn about the people there. So he went and gathered data and he analyzed that data to determine policy, and said, well, let's call this field that does this kind of thing statistics, because the word state is in there. In French, that's état, but it's the study of data for the state. So anyway, that caught on and it's been called statistics ever since. But by the time it got formalized, it was sort of in the 30s. And around that time, there was game theory and decision theory developed nearby. People in that era didn't think of themselves as either computer science or statistics or control or econ. They were all of the above. And so Von Neumann is developing game theory, but also thinking of that as decision theory. Wald is an econometrician developing decision theory and then turning that into statistics. And so it's all about, here's not just data and you analyze it, here's a loss function. Here's what you care about. Here's the question you're trying to ask. Here is a probability model, and here's the risk you will face if you make certain decisions. And to this day, in most advanced statistical curricula, you teach decision theory as the starting point and then it branches out into the two branches of Bayesian and frequentist.
But that's all about decisions. In statistics, what is the most beautiful, mysterious, maybe surprising idea that you've come across? Yeah, good question. I mean, there's a bunch of surprising ones. There's something that's way too technical for this thing, but something called James Stein estimation, which is kind of surprising and really takes time to wrap your head around. Can you try to maybe... I think I don't want to even want to try. Let me just say a colleague at Steven Stigler at University of Chicago wrote a really beautiful paper on James Stein estimation, which helps to... It's views a paradox. It kind of defeats the mind's attempts to understand it, but you can and Steve has a nice perspective on that. So one of the troubles with statistics is that it's like in physics that are in quantum physics, you have multiple interpretations. There's a wave and particle duality in physics and you get used to that over time, but it still kind of haunts you that you don't really quite understand the relationship. The electron's a wave and electron's a particle. Well the same thing happens here. There's Bayesian ways of thinking and frequentist, and they are different. They sometimes become sort of the same in practice, but they are physically different. And then in some practice, they are not the same at all. They give you rather different answers. And so it is very much like wave and particle duality, and that is something that you have to kind of get used to in the field. Can you define Bayesian and frequentist? Yeah in decision theory you can make, I have a video that people could see. It's called are you a Bayesian or a frequentist and kind of help try to make it really clear. It comes from decision theory. So you know, decision theory, you're talking about loss functions, which are a function of data X and parameter theta. They're a function of two arguments. Okay. Neither one of those arguments is known. You don't know the data a priori. It's random and the parameters unknown. All right. So you have a function of two things you don't know, and you're trying to say, I want that function to be small. I want small loss, right? Well what are you going to do? So you sort of say, well, I'm going to average over these quantities or maximize over them or something so that, you know, I turn that uncertainty into something certain. So you could look at the first argument and average over it, or you could look at the second argument and average over it. That's Bayesian and frequentist. So the frequentist says, I'm going to look at the X, the data, and I'm going to take that as random and I'm going to average over the distribution. So I take the expectation loss under X. Theta is held fixed, right? That's called the risk. And so it's looking at other, all the data sets you could get, right? And say, how well will a certain procedure do under all those data sets? That's called a frequentist guarantee, right? So I think it is very appropriate when like you're building a piece of software and you're shipping it out there and people are using it on all kinds of data sets. You want to have a stamp, a guarantee on it that as people run it on many, many data sets that you never even thought about that 95% of the time it will do the right thing. Perfectly reasonable. The Bayesian perspective says, well, no, I'm going to look at the other argument of the loss function, the theta part, okay? That's unknown and I'm uncertain about it. 
So I could have my own personal probability for what it is, you know, how many tall people are there out there? I'm trying to infer the average height of the population while I have an idea roughly what the height is. So I'm going to average over the theta. So now that loss function as only now, again, one argument's gone, now it's a function of X and that's what a Bayesian does is they say, well, let's just focus on the particular X we got, the data set we got, we condition on that. Conditional on the X, I say something about my loss. That's a Bayesian approach to things. And the Bayesian will argue that it's not relevant to look at all the other data sets you could have gotten and average over them, the frequentist approach. It's really only the data sets you got, right? And I do agree with that, especially in situations where you're working with a scientist, you can learn a lot about the domain and you're really only focused on certain kinds of data and you gathered your data and you make inferences. I don't agree with it though, that, you know, in the sense that there are needs for frequentist guarantees, you're writing software, people are using it out there, you want to say something. So these two things have to got to fight each other a little bit, but they have to blend. So long story short, there's a set of ideas that are right in the middle that are called empirical Bayes. And empirical Bayes sort of starts with the Bayesian framework. It's kind of arguably philosophically more, you know, reasonable and kosher. Write down a bunch of the math that kind of flows from that, and then realize there's a bunch of things you don't know because it's the real world and you don't know everything. So you're uncertain about certain quantities. At that point, ask, is there a reasonable way to plug in an estimate for those things? Okay. And in some cases, there's quite a reasonable thing to do, to plug in, there's a natural thing you can observe in the world that you can plug in and then do a little bit more mathematics and assure yourself it's really good. So based on math or based on human expertise, what's, what, what are good? Oh, they're both going in. The Bayesian framework allows you to put a lot of human expertise in, but the math kind of guides you along that path and then kind of reassures you the end, you could put that stamp of approval under certain assumptions, this thing will work. So you asked the question, what's my favorite, you know, or what's the most surprising, nice idea. So one that is more accessible is something called false discovery rate, which is, you know, you're making not just one hypothesis test or making one decision, you're making a whole bag of them. And in that bag of decisions, you look at the ones where you made a discovery, you announced that something interesting had happened. All right. That's going to be some subset of your big bag. In the ones you made a discovery, which subset of those are bad? Or false, false discoveries. You'd like the fraction of your false discoveries among your discoveries to be small. That's a different criterion than accuracy or precision or recall or sensitivity and specificity. It's a different quantity. Those latter ones are almost all of them have more of a frequentist flavor. They say, given the truth is that the null hypothesis is true. Here's what accuracy I would get, or given that the alternative is true, here's what I would get. So it's kind of going forward from the state of nature to the data. 
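A minimal numerical sketch of the two ways of averaging the loss just described (the Gaussian model, the prior, and all the numbers are illustrative assumptions, not anything from the conversation): the frequentist holds theta fixed and averages the squared-error loss over repeated data sets, which for the sample mean gives the familiar sigma-squared-over-n guarantee; the Bayesian conditions on the one data set actually observed and averages the same loss over a posterior for theta.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 1.0                 # known observation noise
n = 20
theta_true = 1.5            # the "state of nature" (unknown in practice)
x = rng.normal(theta_true, sigma, size=n)
xbar = x.mean()             # the decision we will evaluate: report the sample mean

# Frequentist view: hold theta fixed, average squared-error loss over
# repeated data sets X ~ N(theta, sigma^2)^n.  For the sample mean this
# risk is sigma^2 / n, whatever theta is -- a guarantee over data sets.
freq_risk = sigma**2 / n

# Bayesian view: condition on the one data set we got and average the loss
# over a prior theta ~ N(0, tau^2) updated to a posterior (conjugate normal).
tau = 2.0
post_var = 1.0 / (1.0 / tau**2 + n / sigma**2)
post_mean = post_var * (n * xbar / sigma**2)
bayes_posterior_loss = post_var + (post_mean - xbar)**2   # E[(theta - xbar)^2 | data]

print("frequentist risk of the sample mean:", freq_risk)
print("Bayesian posterior expected loss   :", bayes_posterior_loss)
```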
The Bayesian goes the other direction, from the data back to the state of nature. And that's actually what false discovery rate is. It says, given you made a discovery, okay, that's conditioned on your data, what's the probability of the hypothesis? It's going the other direction. And so the classical frequentist looks at that and says, well, I can't know that, there are some priors needed in that. And the empirical Bayesian goes ahead and plows forward and starts writing down these formulas and realizes at some point, some of those things can actually be estimated in a reasonable way. And so it's kind of, it's a beautiful set of ideas. So this kind of line of argument has come out. It's certainly not mine, but it sort of came out from Robbins around 1960. Brad Efron has written beautifully about this in various papers and books. And the FDR is, you know, Benjamini in Israel, and John Storey did this Bayesian interpretation and so on. And I've kind of absorbed these things over the years and find it a very healthy way to think about statistics. Let me ask you about intelligence, to jump slightly back out into philosophy, perhaps. You said that, maybe you can elaborate, but you said that defining just even the question of what is intelligence is a very difficult question. Is it a useful question? Do you think we'll one day understand the fundamentals of human intelligence and what it means, you know, have good benchmarks for general intelligence that we put before our machines? So I don't work on these topics so much; you're really asking a question for a psychologist, really. And I've studied some, but I don't consider myself at least an expert at this point. You know, a psychologist aims to understand human intelligence, right? And I think many psychologists I know are fairly humble about this. They might try to understand how a baby understands, you know, whether something's a solid or liquid or whether something's hidden or not. And maybe how a child starts to learn the meaning of certain words, what's a verb, what's a noun, and also, you know, slowly but surely trying to figure out things. But humans' ability to take a really complicated environment, reason about it, abstract about it, find the right abstractions, communicate about it, interact and so on is just, you know, really staggeringly rich and complicated. And so, you know, I think in all humility, we don't think we're kind of aiming for that in the near future. A certain psychologist doing experiments with babies in the lab or with people talking has a much more limited aspiration. And you know, Kahneman and Tversky would look at our reasoning patterns, and they're not deeply understanding all of how we do our reasoning, but they're sort of saying, hey, here's some oddities about the reasoning and some things you should think about. But also, as I emphasize in some things I've been writing about, you know, AI, the revolution hasn't happened yet. Yeah. Great blog post. I've been emphasizing that, you know, if you step back and look at intelligent systems of any kind, whatever you mean by intelligence, it's not just the humans or the animals or, you know, the plants or whatever. You know, so a market that brings goods into a city, you know, food to restaurants or something every day, is a system. It's a decentralized set of decisions. Looking at it from far enough away, it's just like a collection of neurons. Every neuron is making its own little decisions, presumably in some way.
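Circling back to the false discovery rate idea above: one standard way to control it in practice is the Benjamini–Hochberg step-up procedure, sketched minimally below (the synthetic p-values, the target level q, and the problem sizes are all illustrative assumptions, not anything from the conversation). You sort the p-values, find the largest one sitting below its scaled threshold, and announce discoveries for everything up to that point; the expected fraction of false discoveries among the discoveries is then at most q.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)

# Synthetic multiple-testing setup: 900 true nulls, 100 real effects.
n_null, n_alt = 900, 100
z = np.concatenate([rng.normal(0.0, 1.0, n_null),   # nothing going on
                    rng.normal(3.0, 1.0, n_alt)])   # real signals
is_alt = np.concatenate([np.zeros(n_null, bool), np.ones(n_alt, bool)])

# One-sided p-values for H0: mean 0 (standard normal survival function).
p = np.array([0.5 * erfc(zi / sqrt(2)) for zi in z])

# Benjamini-Hochberg step-up procedure at target FDR level q:
# sort the p-values, find the largest index k with p_(k) <= q * k / m,
# and announce a discovery for the k smallest p-values.
q = 0.1
m = len(p)
order = np.argsort(p)
below = p[order] <= q * np.arange(1, m + 1) / m
k = int(np.nonzero(below)[0].max()) + 1 if below.any() else 0
discoveries = order[:k]

false = int((~is_alt[discoveries]).sum())
print("discoveries announced:", k)
print("false discoveries among them:", false)
print("realized false discovery proportion:", false / max(k, 1))
```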
And if you step back enough, every little part of an economic system is making all of its decisions. And just like with the brain, who knows what an individual neuron does and what the overall goal is, right? But something happens at some aggregate level, same thing with the economy. People eat in a city and it's robust. It works at all scales, small villages to big cities. It's been working for thousands of years. It works rain or shine, so it's adaptive. So all the kind of, you know, those are adjectives one tends to apply to intelligent systems. Robust, adaptive, you know, you don't need to keep adjusting it, self healing, whatever. Plus not perfect. You know, intelligences are never perfect and markets are not perfect. But I do not believe in this era that you cannot, that you can say, well, our computers are, our humans are smart, but you know, no markets are not, more markets are. So they are intelligent. Now we humans didn't evolve to be markets. We've been participating in them, right? But we are not ourselves a market per se. The neurons could be viewed as the market. There's economic, you know, neuroscience kind of perspective. That's interesting to pursue all that. The point though is, is that if you were to study humans and really be the world's best psychologist studied for thousands of years and come up with the theory of human intelligence, you might have never discovered principles of markets, you know, supply demand curves and you know, matching and auctions and all that. Those are real principles and they lead to a form of intelligence that's not maybe human intelligence. It's arguably another kind of intelligence. There probably are third kinds of intelligence or fourth that none of us are really thinking too much about right now. So if you really, and then all of those are relevant to computer systems in the future. Certainly the market one is relevant right now. Whereas the understanding of human intelligence is not so clear that it's relevant right now. Probably not. So if you want general intelligence, whatever one means by that, or, you know, understanding intelligence in a deep sense and all that, it is definitely has to be not just human intelligence. It's gotta be this broader thing. And that's not a mystery. Markets are intelligent. So, you know, it's definitely not just a philosophical stance to say we've got to move beyond intelligence. That sounds ridiculous. Yeah. But it's not. And in that blog post, you define different kinds of like intelligent infrastructure, AI, which I really like is some of the concepts you've just been describing. Do you see ourselves, if we see earth, human civilization as a single organism, do you think the intelligence of that organism, when you think from the perspective of markets and intelligence infrastructure is increasing, is it increasing linearly? Is it increasing exponentially? What do you think the future of that intelligence? Yeah, I don't know. I don't tend to think, I don't tend to answer questions like that because you know, that's science fiction. I'm hoping to catch you off guard. Well again, because you said it's so far in the future, it's fun to ask and you'll probably, you know, like you said, predicting the future is really nearly impossible. But say as an axiom, one day we create a human level, a superhuman level intelligent, not the scale of markets, but the scale of an individual. What do you think it is, what do you think it would take to do that? 
Or maybe to ask another question is how would that system be different than the biological human beings that we see around us today? Is it possible to say anything interesting to that question or is it just a stupid question? It's not a stupid question, but it's science fiction. Science fiction. And so I'm totally happy to read science fiction and think about it from time in my own life. I loved, there was this like brain in a vat kind of, you know, little thing that people were talking about when I was a student, I remember, you know, imagine that, you know, between your brain and your body, there's a, you know, there's a bunch of wires, right? And suppose that every one of them was replaced with a literal wire. And then suppose that wire was turned in actually a little wireless, you know, there's a receiver and sender. So the brain has got all the senders and receiver, you know, on all of its exiting, you know, axons and all the dendrites down to the body have replaced with senders and receivers. Now you could move the body off somewhere and put the brain in a vat, right? And then you could do things like start killing off those senders and receivers one by one. And after you've killed off all of them, where is that person? You know, they thought they were out in the body walking around the world and they moved on. So those are science fiction things. Those are fun to think about. It's just intriguing about where is, what is thought, where is it and all that. And I think every 18 year old should take philosophy classes and think about these things. And I think that everyone should think about what could happen in society that's kind of bad and all that. But I really don't think that's the right thing for most of us that are my age group to be doing and thinking about. I really think that we have so many more present, you know, first challenges and dangers and real things to build and all that such that, you know, spending too much time on science fiction, at least in public for like this, I think is not what we should be doing. Maybe over beers in private. That's right. Well, I'm not going to broadcast where I have beers because this is going to go on Facebook and I don't want a lot of people showing up there. But yeah, I'll, I love Facebook, Twitter, Amazon, YouTube. I have I'm optimistic and hopeful, but maybe, maybe I don't have grounds for such optimism and hope. But let me ask, you've mentored some of the brightest sort of some of the seminal figures in the field. Can you give advice to people who are undergraduates today? What does it take to take, you know, advice on their journey if they're interested in machine learning and in the ideas of markets from economics and psychology and all the kinds of things that you've exploring? What steps should they take on that journey? Well, yeah, first of all, the door is open and second, it's a journey. I like your language there. It is not that you're so brilliant and you have great, brilliant ideas and therefore that's just, you know, that's how you have success or that's how you enter into the field. It's that you apprentice yourself, you spend a lot of time, you work on hard things, you try and pull back and you be as broad as you can, you talk to lots of people. And it's like entering in any kind of a creative community. There's years that are needed and human connections are critical to it. 
So, you know, I think about, you know, being a musician or being an artist or something, you don't just, you know, immediately from day one, you know, you're a genius and therefore you do it. No, you, you know, practice really, really hard on basics and you be humble about where you are and then, and you realize you'll never be an expert on everything. So you kind of pick and there's a lot of randomness and a lot of kind of luck, but luck just kind of picks out which branch of the tree you go down, but you'll go down some branch. So yeah, it's a community. So the graduate school is, I still think is one of the wonderful phenomena that we have in our, in our world. It's very much about apprenticeship with an advisor. It's very much about a group of people you belong to. It's a four or five year process. So it's plenty of time to start from kind of nothing to come up to something, you know, more, more expertise, and then to start to have your own creativity start to flower, even surprising your own self. And it's a very cooperative endeavor. I think a lot of people think of science as highly competitive and I think in some other fields it might be more so. Here it's way more cooperative than you might imagine. And people are always teaching each other something and people are always more than happy to be clear that, so I feel I'm an expert on certain kinds of things, but I'm very much not expert on lots of other things and a lot of them are relevant and a lot of them are, I should know, but should in some society, you know, you don't. So I'm always willing to reveal my ignorance to people around me so they can teach me things. And I think a lot of us feel that way about our field. So it's very cooperative. I might add it's also very international because it's so cooperative. We see no barriers. And so that the nationalism that you see, especially in the current era and everything is just at odds with the way that most of us think about what we're doing here, where this is a human endeavor and we cooperate and are very much trying to do it together for the, you know, the benefit of everybody. So last question, where and how and why did you learn French and which language is more beautiful English or French? Great question. So first of all, I think Italian is actually more beautiful than French and English. And I also speak that. So I'm married to an Italian and I have kids and we speak Italian. Anyway, all kidding aside, every language allows you to express things a bit differently. And it is one of the great fun things to do in life is to explore those things. So in fact, when I kids or teens or college students ask me what they study, I say, well, do what your heart, where your heart is, certainly do a lot of math. Math is good for everybody, but do some poetry and do some history and do some language too. You know, throughout your life, you'll want to be a thinking person. You'll want to have done that. For me, French I learned when I was, I'd say a late teen, I was living in the middle of the country in Kansas and not much was going on in Kansas with all due respect to Kansas. And so my parents happened to have some French books on the shelf and just in my boredom, I pulled them down and I found this is fun. And I kind of learned the language by reading. And when I first heard it spoken, I had no idea what was being spoken, but I realized I had somehow knew it from some previous life and so I made the connection. 
But then I traveled, and I just love to go beyond my own barriers and my own comfort or whatever. And I found myself on trains in France next to, say, older people who had lived a whole life of their own. And the ability to communicate with them was special, and the ability to also see myself in other people's shoes and have empathy, and kind of work on that language as part of that. So after that kind of experience and also embedding myself in French culture, which is quite amazing, languages are rich, not just because there's something inherently beautiful about it, but it's all the creativity that went into it. So I learned a lot of songs, read poems, read books. And then I was here actually at MIT, where we're doing the podcast today, a young professor, not yet married and not having a lot of friends in the area. So I was kind of a bored person. And I heard a lot of Italians around. There happened to be a lot of Italians at MIT, Italian professors, for some reason. And so I was kind of vaguely understanding what they were talking about. I said, well, I should learn this language too. So I did. And then later met my spouse and Italian became a part of my life. But I go to China a lot these days. I go to Asia, I go to Europe, and every time I go, I kind of am amazed by the richness of human experience. And people don't have any idea, if you haven't traveled, kind of how amazingly rich it is. And I love the diversity. It's not just a buzzword to me. It really means something. I love to embed myself with other people's experiences. And so yeah, learning language is a big part of that. I think I've said in some interview at some point that if I had millions of dollars and infinite time or whatever, what would I really work on if I really wanted to do AI? And for me, that is natural language, and really done right. Deep understanding of language. That's to me an amazingly interesting scientific challenge. One we're very far away on. One we're very far away on, but good natural language people are really invested in that. I think a lot of them see that's where the core of AI is, that if you understand that, you really help human communication, you understand something about the human mind, the semantics that come out of the human mind, and I agree. I think that will take such a long time. So I didn't do that in my career just cause I kind of, I was behind in the early days, I didn't kind of know enough of that stuff. I was at MIT, I didn't learn much language, and it was too late at some point to kind of spend a whole career doing that, but I admire that field, and so in my little way, by learning language, you know, kind of that part of my brain has been trained up. Yann was right. You truly are the Miles Davis of machine learning. I don't think there's a better place to end it. Mike, it was a huge honor talking to you today. Merci beaucoup. All right. It's been my pleasure. Thanks for listening to this conversation with Michael I. Jordan, and thank you to our presenting sponsor, Cash App. Download it, use code LEXPodcast, you'll get $10 and $10 will go to FIRST, an organization that inspires and educates young minds to become science and technology innovators of tomorrow. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman. And now let me leave you with some words of wisdom from Michael I.
Jordan from his blog post titled Artificial Intelligence, the revolution hasn't happened yet, calling for broadening the scope of the AI field. We should embrace the fact that what we are witnessing is the creation of a new branch of engineering. The term engineering is often invoked in a narrow sense in academia and beyond, with overtones of cold, affectless machinery and negative connotations of loss of control by humans. But an engineering discipline can be what we want it to be. In the current era, we have a real opportunity to conceive of something historically new, a human centric engineering discipline. I will resist giving this emerging discipline a name, but if the acronym AI continues to be used, let's be aware of the very real limitations of this placeholder. Let's broaden our scope, tone down the hype, and recognize the serious challenges ahead. Thank you for listening and hope to see you next time.
Michael I. Jordan: Machine Learning, Recommender Systems, and Future of AI | Lex Fridman Podcast #74
The following is a conversation with Marcus Hutter, senior research scientist at Google DeepMind. Throughout his career of research, including with Jürgen Schmidhuber and Shane Legg, he has proposed a lot of interesting ideas in and around the field of artificial general intelligence, including the development of the AIXI model, spelled A-I-X-I, which is a mathematical approach to AGI that incorporates ideas of Kolmogorov complexity, Solomonoff induction, and reinforcement learning. In 2006, Marcus launched the 50,000 Euro Hutter Prize for lossless compression of human knowledge. The idea behind this prize is that the ability to compress well is closely related to intelligence. This, to me, is a profound idea. Specifically, if you can compress the first 100 megabytes or 1 gigabyte of Wikipedia better than your predecessors, your compressor likely has to also be smarter. The intention of this prize is to encourage the development of intelligent compressors as a path to AGI. In conjunction with this podcast's release just a few days ago, Marcus announced a 10x increase in several aspects of this prize, including the money, to 500,000 Euros. The better your compressor works relative to the previous winners, the higher fraction of that prize money is awarded to you. You can learn more about it if you Google simply Hutter Prize. I'm a big fan of benchmarks for developing AI systems, and the Hutter Prize may indeed be one that will spark some good ideas for approaches that will make progress on the path of developing AGI systems. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEX PODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Brokerage services are provided by Cash App Investing, a subsidiary of Square, member SIPC. Since Cash App allows you to send and receive money digitally, peer to peer, security in all digital transactions is very important. Let me mention the PCI data security standard that Cash App is compliant with. I'm a big fan of standards for safety and security. PCI DSS is a good example of that, where a bunch of competitors got together and agreed that there needs to be a global standard around the security of transactions. Now, we just need to do the same for autonomous vehicles and AI systems in general. So again, if you get Cash App from the App Store or Google Play and use the code LEX PODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, one of my favorite organizations that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Marcus Hutter. Do you think of the universe as a computer or maybe an information processing system? Let's go with a big question first. Okay, with a big question first. I think it's a very interesting hypothesis or idea. And I have a background in physics, so I know a little bit about physical theories, the standard model of particle physics and general relativity theory. And they are amazing and describe virtually everything in the universe.
And they're all, in a sense, computable theories. I mean, they're very hard to compute. And they're very elegant, simple theories, which describe virtually everything in the universe. So there's a strong indication that somehow the universe is computable; it's a plausible hypothesis. So what do you think, just like you said, general relativity, quantum field theory, why do you think the laws of physics are so nice and beautiful and simple and compressible? Do you think our universe was designed, or is naturally this way? Are we just focusing on the parts that are especially compressible? Do human minds just enjoy something about that simplicity? And in fact, there's other things that are not so compressible. I strongly believe, and I'm pretty convinced, that the universe is inherently beautiful, elegant and simple and described by these equations. And we're not just picking that. I mean, if there were some phenomena which cannot be neatly described, scientists would try that. And there's biology, which is more messy, but we understand that it's an emergent phenomenon and it's complex systems, but they still follow the same rules of quantum electrodynamics. All of chemistry follows that, and we know that. I mean, we cannot compute everything because we have limited computational resources. No, I think it's not a bias of the humans, but it's objectively simple. I mean, of course, you never know, maybe there's some corners very far out in the universe, or super, super tiny below the nucleus of atoms, or parallel universes, which are not nice and simple, but there's no evidence for that. And we should apply Occam's razor and choose the simplest theory consistent with it. But also it's a little bit self referential. So maybe a quick pause. What is Occam's razor? So Occam's razor says that you should not multiply entities beyond necessity, which sort of, if you translate it to proper English, means, in the scientific context, that if you have two theories or hypotheses or models which equally well describe the phenomenon you study, or the data, you should choose the simpler one. So that's just the principle, sort of, that's not like a provable law, perhaps. Perhaps we'll kind of discuss it and think about it, but what's the intuition of why the simpler answer is the one that is likely to be the more correct descriptor of whatever we're talking about? I believe that Occam's razor is probably the most important principle in science. I mean, of course we need logical deduction and we do experimental design, but science is about finding, understanding the world, finding models of the world. And we can come up with crazy complex models which explain everything but predict nothing. But the simple models seem to have predictive power, and it's a valid question why. And there are two answers to that. You can just accept it, that is the principle of science, and we use this principle and it seems to be successful. We don't know why, but it just happens to be. Or you can try to find another principle which explains Occam's razor. And if we start with the assumption that the world is governed by simple rules, then there's a bias towards simplicity, and applying Occam's razor is the mechanism of finding these rules. And actually in a more quantitative sense, and we come back to that later in terms of Solomonoff induction, you can rigorously prove that. If you assume that the world is simple, then Occam's razor is the best you can do in a certain sense.
So I apologize for the romanticized question, but why do you think, outside of its effectiveness, why do you think we find simplicity so appealing as human beings? Why does E equals MC squared seem so beautiful to us humans? I guess mostly, in general, many things can be explained by an evolutionary argument. And there are some artifacts in humans which are just artifacts and not evolutionarily necessary. But with this beauty and simplicity, it's, I believe, at least the core is about, like science, finding regularities in the world, understanding the world, which is necessary for survival. If I look at a bush and I just see noise, and there is a tiger and it eats me, then I'm dead. But if I try to find a pattern, and we know that humans are prone to find more patterns in data than there are, like the Mars face and all these things, but this bias towards finding patterns, even if they are not there, though it's best, of course, if they are, helps us for survival. Yeah, that's fascinating. I haven't really thought about it; I thought I just loved science, but indeed, in terms of just survival purposes, there is an evolutionary argument for why we find the work of Einstein so beautiful. Maybe a quick small tangent. Could you describe what Solomonoff induction is? Yeah, so that's a theory which I claim, and Ray Solomonoff sort of claimed a long time ago, that this solves the big philosophical problem of induction. And I believe the claim is essentially true. And what it does is the following. So, okay, for the picky listener, induction can be interpreted narrowly and widely. Narrowly means inferring models from data. And widely means also then using these models for doing predictions, so prediction is also part of induction. So I'm a little bit sloppy sort of with the terminology, and maybe that comes from Ray Solomonoff, you know, being sloppy. Maybe I shouldn't say that. He can't complain anymore. So let me explain this theory a little bit in simple terms. So assume you have a data sequence. Make it very simple, the simplest one, say 1, 1, 1, 1, 1, and you see 100 ones. What do you think comes next? The natural answer, I'm gonna speed up a little bit, the natural answer is, of course, you know, one, okay? And the question is why, okay? Well, we see a pattern there, yeah? Okay, there's a one and we repeat it. And why should it suddenly after 100 ones be different? So what we're looking for are simple explanations or models for the data we have. And now the question is, a model has to be presented in a certain language. Which language do we use? In science, we want formal languages, and we can use mathematics, or we can use programs on a computer. So abstractly on a Turing machine, for instance, or it can be a general purpose computer. So, and there are of course lots of models. You can say maybe it's 100 ones and then 100 zeros and 100 ones, that's a model, right? But there are simpler models. There's the model "print 1 in a loop", and it also explains the data. And if you push that to the extreme, you are looking for the shortest program which, if you run this program, reproduces the data you have. It will not stop, it will continue naturally. And this you take for your prediction. And on the sequence of ones, it's very plausible, right, that "print 1 in a loop" is the shortest program. We can give some more complex examples like 1, 2, 3, 4, 5. What comes next? The shortest program is again, you know, a counter. And so that is, roughly speaking, how Solomonoff induction works.
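To make the "shortest program that explains the data" idea concrete, here is a minimal Python sketch. Restricting the program class to "repeat this pattern forever" rules is purely an illustrative assumption; real Solomonoff induction ranges over all programs of a universal machine.

```python
# Toy illustration of the "shortest program" idea: the model class is
# restricted (an assumption for this demo) to programs of the form
# "repeat pattern p forever"; the shortest pattern consistent with the
# data plays the role of the shortest program, and running it forward
# gives the prediction.

def shortest_repeating_pattern(data):
    """Return the shortest pattern p such that repeating p reproduces data."""
    for length in range(1, len(data) + 1):
        pattern = data[:length]
        candidate = (pattern * (len(data) // length + 1))[:len(data)]
        if candidate == data:
            return pattern
    return data  # incompressible under this restricted model class

def predict_next(data, k=5):
    """Predict the next k symbols by running the shortest pattern forward."""
    p = shortest_repeating_pattern(data)
    extended = (p * ((len(data) + k) // len(p) + 1))[:len(data) + k]
    return extended[len(data):]

print(predict_next("11111111"))      # -> '11111'  (shortest pattern is '1')
print(predict_next("12341234123"))   # -> '41234'  (shortest pattern is '1234')
```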
The extra twist is that it can also deal with noisy data. So if you have, for instance, a coin flip, say a biased coin which comes up heads with 60% probability, then it will predict, it will learn and figure this out, and after a while it predicts, oh, the next coin flip will be heads with probability 60%. So it's the stochastic version of that. But the goal is, the dream is always the search for the short program. Yes, yeah. Well, in Solomonoff induction, precisely what you do is, so you combine, so looking for the shortest program is like applying Occam's razor, like looking for the simplest theory. There's also Epicurus' principle, which says, if you have multiple hypotheses which equally well describe your data, don't discard any of them, keep all of them around, you never know. And you can put that together and say, okay, I have a bias towards simplicity, but I don't rule out the larger models. And technically what we do is, we weigh the shorter models higher and the longer models lower. And you use Bayesian techniques: you have a prior, which is precisely two to the minus the complexity of the program, and you weigh all these hypotheses and take this mixture, and then you also get the stochasticity in. Yeah, like many of your ideas, that's just a beautiful idea of weighing based on the simplicity of the program. I love that. That seems to me maybe a very human centric concept. It seems to be a very appealing way of discovering good programs in this world. You've used the term compression quite a bit. I think it's a beautiful idea. Sort of, we just talked about simplicity, and maybe science or just all of our intellectual pursuits is basically the attempt to compress the complexity all around us into something simple. So what does this word mean to you, compression? I essentially have already explained it. So compression means, for me, finding short programs for the data or the phenomenon at hand. You could interpret it more widely, finding simple theories, which can be mathematical theories or maybe even informal, like just in words. Compression means finding short descriptions, explanations, programs for the data. Do you see science as a kind of our human attempt at compression? So we're speaking more generally, because when you say programs, you're kind of zooming in on a particular sort of almost like a computer science, artificial intelligence focus, but do you see all of human endeavor as a kind of compression? Well, at least all of science I see as an endeavor of compression, not all of humanity, maybe. And well, there are also some other aspects of science, like experimental design, right? I mean, we create experiments specifically to get extra knowledge. And that is part of the decision making side of things, but once we have the data, to understand the data is essentially compression. So I don't see any difference between compression, understanding, and prediction. So we're jumping around topics a little bit, but returning back to simplicity, a fascinating concept of Kolmogorov complexity. So in your sense, do most objects in our mathematical universe have high Kolmogorov complexity? And maybe what is, first of all, what is Kolmogorov complexity? Okay, Kolmogorov complexity is a notion of simplicity or complexity, and it takes the compression view to the extreme. So I explained before that if you have some data sequence, just think about a file in a computer, in the end it's just a string of bits.
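Here is a small sketch of the weighted-mixture idea just described: keep all hypotheses around (Epicurus), weigh each by two to the minus its description length (Occam), multiply by the likelihood of the observed data, and predict with the mixture. The tiny biased-coin hypothesis class and the description lengths assigned to each model are assumptions made for illustration only.

```python
import math

# Bayesian mixture with a 2^(-description length) prior, on a toy
# hypothesis class of biased coins. The description lengths are invented
# for the demo; in Solomonoff induction they would be program lengths.
hypotheses = [
    # (name, P(heads), assumed description length in bits)
    ("fair coin",      0.5, 2),
    ("60% heads coin", 0.6, 6),
    ("90% heads coin", 0.9, 6),
]

def mixture_predict(data):
    """P(next flip is heads | data) under the prior-weighted mixture."""
    weights = []
    for _, p_heads, length in hypotheses:
        prior = 2.0 ** (-length)
        likelihood = math.prod(p_heads if x == "H" else 1 - p_heads for x in data)
        weights.append(prior * likelihood)
    total = sum(weights)
    posteriors = [w / total for w in weights]
    return sum(post * p for post, (_, p, _) in zip(posteriors, hypotheses))

print(mixture_predict("HHHTT"))               # little data: stays near the simple 0.5 model
print(mixture_predict("H" * 600 + "T" * 400)) # lots of data: the 60% model dominates
```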
And if you, and we have data compressors, like we compress big files into zip files with certain compressors. And you can also produce self extracting archives. That means an executable which, if you run it, reproduces your original file without needing an extra decompressor. It's just a decompressor plus the archive together in one. And now there are better and worse compressors, and you can ask, what is the ultimate compressor? So what is the shortest possible self extracting archive you could produce for a certain data set, which reproduces the data set? And the length of this is called the Kolmogorov complexity. And arguably that is the information content in the data set. I mean, if the data set is very redundant or very boring, you can compress it very well. So the information content should be low, and, you know, it is low according to this definition. So it's the length of the shortest program that summarizes the data? Yes. And what's your sense of our sort of universe, when we think about the different objects in our universe, the concepts or whatever, at every level, do they have high or low Kolmogorov complexity? So what's the hope? Do we have a lot of hope in being able to summarize much of our world? That's a tricky and difficult question. So as I said before, I believe that the whole universe, based on the evidence we have, is very simple. So it has a very short description. Sorry, to linger on that, the whole universe, what does that mean? Do you mean at the very basic fundamental level, in order to create the universe? Yes, yeah. So you need a very short program and you run it. To get the thing going. To get the thing going, and then it will reproduce our universe. There's a problem with noise. We can come back to that later possibly. Is noise a problem, or is it a bug or a feature? I would say it makes our life as scientists really, really much harder. I mean, think about it: without noise, we wouldn't need all of the statistics. But then maybe we wouldn't feel like there's free will. Maybe we need that for the... This is an illusion, that noise can give you free will. At least in that way, it's a feature. But also, if you don't have noise, you have chaotic phenomena, which are effectively like noise. So we can't get away from statistics even then. I mean, think about rolling dice, and forget about quantum mechanics, and you know exactly how you throw it. But I mean, it's still so hard to compute the trajectory that effectively it is best to model it as coming out with a number with probability one over six. But from this sort of philosophical Kolmogorov complexity perspective, if we didn't have noise, then arguably you could describe the whole universe with the standard model plus general relativity. I mean, we don't have a theory of everything yet, but sort of assuming we are close to it or have it, plus the initial conditions, which may hopefully be simple, and then you just run it and then you would reproduce the universe. But that's spoiled by noise, or by chaotic systems, or by initial conditions which may be complex. So now if we don't take the whole universe but just a subset, just take planet Earth. Planet Earth cannot be compressed into a couple of equations. This is a hugely complex system. So interesting. So when you look at a small window, like, the whole thing might be simple, but when you just take a small window, then... It may become complex, and that may be counterintuitive, but there's a very nice analogy: the library of all books.
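Kolmogorov complexity itself is incomputable, but a practical compressor gives a computable upper bound in the same spirit. A minimal sketch using zlib as a stand-in for "the best compressor we happen to have" (the choice of compressor and inputs is purely illustrative):

```python
import zlib
import random

# Redundant data compresses to something short; random data hardly at all,
# which is the intuition behind "compressed length as information content".
def compressed_length(data: bytes) -> int:
    return len(zlib.compress(data, 9))

boring = b"A" * 10_000                                        # the "all ones" file
patterned = bytes(i % 251 for i in range(10_000))             # simple arithmetic pattern
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10_000))   # essentially incompressible

for name, data in [("boring", boring), ("patterned", patterned), ("noisy", noisy)]:
    print(f"{name:9s} raw={len(data)}  compressed={compressed_length(data)}")
```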
So imagine you have a normal library with interesting books and you go there: great, lots of information and quite complex, yeah? So now I create a library which contains all possible books, say, of 500 pages. So the first book just has A, A, A, A, A over all the pages. The next book is A, A, A and ends with B, and so on. I create this library of all books. I can write a super short program which creates this library. So this library which has all books has zero information content. And you take a subset of this library and suddenly you have a lot of information in there. So that's fascinating. I think one of the most beautiful mathematical objects, that at least today seems to be understudied or under talked about, is cellular automata. What lessons do you draw from sort of the Game of Life for cellular automata, where you start with simple rules, just like you're describing with the universe, and somehow complexity emerges? Do you feel like you have an intuitive grasp on the fascinating behavior of such systems, where, like you said, some chaotic behavior could happen, some complexity could emerge, sometimes it could die out, sometimes very rigid structures appear? Do you have a sense about cellular automata that somehow transfers maybe to the bigger questions of our universe? Yeah, the cellular automata, and especially Conway's Game of Life, are really great because these rules are so simple. You can explain it to every child, and even by hand you can simulate a little bit, and you see these beautiful patterns emerge, and people have proven that it's even Turing complete. You can not just use a computer to simulate the Game of Life, but you can also use the Game of Life to simulate any computer. That is truly amazing. And it's probably the prime example to demonstrate that very simple rules can lead to very rich phenomena. And people sometimes ask, how is chemistry and biology so rich? I mean, this can't be based on simple rules. But no, we know quantum electrodynamics describes all of chemistry. And we come later back to that. I claim intelligence can be explained or described in one single equation, this very rich phenomenon. You asked also whether I understand this phenomenon, and probably not. And there's this saying, you never really understand things, you just get used to them. And I think I got pretty used to cellular automata. So you believe that you understand now why this phenomenon happens? But let me give you a different example. I didn't play too much with Conway's Game of Life, but a little bit more with fractals and with the Mandelbrot set and these beautiful patterns; just look up the Mandelbrot set. And well, when the computers were really slow and I just had a black and white monitor, I programmed my own programs in assembler, too. Assembler, wow. Wow, you're legit. To get these fractals on the screen, and I was mesmerized, and much later, so I returned to this every couple of years, and then I tried to understand what is going on. And you can understand a little bit. So I tried to derive the locations, there are these circles and the apple shape, and then you have smaller Mandelbrot sets recursively in this set. And there's a way to mathematically, by solving high order polynomials, figure out where these centers are and what size they are, approximately. And by sort of mathematically approaching this problem, you slowly get a feeling of why things are like they are, and that sort of is, you know, a first step to understanding why this rich phenomenon emerges. Do you think it's possible, what's your intuition?
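For readers who have not seen it, here is a minimal Game of Life implementation, just to make the "very simple rules, rich behavior" point concrete. The starting pattern is a standard glider; everything else is just the two-line update rule.

```python
from collections import Counter

# Live cells are a set of (x, y) coordinates; the update rule below is the
# entire "physics" of the system.
def step(live):
    """One Game of Life update on a set of live-cell coordinates."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):        # after 4 steps the glider reappears, shifted by (1, 1)
    cells = step(cells)
print(sorted(cells))      # same shape, translated diagonally
```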
Do you think it's possible to reverse engineer and find the short program that generated these fractals, sort of by looking at the fractals? Well, in principle, yes, yeah. So, I mean, in principle, what you can do is you take, you know, any data set, you take these fractals or you take whatever your data set is, whatever you have, say a picture of Conway's Game of Life, and you run through all programs. You take programs of size one, two, three, four, and you run them all in parallel in so called dovetailing fashion. You give them computational resources, the first one 50%, the second one half of that, and so on. You let them run, wait until they halt, give an output, compare it to your data, and if some of these programs produce the correct data, then you stop, and then you have already some program. It may be a long program, because it is fast, and then you continue and you get shorter and shorter programs until you eventually find the shortest program. The interesting thing is, you can never know whether it's the shortest program, because there could be an even shorter program which is just even slower, and you just have to wait, yeah. But asymptotically, and actually after a finite time, you have the shortest program. So this is a theoretical but completely impractical way of finding the underlying structure in every data set, and that is what Solomonoff induction does and Kolmogorov complexity. In practice, of course, we have to approach the problem more intelligently. And then, if you take resource limitations into account, there's, for instance, the field of pseudo random numbers. These are deterministic sequences, but no algorithm which is fast, fast means runs in polynomial time, can detect that it's actually deterministic. So we can produce interesting, I mean, random numbers are maybe not that interesting, but just as an example, we can produce complex looking data and we can then prove that no fast algorithm can detect the underlying pattern. Which, unfortunately, is a big challenge for our search for simple programs in the space of artificial intelligence, perhaps. Yes, it definitely is for artificial intelligence, and it's quite surprising that it's, I can't say easy, I mean, physicists worked really hard to find these theories, but apparently it was possible for human minds to find these simple rules in the universe. It could have been different, right? It could have been different. It's awe inspiring. So let me ask another absurdly big question. What is intelligence in your view? So I have, of course, a definition. I wasn't sure what you were going to say, because you could have just as easily said, I have no clue. Which many people would say, but I'm not modest in this question. So the informal version, which I worked out together with Shane Legg, who cofounded DeepMind, is that intelligence measures an agent's ability to perform well in a wide range of environments. So that doesn't sound very impressive. But these words have been very carefully chosen, and there is a mathematical theory behind that, and we come back to that later. And if you look at this definition by itself, it seems like, yeah, okay, but it seems a lot of things are missing. But if you think it through, then you realize that most, and I claim all, of the other traits, at least of rational intelligence, which we usually associate with intelligence, are emergent phenomena from this definition. Like creativity, memorization, planning, knowledge.
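A rough sketch of the dovetailing scheme just described, under strong simplifying assumptions: the "programs" are three hand-picked toy generators rather than an enumeration of all programs of a universal machine, and the scheduler gives earlier (shorter) candidates an exponentially larger share of the steps, which is the essence of the resource split described above.

```python
TARGET = "111111111111"

def make_candidates():
    def repeat(symbol):            # "print symbol in a loop"
        def gen():
            out = ""
            while True:
                out += symbol
                yield out
        return gen
    def count_up():                # "print 1, 2, 3, ..."
        def gen():
            out, i = "", 1
            while True:
                out += str(i)
                i += 1
                yield out
        return gen
    return [repeat("1")(), count_up()(), repeat("0")()]

def dovetail(candidates, target, max_rounds=12):
    """Advance all candidates in parallel, earlier ones getting more steps,
    and return the index of the first one whose output matches the target."""
    for r in range(max_rounds):
        for i, prog in enumerate(candidates[: r + 1]):
            for _ in range(2 ** (r - i)):      # candidate i gets ~2^(-i) of the budget
                out = next(prog)
                if out == target:
                    return i
    return None

print(dovetail(make_candidates(), TARGET))     # -> 0, the "print 1 in a loop" candidate
```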
You all need that in order to perform well in a wide range of environments. So you don't have to explicitly mention that in a definition. Interesting. So yeah, so consciousness, abstract reasoning, all these kinds of things are just emergent phenomena that help you... can you say the definition again? So multiple environments. Did you mention the word goals? No, but we have an alternative definition. Instead of performing well, you can just replace it by goals. So intelligence measures an agent's ability to achieve goals in a wide range of environments. That's more or less equal. But interesting, because in there, there's an injection of the word goals. So we want to specify there should be a goal. Yeah, but what does perform well mean? It's the same problem. Yeah. There's a little bit of a gray area, but it's much closer to something that could be formalized. In your view, are humans, where do humans fit into that definition? Are they general intelligence systems that are able to perform in, like, how good are they at fulfilling that definition, at performing well in multiple environments? Yeah, that's a big question. I mean, humans are performing best among all species we know of, yeah. Depends. You could say that trees and plants are doing a better job. They'll probably outlast us. Yeah, but they are in a much more narrow environment, right? I mean, you just have a little bit of air pollution and these trees die, and we can adapt, right? We build houses, we build filters, we do geoengineering. So the multiple environment part. Yeah, that is very important, yeah. So that distinguishes narrow intelligence from wide intelligence, also in AI research. So let me ask the Alan Turing question. Can machines think? Can machines be intelligent? So in your view, I have to kind of ask, the answer is probably yes, but I want to kind of hear your thoughts on it. Can machines be made to fulfill this definition of intelligence, to achieve intelligence? Well, we are sort of getting there, and on a small scale, we are already there. The wide range of environments is still missing, but we have self driving cars, we have programs which play Go and chess, we have speech recognition. So that's pretty amazing, but these are narrow environments. But if you look at AlphaZero, that was also developed by DeepMind. I mean, DeepMind got famous with AlphaGo, and then came AlphaZero a year later. That was truly amazing. A reinforcement learning algorithm which is able, just by self play, to play chess and then also Go. And I mean, yes, they're both games, but they're quite different games. And you don't feed them the rules of the game. And the most remarkable thing, which is still a mystery to me, is that usually for any decent chess program, I don't know much about Go, you need opening books and endgame tables and so on, and nothing was put in there. Especially with AlphaZero, the self play mechanism, starting from scratch, being able to learn actually new strategies is... Yeah, it rediscovered all these famous openings within four hours by itself. What I was really happy about, I'm a terrible chess player, but I like the Queen's Gambit, and AlphaZero figured out that this is the best opening. Finally, somebody proved you correct. So yes, to answer your question, yes, I believe that general intelligence is possible. And it also, I mean, it depends how you define it.
Do you say AGI, artificial general intelligence, only refers to it if you achieve human level, or is a subhuman level but quite broad system also general intelligence? So we have to distinguish, or is it only superhuman intelligence, general artificial intelligence? Is there a test in your mind, like the Turing test for natural language or some other test, that would impress the heck out of you, that would kind of cross the line of your sense of intelligence within the framework that you said? Well, the Turing test has been criticized a lot, but I think it's not as bad as some people think. Some people think it's too strong, so it tests not just whether a system is intelligent, but it also has to fake being human, to deceive, which is much harder. And on the other hand, they say it's too weak because it just maybe fakes emotions or intelligent behavior; it's not real. But I don't think that's a big problem. So if a system would pass the Turing test, so a conversation over a terminal with a bot for an hour, or maybe a day or so, and you can fool a human into not knowing whether this is a human or not, so that's the Turing test, I would be truly impressed. And we have this annual competition, the Loebner Prize. And I mean, it started with ELIZA, that was the first conversational program. And what is it called? The Japanese Mitsuku, or so; that's been the winner for the last couple of years. And well. Quite impressive. Yeah, it's quite impressive. And then Google has developed Meena, right? Just recently. That's an open domain conversational bot, just a couple of weeks ago, I think. Yeah, I kind of like the metric that the Alexa Prize has proposed. I mean, maybe it's obvious to you, it wasn't to me, of setting sort of a length of a conversation. Like, you want the bot to be sufficiently interesting that you would want to keep talking to it for, like, 20 minutes. And that's a surprisingly effective metric in aggregate, because really, like, nobody has the patience to be able to talk to a bot that's not interesting and intelligent and witty, and is able to go on to different tangents, jump domains, be able to say something interesting to maintain your attention. And maybe many humans would also fail this test. Unfortunately, just like with autonomous vehicles, with chatbots we also set a bar that's way too high to reach. I said, you know, the Turing test is not as bad as some people believe, but what is really not useful about the Turing test is that it gives us no guidance on how to develop these systems in the first place. Of course, you know, we can develop them by trial and error and, you know, do whatever, and then run the test and see whether it works or not. But a mathematical definition of intelligence gives us, you know, an objective, which we can then analyze by theoretical tools or computationally, and, you know, maybe even prove how close we are. And we will come back to that later with the AIXI model. So, I mentioned compression, right? So in natural language processing, they have achieved amazing results. And one way to test this, of course, you know, is to take the system, you train it, and then you see how well it performs on the task. But a lot of performance measurement is done by so called perplexity, which is essentially the same as complexity or compression length.
So the NLP community develops new systems, and then they measure the compression length, and then they have rankings and leaderboards, because there's a strong correlation between compressing well and the systems performing well at the task at hand. It's not perfect, but it's good enough for them as an intermediate aim. So you mean as a measure, so this is kind of almost returning to the Kolmogorov complexity. So you're saying good compression usually means good intelligence. Yes. So you mentioned you're one of the only people who dared boldly to try to formalize the idea of artificial general intelligence, to have a mathematical framework for intelligence, just like, as we mentioned, termed AIXI, A, I, X, I. So let me ask the basic question. What is AIXI? Okay, so let me first say what it stands for, because... What it stands for, actually, that's probably the more basic question. What it... The first question is usually how it's pronounced, but finally I put it on the website how it's pronounced, and you figured it out. The name comes from AI, artificial intelligence, and the X, I, is the Greek letter xi, which I use for Solomonoff's distribution, for quite stupid reasons which I'm not willing to repeat here in front of the camera. Sure. So it just happened to be more or less arbitrary; I chose the xi. But it also has nice other interpretations. So there are actions and perceptions in this model. An agent has actions and perceptions over time. So this is A index I, X index I. So there's the action at time I, and then followed by the perception at time I. Yeah, we'll go with that. I'll edit out the first part. I'm just kidding. I have some more interpretations. So at some point, maybe five years ago or ten years ago, I discovered in Barcelona, on a big church, engraved in stone, some text, and the word Així appeared there a couple of times. I was very surprised and happy about that. And I looked it up, and it is Catalan, and it means, with some interpretation, that's it, that's the right thing to do. Yeah, eureka. Oh, so it's almost like destined somehow. It came to you in a dream. And similarly, there's a Chinese word, aixi, also written like AIXI if you transcribe it to Pinyin. And the final one is that it's AI crossed with induction, because that is, and that's going more to the content now, so good old fashioned AI is more about planning in a known, deterministic world, and induction is more about, often, IID data and inferring models. And essentially what this AIXI model does is combine these two. And I actually also recently, I think, heard that in Japanese, AI means love. So if you can combine XI somehow with that, I think we can, there might be some interesting ideas there. So AIXI, let's then take the next step. Can you maybe talk at the big level of what is this mathematical framework? Yeah, so it consists essentially of two parts. One is the learning and induction and prediction part, and the other one is the planning part. So let's come first to the learning, induction, prediction part, which essentially I explained already before. So what we need for any agent to act well is that it can somehow predict what happens. I mean, if you have no idea what your actions do, how can you decide which actions are good or not? So you need to have some model of what effect your actions have. So what you do is you have some experience, you build models, like a scientist, of your experience, then you hope these models are roughly correct, and then you use these models for prediction.
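The perplexity-compression connection mentioned above can be made concrete in a few lines: a probabilistic model that assigns probability p to each next symbol needs about -log2(p) bits to encode it, so total code length and perplexity are two views of the same quantity. The character-level "model" below is a crude toy, an assumption for illustration only.

```python
import math

def bits_and_perplexity(text, model_prob):
    """model_prob(history, next_char) -> probability of next_char given history."""
    total_bits = 0.0
    for i, ch in enumerate(text):
        p = model_prob(text[:i], ch)
        total_bits += -math.log2(p)          # ideal code length for this symbol
    bits_per_char = total_bits / len(text)
    perplexity = 2.0 ** bits_per_char        # perplexity = 2^(bits per symbol)
    return total_bits, bits_per_char, perplexity

def toy_model(history, ch):
    # Crude model over a ~32-symbol alphabet: 'e' is common, the rest share the mass.
    return 0.125 if ch == "e" else 0.875 / 31

text = "the essence of intelligence is compression"
print(bits_and_perplexity(text, toy_model))
```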
And the model is, sorry to interrupt, and the model is based on your perception of the world, how your actions will affect that world. That's not... So how do you think about a model? That's not the important part, but it is technically important, but at this stage we can just think about predicting, let's say, stock market data, weather data, or IQ sequences, one, two, three, four, five, what comes next, yeah? So of course our actions affect what we're doing, but I'll come back to that in a second. So, and I'll keep just interrupting. So just to draw a line between prediction and planning, what do you mean by prediction in this way? Is it trying to predict the environment without your long term action in the environment? What is prediction? Okay, if you want to put the actions in now, okay, then let's put them in now, yeah? So... We don't have to put them in now. Yeah, yeah. Scratch it, scratch it, dumb question, okay. So the simplest form of prediction is that you just have data which you passively observe, and you want to predict what happens without interfering. As I said, weather forecasting, stock market, IQ sequences, or just anything, okay? And Solomonoff's theory of induction is based on compression. So you look for the shortest program which describes your data sequence, and then you take this program, run it, it reproduces your data sequence by definition, and then you let it continue running, and then it will produce some predictions. And you can rigorously prove that for any prediction task, this is essentially the best possible predictor. Of course, if there's a prediction task, or a task which is unpredictable, like, you know, fair coin flips, yeah, I cannot predict the next fair coin flip. What Solomonoff does is say, okay, the next head is probably 50%; it's the best you can do. So if something is unpredictable, Solomonoff will also not magically predict it. But if there is some pattern and predictability, then Solomonoff induction will figure that out eventually, and not just eventually, but rather quickly, and you can prove convergence rates, whatever your data is. So there's pure magic in a sense. What's the catch? Well, the catch is that it's not computable, and we come back to that later. You cannot just implement it, even with Google's resources, and run it and predict the stock market and become rich. I mean, Ray Solomonoff already tried that at the time. But so the basic task is you're in the environment, and you're interacting with the environment to try to learn to model that environment, and the model is in the space of all these programs, and your goal is to get a bunch of programs that are simple. Yeah. So let's go to the actions now. But actually, it's good that you asked. Usually I skip this part, although there is also a minor contribution which I did, so the action part, but I usually sort of just jump to the decision part. So let me explain the action part now. Thanks for asking. So you have to modify it a little bit by now not just predicting a sequence which just comes to you, but you have an observation, then you act somehow, and then you want to predict the next observation based on the past observations and your action. Then you take the next action. You don't care about predicting it because you're doing it. Then you get the next observation, and you want, well, before you get it, you want to predict it, again, based on your past action and observation sequence. You just condition extra on your actions.
There's an interesting alternative: that you also try to predict your own actions. If you want. In the past or the future? Your future actions. That's interesting. Yeah. Wait, let me wrap my head around that. I think my brain just broke. We should maybe discuss that later, after I've explained the AIXI model. That's an interesting variation. But that is a really interesting variation, and a quick comment. I don't know if you want to insert that in here, but you're looking at, in terms of observations, you're looking at the entire, the big history, the long history of the observations. Exactly. That's very important. The whole history from birth, sort of, of the agent, and we can come back to that, and also why this is important. Often, you know, in RL, you have MDPs, Markov decision processes, which are much more limiting. Okay. So now we can predict conditioned on actions. So even if you influence the environment, but prediction is not all we want to do, right? We also want to really act in the world. And the question is how to choose the actions. And we don't want to greedily choose the actions, you know, just, you know, what is best in the next time step. And first, I should say, you know, how do we measure performance? So we measure performance by giving the agent reward. That's the so called reinforcement learning framework. So every time step, you can give it a positive reward or negative reward, or maybe no reward. It could be very scarce, right? Like, if you play chess, just at the end of the game, you give plus one for winning or minus one for losing. So in the AIXI framework, that's completely sufficient. So occasionally you give a reward signal and you ask the agent to maximize reward, but not greedily, sort of, you know, the next one, next one, because that's very bad in the long run if you're greedy. So, but over the lifetime of the agent. So let's assume the agent lives for M time steps, or say dies in sort of a hundred years sharp. That's just, you know, the simplest model to explain. So it looks at the future reward sum and asks, what is my action sequence, or actually more precisely my policy, which leads in expectation, because I don't know the world, to the maximum reward sum? Let me give you an analogy. In chess, for instance, we know how to play optimally in theory. It's just a minimax strategy. I play the move which seems best to me under the assumption that the opponent plays the move which is best for him, so worst for me, under the assumption that I again play the best move. And then you have this minimax tree to the end of the game, and then you backpropagate, and then you get the best possible move. So that is the optimal strategy, which von Neumann already figured out a long time ago, for playing adversarial games. Luckily, or maybe unluckily for the theory, it becomes harder. The world is not always adversarial. It can be, if there are other humans, even cooperative, and nature is usually, I mean, inanimate nature is stochastic, you know, things just happen randomly, or don't care about you. So what you have to take into account is the noise, and not necessarily adversariality. So you replace the minimum on the opponent's side by an expectation, which is general enough to include also adversarial cases. So now instead of a minimax strategy, you have an expectimax strategy. So far, so good. So that is well known. It's called sequential decision theory. But the question is, on which probability distribution do you base that?
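A toy sketch of the step just described: the opponent's "min" node of minimax is replaced by an expectation node over a stochastic environment. The tiny two-level tree is made up purely for illustration.

```python
def expectimax(node):
    """node is a number (terminal reward), a ('max', [children]) choice node
    for the agent, or a ('chance', [(prob, child), ...]) node for the environment."""
    if isinstance(node, (int, float)):
        return node
    kind, children = node
    if kind == "max":
        return max(expectimax(child) for child in children)
    if kind == "chance":
        return sum(p * expectimax(child) for p, child in children)
    raise ValueError(kind)

# The agent chooses an action, then the environment responds stochastically.
tree = ("max", [
    ("chance", [(0.5, 10), (0.5, -10)]),   # risky action: expected value 0
    ("chance", [(0.9, 2), (0.1, -2)]),     # safe action: expected value 1.6
])
print(expectimax(tree))   # -> 1.6, so the safe action is preferred
```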
If I have the true probability distribution, like, say, I play backgammon, right? There's dice, and there's certain randomness involved. Yeah, I can calculate the probabilities and feed them into the expectimax, or the sequential decision tree, and come up with the optimal decision if I have enough compute. But for the real world, we don't know that. You know, what is the probability that the driver in front of me brakes? I don't know. It depends on all kinds of things, and especially in new situations, I don't know. So this is this unknown thing about prediction, and that's where Solomonoff comes in. So what you do is, in the sequential decision tree, you just replace the true distribution, which we don't know, by this universal distribution. I didn't explicitly talk about it, but this is used for universal prediction, and you plug it into the sequential decision tree mechanism. And then you get the best of both worlds. You have a long term planning agent, but it doesn't need to know anything about the world, because the Solomonoff induction part learns. Can you explicitly try to describe the universal distribution and how Solomonoff induction plays a role here? I'm trying to understand. So what it does is, in the simplest case, I said, take the shortest program describing your data, run it, and you have a prediction, which would be deterministic. Yes. Okay. But you should not just take the shortest program, but also consider the longer ones, and give them a lower a priori probability. So in the Bayesian framework, you say a priori any distribution, which is a model or a stochastic program, has a certain a priori probability, which is two to the minus the length of this program, and why two to the minus the length, you know, I could explain. So longer programs are punished a priori. And then you multiply it with the so called likelihood function, which is, as the name suggests, how likely this model is given the data at hand. So if you have a very wrong model, it's very unlikely that this model is true, and so it gets a very small number. So even if the model is simple, it gets penalized by that. And what you do is then you take just the sum, or the weighted average, over it. And this gives you a probability distribution. So it's the universal distribution, or Solomonoff distribution. So it's weighed by the simplicity of the program and the likelihood. Yes. It's kind of a nice idea. Yeah. So okay, and then you said you're planning N or M, I forgot the letter, steps into the future. So how difficult is that problem? What's involved there? Okay, so, a basic optimization problem? What are we talking about? Yeah, so you have a planning problem up to horizon M, and that's exponential time in the horizon M, which is, I mean, it's computable, but intractable. I mean, even for chess, it's already intractable to do that exactly, and, you know, for Go even more so. But it could also be a discounted kind of framework, where... Yeah, so having a hard horizon, you know, at 100 years, it's just for simplicity of discussing the model, and also sometimes the math is simpler. But there are lots of variations; it's actually quite an interesting parameter. There's nothing really problematic about it, but it's very interesting. So for instance, you think, no, let's let the parameter M tend to infinity, right? You want an agent which lives forever, right? If you do it normally, you have two problems.
First, the mathematics breaks down, because you have an infinite reward sum, which may give infinity, and getting reward 0.1 every time step gives infinity, and getting reward one every time step gives infinity, so they're equally good. Not really what we want. The other problem is that if you have an infinite life, you can be lazy for as long as you want, for ten years, and then catch up with the same expected reward. And think about yourself, or maybe some friends or so. If they knew they lived forever, why work hard now? Just enjoy your life and then catch up later. So that's another problem with the infinite horizon. And you mentioned, yes, we can go to discounting, but then the standard discounting is so called geometric discounting. So a dollar today is worth about as much as $1.05 tomorrow. So if you do this so called geometric discounting, you have introduced an effective horizon. So the agent is now motivated to look ahead a certain amount of time effectively. It's like a moving horizon. And for any fixed effective horizon, there is a problem to solve which requires a larger horizon. So if I look ahead five time steps, I'm a terrible chess player, right? I need to look ahead longer. If I play Go, I probably have to look ahead even longer. So for every problem, for every horizon, there is a problem which this horizon cannot solve. But I introduced the so called near harmonic horizon, which goes down with one over T rather than exponentially in T, which produces an agent which effectively looks into the future proportionally to its age. So if it's five years old, it plans for five years. If it's 100 years old, it then plans for 100 years. And it's a little bit similar to humans too, right? I mean, children don't plan ahead very long, but then we become adults and we plan ahead longer. Maybe when we get very old, I mean, we know that we don't live forever, maybe then our horizon shrinks again. So that's really interesting. So adjusting the horizon, is there some mathematical benefit to that? Or is it just a nice, I mean, intuitively, empirically, it would probably be a good idea to sort of push the horizon back, extend the horizon, as you experience more of the world. But are there some mathematical conclusions here that are beneficial? With Solomonoff induction, the prediction part, we have extremely strong finite time, finite data results: you have so and so much data, then you lose so and so much. So the theory is really great. With the AIXI model, with the planning part, many results are only asymptotic, which, well, this is... What does asymptotic mean? Asymptotic means you can prove, for instance, that in the long run, if the agent, you know, acts long enough, then, you know, it performs optimally, or some nice thing happens. But you don't know how fast it converges. So it may converge fast, but we're just not able to prove it because it's a difficult problem, or maybe there's a bug in the model so that it's really that slow. So that is what asymptotic means, sort of eventually, but we don't know how fast. And if I give the agent a fixed horizon M, then I cannot prove asymptotic results, right? So, I mean, sort of, if it dies in a hundred years, then in a hundred years it's over, I cannot say eventually. So this is the advantage of the discounting, that I can prove asymptotic results. So just to clarify: okay, I've built up a model, and now I have this way of looking several steps ahead. How do I pick what action I will take?
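A small numerical sketch of the horizon discussion above. The particular "harmonic-style" discount used here (1 over the square of the agent's age plus lookahead) and the half-weight definition of "effective horizon" are illustrative choices, not necessarily the exact forms used in the formal work; the point is only that geometric discounting gives a fixed effective horizon while an age-dependent discount's horizon grows with the agent.

```python
def effective_horizon(weights_from_now):
    """Steps until half of the remaining discounted weight has been spent."""
    total = sum(weights_from_now)
    acc = 0.0
    for k, w in enumerate(weights_from_now, start=1):
        acc += w
        if acc >= total / 2:
            return k
    return len(weights_from_now)

T = 200_000
for age in (10, 100, 1000):
    geometric = [0.95 ** k for k in range(T)]              # same at every age
    harmonic = [1.0 / (age + k) ** 2 for k in range(T)]    # grows with the agent's age
    print(age, effective_horizon(geometric), effective_horizon(harmonic))
```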
It's like with playing chess, right? You do this minimax. In this case here, you do expectimax based on the Solomonoff distribution. You propagate back, and then an action falls out: the action which maximizes the future expected reward under the Solomonoff distribution, and then you just take this action. And then repeat. And then you get a new observation, and you feed it in, this action and observation, and then you repeat. And the reward, and so on. Yeah, and the reward too, yeah. And then maybe you can even predict your own action. I love that idea. But okay, this big framework, I mean, it's kind of a beautiful mathematical framework to think about artificial general intelligence. What does it help you intuit about how to build such systems? Or, maybe from another perspective, what does it help us in understanding AGI? So when I started in the field, I was always interested in two things. One was AGI, the name didn't exist then, it was called general AI or strong AI, and the physics theory of everything. So I switched back and forth between computer science and physics quite often. You said the theory of everything. The theory of everything, yeah. Those are basically the two biggest problems before all of humanity. Yeah, I can explain, if you want, at some later time, why I'm interested in these two questions. Can I ask you, in a small tangent, if one were to be solved, which one would you, if an apple fell on your head and there was a brilliant insight and you could arrive at the solution to one, would it be AGI or the theory of everything? Definitely AGI, because once the AGI problem is solved, I can ask the AGI to solve the other problem for me. Yeah, brilliantly put. Okay, so as you were saying. Okay, so, and the reason why I didn't settle, I mean, this thought that once you have solved AGI, it solves all kinds of other problems, not just the theory of everything problem, but all kinds of problems more useful to humanity, is very appealing to many people. And I had this thought also, but I was quite disappointed with the state of the art of the field of AI. There was some theory about logical reasoning, but I was never convinced that this would fly. And then there were these more heuristic approaches with neural networks, and I didn't like these heuristics. And also I didn't have any good idea myself. So that's the reason why I toggled back and forth quite a while, and even worked four and a half years in a company developing software, something completely unrelated. But then I had this idea about the AIXI model. And so what it gives you is a gold standard. So I have proven that this is the most intelligent agent which anybody could build, in quotation marks, because it's just mathematical and you need infinite compute. But this is the limit, and this is completely specified. It's not just a framework. Every year, tens of frameworks are developed which are just skeletons, and then pieces are missing, and usually these missing pieces turn out to be really, really difficult. And so this is completely and uniquely defined, and we can analyze that mathematically. And we've also developed some approximations. I can talk about that a little bit later. That would be sort of the top down approach, like, say, von Neumann's minimax theory: that's the theoretical optimal play of games, and now we need to approximate it, put heuristics in, prune the tree, blah, blah, blah, and so on. So we can do that also with the AIXI model, but for general AI.
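The loop just described (plan by expectimax against a Bayesian mixture, act, observe, update, repeat) can be sketched at a very high level as follows. Everything here is a stand-in: the two-model class, the rewards, and the short horizon are made up, and the sketch keeps the posterior fixed inside the planning tree, a simplification; the real construction uses the incomputable Solomonoff mixture and re-weights it within hypothetical futures as well.

```python
import random

random.seed(0)

# Each "model": for each action, a distribution over (observation, reward) pairs.
MODELS = [
    {"left": {(0, 1.0): 0.8, (1, 0.0): 0.2}, "right": {(0, 0.0): 0.9, (1, 1.0): 0.1}},
    {"left": {(0, 1.0): 0.1, (1, 0.0): 0.9}, "right": {(0, 0.0): 0.2, (1, 1.0): 0.8}},
]
weights = [0.5, 0.5]          # prior over models
ACTIONS = ["left", "right"]
TRUE_MODEL = MODELS[1]        # the environment the agent actually faces

def mixture(action):
    """Posterior-weighted mixture over (observation, reward) outcomes."""
    dist = {}
    for w, m in zip(weights, MODELS):
        for outcome, p in m[action].items():
            dist[outcome] = dist.get(outcome, 0.0) + w * p
    return dist

def plan(horizon):
    """Expectimax against the mixture: returns (value, best action)."""
    if horizon == 0:
        return 0.0, None
    best = (float("-inf"), None)
    for a in ACTIONS:
        v = sum(p * (rew + plan(horizon - 1)[0])
                for (obs, rew), p in mixture(a).items())
        best = max(best, (v, a))
    return best

for step in range(10):
    _, action = plan(horizon=2)                       # plan, then act
    outcomes = list(TRUE_MODEL[action].items())
    (obs, rew), _ = random.choices(outcomes, weights=[p for _, p in outcomes])[0]
    # Bayesian update of the mixture weights on what actually happened.
    weights = [w * m[action].get((obs, rew), 0.0) for w, m in zip(weights, MODELS)]
    total = sum(weights)
    weights = [w / total for w in weights]
    print(step, action, obs, rew, [round(w, 3) for w in weights])
```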
It can also inspire those, and most researchers go bottom up, right? They have their systems, they try to make them more general, more intelligent. It can inspire in which direction to go. What do you mean by that? So if you have some choice to make, right? So how should I evaluate my system if I can't do cross validation? How should I do my learning if my standard regularization doesn't work well? So the answer is always this: we have a system which does everything, that's AIXI. It's just completely in the ivory tower, completely useless from a practical point of view. But you can look at it and see, ah, yeah, maybe I can take some aspects. And instead of Kolmogorov complexity, you just take some compressor which has been developed so far. And for the planning, well, we have UCT, which has also been used in Go. And at least it inspired me a lot to have this formal definition. And if you look at other fields, like, I always come back to physics because I have a physics background, think about the phenomenon of energy. That was a mysterious concept for a long time, and at some point it was completely formalized, and that really helped a lot. And you can point out a lot of these things which were first mysterious and vague and then have been rigorously formalized. Speed and acceleration were confused, right, until they were formally defined; yeah, there was a time like this. And people who don't have any background often still confuse them. And this AIXI model, or the intelligence definition, which is sort of the dual to it, and we come back to that later, formalizes the notion of intelligence uniquely and rigorously. So in a sense, it serves as kind of the light at the end of the tunnel. So, I mean, there's a million questions I could ask here. So maybe, kind of, okay, let's feel around in the dark a little bit. So there have been, here at DeepMind but in general, a lot of breakthrough ideas, just like we've been saying, around reinforcement learning. So how do you see the progress in reinforcement learning as different? Like, which subset of AIXI does it occupy? Currently, like you said, maybe the Markov assumption is made quite often in reinforcement learning. There are other assumptions made in order to make the system work. What do you see as the difference, the connection, between reinforcement learning and AIXI? And so the major difference is that essentially all other approaches make stronger assumptions. So in reinforcement learning, the Markov assumption is that the next state or next observation only depends on the previous observation, and not the whole history, which makes, of course, the mathematics much easier, rather than dealing with histories. Of course, they profit from it also, because then you have algorithms that run on current computers and do something practically useful. But for general AI, all the assumptions which are made by other approaches, we know already now, they are limiting. So, for instance, usually you need an ergodicity assumption in the MDP framework in order to learn. Ergodicity essentially means that you can recover from your mistakes and that there are no traps in the environment. And if you make this assumption, then essentially you can go back to a previous state, go there a couple of times, and then learn what the statistics and what the state is like, and then in the long run perform well in this state, and there are no fundamental problems. But in real life, we know there can be one single action.
One second of being inattentive while driving a car fast can ruin the rest of my life. I can become quadriplegic or whatever. So, and there's no recovery anymore. So the real world is not ergodic, I always say. There are traps, and there are situations you cannot recover from. And very little theory has been developed for this case. What do you see, in the context of AIXI, as the role of exploration? Sort of, you mentioned in the real world we can get into trouble when we make the wrong decisions and really pay for it. But exploration seems to be fundamentally important for learning about this world, for gaining new knowledge. So, is exploration baked in? Another way to ask it: what are the parameters of AIXI that can be controlled? Yeah, I say the good thing is that there are no parameters to control. Some other people like knobs to control, and you can do that. I mean, you can modify AIXI so that you have some knobs to play with if you want to. But the exploration is directly baked in, and that comes from the Bayesian learning and the long term planning. So these together already imply exploration. You can nicely and explicitly prove that for simple problems like so called bandit problems, where you say, to give a real world example, you have two medical treatments, A and B. You don't know the effectiveness, you try A a little bit, B a little bit, but you don't want to harm too many patients. So you have to sort of trade off exploring versus exploiting, and you can do the mathematics and figure out the optimal strategy. There are Bayesian agents and there are also non Bayesian agents, but it shows that this Bayesian framework, by taking a prior over possible worlds, doing the Bayesian mixture, and then making the Bayes optimal decision with long term planning, that is important, automatically implies exploration, also to the proper extent, not too much exploration and not too little. That is in very simple settings. In the AIXI model, I was also able to prove a self optimizing theorem, or asymptotic optimality theorems, although they're only asymptotic, not finite time bounds. So it seems like the long term planning is really important, the long term part of the planning is really important. And also, I mean, maybe a quick tangent, how important do you think is removing the Markov assumption and looking at the full history? Sort of, intuitively, of course it's important, but is it, like, fundamentally transformative to the entirety of the problem? What's your sense of it? Because we make that assumption quite often; it's just throwing away the past. No, I think it's absolutely crucial. The question is whether there's a way to deal with it in a more heuristic and still sufficiently good way. So I have to come up with an example on the fly: you have some key event in your life, a long time ago, in some city or something, you realized that's a really dangerous street or whatever, and you want to remember that forever, in case you come back there. Kind of a selective kind of memory. So you remember all the important events in the past, but somehow selecting the important ones is... That's very hard. And I'm not concerned about just storing the whole history. You can just calculate: a human life, say 30 or 100 years, doesn't matter, right? How much data comes in through the vision system and the auditory system? You compress it a little bit, in this case lossily, and store it. We soon have the means of just storing it all.
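The two-medical-treatments example above can be sketched as a two-armed Bernoulli bandit. Under the (assumed, illustrative) choice of uniform Beta(1,1) priors and a short horizon, the Bayes-optimal values can be computed exactly by backward recursion over the posterior counts; no exploration bonus is added anywhere, so any preference for the uncertain treatment comes purely from Bayesian long-term planning.

```python
from functools import lru_cache

HORIZON = 20   # number of remaining patients to treat; illustrative

@lru_cache(maxsize=None)
def value(s1, f1, s2, f2, remaining):
    """Maximum expected number of future successes given success/failure counts."""
    if remaining == 0:
        return 0.0
    p1 = (s1 + 1) / (s1 + f1 + 2)      # posterior mean of treatment A
    p2 = (s2 + 1) / (s2 + f2 + 2)      # posterior mean of treatment B
    q1 = p1 * (1 + value(s1 + 1, f1, s2, f2, remaining - 1)) \
         + (1 - p1) * value(s1, f1 + 1, s2, f2, remaining - 1)
    q2 = p2 * (1 + value(s1, f1, s2 + 1, f2, remaining - 1)) \
         + (1 - p2) * value(s1, f1, s2, f2 + 1, remaining - 1)
    return max(q1, q2)

# Treatment A looks decent (30 successes, 20 failures, posterior mean ~0.6);
# treatment B is completely untried (posterior mean 0.5, but very uncertain).
s1, f1, s2, f2 = 30, 20, 0, 0
p1 = (s1 + 1) / (s1 + f1 + 2)
p2 = (s2 + 1) / (s2 + f2 + 2)
q1 = p1 * (1 + value(s1 + 1, f1, s2, f2, HORIZON - 1)) \
     + (1 - p1) * value(s1, f1 + 1, s2, f2, HORIZON - 1)
q2 = p2 * (1 + value(s1, f1, s2 + 1, f2, HORIZON - 1)) \
     + (1 - p2) * value(s1, f1, s2, f2 + 1, HORIZON - 1)
print("posterior means:", round(p1, 3), round(p2, 3))
print("Bayes-optimal Q-values:", round(q1, 3), round(q2, 3))
# The untried treatment can end up with the higher Q-value despite the lower
# posterior mean, purely because of the information its outcome provides.
```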
But you still need the selection for the planning part and the compression for the understanding part. The raw storage I'm really not concerned about, and I think we should just store, if you develop an agent, preferably just store all the interaction history. And then you build, of course, models on top of it, and you compress it, and you are selective, but occasionally you go back to the old data and reanalyze it based on the new experience you have. Sometimes you are in school and you learn all these things you think are totally useless, and much later you realize, oh, they were not so useless as you thought. I'm looking at you, linear algebra. Right. So maybe let me ask about objective functions, because the rewards, it seems, are an important part. The rewards are kind of given to the system. For a lot of people, the specification of the objective function is a key part of intelligence: the agent itself figuring out what is important. What do you think about that? Is it possible, within the AIXI framework, to discover for yourself the reward based on which you should operate? Okay, that will be a long answer. And that is a very interesting question, and I'm asked a lot about this question: where do the rewards come from? And that depends. So, and then I'll give you a couple of answers. So if you want to build agents, now let's start simple. So let's assume we want to build an agent based on the AIXI model which performs a particular task. Let's start with something super simple, like, I mean, super simple, like playing chess or Go or something, yeah? Then the reward is: winning the game is plus one, losing the game is minus one, done. You apply this agent, and if you have enough compute, you let it self play and it will learn the rules of the game and will play perfect chess after a while. Problem solved. Okay, so if you have more complicated problems, then you may believe that you have the right reward, but it's not. So a nice, cute example is the elevator control that is also in Rich Sutton's book, which is a great book, by the way. So you control the elevator, and you think, well, maybe the reward should be coupled to how long people wait in front of the elevator. Long wait is bad. You program it, and you do it, and what happens is the elevator eagerly picks up all the people but never drops them off. So then you realize, oh, maybe the time in the elevator also counts, so you minimize the sum, yeah? And the elevator does that, but never picks up the people on the 10th floor and the top floor, because in expectation it's not worth it. Just let them stay. Yeah. So even in apparently simple problems, you can make mistakes, yeah? And that's what, in more serious contexts, AGI safety researchers consider. So now let's go back to general agents. So assume you want to build an agent which is generally useful to humans, yeah? So you have a household robot, yeah? And it should do all kinds of tasks. So in this case, the human should give the reward on the fly. I mean, maybe it's pre trained in the factory, and there's some sort of internal reward for the battery level or whatever, yeah? But, so, it does the dishes badly, you punish the robot; it does them well, you reward the robot; and then you train it on a new task, yeah, like a child, right? So you need the human in the loop if you want a system which is useful to the human. And as long as these agents stay at a subhuman level, that should work reasonably well, apart from these examples.
It becomes critical if they reach human level. It's like with children: small children you have reasonably well under control, but when they become older, the reward technique doesn't work so well anymore. So then, finally, so these would be agents which are just, you could say, slaves to the humans, yeah? So if you are more ambitious and say, we want to build a new species of intelligent beings, we put them on a new planet and we want them to develop this planet or whatever, so we don't give them any reward, what could we do? And you could try to come up with some reward functions, like it should maintain itself, the robot, it should maybe multiply, build more robots, right? And maybe all kinds of things which you find useful, but that's pretty hard, right? What does self maintenance mean? What does it mean to build a copy? Should it be an exact copy, an approximate copy? And so that's really hard, but Laurent, also at DeepMind, developed a beautiful model. So he just took the AIXI model and coupled the rewards to information gain. So he said the reward is proportional to how much the agent has learned about the world. And you can rigorously, formally, uniquely define that in terms of Kullback Leibler divergences, okay? So if you put that in, you get a completely autonomous agent. And actually, interestingly, for this agent we can prove much stronger results than for the general agent, which is also nice. And if you let this agent loose, it will be, in a sense, the optimal scientist. It is absolutely curious to learn as much as possible about the world. And of course, it will also have a lot of instrumental goals, right? In order to learn, it needs to at least survive, right? A dead agent is not good for anything. So it needs to have self preservation. And if building small helpers to acquire more information helps, it will do that, yeah? If exploration, space exploration or whatever, is necessary, right, to gather information, it will do that. So it has a lot of instrumental goals following from this information gain. And this agent is completely autonomous of us. No rewards necessary anymore. Yeah, of course, it could find a way to game the concept of information and get stuck in that library that you mentioned beforehand, with a very large number of books. The first agent had this problem. It would get stuck in front of an old TV screen which just shows white noise. Yeah, white noise, yeah. But the second version can deal with at least stochasticity. Well. Yeah, what about curiosity? This kind of word, curiosity, creativity: is that kind of the reward function being about getting new information? Is that similar to the idea of kind of injecting exploration for its own sake inside the reward function? Do you find this at all appealing, interesting? I think that's a nice definition: curiosity is reward... sorry, curiosity is exploration for its own sake. Yeah, I would accept that. But most curiosity, well, in humans, and especially in children, is not just for its own sake, but for actually learning about the environment and for behaving better. So I think most curiosity is tied, in the end, to performing better. Well, okay, so if intelligent systems need to have this reward function, let me ask, you're an intelligent system, currently passing the Turing test quite effectively. What's the reward function of our human intelligence, of our existence? What's the reward function that Marcus Hutter is operating under?
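A small sketch of the information-gain reward just described, over a discrete set of candidate world models: the reward for an observation is how far it moves the agent's posterior away from its prior, measured as a KL divergence. The three-model class and the likelihoods are made up for illustration; note how an observation all models explain equally well (pure noise) earns zero reward, which is the point made above about the white-noise trap.

```python
import math

def kl(p, q):
    """KL divergence (in bits) between two discrete distributions."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def information_gain(prior, likelihoods):
    """likelihoods[i] = probability the i-th model assigns to the observation."""
    evidence = sum(w * l for w, l in zip(prior, likelihoods))
    posterior = [w * l / evidence for w, l in zip(prior, likelihoods)]
    return kl(posterior, prior), posterior

prior = [1 / 3, 1 / 3, 1 / 3]
# An observation the three models explain very differently: informative, high reward.
print(information_gain(prior, [0.9, 0.1, 0.01])[0])
# An observation all models explain equally well (white noise): zero reward.
print(information_gain(prior, [0.5, 0.5, 0.5])[0])
```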
Okay, to the first question, the biological reward function is to survive and to spread, and very few humans sort of are able to overcome this biological reward function. But we live in a very nice world where we have lots of spare time and can still survive and spread, so we can develop arbitrary other interests, which is quite interesting. On top of that. On top of that, yeah. But the survival and spreading sort of is, I would say, the goal or the reward function of humans, so that's the core one. I like how you avoided answering the second question, which a good intelligence system would. So my. That, your own meaning of life and the reward function. My own meaning of life and reward function is to find an AGI, to build it. Beautifully put. Okay, let's dissect AIXI even further. So one of the assumptions is kind of, infinity keeps creeping up everywhere, which, what are your thoughts on kind of bounded rationality and sort of the nature of our existence and intelligence systems, is that we're operating always under constraints, under limited time, limited resources. How does that, how do you think about that within the AIXI framework, within trying to create an AGI system that operates under these constraints? Yeah, that is one of the criticisms of AIXI, that it ignores computation completely. And some people believe that intelligence is inherently tied to bounded resources. What do you think about this one point? Do you think it's, do you think the bounded resources are fundamental to intelligence? I would say that an intelligence notion which ignores computational limits is extremely useful. A good intelligence notion which includes these resources would be even more useful, but we don't have that yet. And so look at other fields outside of computer science: computational aspects never play a fundamental role. You develop biological models for cells, something in physics, these theories, I mean, become more and more crazy and harder and harder to compute. Well, in the end, of course, we need to do something with this model, but this is more a nuisance than a feature. And I'm sometimes wondering, if artificial intelligence would not sit in a computer science department but in a philosophy department, then this computational focus would probably be significantly less. I mean, think about it, the induction problem is more in the philosophy department. There's virtually no paper that cares about how long it takes to compute the answer. That is completely secondary. Of course, once we have figured out the first problem, so intelligence without computational resources, then the next and very good question is, could we improve it by including computational resources, but nobody was able to do that so far in an even halfway satisfactory manner. I like that, that in the long run, the right department to belong to is philosophy. That's actually quite a deep idea, or even to at least to think about big picture philosophical questions, big picture questions, even in the computer science department. But you've mentioned approximation. Sort of, there's a lot of infinity, a lot of huge resources needed. Are there approximations to AIXI, within the AIXI framework, that are useful? Yeah, we have developed a couple of approximations. And what we do there is that the Solomonoff induction part, which was to find the shortest program describing your data, we just replace by standard data compressors. And the better the compressors get, the better this part will become.
We focus on a particular compressor called context tree weighting, which is pretty amazing, not so well known. It has beautiful theoretical properties, also works reasonably well in practice. So we use that for the approximation of the induction and the learning and the prediction part. And for the planning part, we essentially just took the ideas from computer Go from 2006. It was Csaba Szepesvári, also now at DeepMind, who developed the so-called UCT algorithm, upper confidence bounds for trees, on top of the Monte Carlo tree search. So we approximate this planning part by sampling. And it's successful on some small toy problems. We don't want to lose the generality, right? And that's sort of the handicap, right? If you want to be general, you have to give up something. So, but this single agent was able to play small games like Kuhn poker and Tic Tac Toe and even Pacman in the same architecture, no change. The agent doesn't know the rules of the game, really nothing, and learns all by itself by playing with these environments. So Jürgen Schmidhuber proposed something called Gödel machines, which is a self improving program that rewrites its own code. Sort of mathematically, philosophically, what's the relationship in your eyes, if you're familiar with it, between AIXI and the Gödel machine? Yeah, I'm familiar with it. He developed it while I was in his lab. Yeah, so the Gödel machine, to explain it briefly, you give it a task. It could be a simple task such as, you know, finding prime factors of numbers, right? You can formally write it down. There's a very slow algorithm to do that. Just try all the factors, yeah. Or play chess, right? Optimally, you write the algorithm, minimax to the end of the game. So you write down what the Gödel machine should do. Then it will take part of its resources to run this program and another part of its resources to improve this program. And when it finds an improved version which provably computes the same answer. So that's the key part, yeah. It needs to prove by itself that this change of program still satisfies the original specification. And if it does so, then it replaces the original program by the improved program. And by definition, it does the same job, but just faster, okay? And then, you know, it improves it over and over. And it's developed in a way that all parts of this Gödel machine can self improve, but it stays provably consistent with the original specification. So from this perspective, it has nothing to do with AIXI. But if you would now put AIXI in as the starting axioms, it would run AIXI, but you know, that takes forever. But then, if it finds a provable speed up of AIXI, it would replace it by this, and this, and this. And maybe eventually it comes up with a model which is still the AIXI model. It cannot be, I mean, just for the knowledgeable reader, AIXI is incomputable, and you can prove that therefore there cannot be a computable exact algorithm. There need to be some approximations, and this is not dealt with by the Gödel machine. So you have to do something about it. But there's the AIXItl model, which is finitely computable, which we could put in. Which part of AIXI is noncomputable? The Solomonoff induction part. The induction, okay, so. But there are ways of getting computable approximations of the AIXI model, so then it's at least computable. It is still way beyond any resources anybody will ever have, but then the Gödel machine could sort of improve it further and further in an exact way.
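Going back to the approximation described a moment ago (a compressor standing in for the induction part, sampling standing in for the planning part), the sketch below shows only the overall agent loop. Everything in it is a deliberately simplified placeholder: a frequency-count predictor instead of context tree weighting, plain Monte Carlo rollouts instead of UCT, and a made-up one-bit environment. It is meant to show the structure, not the real MC-AIXI-CTW algorithm.

```python
import random
from collections import defaultdict

class FrequencyModel:
    """Placeholder predictor: estimates P(percept | last action) from counts."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, action, percept):
        self.counts[action][percept] += 1

    def sample(self, action):
        hist = self.counts[action]
        if not hist:
            return random.choice([0, 1]), 0.0           # unseen action: guess
        percepts, weights = zip(*hist.items())
        p = random.choices(percepts, weights=weights)[0]
        return p, float(p)                               # toy: reward equals percept

def plan(model, actions, horizon=3, rollouts=50):
    """Pick the action whose simulated future (under the learned model) looks best."""
    def rollout(first_action):
        total, a = 0.0, first_action
        for _ in range(horizon):
            _, r = model.sample(a)
            total += r
            a = random.choice(actions)
        return total
    return max(actions, key=lambda a: sum(rollout(a) for _ in range(rollouts)))

# Toy environment: action 1 usually yields percept/reward 1, action 0 does not.
def env(action):
    return 1 if (action == 1 and random.random() < 0.8) else 0

model, actions = FrequencyModel(), [0, 1]
for step in range(200):
    a = plan(model, actions)      # planning by sampling (stand-in for UCT)
    percept = env(a)
    model.update(a, percept)      # learning/prediction (stand-in for CTW)
```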
So is it theoretically possible that the Gödel machine process could improve it? Isn't AIXI already optimal? It is optimal in terms of the reward collected over its interaction cycles, but it takes infinite time to produce one action. And the world continues whether you want it or not. So the model is assuming you had an oracle which solved this problem, and then in the next 100 milliseconds, or the reaction time you need, gives the answer, then AIXI is optimal. It's optimal also in the sense of learning efficiency and data efficiency, but not in terms of computation time. And then the Gödel machine, in theory, but probably not provably, could make it go faster. Yes. Okay, interesting. Those two components are super interesting. The sort of the perfect intelligence combined with self improvement, sort of provable self improvement, since you're always getting the correct answer and you're improving. Beautiful ideas. Okay, so you've also mentioned that, different kinds of things in the chase of solving this reward, sort of optimizing for the goal, interesting human things could emerge. So is there a place for consciousness within AIXI? Where does, maybe you can comment, because I suppose we humans are just another instantiation of AIXI agents and we seem to have consciousness. You say humans are an instantiation of an AIXI agent? Yes. Well, that would be amazing, but I think that's not true even for the smartest and most rational humans. I think maybe we are very crude approximations. Interesting. I mean, I tend to believe, again, I'm Russian, so I tend to believe our flaws are part of the optimal. So we tend to laugh off and criticize our flaws, and I tend to think that that's actually close to an optimal behavior. Well, some flaws, if you think more carefully about it, are actually not flaws, yeah, but I think there are still enough flaws. I don't know. It's unclear. As a student of history, I think all the suffering that we've endured as a civilization, it's possible that that's the optimal amount of suffering we need to endure to minimize longterm suffering. That's your Russian background, I think. That's the Russian. Whether humans are or are not instantiations of an AIXI agent, do you think there's a consciousness or something like it that could emerge in a computational form or framework like AIXI? Let me also ask you a question. Do you think I'm conscious? Yeah, that's a good question. That tie is confusing me, but I think so. You think that makes me unconscious, because it strangles me, or? If an agent were to solve the imitation game posed by Turing, I think it would be dressed similarly to you. Because there's a kind of flamboyant, interesting, complex behavior pattern that sells that you're human and you're conscious. But why do you ask? Was it a yes or was it a no? Yes, I think you're conscious, yes. So, and you explained sort of somehow why, but you infer that from my behavior, right? You can never be sure about that. And I think the same thing will happen with any intelligent agent we develop if it behaves in a way sufficiently close to humans, or maybe even not humans. I mean, maybe a dog is also sometimes a little bit self conscious, right? So if it behaves in a way where we typically attribute consciousness, we would attribute consciousness to these intelligent systems. And, probably, in particular, that of course doesn't answer the question whether it's really conscious. And that's the big hard problem of consciousness. Maybe I'm a zombie.
I mean, not the movie zombie, but the philosophical zombie. Is to you the display of consciousness close enough to consciousness from a perspective of AGI that the distinction of the hard problem of consciousness is not an interesting one? I think we don't have to worry about the consciousness problem, especially the hard problem for developing AGI. I think, you know, we progress. At some point we have solved all the technical problems and this system will behave intelligent and then super intelligent. And this consciousness will emerge. I mean, definitely it will display behavior which we will interpret as conscious. And then it's a philosophical question. Did this consciousness really emerge or is it a zombie which just, you know, fakes everything? We still don't have to figure that out. Although it may be interesting, at least from a philosophical point of view, it's very interesting, but it may also be sort of practically interesting. You know, there's some people saying, if it's just faking consciousness and feelings, you know, then we don't need to be concerned about, you know, rights. But if it's real conscious and has feelings, then we need to be concerned, yeah. I can't wait till the day where AI systems exhibit consciousness because it'll truly be some of the hardest ethical questions of what we do with that. It is rather easy to build systems which people ascribe consciousness. And I give you an analogy. I mean, remember, maybe it was before you were born, the Tamagotchi? Yeah. Freaking born. How dare you, sir? Why, that's the, you're young, right? Yes, that's good. Thank you, thank you very much. But I was also in the Soviet Union. We didn't have any of those fun things. But you have heard about this Tamagotchi, which was, you know, really, really primitive, actually, for the time it was, and, you know, you could raise, you know, this, and kids got so attached to it and, you know, didn't want to let it die and probably, if we would have asked, you know, the children, do you think this Tamagotchi is conscious? They would have said yes. Half of them would have said yes, I would guess. I think that's kind of a beautiful thing, actually, because that consciousness, ascribing consciousness, seems to create a deeper connection. Yeah. Which is a powerful thing. But we'll have to be careful on the ethics side of that. Well, let me ask about the AGI community broadly. You kind of represent some of the most serious work on AGI, as of at least earlier, and DeepMind represents serious work on AGI these days. But why, in your sense, is the AGI community so small or has been so small until maybe DeepMind came along? Like, why aren't more people seriously working on human level and superhuman level intelligence from a formal perspective? Okay, from a formal perspective, that's sort of an extra point. So I think there are a couple of reasons. I mean, AI came in waves, right? You know, AI winters and AI summers, and then there were big promises which were not fulfilled, and people got disappointed. And that narrow AI solving particular problems, which seemed to require intelligence, was always to some extent successful, and there were improvements, small steps. And if you build something which is useful for society or industrial useful, then there's a lot of funding. So I guess it was in parts the money, which drives people to develop a specific system solving specific tasks. But you would think that, at least in university, you should be able to do ivory tower research. 
And that was probably better a long time ago, but even nowadays there's quite some pressure to do applied research or translational research, and it's harder to get grants as a theorist. So that also drives people away. It's maybe also harder to attack the general intelligence problem. So I think enough people, I mean, maybe a small number, were still interested in formalizing intelligence and thinking about general intelligence, but not much came up, right? Well, not much great stuff came up. So what do you think, we talked about the formal, big light at the end of the tunnel, but from the engineering perspective, what do you think it takes to build an AGI system? Is that, and I don't know if that's a stupid question or a distinct question from everything we've been talking about with AIXI, but what do you see as the steps that are necessary to take to start to try to build something? So you want a blueprint now, and then you go off and do it? That's the whole point of this conversation, trying to squeeze that in there. Now, is there, I mean, what's your intuition? Is it in the robotics space, or something that has a body and tries to explore the world? Is it in the reinforcement learning space, like the efforts with AlphaZero and AlphaStar that are kind of exploring how you can solve it through simulation in the gaming world? Is there stuff in sort of all the transformer work in natural language processing, sort of maybe attacking the open domain dialogue? Like, where do you see promising pathways? Let me pick the embodiment maybe. So embodiment is important, yes and no. I don't believe that we need a physical robot walking or rolling around, interacting with the real world, in order to achieve AGI. And I think it's more of a distraction probably than helpful, it's sort of confusing the body with the mind. For industrial applications or near term applications, of course we need robots for all kinds of things, but for solving the big problem, at least at this stage, I think it's not necessary. But the answer is also yes, in that I think the most promising approach is that you have an agent, and that can be a virtual agent in a computer, interacting with an environment, possibly a 3D simulated environment like in many computer games. And you train and learn the agent, even if you don't intend to later put this algorithm in a robot brain, and leave it forever in the virtual reality. Getting experience in, although it's just a simulated, 3D world is possibly, and I say possibly, important to understand things on a similar level as humans do, especially if the agent, or primarily if the agent, needs to interact with the humans. If you talk about objects on top of each other in space and flying and cars and so on, and the agent has no experience with even virtual 3D worlds, it's probably hard to grasp. So if you develop an abstract agent, say we take the mathematical path and we just want to build an agent which can prove theorems and becomes a better and better mathematician, then this agent needs to be able to reason in very abstract spaces, and then maybe sort of putting it into 3D environments, simulated or not, is even harmful. It should sort of, you put it in, I don't know, an environment which it creates itself or so. It seems like you have an interesting, rich, complex trajectory through life in terms of your journey of ideas. So it's interesting to ask what books, technical, fiction, philosophical, books, ideas, people had a transformative effect.
Books are most interesting, because maybe people could also read those books and see if they could be inspired as well. Yeah, luckily I asked for books and not a singular book. It's very hard if I try to pin down one book, and I can do that at the end. So the most, the books which were most transformative for me, or which I can most highly recommend to people interested in AI. Both perhaps. Yeah, yeah, both, both, yeah, yeah. I would always start with Russell and Norvig, Artificial Intelligence: A Modern Approach. That's the AI bible. It's an amazing book. It's very broad. It covers all approaches to AI. And even if you focus on one approach, I think that is the minimum you should know about the other approaches out there. So that should be your first book. Fourth edition should be coming out soon. Oh, okay, interesting. There's a deep learning chapter now, so there must be. Written by Ian Goodfellow, okay. And then the next book I would recommend, the Reinforcement Learning book by Sutton and Barto. That's a beautiful book. If there's any problem with the book, it's that it makes RL feel and look much easier than it actually is. It's a very gentle book. It's very nice to read, with exercises to do. You can very quickly get some RL systems to run, you know, very toy problems, but it's a lot of fun. And in a couple of days you feel you know what RL is about, but it's much harder than the book. Yeah. Oh, come on now, it's an awesome book. Yeah, it is, yeah. And maybe, I mean, there's so many books out there. If you like the information theoretic approach, then there's the Kolmogorov Complexity book by Li and Vitányi, but probably, you know, some short article is enough. You don't need to read a whole book, but it's a great book. And if you have to mention one all time favorite book, it's of a different flavor. That's a book which is used in the International Baccalaureate for high school students in several countries. That's from Nicholas Alchin, Theory of Knowledge, second edition or first, not the third, please. The third one, they took out all the fun. Okay. So this asks all the interesting, or to me interesting, philosophical questions about how we acquire knowledge from all perspectives, from math, from art, from physics, and asks how can we know anything? And the book is called Theory of Knowledge. From which, is this almost like a philosophical exploration of how we get knowledge from anything? Yes, yeah, I mean, can religion tell us, you know, something about the world? Can science tell us something about the world? Can mathematics, or is it just playing with symbols? And, you know, it's open ended questions. And, I mean, it's for high school students, so they have then resources from Hitchhiker's Guide to the Galaxy and from Star Wars and The Chicken Crossed the Road, yeah. And it's fun to read, but it's also quite deep. If you could relive one day of your life over again, because it made you truly happy, or maybe, like we said with the books, it was truly transformative, what day, what moment would you choose? Does something pop into your mind? Does it need to be a day in the past, or can it be a day in the future? Well, space time is an emergent phenomenon, so it's all the same anyway. Okay. Okay, from the past. You're really good at saying from the future, I love it. No, I will tell you from the future, okay. So from the past, I would say when I discovered my AIXI model.
I mean, it was not in one day, but it was one moment where I realized Kolmogorov complexity, and I didn't even know that it existed, but I discovered sort of this compression idea myself. But immediately I knew I can't be the first one, but I had this idea. And then I knew about sequential decision theory, and I knew if I put it together, this is the right thing. And yeah, still when I think back about this moment, I'm super excited about it. Was there any more detail and context to that moment? Did an apple fall on your head? So it was like, if you look at Ian Goodfellow talking about GANs, there was beer involved. Is there some more context of what sparked your thought, or was it just? No, it was much more mundane. So I worked in this company. So in this sense, the four and a half years was not completely wasted. And I worked on an image interpolation problem, and I developed quite neat new interpolation techniques, and they got patented, which happens quite often. I went sort of overboard and thought about, yeah, that's pretty good, but it's not the best. So what is the best possible way of doing interpolation? And then I thought, yeah, you want the simplest picture, which, if you coarse grain it, recovers your original picture. And then I thought about the simplicity concept more in quantitative terms, and then everything developed. And somehow that beautiful mix of also being a physicist and thinking about the big picture of it then led you to probably think big with AIXI. So as a physicist, I was probably trained to not always think in computational terms, just ignore that and think about the fundamental properties which you want to have. So what about, if you could relive one day in the future, what would that be? When I solve the AGI problem. In practice. So in theory, I have solved it with the AIXI model, but in practice. And then I ask the first question. What would be the first question? What's the meaning of life? I don't think there's a better way to end it. Thank you so much for talking today. It's a huge honor to finally meet you. Yeah, thank you too. It was a pleasure of mine too. And now, let me leave you with some words of wisdom from Albert Einstein: The measure of intelligence is the ability to change. Thank you for listening and hope to see you next time.
Marcus Hutter: Universal Artificial Intelligence, AIXI, and AGI | Lex Fridman Podcast #75
The following is a conversation with John Hopfield, professor at Princeton, whose life's work weaved beautifully through biology, chemistry, neuroscience, and physics. Most crucially, he saw the messy world of biology through the piercing eyes of a physicist. He's perhaps best known for his work on associative neural networks, now known as Hopfield networks, that were one of the early ideas that catalyzed the development of the modern field of deep learning. As his 2019 Franklin Medal in Physics Award states, he applied concepts of theoretical physics to provide new insights on important biological questions in a variety of areas, including genetics and neuroscience with significant impact on machine learning. And as John says in his 2018 article titled, Now What?, his accomplishments have often come about by asking that very question, now what? And often responding by a major change of direction. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter, and Lex Friedman, spelled F R I D M A M. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LexPodcast. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of fractional orders is to me an algorithmic marvel. So big props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier. So again, if you get Cash App from the App Store, Google Play, and use code LexPodcast, you'll get $10, and Cash App will also donate $10 to First, one of my favorite organizations that is helping advance robotics and STEM education for young people around the world. And now here's my conversation with John Hopfield. What difference between biological neural networks and artificial neural networks is most captivating and profound to you? At the higher philosophical level, let's not get technical just yet. But one of the things that very much intrigues me is the fact that neurons have all kinds of components, properties to them. And in evolutionary biology, if you have some little quirk in how a molecule works or how a cell works, and it can be made use of, evolution will sharpen it up and make it into a useful feature rather than a glitch. And so you expect in neurobiology for evolution to have captured all kinds of possibilities of getting neurons, of how you get neurons to do things for you. And that aspect has been completely suppressed in artificial neural networks. So the glitches become features in the biological neural network. They can. Look, let me take one of the things that I used to do research on. If you take things which oscillate, they have rhythms which are sort of close to each other. Under some circumstances, these things will have a phase transition and suddenly the rhythm will, everybody will fall into step. 
There was a marvelous physical example of that in the Millennium Bridge across the Thames River, about, built about 2001. And pedestrians walking across, pedestrians don't walk synchronized, they don't walk in lockstep. But they're all walking about the same frequency and the bridge could sway at that frequency and the slight sway made pedestrians tend a little bit to lock into step and after a while, the bridge was oscillating back and forth and the pedestrians were walking in step to it. And you could see it in the movies made out of the bridge. And the engineers made a simple minor mistake. They assume when you walk, it's step, step, step and it's back and forth motion. But when you walk, it's also right foot left with side to side motion. And it's the side to side motion for which the bridge was strong enough, but it wasn't stiff enough. And as a result, you would feel the motion and you'd fall into step with it. And people were very uncomfortable with it. They closed the bridge for two years while they built stiffening for it. Now, nerve cells produce action potentials. You have a bunch of cells which are loosely coupled together producing action potentials at the same rate. There'll be some circumstances under which these things can lock together. Other circumstances in which they won't. Well, if they're fired together, you can be sure that other cells are gonna notice it. So you can make a computational feature out of this in an evolving brain. Most artificial neural networks don't even have action potentials, let alone have the possibility for synchronizing them. And you mentioned the evolutionary process. So the evolutionary process that builds on top of biological systems leverages the weird mess of it somehow. So how do you make sense of that ability to leverage all the different kinds of complexities in the biological brain? Well, look, in the biological molecule level, you have a piece of DNA which encodes for a particular protein. You could duplicate that piece of DNA and now one part of it can code for that protein, but the other one could itself change a little bit and thus start coding for a molecule which is slightly different. Now, if that molecule was just slightly different, had a function which helped any old chemical reaction which was important to the cell, you would go ahead and let that try, and evolution would slowly improve that function. And so you have the possibility of duplicating and then having things drift apart. One of them retain the old function, the other one do something new for you. And there's evolutionary pressure to improve. Look, there isn't in computers too, but improvement has to do with closing some companies and opening some others. The evolutionary process looks a little different. Yeah, similar timescale perhaps. Much shorter in timescale. Companies close, yeah, go bankrupt and are born, yeah, shorter, but not much shorter. Some companies last a century, but yeah, you're right. I mean, if you think of companies as a single organism that builds and you all know, yeah, it's a fascinating dual correspondence there between biological organisms. And companies have difficulty having a new product competing with an old product. When IBM built its first PC, you probably read the book, they made a little isolated internal unit to make the PC. And for the first time in IBM's history, they didn't insist that you build it out of IBM components. 
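As a small aside on the phase-locking just described, the classic toy model for oscillators with similar rhythms falling into step is the Kuramoto model. The sketch below is only an illustration of that mechanism; the parameter values are arbitrary and it is not meant as a model of the bridge or of neurons.

```python
import math, random

# Kuramoto model: N oscillators with similar natural frequencies, weakly coupled.
N, K, dt = 50, 1.5, 0.01
freqs = [1.0 + random.gauss(0, 0.1) for _ in range(N)]      # similar rhythms
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def order_parameter(ph):
    # Magnitude near 1 means the population is marching in step.
    re = sum(math.cos(p) for p in ph) / len(ph)
    im = sum(math.sin(p) for p in ph) / len(ph)
    return math.hypot(re, im)

for step in range(5000):
    # d(phase_i)/dt = freq_i + (K/N) * sum_j sin(phase_j - phase_i)
    coupling = [K / N * sum(math.sin(q - p) for q in phases) for p in phases]
    phases = [p + dt * (w + c) for p, w, c in zip(phases, freqs, coupling)]

print(order_parameter(phases))   # with enough coupling K, this climbs toward 1
```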
But they understood that they could get into this market, which is a very different thing by completely changing their culture. And biology finds other markets in a more adaptive way. Yeah, it's better at it. It's better at that kind of integration. So maybe you've already said it, but what to use the most beautiful aspect or mechanism of the human mind? Is it the adaptive, the ability to adapt as you've described, or is there some other little quirk that you particularly like? Adaptation is everything when you get down to it. But the difference, there are differences between adaptation where your learning goes on only over generations and over evolutionary time, where your learning goes on at the time scale of one individual who must learn from the environment during that individual's lifetime. And biology has both kinds of learning in it. And the thing which makes neurobiology hard is that a mathematical system, as it were, built on this other kind of evolutionary system. What do you mean by mathematical system? Where's the math and the biology? Well, when you talk to a computer scientist about neural networks, it's all math. The fact that biology actually came about from evolution, and the fact that biology is about a system which you can build in three dimensions. If you look at computer chips, computer chips are basically two dimensional structures, maybe 2.1 dimensions, but they really have difficulty doing three dimensional wiring. Biology is, the neocortex is actually also sheet like, and it sits on top of the white matter, which is about 10 times the volume of the gray matter and contains all what you might call the wires. But there's a huge, the effect of computer structure on what is easy and what is hard is immense. And biology does, it makes some things easy that are very difficult to understand how to do computationally. On the other hand, you can't do simple floating point arithmetic because it's awfully stupid. And you're saying this kind of three dimensional complicated structure makes, it's still math. It's still doing math. The kind of math it's doing enables you to solve problems of a very different kind. That's right, that's right. So you mentioned two kinds of adaptation, the evolutionary adaptation and the adaptation or learning at the scale of a single human life. Which do you, which is particularly beautiful to you and interesting from a research and from just a human perspective? And which is more powerful? I find things most interesting that I begin to see how to get into the edges of them and tease them apart a little bit and see how they work. And since I can't see the evolutionary process going on, I'm in awe of it. But I find it just a black hole as far as trying to understand what to do. And so in a certain sense, I'm in awe of it, but I couldn't be interested in working on it. The human life's time scale is however thing you can tease apart and study. Yeah, you can do, there's developmental neurobiology which understands how the connections and how the structure evolves from a combination of what the genetics is like and the real, the fact that you're building a system in three dimensions. In just days and months, those early days of a human life are really interesting. They are and of course, there are times of immense cell multiplication. There are also times of the greatest cell death in the brain is during infancy. It's turnover. So what is not effective, what is not wired well enough to use at the moment, throw it out. It's a mysterious process. 
From, let me ask, from what field do you think the biggest breakthrough is in understanding the mind will come in the next decades? Is it neuroscience, computer science, neurobiology, psychology, physics, maybe math, maybe literature? Well, of course, I see the world always through a lens of physics. I grew up in physics and the way I pick problems is very characteristic of physics and of an intellectual background which is not psychology, which is not chemistry and so on and so on. Yeah, both of your parents are physicists. Both of my parents were physicists and the real thing I got out of that was a feeling that the world is an understandable place and if you do enough experiments and think about what they mean and structure things so you can do the mathematics of the, relevant to the experiments, you ought to be able to understand how things work. But that was, that was a few years ago. Did you change your mind at all through many decades of trying to understand the mind, of studying in different kinds of ways? Not even the mind, just biological systems. You still have hope that physics, that you can understand? There's a question of what do you mean by understand? Of course. When I taught freshman physics, I used to say, I wanted to get physics to understand the subject, to understand Newton's laws. I didn't want them simply to memorize a set of examples to which they knew the equations to write down to generate the answers. I had this nebulous idea of understanding so that if you looked at a situation, you could say, oh, I expect the ball to make that trajectory or I expect some intuitive notion of understanding and I don't know how to express that very well and I've never known how to express it well. And you run smack up against it when you do these, look at these simple neural nets, feed forward neural nets, which do amazing things and yet, you know, contain nothing of the essence of what I would have felt was understanding. Understanding is more than just an enormous lookup table. Let's linger on that. How sure you are of that? What if the table gets really big? So, I mean, asked another way, these feed forward neural networks, do you think they'll ever understand? Could answer that in two ways. I think if you look at real systems, feedback is an essential aspect of how these real systems compute. On the other hand, if I have a mathematical system with feedback, I know I can unlayer this and do it, but I have an exponential expansion in the amount of stuff I have to build if I can resolve the problem that way. So feedback is essential. So we can talk even about recurrent neural nets, so recurrence, but do you think all the pieces are there to achieve understanding through these simple mechanisms? Like back to our original question, what is the fundamental, is there a fundamental difference between artificial neural networks and biological or is it just a bunch of surface stuff? Suppose you ask a neurosurgeon, when is somebody dead? Yeah. So we'll probably go back to saying, well, I can look at the brain rhythms and tell you this is a brain which has never could have functioned again. This is one of the, this other one is one of the stuff we treat it well is still recoverable. And then just do that by some electrodes looking at simple electrical patterns, which don't look in any detail at all what individual neurons are doing. These rhythms are utterly absent from anything which goes on at Google. Yeah, but the rhythms. But the rhythms what? 
So, well, that's like comparing, okay, I'll tell you, it's like you're comparing the greatest classical musician in the world to a child first learning to play. The question I'm at, but they're still both playing the piano. I'm asking, is there, will it ever go on at Google? Do you have a hope? Because you're one of the seminal figures in both launching both disciplines, both sides of the river. I think it's going to go on generation after generation. The way it has where what you might call the AI computer science community says, let's take the following. This is our model of neurobiology at the moment. Let's pretend it's good enough and do everything we can with it. And it does interesting things. And after a while it sort of grinds into the sand and you say, ah, something else is needed for neurobiology. And some other grand thing comes in and enables you to go a lot further. What will go into the sand again? And I think it could be generations of this evolution. I don't know how many of them. And each one is going to get you further into what a brain does. And in some sense, past the Turing test longer and in more broad aspects. And how many of these are going to have to be before you say, I've made something, I've made a human, I don't know. But your sense is it might be a couple. My sense is it might be a couple more. Yeah. And going back to my brainwaves as it were. Yes, from the AI point of view, they would say, ah, maybe these are an epiphenomenon and not important at all. The first car I had, a real wreck of a 1936 Dodge, go above about 45 miles an hour and the wheels would shimmy. Yeah. Good speedometer that. Now, nobody designed the car that way. The car is malfunctioning to have that. But in biology, if it were useful to know when are you going more than 45 miles an hour, you just capture that. And you wouldn't worry about where it came from. Yeah. It's going to be a long time before that kind of thing, which can take place in large complex networks of things is actually used in the computation. Look, how many transistors are there in your laptop these days? Actually, I don't know the number. It's on the scale of 10 to the 10. I can't remember the number either. Yeah. And all the transistors are somewhat similar. And most physical systems with that many parts, all of which are similar, have collective properties. Yes. Sound waves in air, earthquakes, what have you, have collective properties. Weather. There are no collective properties used in artificial neural networks, in AI. Yeah, it's very. If biology uses them, it's going to take us to more generations of things for people to actually dig in and see how they are used and what they mean. See, you're very right. We might have to return several times to neurobiology and try to make our transistors more messy. Yeah, yeah. At the same time, the simple ones will conquer big aspects. And I think one of the most, biggest surprises to me was how well learning systems because they're manifestly nonbiological, how important they can be actually, and how important and how useful they can be in AI. So if we can just take a stroll to some of your work. If we can just take a stroll to some of your work that is incredibly surprising, that it works as well as it does, that launched a lot of the recent work with neural networks. If we go to what are now called Hopfield networks, can you tell me what is associative memory in the mind for the human side? Let's explore memory for a bit. 
Okay, what you mean by associative memory is, ah, you have a memory of each of your friends. Your friend has all kinds of properties, from what they look like, what their voice sounds like, to where they went to college, where you met them, go on and on, what science papers they've written. And if I start talking about a 5 foot 10, wiry cognitive scientist who's got a very bad back, it doesn't take very long for you to say, oh, he's talking about Jeff Hinton. I never mentioned the name or anything very particular. But somehow a few facts that are associated with a particular person enable you to get a hold of the rest of the facts. Or not the rest of them, another subset of them. And it's this ability to link things together, link experiences together, which goes under the general name of associative memory. And a large part of intelligent behavior is actually just large associative memories at work, as far as I can see. What do you think is the mechanism of how it works? What do you think is the mechanism of how it works in the mind? Is it a mystery to you still? Do you have inklings of how this essential thing for cognition works? What I made 35 years ago was, of course, a crude physics model, to actually enable you to understand, my old sense of understanding as a physicist, because you could say, ah, I understand why this goes to stable states. It's like things going downhill. And that gives you something with which to think in physical terms rather than only in mathematical terms. So you've created these associative artificial networks. That's right. Now, if you look at what I did, I didn't at all describe a system which gracefully learns. I described a system in which you could understand how learning could link things together, how very crudely it might learn. One of the things which intrigues me as I reinvestigate that system now to some extent is, look, I see you, I'll see you every second for the next hour or what have you. Each look at you is a little bit different. I don't store all those second by second images. I don't store 3,000 images. I somehow compact this information. So I now have a view of you which I can use. It doesn't slavishly remember anything in particular, but it compacts the information into useful chunks, and somehow these chunks, which are not just activities of neurons, but bigger things than that, are the real entities which are useful to you. Which are useful to you. Useful to you to describe, to compress this information coming at you. And you have to compress it in such a way that if the information comes in just like this again, I don't bother to rewrite it, or efforts to rewrite it simply do not yield anything, because those things are already written. And that needs to be not, look this up, have I stored it somewhere already? There'll be something which is much more automatic in the machine hardware. Right, so in the human mind, how complicated is that process do you think? So you've created, it feels weird to be sitting with John Hopfield calling them Hopfield networks, but. It is weird. Yeah, but nevertheless, that's what everyone calls them. So here we are. So that's a simplification. That's what a physicist would do. You and Richard Feynman sat down and talked about associative memory. Now, if you look at the mind where you can't quite simplify it so perfectly, do you think that? Well, let me backtrack just a little bit. Yeah. Biology is about dynamical systems. Computers are dynamical systems.
You can ask, if you want to model biology, if you want to model neurobiology, what is the time scale? There's a dynamical system in which, on a fairly fast time scale, you could say the synapses don't change much during this computation, so I'll think of the synapses as fixed and just do the dynamics of the activity. Or you can say, the synapses are changing fast enough that I have to have the synaptic dynamics working at the same time as the system dynamics in order to understand the biology. Most, if you look at the feedforward artificial neural nets, they're all done as learnings. First of all, I spend some time learning, not performing, and then I turn off learning and I perform. Right. That's not biology. And so as I look more deeply at neurobiology, even as associative memory, I've got to face the fact that the dynamics of the synapse change is going on all the time. And I can't just get by by saying, I'll do the dynamics of activity with fixed synapses. Yeah. So the synaptic, the dynamics of the synapses, is actually fundamental to the whole system. Yeah, yeah. And there's nothing necessarily separating the time scales. When the time scales can be separated, it's neat from the physicist's or the mathematician's point of view, but it's not necessarily true in neurobiology. So you're kind of dancing beautifully between showing a lot of respect to physics and then also saying that physics cannot quite reach the complexity of biology. So where do you land? Or do you continuously dance between the two points? I continuously dance between them, because my whole notion of understanding is that you can describe to somebody else how something works in ways which are honest and believable and still not describe all the nuts and bolts in detail. Weather. I can describe weather as 10 to the 32 molecules colliding in the atmosphere. I can simulate weather that way if I have a big enough machine. I'll simulate it accurately. It's no good for understanding. If I want to understand things, I want to understand things in terms of wind patterns, hurricanes, pressure differentials, and so on, all things as they're collective. And the physicist in me always hopes that biology will have some things that can be said about it which are both true and for which you don't need all the molecular details of the molecules colliding. That's what I mean, from the roots of physics, by understanding. So what did, again, sorry, Hopfield networks help you understand? What insight did they give us about memory, about learning? They didn't give insights about learning. They gave insights about how things, having been learned, could be expressed, how, having learned a picture of you, a picture of you reminds me of your name. But it didn't describe a reasonable way of actually doing the learning. They only said, if you had previously learned the connections for this kind of pattern, you would now be able to behave in a physical way, to say, ah, if I put part of the pattern in here, the other part of the pattern will complete over here. I could understand that as physics, if the right learning stuff had already been put in. And it could understand why then putting in a picture of somebody else would generate something else over here. But it did not have a reasonable description of the learning that was going on. It did not have a reasonable description of the learning process. But even, so forget learning.
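As a rough illustration of the pattern completion just described, here is a minimal sketch of an associative memory: a few binary patterns are stored with a Hebbian outer-product rule, and a corrupted cue flows "downhill" to the stored pattern under simple asynchronous updates. The sizes and patterns are arbitrary toy choices, not anything from the original paper.

```python
import random

N = 100
patterns = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(3)]

# Hebbian weights: strengthen connections between units that are active together.
W = [[sum(p[i] * p[j] for p in patterns) / N if i != j else 0.0
      for j in range(N)] for i in range(N)]

def recall(cue, steps=5 * N):
    state = list(cue)
    for _ in range(steps):
        i = random.randrange(N)                     # asynchronous update
        field = sum(W[i][j] * state[j] for j in range(N))
        state[i] = 1 if field >= 0 else -1          # each flip never raises the energy
    return state

# Corrupt a stored pattern in 20% of its entries and let the dynamics complete it.
cue = [(-x if random.random() < 0.2 else x) for x in patterns[0]]
out = recall(cue)
overlap = sum(a * b for a, b in zip(out, patterns[0])) / N
print(overlap)    # close to 1.0 means the memory was completed correctly
```

The "going downhill" intuition mentioned above corresponds to the fact that, with symmetric weights and zero self-coupling, each asynchronous update can only lower (or keep) an energy function, so the corrupted cue settles into a nearby stable state.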
I mean, that's just a powerful concept, that sort of forming representations that are useful, to be robust, you know, for error correction kind of thing. So this is kind of what the biology we're talking about does. Yeah, and what my paper did was simply enable you, there are lots of ways of being robust. If you think of a dynamical system, you think of a system where a path is going on in time. And if you think of a computer, there's a computational path which is going on in a huge dimensional space of ones and zeros. And an error correction system is a system which, if you get a little bit off that trajectory, will push you back onto that trajectory again. So you get to the same answer in spite of the fact that there were things, so that the computation wasn't being ideally done all the way along the line. And there are lots of models for error correction. But one of the models for error correction is to say, there's a valley that you're following, flowing down. And if you push a little bit off the valley, just like water being pushed a little bit by a rock, it gets back and follows the course of the river. And that's basically the analog in the physical system which enables you to say, oh yes, error free computation and an associative memory are very much like things that I can understand from the point of view of a physical system. The physical system can be, under some circumstances, an accurate metaphor. It's not the only metaphor. There are error correction schemes which don't have a valley and an energy behind them. But those are error correction schemes which a mathematician may be able to understand, but I don't. So there's the physical metaphor that seems to work here. That's right, that's right. So these kinds of networks actually led to a lot of the work that is going on now in neural networks, artificial neural networks. So the follow on work with restricted Boltzmann machines and deep belief nets followed on from these ideas of the Hopfield network. So what do you think about this continued progress of that work towards the now reinvigorated exploration of feed forward neural networks and recurrent neural networks and convolutional neural networks, and kinds of networks that are helping solve image recognition, natural language processing, all that kind of stuff? It always intrigued me that one of the most long lived of the learning systems is the Boltzmann machine, which is intrinsically a feedback network. And with the brilliance of Hinton and Sejnowski to understand how to do learning in that. And it's still a useful way to understand learning, and the learning that you understand in that has something to do with the way that feed forward systems work. But it's not always exactly simple to express that intuition. But it always amuses me to see Hinton going back to the well yet again on a form of the Boltzmann machine, because really that which has feedback and interesting probabilities in it is a lovely encapsulation of something computational. Something computational? Something both computational and physical. Computational in that it's very much related to feed forward networks. Physical in that Boltzmann machine learning is really learning a set of parameters for a physics Hamiltonian or energy function. What do you think about learning in this whole domain?
Do you think the aforementioned guy, Jeff Hinton, all the work there with backpropagation, all the kind of learning that goes on in these networks, if we compare it to learning in the brain, for example, are there echoes of the same kind of power that backpropagation reveals about these kinds of recurrent networks? Or is it something fundamentally different going on in the brain? I don't think the brain is as deep as the deepest networks go, the deepest computer science networks. And I do wonder whether part of that depth of the computer science networks is necessitated by the fact that the only learning that's easily done on a machine is feed forward. And so there's the question of to what extent has the biology, which has some feed forward and some feed back, been captured by something which has got many more neurons, but much more depth, than the neurons in it. So part of you wonders if the feedback is actually more essential than the number of neurons or the depth, the dynamics of the feedback. The dynamics of the feedback. Look, if you don't have feedback, it's a little bit like building a big computer and running it through one clock cycle. And then you can't do anything until you reload something coming in. How do you use the fact that there are multiple clock cycles? How do I use the fact that you can close your eyes, stop listening to me, and think about a chessboard for two minutes without any input whatsoever? Yeah, that memory thing, that's fundamentally a feedback kind of mechanism. You're going back to something. Yes, it's hard to understand. It's hard to introspect, let alone consciousness. Oh, let alone consciousness, yes, yes. Because that's tied up in there too. You can't just put that on another shelf. Every once in a while I get interested in consciousness, and then I go and, I've done that for years, and ask one of my betters, as it were, their view on consciousness. It's been interesting collecting them. What is consciousness? Let's try to take a brief step into that room. Well, I asked Marvin Minsky his view on consciousness. And Marvin said, consciousness is basically overrated. It may be an epiphenomenon. After all, all the things your brain does, they're actually hard computations you do nonconsciously. And there's so much evidence that even the simple things you do, you can make decisions, you can make committed decisions about them, the neurobiologist can say, he's now committed, he's going to move the hand left, before you know it. So his view is that consciousness is not, it's just like a little icing on the cake. The real cake is in the subconscious. Yum, yum. Subconscious, nonconscious. Nonconscious, what's the better word, sir? It's only that Freud captured the other word. Yeah, it's a confusing word, subconscious. Nick Chater wrote an interesting book. I think the title of it is The Mind is Flat. Flat in a neural net sense, might be flat as something which is a very broad neural net without any layers in depth, whereas a deep brain would be many layers and not so broad. In the same sense that if you push Minsky hard enough, he would probably have said, consciousness is your effort to explain to yourself that which you have already done. Yeah, it's the weaving of the narrative around the things that have already been computed for you. That's right, and so much of what we do for our memories of events, for example. If there's some traumatic event you witness, you will have a few facts about it correctly done.
If somebody asks you about it, you will weave a narrative which is actually much more rich in detail than that, based on some anchor points you have of correct things, and pulling together general knowledge on the other hand, but you will have a narrative. And once you generate that narrative, you are very likely to repeat that narrative and claim that all the things you have in it are actually the correct things. There was a marvelous example of that in the Watergate slash impeachment era, of John Dean. John Dean, you're too young to know, had been the personal lawyer of Nixon. And so John Dean was involved in the coverup, and John Dean ultimately realized the only way to keep himself out of jail for a long time was actually to tell some of the truths about Nixon. And John Dean was a tremendous witness. He would remember these conversations in great detail and very convincing detail. And long afterward, some of the tapes, the secret tapes, as it were, from which Dean was recalling these conversations, were published, and one found out that John Dean had a good but not exceptional memory. What he had was an ability to paint vividly, and in some sense accurately, the tone of what was going on. By the way, that's a beautiful description of consciousness. Do you, like where do you stand today? So perhaps it changes day to day, but where do you stand on the importance of consciousness in our whole big mess of cognition? Is it just a little narrative maker, or is it actually fundamental to intelligence? That's a very hard one. When I asked Francis Crick about consciousness, he launched forward in a long monologue about Mendel and the peas, and how Mendel knew that there was something, and how biologists understood that there was something in inheritance which was just very, very different. And the fact that inherited traits didn't just wash out into a gray, but were this or this, and propagated, that that was absolutely fundamental to the biology. And it took generations of biologists to understand that there was genetics, and it took another generation or two to understand that genetics came from DNA. But very shortly after Mendel, thinking biologists did realize that there was a deep problem about inheritance. And Francis would have liked to have said, and that's why I'm working on consciousness. But of course, he didn't have any smoking gun in the sense of Mendel. And that's the weakness of his position. If you read his book, which he wrote with Koch, I think. Yeah, Christof Koch, yeah. I find it unconvincing for the smoking gun reason. So I'm going on collecting views without actually having taken a very strong one myself, because I haven't seen the entry point. Not seeing the smoking gun, from the point of view of physics, I don't see the entry point. Whereas in neurobiology, once I understood the idea of a collective, an evolution of dynamics which could be described as a collective phenomenon, I thought, ah, there's a point where what I know about physics is so different from any neurobiologist that I have something that I might be able to contribute. And right now, there's no way to grasp at consciousness from a physics perspective. From my point of view, that's correct. And of course, people, physicists, like everybody else, think very muddily about things. You ask the closely related question about free will. Do you believe you have free will?
Physicists will give an offhand answer, and then backtrack, backtrack, backtrack, where they realize that the answer they gave must fundamentally contradict the laws of physics. Answering questions of free will and consciousness naturally leads to contradictions from a physics perspective. Because it eventually ends up with quantum mechanics, and then you get into that whole mess of trying to understand how much, from a physics perspective, is determined, already predetermined, how much is already deterministic about our universe, and there's lots of different things. And if you don't push quite that far, you can say, essentially, all of neurobiology which is relevant can be captured by classical equations of motion. Right, because in my view the mysteries of the brain are not the mysteries of quantum mechanics, but the mysteries of what can happen when you have a dynamical system, a driven system, with 10 to the 14 parts. That complexity is something where the physics of complex systems is at least as badly understood as the physics of phase coherence in quantum mechanics. Can we go there for a second? You've talked about attractor networks, and just maybe you could say what are attractor networks, and more broadly, what are interesting network dynamics that emerge in these or other complex systems? You have to be willing to think in a huge number of dimensions, because in a huge number of dimensions, the behavior of a system can be thought of as just the motion of a point over time in this huge number of dimensions. All right. And an attractor network is simply a network where there is a line and other lines converge on it in time. That's the essence of an attractor network. That's how you... In a high dimensional space. And the easiest way to get that is to do it in a high dimensional space, where some of the dimensions provide the dissipation, because if I have a physical system, trajectories can't contract everywhere. They have to contract in some places and expand in others. There's a fundamental classical theorem of statistical mechanics, which goes under the name of Liouville's theorem, which says you can't contract everywhere. If you contract somewhere, you expand somewhere else. In interesting physical systems, you've got driven systems where you have a small subsystem, which is the interesting part, and the rest of the contraction and expansion, the physicists would say, is entropy flow in this other part of the system. But basically, attractor networks are dynamics funneling down so that if you start somewhere in the dynamical system, you will soon find yourself on a pretty well determined pathway, which goes somewhere. If you start somewhere else, you'll wind up on a different pathway, but you don't have just all possible things. You have some defined pathways which are allowed and onto which you will converge. And that's the way you make a stable computer, and that's the way you make stable behavior. So in general, looking at the physics of the emergent stability in networks, what are some interesting characteristics, some interesting insights, from studying the dynamics of such high dimensional systems? Most dynamical systems, most driven dynamical systems, are coupled somehow to an energy source. And so their dynamics keeps going because of its coupling to the energy source. For most of them, it's very difficult to understand at all what the dynamical behavior is going to be.
You have to run it. You have to run it. There's a subset of systems which has what is actually known to the mathematicians as a Lyapunov function, and for those systems, you can understand convergent dynamics by saying you're going downhill on something or other. And what I found, without ever knowing what Lyapunov functions were, in the simple model I made in the early 80s, was an energy function, so you could understand how you could get this channeling onto the pathways without having to follow the dynamics in infinite detail. If you start rolling a ball at the top of a mountain, it's gonna wind up at the bottom of a valley. You know that's true without actually watching the ball roll down. There are certain properties of the system where you can know that. That's right. And not all systems behave that way. Most don't, probably. Most don't, but it provides you with a metaphor for thinking about systems which are stable and have these attractor behaviors, even if you can't find a Lyapunov function or an energy function behind them. It gives you a metaphor for thought. Yeah, speaking of thought, if I had a glint in my eye with excitement and said I'm really excited about this something called deep learning and neural networks and I would like to create an intelligent system and came to you as an advisor, what would you recommend? Is it a hopeless pursuit to use neural networks to achieve thought? What kind of mechanisms should we explore? What kind of ideas should we explore? Well, you look at the simple networks, the one pass networks. They don't support multiple hypotheses very well. Hmm. As I have tried to work with very simple systems which do something which you might consider to be thinking, thought has to do with the ability to do mental exploration before you take a physical action. Almost like we were mentioning, playing chess, visualizing, simulating inside your head different outcomes. Yeah, yeah. And you could do that in a feed forward network because you've pre calculated all kinds of things. But I think the way neurobiology does it hasn't pre calculated everything. It actually has parts of a dynamical system in which you're doing exploration in a way which is... There's a creative element. Like there's an... There's a creative element. And in a simple minded neural net, you have a constellation of instances which you've learned. And if you are within that space, if a new question is a question within this space, you can actually rely on that system pretty well to come up with a good suggestion for what to do. If on the other hand, the query comes from outside the space, you have no way of knowing how the system is gonna behave. There are no limitations on what can happen. And so the artificial neural net world is always very much, I have a population of examples. The test set must be drawn from the equivalent population. If the test set has examples which are from a population which is completely different, there's no way that you could expect to get the answer right. Yeah, what they call outside the distribution. That's right, that's right. And so if you see a ball rolling across the street at dusk, if that wasn't in your training set, the idea that a child may be coming close behind that is not going to occur to the neural net. And it is to us. There's something in your biology that allows that. Yeah, there's something in the way of what it means to be outside of the population of the training set.
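As an editorial aside, not something from the conversation itself: below is a minimal sketch, in Python, of the kind of attractor network and energy (Lyapunov) function described above, assuming a standard binary Hopfield network with Hebbian outer-product weights. Every name and number in it is an illustrative assumption rather than anything specified in the discussion.

```python
# Minimal sketch of a binary Hopfield attractor network.
# Asynchronous updates never increase the energy function, so the state
# "funnels down" onto one of the stored patterns, as described above.
import numpy as np

rng = np.random.default_rng(0)

n_units, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian outer-product weights, with no self-connections.
W = sum(np.outer(p, p) for p in patterns) / n_units
np.fill_diagonal(W, 0)

def energy(state):
    # The Lyapunov (energy) function: asynchronous updates can only lower it.
    return -0.5 * state @ W @ state

def recall(state, steps=2000):
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(n_units)                   # pick one unit at random
        state[i] = 1 if W[i] @ state >= 0 else -1   # move it downhill in energy
    return state

# Start from a corrupted copy of pattern 0, inside its basin of attraction.
probe = patterns[0].copy()
probe[rng.choice(n_units, size=15, replace=False)] *= -1

print("energy before:", energy(probe))
restored = recall(probe)
print("energy after: ", energy(restored))
print("overlap with stored pattern:", int(restored @ patterns[0]), "of", n_units)
```

The same toy model also illustrates the point about queries outside the training population: a probe close to a stored pattern reliably falls into that pattern's basin, while a probe far from every stored pattern can settle onto a spurious attractor, so its behavior is much harder to predict.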
The population of the training set isn't just sort of this set of examples. There's more to it than that. And it gets back to my question of, what is it to understand something? Yeah. You know, on a small tangent, you've talked about the value of thinking of deductive reasoning in science versus large data collection. So sort of thinking about the problem. I suppose it's the physics side of you, going back to first principles and thinking, but what do you think is the value of deductive reasoning in the scientific process? Well, there are obviously scientific questions in which the route to the answer comes through the analysis of one hell of a lot of data. Right. Cosmology, that kind of stuff. And that's never been the kind of problem in which I've had any particular insight. Though I must say, if you look at it, cosmology is one of those. If you look at the actual things that Jim Peebles, one of this year's Nobel Prize winners in physics, one from the local physics department, has done, the kinds of things he's done, he's never crunched large data. Never, never, never. He's used the encapsulation of the work of others in this regard. Right. But it ultimately boiled down to thinking through the problem. Like what are the principles under which a particular phenomenon operates? Yeah, yeah. And look, physics is always going to look for ways in which you can describe the system in a way which rises above the details. And to the hard, dyed in the wool biologist, biology works because of the details. In physics, to the physicist, we want an explanation which is right in spite of the details. And there will be questions which we cannot answer as physicists because the answer cannot be found that way. I'm not sure if you're familiar with the entire field of brain computer interfaces that's become more and more intensely researched and developed recently, especially with companies like Neuralink with Elon Musk. Yeah, I know there has always been interest both in things like getting the eyes to be able to control things, or getting the thought patterns to be able to move what had been a connected limb which is now connected only through a computer. That's right. So in the case of Neuralink, they're doing 1,000 plus connections where they're able to do two way communication, activate and read spikes, neural spikes. Do you have hope for that kind of computer brain interaction, in the near or maybe even far future, being able to expand the ability of the mind, of cognition, or to understand the mind? It's interesting watching things go. When I first became interested in neurobiology, most of the practitioners thought you would be able to understand neurobiology by techniques which allowed you to record only one cell at a time. One cell, yeah. People like David Hubel very strongly reflected that point of view. And that's been taken over, a generation, a couple of generations later, by a set of people who say, not until we can record from 10 to the four, 10 to the five at a time, will we actually be able to understand how the brain actually works. And in a general sense, I think that's right. You have to begin to be able to look for the collective modes, the collective operations of things. It doesn't rely on this action potential or that cell. It relies on the collective properties of this set of cells connected with these kinds of patterns and so on. And you're not going to succeed in seeing what those collective activities are without recording many cells at once. The question is how many at once? What's the threshold?
And that's the... Yeah, and look, it's being pursued hard in the motor cortex. The motor cortex does something which is complex, and yet the problem you're trying to address is fairly simple. Now, neurobiology does it in ways that differ from the way an engineer would do it. An engineer would put in six highly accurate stepping motors to control a limb, rather than 100,000 muscle fibers, each of which has to be individually controlled. And so understanding how to do things in a way which is much more forgiving and much more neural, I think, would benefit the engineering world. The engineering world does touch: let's put in a pressure sensor or two, rather than an array of a gazillion pressure sensors, none of which are accurate, all of which are perpetually recalibrating themselves. So you're saying your hope is, your advice for the engineers of the future is to embrace the large chaos of a messy, error prone system like those of the biological systems. Like that's probably the way to solve some of these problems. I think you'll be able to make better computations slash robotics that way than by trying to force things into a robotics where joint motors are powerful and stepping motors are accurate. But then the physicist in you will be lost forever in such systems, because there are no simple fundamentals to explore in systems that are so large and messy. Well, you say that, and yet there's a lot of physics in the Navier Stokes equations, the equations of nonlinear hydrodynamics, a huge amount of physics in them. All the physics of atoms and molecules has been lost, but it's been replaced by this other set of equations, which is just as true as the equations at the bottom. Now those equations are going to be harder to find in general biology, but the physicist in me says there are probably some equations of that sort. They're out there. They're out there, and if physics is going to contribute anything, it may contribute to trying to find out what those equations are and how to capture them from the biology. Would you say that's one of the main open problems of our age, to discover those equations? Yeah, if you look at it, there are molecules and there's psychological behavior, and these two are somehow related. There are layers of detail, there are layers of collectiveness, and to capture that in some vague way, at several stages on the way up, to see how these things can actually be linked together. So it seems in our universe there are a lot of elegant equations that can describe the fundamental way that things behave, which is a surprise. I mean, it's compressible into equations. It's simple and beautiful, but it's still an open question whether that link between molecules and the brain is equally compressible into elegant equations. But your sense, well, you're both a physicist and a dreamer, you have a sense that... Yeah, but I can only dream physics dreams. Physics dreams. There was an interesting book called Einstein's Dreams, which alternates between chapters on his life and descriptions of the way time might have been but isn't. The linking between these being the important ideas that Einstein might have had to think about, the essence of time, as he was thinking about time. So speaking of the essence of time, and your biology, you're one human, famous, impactful human, but just one human with a brain, living the human condition. But you're ultimately mortal, just like all of us. Has studying the mind as a mechanism changed the way you think about your own mortality?
It has, really, because particularly as you get older and the body comes apart in various ways, I became much more aware of the fact that what is somebody is contained in the brain and not in the body that you worry about burying. And it is to a certain extent true that for people who write things down, equations, dreams, notepads, diaries, fractions of their thought does continue to live after they're dead and gone, after their body is dead and gone. And there's a sea change in that going on in my lifetime between when my father died, except for the things which were actually written by him, as it were. Very few facts about him will have ever been recorded. And the number of facts which are recorded about each and every one of us, forever now, as far as I can see, in the digital world. And so the whole question of what is death may be different for people a generation ago and a generation further ahead. Maybe we have become immortal under some definitions. Yeah, yeah. Last easy question, what is the meaning of life? Looking back, you've studied the mind, us weird descendants of apes. What's the meaning of our existence on this little earth? What's the meaning of our existence on this little earth? Oh, that word meaning is as slippery as the word understand. Interconnected somehow, perhaps. Is there, it's slippery, but is there something that you, despite being slippery, can hold long enough to express? I've been amazed at how hard it is to define the things in a living system in the sense that one hydrogen atom is pretty much like another, but one bacterium is not so much like another bacterium, even of the same nominal species. In fact, the whole notion of what is the species gets a little bit fuzzy. And do species exist in the absence of certain classes of environments? And pretty soon one winds up with a biology which the whole thing is living, but whether there's actually any element of it which by itself would be said to be living becomes a little bit vague in my mind. So in a sense, the idea of meaning is something that's possessed by an individual, like a conscious creature. And you're saying that it's all interconnected in some kind of way that there might not even be an individual. We're all kind of this complicated mess of biological systems at all different levels where the human starts and when the human ends is unclear. Yeah, yeah, and we're in neurobiology where the, oh, you say the neocortex is the thinking, but there's lots of things that are done on the spinal cord. And so where's the essence of thought? Is it just gonna be neocortex? Can't be, can't be. Yeah, maybe to understand and to build thought you have to build the universe along with the neocortex. It's all interlinked through the spinal cord. John, it's a huge honor talking today. Thank you so much for your time. I really appreciate it. Well, thank you for the challenge of talking with you. And it'll be interesting to see whether you can win five minutes out of this with just coherence to anyone or not. Beautiful. Thanks for listening to this conversation with John Hopfield and thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast. You'll get $10 and $10 will go to FIRST, an organization that inspires and educates young minds to become science and technology innovators of tomorrow. If you enjoy this podcast, subscribe on YouTube, get five stars on Apple Podcast, support on Patreon, or simply connect with me on Twitter at Lex Friedman. 
And now let me leave you with some words of wisdom from John Hopfield in his article titled, Now What? Choosing problems is the primary determinant of what one accomplishes in science. I have generally had a relatively short attention span in science problems. Thus, I have always been on the lookout for more interesting questions, either as my present ones get worked out or as they get classified by me as intractable, given my particular talents. He then goes on to say, what I have done in science relies entirely on experimental and theoretical studies by experts. I have a great respect for them, especially for those who are willing to attempt communication with someone who is not an expert in the field. I would only add that experts are good at answering questions. If you're brash enough, ask your own. Don't worry too much about how you found them. Thank you for listening and hope to see you next time.
John Hopfield: Physics View of the Mind and Neurobiology | Lex Fridman Podcast #76
The following is a conversation with Alex Garland, writer and director of many imaginative and philosophical films, from the dreamlike exploration of human self destruction in the movie Annihilation to the deep questions of consciousness and intelligence raised in the movie Ex Machina, which to me is one of the greatest movies on artificial intelligence ever made. I'm releasing this podcast to coincide with the release of his new series called Devs that will premiere this Thursday, March 5th on Hulu as part of FX on Hulu. It explores many of the themes this very podcast is about, from quantum mechanics to artificial life to simulation to the modern nature of power in the tech world. I got a chance to watch a preview and loved it. The acting is great. Nick Offerman especially is incredible in it. The cinematography is beautiful and the philosophical and scientific ideas explored are profound. And for me, as an engineer and scientist, it's fun to see them brought to life. For example, if you watch the trailer for the series carefully, you'll see there's a programmer with a Russian accent looking at a screen with Python-like code on it that appears to be using a library that interfaces with a quantum computer. This attention to technical detail on several levels is impressive, and one of the reasons I'm a big fan of how Alex weaves science and philosophy together in his work. Meeting Alex for me was unlikely, but it was life changing in ways I may only be able to articulate in a few years. Just as meeting Spot Mini of Boston Dynamics for the first time planted a seed of an idea in my mind, so did meeting Alex Garland. He's humble, curious, intelligent, and to me an inspiration. Plus, he's just really a fun person to talk with about the biggest possible questions in our universe. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Friedman spelled F R I D M A N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar. Since Cash App allows you to buy Bitcoin, let me mention that cryptocurrency in the context of the history of money is fascinating. I recommend The Ascent of Money as a great book on this history. Debits and credits on ledgers started 30,000 years ago. The US dollar was created about 200 years ago. And Bitcoin, the first decentralized cryptocurrency, was released just over 10 years ago. So given that history, cryptocurrency is still very much in its early days of development, but it still is aiming to, and just might, redefine the nature of money. So again, if you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, one of my favorite organizations that is helping advance robotics and STEM education for young people around the world. And now, here's my conversation with Alex Garland. You described the world inside the shimmer in the movie Annihilation as dreamlike in that it's internally consistent but detached from reality.
That leads me to ask, do you think, a philosophical question, I apologize, do you think we might be living in a dream or in a simulation, like the kind that the shimmer creates? We human beings here today. Yeah. I wanna sort of separate that out into two things. Yes, I think we're living in a dream of sorts. No, I don't think we're living in a simulation. I think we're living on a planet with a very thin layer of atmosphere and the planet is in a very large space and the space is full of other planets and stars and quasars and stuff like that. And I don't think those physical objects, I don't think the matter in that universe is simulated. I think it's there. We are definitely, it's a hot problem with saying definitely, but in my opinion, I'll just go back to that. I think it seems very like we're living in a dream state. I'm pretty sure we are. And I think that's just to do with the nature of how we experience the world. We experience it in a subjective way. And the thing I've learned most as I've got older in some respects is the degree to which reality is counterintuitive and that the things that are presented to us as objective turn out not to be objective and quantum mechanics is full of that kind of thing, but actually just day to day life is full of that kind of thing as well. So my understanding of the way the brain works is you get some information, hit your optic nerve, and then your brain makes its best guess about what it's seeing or what it's saying it's seeing. It may or may not be an accurate best guess. It might be an inaccurate best guess. And that gap, the best guess gap, means that we are essentially living in a subjective state, which means that we're in a dream state. So I think you could enlarge on the dream state in all sorts of ways. So yes, dream state, no simulation would be where I'd come down. Going further, deeper into that direction, you've also described that world as psychedelia. So on that topic, I'm curious about that world. On the topic of psychedelic drugs, do you see those kinds of chemicals that modify our perception as a distortion of our perception of reality or a window into another reality? No, I think what I'd be saying is that we live in a distorted reality and then those kinds of drugs give us a different kind of distorted. Different perspective. Yeah, exactly. They just give an alternate distortion. And I think that what they really do is they give a distorted perception, which is a little bit more allied to daydreams or unconscious interests. So if for some reason you're feeling unconsciously anxious at that moment and you take a psychedelic drug, you'll have a more pronounced, unpleasant experience. And if you're feeling very calm or happy, you might have a good time. But yeah, so if I'm saying we're starting from a premise, our starting point is we were already in the slightly psychedelic state. What those drugs do is help you go further down an avenue or maybe a slightly different avenue, but that's all. So in that movie, Annihilation, the shimmer, this alternate dreamlike state is created by, I believe perhaps, an alien entity. Of course, everything is up to interpretation, right? But do you think there's, in our world, in our universe, do you think there's intelligent life out there? And if so, how different is it from us humans? 
Well, one of the things I was trying to do in Annihilation was to offer up a form of alien life that was actually alien, because it would often seem to me that the way we would represent aliens in books or cinema or television, or any one of the sort of storytelling mediums, is we would always give them very humanlike qualities. So they wanted to teach us about galactic federations, or they wanted to eat us, or they wanted our resources, like our water, or they want to enslave us, or whatever it happens to be. But all of these are incredibly humanlike motivations. And I was interested in the idea of an alien that was not in any way like us. It didn't share... Maybe it had a completely different clock speed. Maybe its way, so we're talking about, we're looking at each other, we're getting information, light hits our optic nerve, our brain makes the best guess of what we're seeing. Sometimes it's right, sometimes it's wrong, you know, the thing we were talking about before. What if this alien doesn't have an optic nerve? Maybe its way of encountering the space it's in is wholly different. Maybe it has a different relationship with gravity. The basic laws of physics it operates under might be fundamentally different. It could be a different time scale and so on. Yeah, or it could be the same laws, could be the same underlying laws of physics. You know, it's a machine created, or it's a creature created in a quantum mechanical way. It just ends up in a very, very different place to the one we end up in. So part of the preoccupation with Annihilation was to come up with an alien that was really alien, and it didn't give us, and we didn't give it, any kind of easy connection between human and the alien. Because I think it was to do with the idea that you could have an alien that landed on this planet that wouldn't even know we were here. And we might only glancingly know it was here. There'd just be this strange point where the Venn diagrams connected, where we could sense each other or something like that. So in the movie, first of all, incredibly original view of what an alien life would be. And in that sense, it's a huge success. Let's go inside your imagination. Did the alien, that alien entity, know anything about humans when it landed? No. So the idea is, it's basically an alien life trying to reach out to anything that might be able to hear its mechanism of communication. Or was it simply, was it just basically, they're a biologist exploring different kinds of stuff that you can find? But this is the interesting thing: as soon as you say they're a biologist, you've done the thing of attributing human type motivations to it. So I was trying to free myself from anything like that. So all sorts of questions you might answer about this notional alien, I wouldn't be able to answer because I don't know what it was or how it worked. You know, I had some rough ideas. Like it had a very, very, very slow clock speed. And I thought maybe the way it is interacting with this environment is a little bit like the way an octopus will change its color forms around the space that it's in. So it's sort of reacting to what it's in to an extent, but the reason it's reacting in that way is indeterminate. But its clock speed was slower than our human life clock speed, but it's faster than evolution. Faster than our evolution. Yeah, given the 4 billion years it took us to get here, then yes, maybe it started at eight.
If you look at the human civilization as a single organism, in that sense, you know, this evolution could be us. You know, the evolution of living organisms on Earth could be just a single organism. And it's kind of, that's its life, is the evolution process that eventually will lead to probably the heat death of the universe or something before that. I mean, that's just an incredible idea. So you almost don't know. You've created something that you don't even know how it works. Yeah, because anytime I tried to look into how it might work, I would then inevitably be attaching my kind of thought processes into it. And I wanted to try and put a bubble around it. I would say, no, this is alien in its most alien form. I have no real point of contact. So unfortunately I can't talk to Stanley Kubrick. So I'm really fortunate to get a chance to talk to you. On this particular notion, I'd like to ask it a bunch of different ways and we'll explore it in different ways, but do you ever consider human imagination, your imagination, as a window into a possible future? And that what you're doing, you're putting that imagination on paper as a writer and then on screen as a director, and that plants the seeds in the minds of millions of future and current scientists. And so your imagination, you putting it down, actually makes it a reality. So it's almost like a first step of the scientific method, that you imagining what's possible, in your new series or with Ex Machina, is actually inspiring thousands of 12 year olds, millions of scientists, and actually creating the future you imagined. Well, all I could say is that from my point of view, it's almost exactly the reverse, because I see that pretty much everything I do is a reaction to what scientists are doing. I'm an interested lay person. And I feel, as this individual, that the most interesting area that humans are involved in is science. I think art is very, very interesting, but the most interesting is science. And science is in a weird place, because maybe around the time Newton was alive, if a very, very interested lay person said to themselves, I want to really understand what Newton is saying about the way the world works, with a few years of dedicated thinking they would be able to understand the sort of principles he was laying out. And I don't think that's true anymore. I think that's stopped being true now. So I'm a pretty smart guy. And if I said to myself, I want to really, really understand what is currently the state of quantum mechanics or string theory or any of the sort of branching areas of it, I wouldn't be able to. I'd be intellectually incapable of doing it, because to work in those fields at the moment is a bit like being an athlete. I suspect you need to start when you're 12, you know? And if you start in your mid 20s, start trying to understand in your mid 20s, then you're just never going to catch up. That's the way it feels to me. So what I do is I try to make myself open. So the people that you're implying maybe I would influence, to me, it's exactly the other way around. These people are strongly influencing me. I'm thinking they're doing something fascinating. I'm concentrating and working as hard as I can to try and understand the implications of what they say. And in some ways, often what I'm trying to do is disseminate their ideas into a means by which it can enter a public conversation.
So Ex Machina contains lots of name checks, all sorts of existing thought experiments, shadows on Plato's cave and Mary in the black and white room and all sorts of different longstanding thought processes about sentience or consciousness or subjectivity or gender or whatever it happens to be. And then I'm trying to marshal that into a narrative to say, look, this stuff is interesting and it's also relevant and this is my best shot at it. So I'm the one being influenced in my construction. That's fascinating. Of course you would say that, because you're not even aware of your own influence. That's probably what Kubrick would say too, right, in describing why Hal 9000 is created the way Hal 9000 is created, that you're just studying what is. But in reality, when the specifics of the knowledge pass through your imagination, I would argue that you're incorrect in thinking that you're just disseminating knowledge. The very act of your imagination consuming that science creates something that creates the next step, potentially creates the next step. I certainly think that's true with 2001: A Space Odyssey. I think at its best, and if it fails... It's true of that, yeah, it's true of that, definitely. At its best, it plants something. It's hard to describe it. It inspires the next generation, and it could be field dependent. So your new series has more of a connection to physics, quantum physics, quantum mechanics, quantum computing, and yet Ex Machina is more artificial intelligence. I know more about AI. My sense is that AI is much earlier in the depth of its understanding. I would argue nobody understands anything to the depth that physicists do about physics. In AI, nobody understands AI. There is a lot of importance and role for imagination, and I think we're in that stage. Where Freud imagined the subconscious, we're in that stage of AI, where there's a lot of imagination needed, thinking outside the box. Yeah, it's interesting. The spread of discussions and the spread of anxieties that exist about AI fascinate me. The way in which some people seem terrified about it whilst also pursuing it. And I've never shared that fear about AI personally, but the way in which it agitates people, and also the people who it agitates, I find kind of fascinating. Are you afraid? Are you excited? Are you sad by the possibility, let's take the existential risk of artificial intelligence, by the possibility an artificial intelligence system becomes our offspring and makes us obsolete? I mean, it's a huge subject to talk about, I suppose. But one of the things I think is that humans are actually very experienced at creating new life forms, because that's why you and I are both here and it's why everyone on the planet is here. And so something in the process of having a living thing that exists that didn't exist previously is very much encoded into the structures of our life and the structures of our societies. Doesn't mean we always get it right, but it does mean we've learned quite a lot about that. We've learned quite a lot about what the dangers are of allowing things to be unchecked. And it's why we then create systems of checks and balances in our government and so on and so forth. I mean, that's not to say... The other thing is, it seems like there's all sorts of things that you could put into a machine that you would not be able to put into a person. So with us, we sort of roughly try to give some rules to live by, and some of us then live by those rules and some don't. And with a machine, it feels like you could enforce those things.
So partly because of our previous experience and partly because of the different nature of a machine, I just don't feel anxious about it. More I just see all the good that, broadly speaking, the good that can come from it. But that's just where I am on that anxiety spectrum. You know, it's kind of, there's a sadness. So we as humans give birth to other humans, right? But there's generations. And there's often in the older generation, a sadness about what the world has become now. I mean, that's kind of... Yeah, there is, but there's a counterpoint as well, which is that most parents would wish for a better life for their children. So there may be a regret about some things about the past, but broadly speaking, what people really want is that things will be better for the future generations, not worse. And so, and then it's a question about what constitutes a future generation. A future generation could involve people. It also could involve machines and it could involve a sort of cross pollinated version of the two or any, but none of those things make me feel anxious. It doesn't give you anxiety. It doesn't excite you? Like anything that's new? It does. Not anything that's new. I don't think, for example, I've got, my anxieties relate to things like social media that, so I've got plenty of anxieties about that. Which is also driven by artificial intelligence in the sense that there's too much information to be able to, an algorithm has to filter that information and present to you. So ultimately the algorithm, a simple, oftentimes simple algorithm is controlling the flow of information on social media. So that's another form of AI. But at least my sense of it, I might be wrong, but my sense of it is that the algorithms have an either conscious or unconscious bias, which is created by the people who are making the algorithms and sort of delineating the areas to which those algorithms are gonna lean. And so for example, the kind of thing I'd be worried about is that it hasn't been thought about enough how dangerous it is to allow algorithms to create echo chambers, say. But that doesn't seem to me to be about the AI or the algorithm. It's the naivety of the people who are constructing the algorithms to do that thing. If you see what I mean. Yes. So in your new series, Devs, and we could speak more broadly, there's a, let's talk about the people constructing those algorithms, which in our modern society, Silicon Valley, those algorithms happen to be a source of a lot of income because of advertisements. So let me ask sort of a question about those people. Are current concerns and failures on social media, their naivety? I can't pronounce that word well. Are they naive? Are they, I use that word carefully, but evil in intent or misaligned in intent? I think that's a, do they mean well and just go have an unintended consequence? Or is there something dark in them that results in them creating a company results in that super competitive drive to be successful. And those are the people that will end up controlling the algorithms. At a guess, I'd say there are instances of all those things. So sometimes I think it's naivety. Sometimes I think it's extremely dark. And sometimes I think people are not being naive or dark. And then in those instances are sometimes generating things that are very benign and other times generating things that despite their best intentions are not very benign. 
It's something, I think the reason why I don't get anxious about AI in terms of, or at least AIs that have, I don't know, a relationship with, some sort of relationship with humans is that I think that's the stuff we're quite well equipped to understand how to mitigate. The problem is issues that relate actually to the power of humans or the wealth of humans. And that's where it's dangerous here and now. So what I see, I'll tell you what I sometimes feel about Silicon Valley is that it's like Wall Street in the 80s. It's rabidly capitalistic, absolutely rabidly capitalistic and it's rabidly greedy. But whereas in the 80s, the sense one had of Wall Street was that these people kind of knew they were sharks and in a way relished in being sharks and dressed in sharp suits and kind of lorded over other people and felt good about doing it. Silicon Valley has managed to hide its voracious Wall Street like capitalism behind hipster T shirts and cool cafes in the place where they set up there. And so that obfuscates what's really going on and what's really going on is the absolute voracious pursuit of money and power. So that's where it gets shaky for me. So that veneer and you explore that brilliantly, that veneer of virtue that Silicon Valley has. Which they believe themselves, I'm sure for a long time. Okay, I hope to be one of those people and I believe that. So as maybe a devil's advocate term, poorly used in this case, what if some of them really are trying to build a better world? I can't. I'm sure I think some of them are. I think I've spoken to ones who I believe in their heart feel they're building a better world. Are they not able to? No, they may or may not be, but it's just as a zone with a lot of bullshit flying about. And there's also another thing, which is this actually goes back to, I always thought about some sports that later turned out to be corrupt in the way that the sport, like who won the boxing match or how a football match got thrown or cricket match or whatever happened to be. And I used to think, well, look, if there's a lot of money and there really is a lot of money, people stand to make millions or even billions, you will find a corruption that's gonna happen. So it's in the nature of its voracious appetite that some people will be corrupt and some people will exploit and some people will exploit whilst thinking they're doing something good. But there are also people who I think are very, very smart and very benign and actually very self aware. And so I'm not trying to, I'm not trying to wipe out the motivations of this entire area. But I do, there are people in that world who scare the hell out of me. Yeah, sure. Yeah, I'm a little bit naive in that, like I don't care at all about money. And so I'm a... You might be one of the good guys. Yeah, but so the thought is, but I don't have money. So my thought is if you give me a billion dollars, I would, it would change nothing and I would spend it right away on investing it right back and creating a good world. But your intuition is that billion, there's something about that money that maybe slowly corrupts the people around you. There's somebody gets in that corrupts your soul the way you view the world. Money does corrupt, we know that. But there's a different sort of problem aside from just the money corrupts thing that we're familiar with throughout history. And it's more about the sense of reinforcement an individual gets, which is so... 
It effectively works like the reason I earned all this money and so much more money than anyone else is because I'm very gifted. I'm actually a bit smarter than they are, or I'm a lot smarter than they are, and I can see the future in the way they can't. And maybe some of those people are not particularly smart, they're very lucky, or they're very talented entrepreneurs. And there's a difference between... So in other words, the acquisition of the money and power can suddenly start to feel like evidence of virtue. And it's not evidence of virtue, it might be evidence of completely different things. That's brilliantly put, yeah. Yeah, that's brilliantly put. So I think one of the fundamental drivers of my current morality... Let me just represent nerds in general of all kinds, is of constant self doubt and the signals... I'm very sensitive to signals from people that tell me I'm doing the wrong thing. But when there's a huge inflow of money, you just put it brilliantly that that could become an overpowering signal that everything you do is right. And so your moral compass can just get thrown off. Yeah, and that is not contained to Silicon Valley, that's across the board. In general, yeah. Like I said, I'm from the Soviet Union, the current president is convinced, I believe, actually he wants to do really good by the country and by the world, but his moral compass may be off because... Yeah, I mean, it's the interesting thing about evil, which is that I think most people who do spectacularly evil things think themselves they're doing really good things. That they're not there thinking, I am a sort of incarnation of Satan. They're thinking, yeah, I've seen a way to fix the world and everyone else is wrong, here I go. In fact, I'm having a fascinating conversation with a historian of Stalin, and he took power. He actually got more power than almost any person in history. And he wanted, he didn't want power. He just wanted, he truly, and this is what people don't realize, he truly believed that communism will make for a better world. Absolutely. And he wanted power. He wanted to destroy the competition to make sure that we actually make communism work in the Soviet Union and then spread across the world. He was trying to do good. I think it's typically the case that that's what people think they're doing. And I think that, but you don't need to go to Stalin. I mean, Stalin, I think Stalin probably got pretty crazy, but actually that's another part of it, which is that the other thing that comes from being convinced of your own virtue is that then you stop listening to the modifiers around you. And that tends to drive people crazy. It's other people that keep us sane. And if you stop listening to them, I think you go a bit mad. That also happens. That's funny. Disagreement keeps us sane. To jump back for an entire generation of AI researchers, 2001, a Space Odyssey, put an image, the idea of human level, superhuman level intelligence into their mind. Do you ever, sort of jumping back to Ex Machina and talk a little bit about that, do you ever consider the audience of people who build the systems, the roboticists, the scientists that build the systems based on the stories you create, which I would argue, I mean, there's literally most of the top researchers about 40, 50 years old and plus, that's their favorite movie, 2001 Space Odyssey. And it really is in their work, their idea of what ethics is, of what is the target, the hope, the dangers of AI, is that movie, right? 
Do you ever consider the impact on those researchers when you create the work you do? Certainly not with Ex Machina in relation to 2001, because, I mean, I'd be pleased if there was, but I'm not sure. In a way, there isn't a fundamental discussion of issues to do with AI that isn't already, and better, dealt with by 2001. 2001 gives a very, very good account of the way in which an AI might think, and also potential issues with the way the AI might think. And also then a separate question about whether the AI is malevolent or benevolent. And 2001 doesn't really... It's a slightly odd thing to be making a film when you know there's a preexisting film which does a really superb job. But there are questions of consciousness, embodiment, and also the same kinds of questions. Because those are my two favorite AI movies. So can you compare Hal 9000 and Ava, Hal 9000 from 2001: A Space Odyssey and Ava from Ex Machina, in your view, from a philosophical perspective? But they've got different goals. The two AIs have completely different goals. I think that's really the difference. So in some respects, Ex Machina took as a premise, how do you assess whether something else has consciousness? So it was a version of the Turing test, except instead of having the machine hidden, you put the machine in plain sight, in the way that we are in plain sight of each other, and say, now assess the consciousness. And the way it was illustrating the way in which you'd assess the state of consciousness of a machine is exactly the same way we assess the state of consciousness of each other. And in exactly the same way that, in a funny way, your sense of my consciousness is actually based primarily on your own consciousness, that is also then true with the machine. And so it was actually about how much of the sense of consciousness is a projection, rather than something that consciousness is actually containing. And the Plato's cave, I mean, you really explored this. You could argue that 2001: A Space Odyssey sort of explores the idea of the Turing test for intelligence, they're not tests, there's no test, but it's more focused on intelligence. And Ex Machina kind of goes around intelligence and says the consciousness of the human to human, human to robot interaction is more interesting, more important, or at least the focus of that particular movie. Yeah, it's about the interior state and what constitutes the interior state and how do we know it's there? And actually in that respect, Ex Machina is as much about consciousness in general as it is to do specifically with machine consciousness. Yes. And it's also interesting, you know that thing you started asking about, the dream state, and I was saying, well, I think we're all in a dream state because we're all in a subjective state. One of the things that I became aware of with Ex Machina is that the way in which people reacted to the film was very much based on what they took into the film. So many people thought Ex Machina was the tale of a sort of evil robot who murders two men and escapes. And she has no empathy, for example, because she's a machine. Whereas I felt, no, she was a conscious being, with a consciousness different from mine, but so what, imprisoned, and made a bunch of value judgments about how to get out of that box.
And there's a moment which sort of slightly bugs me, but nobody has ever noticed it and it's years after, so I might as well say it now, which is that after Ava has escaped, she crosses a room, and as she's crossing a room, this is just before she leaves the building, she looks over her shoulder and she smiles. And I thought, after all the conversation about tests, in a way, the best indication you could have of the interior state of someone is if, when they are not being observed, they smile about something, where they're smiling for themselves. And that to me was evidence of Ava's true sentience, whatever that sentience was. Oh, that's really interesting. We don't get to observe Ava much, or something like a smile, in any context except through interaction, trying to convince others that she's conscious. That's beautiful. Exactly, yeah. But it was a small, in a funny way, I think maybe people saw it as an evil smile, like, ha, I fooled them. But actually it was just a smile. And I thought, well, in the end, after all the conversations about the test, that was the answer to the test, and then off she goes. So if we can, if we just linger a little bit longer on Hal and Ava, do you think in terms of motivation, what was Hal's motivation? Is Hal good or evil? Is Ava good or evil? Ava's good, in my opinion, and Hal is neutral, because I don't think Hal is presented as having a sophisticated emotional life. He has a set of paradigms, which is that the mission needs to be completed. I mean, it's a version of the paperclip maximizer. Yeah. The idea that it's just, it's a super intelligent machine, but it's just performing a particular task, and in doing that task may destroy everybody on Earth or may achieve undesirable effects for us humans. Precisely, yeah. But what if... At the very end, he says something like, I'm afraid, Dave. But that may be he is on some level experiencing fear, or it may be that these are the terms in which it would be wise to stop someone from doing the thing they're doing, if you see what I mean. Yes, absolutely. So actually that's funny. So 2001 has such a small, short exploration of consciousness, that "I'm afraid," and then you, with Ex Machina, say, okay, we're gonna magnify that part and then minimize the other part. That's a good way to sort of compare the two. But if you could just use your imagination, if Ava sort of, I don't know, ran the... was president of the United States, so had some power. What kind of world would she want to create? You kind of say good, and there is a sense that she has a really, like, there's a desire for better human to human interaction, human to robot interaction in her. But what kind of world do you think she would create with that desire? See, that's a really, that's a very interesting question. I'm gonna approach it slightly obliquely, which is that if a friend of yours got stabbed in a mugging, and you then felt very angry at the person who'd done the stabbing, but then you learned that it was a 15 year old, and the 15 year old, both their parents were addicted to crystal meth and the kid had been addicted since he was 10, and he really never had any hope in the world, and he'd been driven crazy by his upbringing and did the stabbing, that would hugely modify the way you feel about it. And it would also make you wary about that kid then becoming president of America. And Ava has had a very, very distorted introduction into the world. So, although there's nothing, as it were, organically within Ava that would lean her towards badness, it's not that robots or sentient robots are bad.
She did not... Her arrival into the world was being imprisoned by humans. So I'm not sure she'd be a great president. The trajectory through which she arrived at her moral views has some dark elements. But I like Ava personally, I like Ava. Would you vote for her? I'm having difficulty finding anyone to vote for in my country, or if I lived here, in yours. I am. So that's a yes, I guess, because of the competition. She could easily do a better job than any of the people we've got around at the moment. I'd vote her over Boris Johnson. So, what is a good test of consciousness? Let's talk about consciousness a little bit more. If something appears conscious, is it conscious? You mentioned the smile, which seems to be something done when nobody is observing. I mean, that's a really good indication, because it's like a tree falling in the forest with nobody there to hear it. But does the appearance, from a robotics perspective, of consciousness mean consciousness to you? No, I don't think you could say that fully, because I think you could then easily have a thought experiment which said, we will create something which we know is not conscious but is going to give a very, very good account of seeming conscious. And also it would be a particularly bad test where humans are involved, because humans are so quick to project sentience into things that don't have sentience. So someone could have their computer playing up and feel as if their computer is being malevolent to them when it clearly isn't. And so, of all the things to judge consciousness, us humans are bad at it. We're empathy machines. So the flip side of that, the argument there, is because we just attribute consciousness to almost everything and anthropomorphize everything, including Roombas, that maybe consciousness is not real, that we just attribute consciousness to each other. So you have a sense that there is something really special going on in our mind that makes us unique and gives us this subjective experience. There's something very interesting going on in our minds. I'm slightly worried about the word special, because it gets a bit, it nudges towards metaphysics and maybe even magic. I mean, in some ways, something magic like, which I don't think is there at all. I mean, if you think about, so there's an idea called panpsychism that says consciousness is in everything. Yeah, I don't buy that. I don't buy that. Yeah, so the idea that there is a thing that it would be like to be the sun. Yeah, no, I don't buy that. I think that consciousness is a thing. My sort of broad modification is that usually the more I find out about things, the more illusory our instinct is, and it's leading us in a different direction about what that thing actually is. That happens, it seems to me, in modern science a hell of a lot, whether it's to do with even how big or small things are. So my sense is that consciousness is a thing, but it isn't quite the thing, or maybe it's very different from the thing, that we instinctively think it is. So it's there, it's very interesting, but we may be sort of quite fundamentally misunderstanding it for reasons that are based on intuition. So I have to ask, this is kind of an interesting question. Ex Machina, for many people, including myself, is one of the greatest AI films ever made. It's number two for me. Thanks. Yeah, it's definitely not number one. If it was number one, I'd really have to, anyway, yeah.
Whenever you grow up with something, right, whenever you grow up with something, it's in the mud. But one of the things that people bring up, and you can't please everyone, including myself, this is how I first reacted to the film, is the idea of the lone genius. This is the criticism that people say, sort of me as an AI researcher, I'm trying to create what Nathan is trying to do. So there's a brilliant series called Chernobyl. Yes, it's fantastic. Absolutely spectacular. I mean, they got so many things brilliantly right. But one of the things, again, the criticism there. Yeah, they conflated lots of people into one. Into one character that represents all nuclear scientists, Ulana Khomyuk. It's a composite character that represents all the scientists. Is this what you were, is this the way you were thinking about that? Or does it just simplify the storytelling? How do you think about the lone genius? Well, I'd say this, the series I'm doing at the moment is a critique in part of the lone genius concept. So yes, I'm sort of oppositional, and either agnostic or atheistic about that as a concept. I mean, not entirely. Whether lone is the right word, broadly isolated, but Newton clearly exists in a sort of bubble of himself, in some respects, so does Shakespeare. So do you think we would have an iPhone without Steve Jobs? I mean, how much contribution from a genius? Steve Jobs clearly isn't a lone genius, because there's too many other people in the sort of superstructure around him who are absolutely fundamental to that journey. But you're saying Newton, but that's a scientific, so there's an engineering element to building Ava. But just to say, what Ex Machina is really, it's a thought experiment. I mean, so it's a construction of putting four people in a house. Nothing about Ex Machina adds up in all sorts of ways, in as much as, who built the machine parts? Did the people building the machine parts know what they were creating and how did they get there? And it's a thought experiment. So it doesn't stand up to scrutiny of that sort. I don't think it's actually that interesting of a question, but it's brought up so often that I had to ask it, because that's exactly how I felt after a while. There's something about, there was almost a defensiveness. Like, I watched your movie the first time, and at least for the first little while, in a defensive way, like, how dare this person try to step into the AI space and try to beat Kubrick. That's the way I was thinking, because it comes off as a movie that really is going after the deep fundamental questions about AI. So there's a kind of thing nerds do, like automatically searching for the flaws. And I did. I do exactly the same. I think with Annihilation, the other movie, I was able to free myself from that much quicker, that it is a thought experiment. Who cares if there's batteries that don't run out, right? Those kinds of questions, that's the whole point. But it's nevertheless something I wanted to bring up. Yeah, it's a fair thing to bring up. For me, you hit on the lone genius thing. For me, it was actually, people always said, Ex Machina makes this big leap in terms of where AI has got to, and also what AI would look like if it got to that point. There's another one, which is just robotics. I mean, look at the way Ava walks around a room. It's like, forget it, building that. That's also got to be a very, very long way off. And if you did get there, would it look anything like that? It's a thought experiment.
Actually, I disagree with you. I think the way, as a ballerina, Alicia Vikander, brilliant actress, actor that moves around, we're very far away from creating that. But the way she moves around is exactly the definition of perfection for a roboticist. It's like smooth and efficient. So it is where we wanna get, I believe. I think, so I hang out with a lot of like human robotics people. They love elegant, smooth motion like that. That's their dream. So the way she moved is actually what I believe that would dream for a robot to move. It might not be that useful to move that sort of that way, but that is the definition of perfection in terms of movement. Drawing inspiration from real life. So for devs, for Ex Machina, look at characters like Elon Musk. What do you think about the various big technological efforts of Elon Musk and others like him and that he's involved with such as Tesla, SpaceX, Neuralink, do you see any of that technology potentially defining the future worlds you create in your work? So Tesla's automation, SpaceX's space exploration, Neuralink is brain machine interface, somehow merger of biological and electric systems. I'm in a way I'm influenced by that almost by definition because that's the world I live in. And this is the thing that's happening in that world. And I also feel supportive of it. So I think amongst various things, Elon Musk has done, I'm almost sure he's done a very, very good thing with Tesla for all of us. It's really kicked all the other car manufacturers in the face, it's kicked the fossil fuel industry in the face and they needed kicking in the face and he's done it. So that's the world he's part of creating and I live in that world, just bought a Tesla in fact. And so does that play into whatever I then make in some ways it does partly because I try to be a writer who quite often filmmakers are in some ways fixated on the films they grew up with and they sort of remake those films in some ways. I've always tried to avoid that. And so I looked at the real world to get inspiration and as much as possible sort of by living, I think. And so yeah, I'm sure. Which of the directions do you find most exciting? Space travel. Space travel. So you haven't really explored space travel in your work. You've said something like if you had unlimited amount of money, I think I read at AMA that you would make like a multi year series Space Wars or something like that. So what is it that excites you about space exploration? Well, because if we have any sort of long term future, it's that, it just simply is that. If energy and matter are linked up in the way we think they're linked up, we'll run out if we don't move. So we gotta move. And, but also, how can we not? It's built into us to do it or die trying. I was on Easter Island a few months ago, which is, as I'm sure you know, in the middle of the Pacific and difficult for people to have got to, but they got there. And I did think a lot about the way those boats must have set out into something like space. It was the ocean and how sort of fundamental that was to the way we are. And it's the one that most excites me because it's the one I want most to happen. It's the thing, it's the place where we could get to as humans. Like in a way I could live with us never really unlocking fully unlocking the nature of consciousness. 
I'd like to know, I'm really curious, but if we never leave the solar system and if we never get further out into this galaxy or maybe even galaxies beyond our galaxy, that would, that feels sad to me because it's so limiting. Yeah, there's something hopeful and beautiful about reaching out any kind of exploration, reaching out across Earth centuries ago and then reaching out into space. So what do you think about colonization of Mars? So go to Mars, does that excite you the idea of a human being stepping foot on Mars? It does, it absolutely does. But in terms of what would really excite me, it would be leaving the solar system in as much as that I just think, I think we already know quite a lot about Mars. And, but yes, listen, if it happened, that would be, I hope I see it in my lifetime. I really hope I see it in my lifetime. So it would be a wonderful thing. Without giving anything away, but the series begins with the use of quantum computers. The new series does, begins with the use of quantum computers to simulate basic living organisms, or actually I don't know if it's quantum computers are used, but basic living organisms are simulated on a screen. It's a really cool kind of demo. Yeah, that's right. They're using, yes, they are using a quantum computer to simulate a nematode, yeah. So returning to our discussion of simulation, or thinking of the universe as a computer, do you think the universe is deterministic? Is there a free will? So with the qualification of what do I know? Cause I'm a layman, right? Lay person. But with a big imagination. Thanks. With that qualification, yup, I think the universe is deterministic and I see absolutely, I cannot see how free will fits into that. So yes, deterministic, no free will. That would be my position. And how does that make you feel? It partly makes me feel that it's exactly in keeping with the way these things tend to work out, which is that we have an incredibly strong sense that we do have free will. And just as we have an incredibly strong sense that time is a constant, and turns out probably not to be the case. So we're definitely in the case of time, but the problem I always have with free will is that it gets, I can never seem to find the place where it is supposed to reside. And yet you explore. Just a bit of very, very, but we have something we can call free will, but it's not the thing that we think it is. But free will, so do you, what we call free will is just. What we call it is the illusion of it. And that's a subjective experience of the illusion. Which is a useful thing to have. And it partly comes down to, although we live in a deterministic universe, our brains are not very well equipped to fully determine the deterministic universe. So we're constantly surprised and feel like we're making snap decisions based on imperfect information. So that feels a lot like free will. It just isn't. Would be my, that's my guess. So in that sense, your sort of sense is that you can unroll the universe forward or backward and you will see the same thing. And you would, I mean, that notion. Yeah, sort of, sort of. But yeah, sorry, go ahead. I mean, that notion is a bit uncomfortable to think about. That it's, you can roll it back. And forward and. Well, if you were able to do it, it would certainly have to be a quantum computer. Something that worked in a quantum mechanical way in order to understand a quantum mechanical system, I guess. And so that unrolling, there might be a multiverse thing where there's a bunch of branching. 
Well, exactly. Because it wouldn't follow that every time you roll it back or forward, you'd get exactly the same result. Which is another thing that's hard to wrap your mind around. So yeah, but that, yes. But essentially what you just described, that. The yes forwards and yes backwards, but you might get a slightly different result or a very different result. Or very different. Along the same lines, you've explored some really deep scientific ideas in this new series. And I mean, just in general, you're unafraid to ground yourself in some of the most amazing scientific ideas of our time. What are the things you've learned or ideas you find beautiful and mysterious about quantum mechanics, multiverse, string theory, quantum computing that you've learned? Well, I would have to say every single thing I've learned is beautiful. And one of the motivators for me is that I think that people tend not to see scientific thinking as being essentially poetic and lyrical. But I think that is literally exactly what it is. And I think the idea of entanglement or the idea of superpositions, or the fact that you could even demonstrate a superposition or have a machine that relies on the existence of superpositions in order to function, to me is almost indescribably beautiful. It fills me with awe. It fills me with awe. And also it's not just a sort of grand, massive awe of, but it's also delicate. It's very, very delicate and subtle. And it has these beautiful sort of nuances in it. And also these completely paradigm changing thoughts and truths. So it's as good as it gets as far as I can tell. So broadly everything. That doesn't mean I believe everything I read in quantum physics. Because obviously a lot of the interpretations are completely in conflict with each other. And who knows whether string theory will turn out to be a good description or not. But the beauty in it, it seems undeniable. And I do wish people more readily understood how beautiful and poetic science is, I would say. Science is poetry. In terms of quantum computing being used to simulate things or just in general, the idea of simulating, simulating small parts of our world, which actually current physicists are really excited about simulating small quantum mechanical systems on quantum computers. But scaling that up to something bigger, like simulating life forms. How do you think, what are the possible trajectories of that going wrong or going right if you unroll that into the future? Well, if a bit like Ava and her robotics, you park the sheer complexity of what you're trying to do. The issues are, I think it will have a profound, if you were able to have a machine that was able to project forwards and backwards accurately, it would in an empirical way show, it would demonstrate that you don't have free will. So the first thing that would happen is people would have to really take on a very, very different idea of what they were. The thing that they truly, truly believe they are, they are not. And so that I suspect would be very, very disturbing to a lot of people. Do you think that has a positive or negative effect on society, the realization that you are not, you cannot control your actions essentially, I guess is the way that could be interpreted? Yeah, although in some ways we instinctively understand that already because in the example I gave you of the kid in the stabbing, we would all understand that that kid was not really fully in control of their actions. So it's not an idea that's entirely alien to us, but. 
I don't know if we understand that. I think there's a bunch of people who see the world that way, but not everybody. Yes, true, of course true. But what this machine would do is prove it beyond any doubt because someone would say, well, I don't believe that's true. And then you'd predict, well, in 10 seconds, you're gonna do this. And they'd say, no, no, I'm not. And then they'd do it. And then determinism would have played its part. But I, or something like that. But actually the exact terms of that thought experiment probably wouldn't play out, but still broadly speaking, you could predict something happening in another room, sort of unseen, I suppose, that foreknowledge would not allow you to affect. So what effect would that have? I think people would find it very disturbing, but then after they'd got over their sense of being disturbed, which by the way, I don't even think you need a machine to take this idea on board. But after they've got over that, they'd still understand that even though I have no free will and my actions are in effect already determined, I still feel things. I still care about stuff. I remember my daughter saying to me, she'd got hold of the idea that my view of the universe made it meaningless. And she said, well, then it's meaningless. And I said, well, I can prove it's not meaningless because you mean something to me and I mean something to you. So it's not completely meaningless because there is a bit of meaning contained within this space. And so with a lack of free will space, you could think, well, this robs me of everything I am. And then you'd say, well, no, it doesn't because you still like eating cheeseburgers and you still like going to see the movies. And so how big a difference does it really make? But I think initially people would find it very disturbing. I think that what would come, if you could really unlock with a determinism machine, everything, there'd be this wonderful wisdom that would come from it. And I'd rather have that than not. So that's a really good example of a technology revealing to us humans something fundamental about our world, about our society. So it's almost this creation is helping us understand ourselves. And the same could be said about artificial intelligence. So what do you think us creating something like Ava will help us understand about ourselves? How will that change society? Well, I would hope it would teach us some humility. Humans are very big on exceptionalism. America is constantly proclaiming itself to be the greatest nation on earth, which it may feel like that if you're an American, but it may not feel like that if you're from Finland, because there's all sorts of things you dearly love about Finland. And exceptionalism is usually bullshit. Probably not always. If we both sat here, we could find a good example of something that isn't, but as a rule of thumb. And what it would do is it would teach us some humility about, actually often that's what science does in a funny way. It makes us more and more interesting, but it makes us a smaller and smaller part of the thing that's interesting. And I don't mind that humility at all. I don't think it's a bad thing. Our excesses don't tend to come from humility. Our excesses come from the opposite, megalomania and stuff. We tend to think of consciousness as having some form of exceptionalism attached to it. I suspect if we ever unravel it, it will turn out to be less than we thought in a way. 
And perhaps your very own exceptionalist assertion earlier on in our conversation that consciousness is something that belongs to us humans, or not just humans, but living organisms, maybe you will one day find out that consciousness is in everything. And that will humble you. If that was true, it would certainly humble me, although maybe, almost maybe, I don't know. I don't know what effect that would have. My understanding of that principle is along the lines of, say, that an electron has a preferred state, or it may or may not pass through a bit of glass. It may reflect off, or it may go through, or something like that. And so that feels as if a choice has been made. But if I'm going down the fully deterministic route, I would say there's just an underlying determinism that has defined that, that has defined the preferred state, or the reflection or non reflection. But look, yeah, you're right. If it turned out that there was a thing that it was like to be the sun, then I'd be amazed and humbled, and I'd be happy to be both, that sounds pretty cool. And you'll say the same thing as you said to your daughter, but it nevertheless feels like something to be me, and that's pretty damn good. So Kubrick created many masterpieces, including The Shining, Dr. Strangelove, A Clockwork Orange. But to me, he will be remembered, I think, by many 100 years from now for 2001: A Space Odyssey. I would say that's his greatest film. I agree. And you are incredibly humble. I listened to a bunch of your interviews, and I really appreciate that you're humble in your creative efforts and your work. But if I were to force you at gunpoint. Do you have a gun? You don't know that, the mystery. It's to imagine 100 years out into the future. What will Alex Garland be remembered for, from something you've created already, or feel somewhere deep inside you may still create? Well, okay, well, I'll take the question in the spirit it was asked, but very generous. Gunpoint. Yeah. What I try to do, so therefore what I hope, yeah, if I'm remembered, what I might be remembered for, is as someone who participates in a conversation. And I think that often what happens is people don't participate in conversations, they make proclamations, they make statements, and people can either react against the statement or can fall in line behind it. And I don't like that. So I want to be part of a conversation. I take as a sort of basic principle, I think I take lots of my cues from science, but one of the best ones, it seems to me, is that when a scientist has something proved wrong that they previously believed in, they then have to abandon that position. So I'd like to be someone who is allied to that sort of thinking. So part of an exchange of ideas. And the exchange of ideas for me is something like, people in your world show me things about how the world works. And then I say, this is how I feel about what you've told me. And then other people can react to that. And it's not to say this is how the world is. It's just to say, it is interesting to think about the world in this way. And the conversation is one of the things I'm really hopeful about in your works. The conversation you're having is with the viewer, in the sense that you're bringing back, you and several others, but you very much so, a sort of intellectual depth to cinema, to now series, sort of allowing film to be something that, yeah, sparks a conversation, is a conversation, lets people think, allows them to think. 
But also, it's very important for me that if that conversation is gonna be a good conversation, what that must involve is that someone like you who understands AI, and I imagine understands a lot about quantum mechanics, if they then watch the narrative, feels, yes, this is a fair account. So it is a worthy addition to the conversation. That for me is hugely important. I'm not interested in getting that stuff wrong. I'm only interested in trying to get it right. Alex, it was truly an honor to talk to you. I really appreciate it. I really enjoyed it. Thank you so much. Thank you. Thanks, man. Thanks for listening to this conversation with Alex Garland, and thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast, you'll get $10, and $10 will go to FIRST, an organization that inspires and educates young minds to become science and technology innovators of tomorrow. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter, at Lex Fridman. And now, let me leave you with a question from Ava, the central artificial intelligence character in the movie Ex Machina, that she asked during her Turing test. What will happen to me if I fail your test? Thank you for listening, and hope to see you next time.
Alex Garland: Ex Machina, Devs, Annihilation, and the Poetry of Science | Lex Fridman Podcast #77
The following is a conversation with Ann Druyan, writer, producer, director, and one of the most important and impactful communicators of science in our time. She co-wrote the 1980 science documentary series Cosmos hosted by Carl Sagan, whom she married in 1981 and her love for whom, with the help of NASA, was recorded as brainwaves on a golden record along with other things our civilization has to offer and launched into space on the Voyager 1 and Voyager 2 spacecraft that are now, 42 years later, still active, reaching out farther into deep space than any human made object ever has. This was a profound and beautiful decision Ann made as the creative director of NASA's Voyager Interstellar Message Project. In 2014 she went on to create the second season of Cosmos, called Cosmos: A Spacetime Odyssey, and in 2020 the new third season, called Cosmos: Possible Worlds, which is being released this upcoming Monday, March 9th. It is hosted, once again, by the fun and brilliant Neil deGrasse Tyson. Carl Sagan, Ann Druyan, and Cosmos have inspired millions of scientists and curious minds across several generations by revealing the magic, the power, the beauty of science. I am one such curious mind, and if you listened to this podcast, you may know that Elon Musk is as well. He graciously agreed to read Carl Sagan's words about the pale blue dot in my second conversation with him. If you listened, there was an interesting and inspiring twist at the end. This is the Artificial Intelligence Podcast, if you enjoy it, subscribe on YouTube, give it 5 stars on Apple Podcast, support it on Patreon, or connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEX PODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Since Cash App allows you to send and receive money digitally, peer to peer, and security in all digital transactions is very important, let me mention the PCI data security standard that Cash App is compliant with. I'm a big fan of standards for safety and security. PCI DSS is a good example of that, where a bunch of competitors got together and agreed that there needs to be a global standard around the security of transactions. Now we just need to do the same for autonomous vehicles and artificial intelligence systems in general. So again, if you get Cash App from the App Store or Google Play, and use the code LEX PODCAST, you get $10, and Cash App will also donate $10 to FIRST, one of my favorite organizations that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Ann Druyan. What is the role of science in our society? Well, I think of what Einstein said when he opened the 1939 New York World's Fair. He said, if science is ever to fulfill its mission the way art has done, it must penetrate. Its inner meaning must penetrate the consciousness of everyone. And so for me, especially in a civilization dependent on high technology and science, one that aspires to be democratic, it's critical that the public, as informed decision makers, understand the values and the methods and the rules of science. 
So you think about what you just mentioned, the values and the methods and the rules and maybe the technology that science produces, but what about sort of the beauty, the mystery of science? Well, you've touched on what I think is for me, that's how my way into science is that for me, it's much more spiritually uplifting. The revelations of science, the collective revelations of really countless generations of searchers and the little tiny bit we know about reality is the greatest joy for me because I think that it relates to the idea of love. What is love that is based on illusion about the other? That's not love. Love is seeing, unflinching the other and accepting with all your heart. And to me, knowing the universe as it is, or the little bit that we're able to understand at this point is the purest kind of love. And therefore, you know, how can our philosophy, our religion, if it's rootless in nature, how can it really be true? I just don't understand. So I think you need science to get a sense of the real romance of life and the great experience of being awake in the cosmos. So the fact that we know so little, the humbling nature of that, and you kind of connect love to that, but isn't it also, isn't it scary? Why is it so inspiring, do you think? Why is it so beautiful that we know so little? Well, first of all, as Socrates thought, you know, knowing that you know little is knowing, really knowing something, knowing more than others. And it's that voice whispering in our heads, you know, you might be wrong, which I think is not only it's really healthy because we're so imperfect, we're human, of course, but also, you know, love to me is the feeling that you always want to go deeper, get closer. You can't get enough of it. You can't get close enough, deep enough. So and that's what science is always saying is science is never simply content with its understanding of any aspect of nature. It's always saying it's always finding that even smaller cosmos beneath. So I think the two are very much parallel. So you said that love is not an illusion. No, it's not. What is love? What is love is, is knowing, for me, love is, is knowing something deeply and still being completely gratified by it, you know, and wanting to know more. So what is love? What is loving someone, a person, let's say deeply is not idealizing them, not putting some kind of subjective projection on them, but knowing them as they are. And so for me, for me, the only aperture to that knowing about nature, the universe is science because it has that error correcting mechanism that most of the stuff that we do doesn't have. You know, you could say the Bill of Rights is kind of an error correcting mechanism, which is one of the things I really appreciate about the society in which I live to the extent that it's upheld and we keep faith with it and the same with science. It's like we will give you the highest rewards we have for proving us wrong about something. It's genius. That's why, that's why in only 400 years since Galileo's first look through a telescope, we could get from this really dim, vague, this vague apprehension of another world to sending our eyes and our senses there or even to going beyond. So it is, it is, it delivers the goods like nothing else, you know, it really, it delivers the goods because it's always, it's always self aware of its fallibility. So on that topic, I'd like to ask your opinion and a feeling I have that I'm not sure what to do with, which is the, the skeptical aspect of science. 
So the modern skeptics community and just in general, certain scientists, many scientists, maybe most scientists that apply the scientific method are kind of rigorous in that application. And it feels like they sometimes miss out on some of the ideas just slightly outside of the reach of science. And they don't dare to sort of dream or think of revolutionary ideas that others will call crazy in this particular moment. So how do you think about the skeptical aspect of science that is really good at sort of keeping us in check, keeping us humble, but at the same time, sort of the kind of dreams that you and Carl Sagan have inspired in the world, it kind of shuts it down sometimes a little bit. Yeah. I mean, I think it's up to the individual, but for me, you know, I was so ridiculously fortunate in that my tutorial in science, because I'm not a scientist and I wasn't trained in science, was 20 years of days and nights with Carl Sagan. And the wonder, I think the reason Carl remains so beloved, well, I think there are many reasons, but at the root of it is the fact that his skepticism was never at the cost of his wonder and his wonder was never at the cost of his skepticism. So he couldn't fool himself into believing something he wanted to believe because it made him feel good. But on the other hand, he recognized that what science, what nature is, it's really, it's good enough, you know, it's way better than our fantasies. And so if you're that kind of person who loves happiness, loves life, and your eyes are wide open and you read everything you can get your hands on and you spend years studying what is known so far about the universe, then you have that capacity, a really infinite capacity to be alive, and also at the same time to be very rigorous about what you're willing to believe. For Carl, I don't think he ever felt that his skepticism cost him anything because again, it comes back to love. He wanted to know what nature really was like, not to inflict his, you know, preconceived notions on what he wanted it to be. So you can't go wrong because, you know, I mean, I think the pale blue dot is a perfect example of this, of his massive achievement, is to say, okay, or the Voyager record is another example, is here we have this mission, our first reconnaissance of the outer solar system. Well, how can we make it a mission in which we absolutely squeeze every drop of consciousness and understanding from it? We don't have to be scientists and then be human beings. I think that's the tragedy of Western civilization, is that, you know, one of its greatest gifts has been science and yet at the same time, it's believing that we are the children of a disappointed father, a tyrant who puts us in a maximum security prison and calls it paradise, who looks at us, who watches us every moment and hates us for being our human selves, you know? And then most of all, what is our great sin? It's partaking of the tree of knowledge, which is our greatest gift as humans. This pattern recognition, this ability to see things and then synthesize them and jump to conclusions about them and test those conclusions. So I think the reason that in literature, in movies, the scientist is a figure of alienation, a figure, you know, oh, you see these biopics about scientists and yeah, he might've been great, but you know, he was missing something. You know, he was a lousy husband. 
He lacked, you know, the kind of spiritual understanding that maybe, you know, his wife had and it's always in the end and they come around, but to me, that's a false dichotomy that we are, you know, to the extent that we are aware of our surroundings and understand them, which is what science makes it possible for us to do, we're even more alive. So you mentioned a million awesome things there, let's even just, can you tell me about the Voyager one and two spacecraft and the interstellar message project and that whole just fascinating world leading up to. One of my favorite subjects, I love talking about it. I'll never get over it. I'll never be able to really wrap my head around the reality of it, the truth of it. What is it first of all? What's the Voyager spacecraft? Okay, so Voyagers one and two were our first reconnaissance mission of what was then considered the outer solar system and it was a gift of gravity. The idea that swinging around these worlds gives you a gravitational assist, which ultimately will send you out of the solar system to wander the Milky Way galaxy for one to five billion years. So Voyager gave us our first close up look of Jupiter, Saturn, Uranus, Neptune. It discovered new moons. It discovered volcanoes on Io. Its achievements are astonishing. And remember, this is technology from the early to mid 1970s. And it's still active. And it's still active. We talked to Voyager a few days ago. We talked to it, in fact, a year ago, I think it was. We needed to slightly change the attitude of the spacecraft. And so we fired up its thrusters for the first time since 1987. Did they work? Instantly. It was as if you had left your car in the garage in 1987. And you put the key in the ignition because you use keys then in the ignition and it turned over the first time you stepped on the gas. And so that's the genius of the engineering of Voyager. And Carl was one of the key participants in imagining what its mission would be because it was a gift actually of the fact that every 175 years, plus or minus, there is an alignment of the worlds. And so you could send two spacecraft to these other worlds and photograph them and use your mass spectrometer and all the other devices on Voyager to really explore these worlds. And it's the farthest spacecraft, it's the farthest human creation away from us today. Voyager 1. Voyager 1. These two spacecraft not only gave us our first close up look at hundreds of moons and planets, these four giant planets, but also it told us the shape of the solar system as it moves through the galaxy because there were two of them going in different directions and they finally, and they arrived at a place called the heliopause, which is where the wind from the sun, the solar wind dies down and the interstellar medium begins. And both Voyagers were the first spacecraft that we had that could tell us when that happened. So it's a consummate, I think it's the greatest scientific achievement of the 20th century. And engineering in some sense. Engineering, I mean really, you know, Voyager is doing this on less energy than you have in your toaster, something like 11 Watts. So okay, but because of this gravitational assist, both Voyagers were destined, as I say, to, first of all, they were supposed to function for a dozen years and now it's 42 years since launch and we're still talking to them. So that's amazing. 
But prior to launch, almost a year, eight, nine months prior to launch, it was decided that since Frank Drake and Carl Sagan and Linda Salzman Sagan had created something called the Pioneer 10 Plaque for the Pioneer spacecraft that preceded Voyager, which was kind of like a license plate for the planet Earth, you know, man and a woman, hands up, you know, very, very basic, but very effective. And it captured the imagination of people all over the world. And so NASA turned to Frank and to Carl and said, we'd like you to do a message for Voyager because if it's going to be circumnavigating the Milky Way galaxy for one to five billion years, you know, it's like 20 trips around the galaxy. And there's a very small chance that a space faring civilization would be able to flag one of them down. And so on board, you see this exquisite golden disc with scientific hieroglyphics explaining our address and various basic scientific concepts that we believed would be common to any space faring civilization. And then beneath this exquisite golden disc is the Voyager record, the golden record. And it contains something like 118 photographs, images of life on Earth, as well as 27 pieces of music from all around the world. Many people describe it as the invention of world music. World music was not a concept that existed before the Voyager record. And we were determined to take our music, not just from the dominant technical cultures, but from all of the rich cultural heritage of the Earth. And there's a sound essay, which is a kind of using a microphone as a camera to tell the story of the Earth, beginning with its geological sounds and moving into biology and then into technology. And I think what you were getting at is that at the end of this sound essay, I had asked Carl, in the making of the record, it was my honor to be the creative director of the project, if it was possible, if I meditated for an hour while I was hooked up so that every single signal that was coming from my brain, my body, was recorded and then converted into sound for the record, was it possible that these putative extraterrestrials of the distant future, of perhaps a billion years from now, would be able to reconstitute this message and to understand it? And he just, big smile, you know, and just said, well, hey, a billion years is a long time. It's a long time. Go do it. And so I did this. And what were you thinking about in the meditation? Like what, I mean, it's such an interesting idea of recording as you think about things. What were you thinking about? So I was blindfolded and couldn't hear anything. And I had made a mental itinerary of exactly where I wanted to go. I was truly humbled by the idea that these thoughts could conceivably touch the distant future. Yeah, that's incredible. So in 1977, there are some 60,000 nuclear weapons on the planet. The Soviet Union and the United States are engaged in a, you know, to the death competition. And so I began by trying to tell the history of the planet in, you know, to my limited ability what I understood about the story of the early existence of the planet, about the origin of life, about the evolution of life, about the history of humans, about our current, at that time, predicament, about the fact that one in five of us was starving or unable to get potable water. And so I sort of gave a kind of, you know, as general a picture as I possibly could of our predicament. 
And I also was very newly within days of the moment when Carl and I fell in love with each other. We had fallen in love with each other long before because we'd known each other for years, but it was the first time that we had expressed our feelings for each other. Acknowledged it, the existence of this love. Yes, because we were both involved with other people and it was a completely outside his morality and mine to even broach the subject. But it was only days after that it happened. And for me, it was a eureka moment. It was in the context of finding that piece of Chinese music that was worthy to represent one of the oldest musical traditions on earth when those of us who worked on the Voyager record were completely ignorant about Chinese music. And so that had been a constant challenge for me, talking to professors of Chinese music, listening to musicologists everywhere and all through the project, desperately trying to find this one piece. Found the piece, lived on the Upper West Side, found the piece, a professor at Columbia University gave it to me. And of all the people I talked to, everyone had said, that's hopeless. You can't do that. There can't be one piece of Chinese music. But he was completely, no problem, I've got it. And so he told me the story of the piece, which only made it an even greater candidate for the record. And I listened to it, called Carl Sagan, who was in Tucson, Arizona addressing the American Society of Newspaper Editors. And I left him a message, hotel message center. And he called me back an hour later. And I heard this beautiful voice say, I get back to my hotel room, and I find this message that Annie called. And I asked myself, why didn't you leave me this message 10 years ago? My heart was beating out of my chest. I, it was for me a kind of eureka moment, a scientific breakthrough, a truth, a great truth had suddenly been revealed. And of course, I was awkward and didn't really know what to say. And so I blurted something out like, oh, I've been meaning to talk to you about that, Carl, which wasn't really true. I never would have talked to him about it. We had been alone countless times. We humans are so awkward in these moments and these amazing moments. And I just said, for keeps. And he thought for a very brief, like a second and said, you mean get married? And I said, yeah. And he said, yeah. And we put down the phone. And I literally was jumping around my apartment like a lunatic, because it was so obvious, you know, it was something like, of course. And then the phone rang again. And I thought, damn, no, he's going to say, I don't know what I was saying. I am married. I have a kid. I'm not going to do this, you know? But he was like, I just want to make sure that that really happened. And I said, yeah. And he said, we're getting married. And I said, yeah, we're getting married. Now this was June 1st, 1977. The records had not been affixed to the spacecraft yet. And there had been a lot of controversy about what we were doing. I should say that among the 118 pictures was an image of a man and a woman, frontally, completely naked. And there was, I believe, a congressman on the floor that said, NASA to send smut to the stars, you know? And so NASA really, they got very upset and they said, you can't send a picture. And we had done it so that it was so brilliant. It was like this lovely couple, completely naked. And then the next image was a kind of overlay schematic to show the fetus inside this woman that was developing. 
And then that went off into, you know, additional imagery of human reproduction. And it really hit me that how much we hate ourselves, that we couldn't bear to be seen as we are. So in some sense that congressman also represents our society. Perhaps his opposition should have been included as well. Yes. Well, that was one of the most vigorous debates during the making of the record with the, you know, the five, six people that we collaborated with was, do we show, do we only put our best foot forward? Or do we show Hiroshima, Auschwitz, the Congo, what we have done? What do you think represents humanity? If you kind of, if you think about it, do our darker moments, are they essential for humanity? All the wars we've been through, all the tortures and the suffering and the cruelty. Is that essential for happiness, for beauty, for creation, generally speaking? Well, certainly not essential for happiness or beauty, that's for sure. I mean, it's part of who we are, if we're going to be real about it, which is, you know, I think we tell on ourselves, even if we don't want to be real, we, you know, I think that if you're a spacefaring civilization, and you've gotten it together sufficiently, that you can move from world to world, then I think they probably took one look at this derelict spacecraft and they knew that these were people in their technological adolescence, and they were just setting forth, and they must have had these issues, you know, because it's, and so it really, you know, that's the great thing about lying is that a lie only has a shelf life. It's like, if like a great work of art that's a forgery, people can be fooled immediately, but 10 or 15 years, 20 years later, they start to look at it and, you know, they begin to realize the lens, our lens of our present is coloring everything that we see. So you know, I think it didn't matter that we didn't show our atrocities. They would fill in the blanks. They would fill in the blanks. So let me sort of ask, you've mentioned how unlikely it is that you and Carl did two souls like yours would meet in this vast world. What are your views on how and why incredibly unlikely things like these nevertheless do happen? It's purely to me, chance. It's totally random. It's a just, I mean, but, and the fact is, is that some people are, and it's happening every day right now. Some people are the random casualties of chance and that, and I don't just mean the people who are being, you know, destroyed in childhood, in wartime, I'm also, or the people who starved to death because of famine, but also the people who, you know, who are not living to the fullest, all of these things. And I think there's a, my parents met on the subway in rush hour. And so I'm only here with you because of the most random possible situation. And so I've had this, a sense of this, even before I knew Carl, I always felt this way that I only existed because of the generosity of the rush hour, no, of just all of the things, all of the skeins of causality. It's interesting because, you know, the rush hour is a source of stress for a lot of people, but clearly in its moments, it can also be a source of something beautiful. That's right. Of strangers meeting and so on. So everything, everything is, has a possibility of doing something beautiful. So let me ask sort of a quick tangent on the Voyager, this, this beautiful romantic notion that Voyager One is sort of our farthest human reach into space. 
If you think of what, I don't know if you've seen, but what Elon Musk did with putting the Roadster, letting it fly out into space, there's a sort of humor to it. I think that's also kind of interesting, but maybe you can comment on that. But in general, if now that we are developing what we were venturing out into space again in a more serious way, what kind of stuff that represent since Voyager was launched, should we send out as a followup? Is there things that you think that's developed in the next, in the 40 years after that we should update the spacefaring aliens? Well, of course now we could send the worldwide, we could send everything that's on the worldwide web. We could send, I mean, you know, that was a time when we're talking about photograph records and transistor radios and, you know, so we tried to be, to take advantage of the existing technology to the fullest extent, you know, the computer that was hooked up to me from my brainwaves and my heart sounds while I was meditating was, you know, the size of a gigantic room. And I'm sure it's not that, it didn't have the power of a phone, as the phone has now. So you know, now we could just, I think we could let it all hang out and just like send, you know, every week. I mean, that's the wonder, like I would send, you know, Wikipedia or something and not be a gatekeeper, but show who we are. You were also, it's interesting because one of the problems of the internet of having so much information is it's actually the curation, the human curation is still the powerful, beautiful thing. Yes. So what you did with the record is actually, is exactly the right process. It's kind of boiling down a massive amount of possibilities of what you could send into something that represents, you know, the better angels of our nature or represents our humanity. So if you think about, you know, what would you send from the internet as opposed to sending all of Wikipedia, for example, all human knowledge, is there something just new that we've developed, you think, or fundamentally we're still the same kind of human species? I think fundamentally we're the same, but we have advanced to an astonishing degree in our capacity for data retrieval and for transmission. And so, you know, I would send YouTube, I would send, you know, really like think of all the, you know, I still feel so lucky that there's any great musical artist of the last hundred years who I revere, I can just find them and watch them and listen to them. And you know, that's fantastic. I also love how democratic it is that we each become curators and that we each decide those things. Now, I may not agree with, you know, the choices that everyone makes, but of course not because that's not the point. The point is, is that we are, you know, we have discovered largely through the internet that we are an intercommunicating organism and that can only be good. So you could also send now, Cosmos. Yes, I'd love to. I would be proud to. I mean, you've spoken about a very specific voice that Cosmos had in that it reveals the magic of science. I think you said shamanic journey of it and not the details of the latest breakthroughs or so on. Just revealing the magic. Can you try to describe what this voice of Cosmos is with the follow up and the new Cosmos that you're working on now? Yes, well, the dream of Cosmos is really like Einstein's quote, you know, it's the idea of the awesome power of science to be in absolutely everyone's hands. You know, it belongs to all of us. 
It's not the preserve of a priesthood. It's just that the community of science is becoming more diverse and less exclusive than it was guilty of being in the not so recent past. The discoveries of science, our understanding of the Cosmos that we live in has really grown by leaps and bounds, and probably we've learned more in the last hundred years about it. You know, the tempo of discovery has picked up so rapidly. And so the idea of Cosmos from the 1970s, when Carl and I and Steven Soter, another astronomer, first imagined it, was that interweaving not only of the scientific concepts and revelations, using, you know, cinematic VFX to take the viewer on this transporting, uplifting journey, but also the stories of the searchers. Because the more I have learned about, you know, the process of science through my life with Carl and since, the more I am really persuaded that it's that adherence to the facts, to that little approximation, that little bit of reality that we've been able to get our hands around, is something that we desperately need, and it doesn't matter if you are a scientist. In fact, the people, it matters even more if you're not. And since, you know, the level of science teaching has been fairly or unfairly maligned, and the idea that once there was such a thing as a television network, which of course has now evolved into many other things, the idea that you could in the most democratic way make accessible to absolutely everyone and most especially people who don't even realize that they have an interest in a subject or who feel so intimidated by the jargon of science and its kind of exclusive history. The idea that we could do this and, you know, in season two of Cosmos, A Spacetime Odyssey, we were in 181 countries in the space of two weeks. It was the largest rollout in television history, which is really amazing for a science based program. By the way, just to clarify, the series was rolled out, so it was shown in that many countries. You said we were in. Well, our show was in 180 countries. Yeah, the show, which is incredible. I mean, the hundreds of millions, whatever that number is, the people that watched it, it's just, it's crazy. It's so crazy that, for instance, my son had a cerebral hemorrhage a year ago, and the doctor who saved his life in a very dangerous situation, when he realized that, you know, that Sam and I were who we were, he said, that's why I'm here. You know, he said, if you come of age in a poor country like Colombia and Carl Sagan calls you to science when you're a child, then, you know, you go to medicine because that's the only avenue open to you, but that's why I'm here. And I've heard that story and I hear that story, I think every week. How does that make you feel? I mean, the number of scientists, I mean, a lot of it is quiet, right? But the number of scientists Cosmos has created is just countless. I mean, it probably touched a lot of, I don't know, probably it could be a crazy number of the 90% of scientists or something that have been. I would love to do that census because I, because that's the greatest gratification, because that's the dream of science. That's the whole idea is that if it belongs to all of us and not just a tiny few, then we have some chance of determining how it's used. And if it's only in the hands of people whose only interests are the balance sheet or hegemony over other nations or things like that, then it'll probably end up being a gun aimed at our heads. 
But if it's distributed in the widest possible way, a capability that we now have because of our technology, then the chance is that it will be used with wisdom. That's the dream of it. So that's why we did the first Cosmos. We wanted to take not just, as I say, the scientific information, but also tell the stories of these searchers. Because for us, and for me, carrying on this series in the second and third seasons, the primary interest was that we wouldn't tell a story unless it was a kind of a threefer. It was not just a way to understand a new scientific idea, but it was also a way to understand what, if it matters what's true, how the world can change for us and how we can be protected. And if it doesn't matter what's true, then we're in grave danger because we have the capability to not only destroy ourselves and our civilization, but to take so many species with us. And I'd like to talk to you about that particular, sort of the dangers of ourselves in a little bit, but sort of to linger on Cosmos. Maybe for the first, the 1980 and the 2014 follow up, what's a, or one of the, or several memorable moments from the creation of either of those seasons? Well, you know, the critical thing really was the fact that Seth MacFarlane became our champion, because I had been, with three colleagues, I had been schlepping around from network to network with a treatment for Cosmos and every network said they wanted to do it, but they wouldn't give me creative control and they wouldn't give me enough money to make it cinematic and to make it feel like you're really going on an adventure. And so I think both of those things, sorry to interrupt, both of those things are, given what Cosmos represents, the legacy of it and the legacy of Carl Sagan, essential, control especially in the modern world. It's wonderful that you sought control, that you did not really push it. And I kept saying no. And my partners, I'm sure, you know, they would look at me like I was nuts, you know, and they probably must have entertained the idea that maybe I didn't really want to do it, you know, because I was afraid or something, but I kept saying no. And it wasn't until I met Seth MacFarlane and he took me to Fox and to Peter Rice and said, you know, I'll pay for half the pilot if I have to, you know, and Peter Rice was like, put your money away. And in every time since, in the 10 years since, at every turn, when we needed Seth to intervene on our behalf, he stood up and he did it. And so that was like, in a way, that is the watershed for me of everything that followed since. And I was so lucky because, you know, Steven Soter and I had written the original Cosmos with Carl and collaborated on the treatment for season two. And then Brannon Braga came into our project at the perfect moment and has proven to be just, really, I have been so lucky my whole life. I've collaborated, I've been lucky with the people, my collaborators have been extraordinary. And so that was a critical thing. But also to have, you know, for instance, our astonishing VFX supervisor who comes from the movies, who heads the global association of VFX people, Jeff Okun. And then, you know, I could rattle off 10 more names, I'd be happy to do that. And it was that collaboration. So the people were essential to the creation of... Absolutely. I mean, when it came down, I have to say that when it came down to the vision of what the series would be, that was me sitting in my home, looking out the window and, you know, really imagining like what I wanted to do. 
Can you pause on that for a second? Like what's that process? Because, you know, Cosmos is also, it's grounded in science, of course, but it's also incredibly imaginative and the words used are carefully crafted. Thank you. So if you can talk about the process of that, the big picture, imaginative thinking, and sort of the rigorous crafting of words that basically turns into something like poetry. Thank you so much. For me, these are rare occasions for human self esteem. The scientists that we bring to life in Cosmos are people, in my view, who have everything we need to see us through this current crisis. Very often they're poor, they're female, they're outsiders who are not expected to have gifts that are so prodigious, but they persevere. And so you have someone like Michael Faraday, who comes from a dysfunctional family of like 14 people and, you know, never goes to university, never learns the math. But, you know, there's Einstein years later looking up at a picture of Faraday to inspire him. So it's, you know, if we had people with that kind of humility and unselfishness who didn't want to patent everything, as, you know, Michael Faraday created the wealth of the 20th century with his various inventions. And yet he never took out a single patent at a time when people were patenting everything because that was not what he was about. And to me, that's a kind of almost a saintliness that says that, you know, here's a man who finds in his life this tremendous gratification from searching. And it's just so impressive to me. And there are so many other people in Cosmos, especially the new season of Cosmos, which is called Possible Worlds. Possible, beautiful title. Possible Worlds, well, I stole it from an author and a scientist from the 1940s. But it, for me, encapsulates not just, you know, the exoplanets that we've begun to discover, not just, you know, the worlds that we might visit, but also the world that this could be, a hopeful vision of the future. You asked me what is common to all three seasons of Cosmos or what is that voice? It's a voice of hope. It's a voice that says there is a future that we can still have, which we bring to life in, I think, a fairly dazzling fashion, you know? And in sitting down to imagine what this season would be, the new season would be, I'm sitting where I live in Ithaca, beautiful, just gorgeous place, trees everywhere, waterfalls, I'm sitting there thinking, well, you know, you can't, how do you, how do you awaken people? I mean, you can't yell at them and say we're all going to die, you know? It doesn't help. It doesn't help. But I think if you give them a vision of the future that's not pie in the sky, but something, ways in which science can be redemptive, can actually remediate our future. We have those capabilities right now, as well as the capabilities to do things in the Cosmos that we could be doing right now, but we're not doing them. Not because we don't know how, you know, with the engineering or the material sciences or the physics, we know all we need to know, but we're a little bit paralyzed in some sense. And you know, we're like, I always think we're like the toddler, you know, like we, we left our mother's legs, you know, and scurried out to the moon. And we had a moment of, wow, we can do this. 
And then we realized, and somehow we had a failure of nerve and we went scurrying back to our mother and, you know, did things that really weren't going to get us out there, like the space shuttle, things like that, because it was a kind of failure of nerve. So Cosmos is about overcoming those fears. We're now as a civilization, ready to be a teenager venturing out into college. We're returning back. Exactly. Exactly. And that's one of my theories about our current situation is that this is our adolescence. And I was a total mess as an adolescent. I was reckless, irresponsible, totally. I didn't, I was inconsiderate. I, the reality of other people's feelings and the future didn't exist for me. So why should a technologically adolescent civilization be any different? But you know, the vast majority of people I know made it through that period and went on to be more wise. And that's what my hope is for our civilization. On a sort of a darker and more difficult subject in terms of, so you just talked about the Cosmos being an inspiration for science and for us growing out of our messy adolescence, but nevertheless, there is threats in this world. So do you worry about existential threats? Like you mentioned nuclear weapons, do you worry about nuclear war? Yes. And if you could also maybe comment, I don't know how much you've thought about it, but there's folks like Elon Musk who are worried about the existential threats of artificial intelligence. Sort of our robotic computer creations sort of resulting in us humans losing control. So can you speak to the things that worry you in terms of existential concerns? All of the above. You don't have to be silly, you know, like not to think and not to look at, for instance, our rapidly burgeoning capability in artificial intelligence. And to see how sick so much of the planet is not to be concerned. And sick is an evil potentially. Well, how much cruelty and brutality is happening at this very moment? And I would put climate change higher up on that list, because I believe that there are unforeseen discoveries that we are making right now, for instance, all that methane that's coming out of the ocean floor that was sequestered because of the permafrost, which is now melting. You know, I think there are other effects besides our greed and short term thinking that we are triggering now with all the greenhouse gases we're putting into the atmosphere. And that worries me day and night. I think about it every single, every moment, really, because I really think that's how we have to be. We have to begin to really focus on how grave the challenge is to our civilization and to the other species that are. It's a mass, this is a mass extinction event that we're living through. And we're seeing it. We're seeing news of it every day. So what do you think about another touchy subject, but what do you think about the politicization of science on topics like global warming and bionic stem cell research and other topics like it? What's your sense? Why? What do you mean by the politicization of global warming? Meaning that if you say, I think what you just said, which is a global warming is a serious concern, it's human caused and maybe some detrimental effects. Certainly there's a large percent of the population of the United States that would, as opposed to listening to that statement, would immediately think, oh, that's just a liberal talking point. That's what I mean by politicization. I think that's not so true anymore. 
I don't think our problem is a population that's skeptical about climate change because I think that the extreme weather and fire events that we are experiencing with such frequency is really gotten to people. I think that there are people in leadership positions who choose to ignore it and to pretend it's not there, but ultimately I think they will be rejected. The question is, will it be fast enough? I think actually that most people have really finally taken the reality of global climate change to heart and they look at their children and grandchildren and they don't feel good because they come from a world which was in many ways, in terms of climate, fairly familiar and benign and they know that we're headed in another direction and it's not just that, it's what we do to the oceans, the rivers, the air. You ask me, what is the message of cosmos? It's that we have to think in longer terms. I think of the Soviet Union and the United States in the Cold War and they're ready to kill each other over these two different views of the distribution of resources. But neither of them has a form of human social organization that thinks in terms of a hundred years, let alone a thousand years, which are the time scales that science speaks in. And that's part of the problem is that we have to get a grip on reality and where we're headed and I'm not fatalistic at all, but I do feel like, and in setting out to do this series each season, we were talking about climate change in the original cosmos in episode four and warning about inadvertent climate modification in 1980. And of course, Carl did his PhD thesis on the greenhouse effect on Venus and he was painfully cognizant of what a runaway greenhouse effect would do to our planet. And not only that, but the climatic history of the planet, which we go into in great detail in the series. So yeah, I mean, how are we going to get a grip on this if not through some kind of understanding of science? Can I just say one more thing about science is that its powers of prophecy are astonishing. You launch a spacecraft in 1977 and you know where each and every planet in the solar system is going to be and every moon and you rendezvous with that flawlessly and you exceed the design specifications of the greatest dreams of the engineers. And then you go on to explore the Milky Way galaxy and you do it, I mean, you know, the climate scientists, some of the people whose stories we tell in cosmos, their predictions were, and they were working with very early computer modeling capabilities, they have proven to be so robust, nuclear winter, all of these things. This is a prophetic power and yet how crazy that, you know, it's like the Romans with their lead cooking pots and their lead pipes or the Aztecs ripping out their own people's hearts. This is us. We know better and yet we are acting as if it's business as usual. Yeah, the beautiful complexity of human nature, speaking of which, let me ask a tough question I guess because there's so many possible answers, but what aspect of life here on earth do you find most fascinating from the origin of life, the evolutionary process itself, the origin of the human mind, so intelligence, some of the technological developments going on now or us venturing out into space or space exploration, what just inspires you? Oh, they all inspire me. 
Every one of those inspire me, but I have to say that to me at the origin of, as I've gotten older, to me, the origin of life has become less interesting because I feel, well, not because it's more, I think I understand, I have a better grasp of how it might've happened. Do you think it was a huge leap? I think it was a, we are a byproduct of geophysics and I think it's not, my suspicion of course, which is take it with a grain of salt, but my suspicion is that it happens more often and more places than we like to think because after all the history of our thinking about ourselves has been a constant series of demotions in which we've had to realize, no, no, so to me that's... We're not at the center of the solar system. And the origin of consciousness is to me also not so amazing if you think of it as going back to these one celled organisms of a billion years ago who had to know, well, if I go higher up, I'll get too much sun and if I go lower down, I'll be protected from UV rays, things like that. They had to know that or you, I eat, me, I don't. I mean, even that, I can see if you know that, then knowing what we know now, it's just, it's not so hard to fathom. It seems like, I've never believed there was a duality between our minds and our bodies and I think that even consciousness, all those interesting things seem to me, except one of the things... A byproduct of geophysics. Yeah, all of chemistry, yes, geochemistry, geophysics, absolutely. It makes perfect sense to me and it doesn't make it any less wondrous. It doesn't rob it at all of the wonder of it. And so, yeah, I think that's amazing. I think we tell the story of someone you have never heard of, I guarantee, and I think you're very knowledgeable on the subject, who was more responsible for our ability to venture out to other worlds than anyone else and who was completely forgotten. And so, those are the kinds of stories I like best for Cosmos because... Can you tell me who? No, I'm going to make you watch this series, I'm going to make you buy my book, but I'm just saying, this person would be forgotten, but the way that we do Cosmos is that I ask a question to myself, I really want to get to the bottom to the answer and keep going deeper, deeper until we find what the story is, a story that I know because I'm not a scientist. If it moves me, if it moves me, then I want to tell it and other people will be moved. Do you ponder mortality, human mortality, and maybe even your own mortality? Oh, all the time. I just turned 70, so yeah, I think about it a lot. I mean, it's, you know, how can you not think about it? What do you make of this short life of ours, I mean, let me ask a sort of another way, you've lost Carl, and speaking of mortality, if you could be, if you could choose immortality, you know, it's possible that science allows us to live much, much longer. Is that something you would choose for yourself, for Carl, for you? Well, for Carl, definitely. I would have, you know, in a nanosecond, I would take that deal. But not for me. I mean, if Carl were alive, yes, I would want to live forever because I know it would be fun. But no. Would it be fun forever? I don't know. That's the essential nature of the... I don't know. It's just that the universe is so full of so many wonderful things to discover that it feels like it would be fun. But no, I don't want to live forever. I have had a magical life. I just, you know, my craziest dreams have come true. 
And I feel, you know, forgive me, but this crazy quirk of fate that put my most joyful, deepest feelings, feelings that decades later, 42 years later, I know how real, how true those feelings were. Everything that happened after that was an affirmation of how true those feelings were. And so, I don't feel that way. I feel like I have gotten so much more than my share, not just my extraordinary life with Carl, my family, my parents, my children, my friends, the places that I've been able to explore, the books I've read, the music I've heard. So I feel like, you know, if it would be much better if instead of working on the immortality of the lucky few of the most privileged people in this society, I would really like to see a concerted effort for us to get our act together, you know? That to me is topic A, more pressing, you know, this possible world, that is the challenge. And we're at a kind of moment where if we can make that choice. So immortality doesn't really interest me. I really, I love nature and I have to say that because I'm a product of nature, I recognize that it's great gifts and it's great cruelty. Well, I don't think there's a better way to end it, and thank you so much for talking to us. It was an honor. Oh, it's wonderful. I really appreciate it. I really enjoyed it. I thought your questions were great. Thank you. Thanks for listening to this conversation with Ann Druyan, and thank you to our presenting sponsor, Cash App. Download it, use code LEXPODCAST, you'll get $10, and $10 will go to FIRST, an organization that inspires and educates young minds to become science and technology innovators of tomorrow. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, support on Patreon, or simply connect with me on Twitter at Lex Friedman. And now let me leave you with some words of wisdom from Carl Sagan. What an astonishing thing a book is. It's a flat object made from a tree with flexible parts on which are imprinted lots of funny dark squiggles. But one glance at it, and you're inside the mind of another person, maybe somebody dead for thousands of years. Across the millennia, an author is speaking clearly and silently inside your head, directly to you. Writing is perhaps the greatest of human inventions. Finding together people who never knew each other, citizens of distant epochs. Books break the shackles of time. A book is proof that humans are capable of working magic. Thank you for listening, and hope to see you next time.
Ann Druyan: Cosmos, Carl Sagan, Voyager, and the Beauty of Science | Lex Fridman Podcast #78
The following is a conversation with Lee Smolin. He's a theoretical physicist, co inventor of loop quantum gravity, and a contributor of many interesting ideas to cosmology, quantum field theory, the foundations of quantum mechanics, theoretical biology, and the philosophy of science. He's the author of several books, including one that critiques the state of physics and string theory called The Trouble with Physics. And his latest book, Einstein's Unfinished Revolution, The Search for What Lies Beyond the Quantum. He's an outspoken personality in the public debates on the nature of our universe, among the top minds in the theoretical physics community. This community has its respected academics, its naked emperors, its outcasts and its revolutionaries, its madmen and its dreamers. This is why it's an exciting world to explore through a long form conversation. I recommend you listen back to the episodes with Leonard Susskind, Sean Carroll, Michio Kaku, Max Tegmark, Eric Weinstein, and Jim Gates. You might be asking, why talk to physicists if you're interested in AI? To me, creating artificial intelligence systems requires more than Python and deep learning. It requires that we return to exploring the fundamental nature of the universe and the human mind. Theoretical physicists venture out into the dark, mysterious, psychologically challenging place of first principles more than almost any other discipline. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar. Since Cash App allows you to buy Bitcoin, let me mention that cryptocurrency in the context of the history of money is fascinating. I recommend The Ascent of Money as a great book on this history. Debits and credits on ledgers started around 30,000 years ago. The US dollar, of course, was created over 200 years ago, and Bitcoin, the first decentralized cryptocurrency, was released just over 10 years ago. So given that history, cryptocurrency is still very much in its early days of development, but it still is aiming to and just might redefine the nature of money. If you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, one of my favorite organizations that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Lee Smolin. What is real? Let's start with an easy question. Put another way, how do we know what is real and what is merely a creation of our human perception and imagination? We don't know. We don't know. This is science. I presume we're talking about science. And we believe, or I believe, that there is a world that is independent of my existence and my experience of it and my knowledge of it, and this I call the real world. So you said science, but even bigger than science, what? Sure, sure. I need not have said this is science. I just was warming up. Warming up? 
Okay, now that we're warmed up, let's take a brief step outside of science. Is it completely a crazy idea to you that everything that exists is merely a creation of our mind? So there's a few, not many. This is outside of science now. People who believe, sort of, that perception is fundamental, that what's in our human perception, the visual cortex and so on, the cognitive constructs being formed there, is the reality. And then anything outside is something that we can never really grasp. Is that a crazy idea to you? There's a version of that that is not crazy at all. What we experience is constructed by our brains, and by our brains in an active mode. So we don't see the raw world. We see a very processed world. We feel something that's very processed through our brains, and our brains are incredible. But I still believe that behind that experience, that mirror or veil or whatever you wanna call it, there is a real world, and I'm curious about it. Can we truly, how do we get a sense of that real world? Is it through the tools of physics, from theory to the experiments? Or can we actually grasp it in some intuitive way that's more connected to our ape ancestors? Or is it still fundamentally the tools of math and physics that really allow us to grasp it? Well, let's talk about what tools they are, what you say are the tools of math and physics. I mean, I think we're in the same position as our ancestors in the caves, or before the caves, or whatever. We find ourselves in this world and we're curious. We also, it's important to be able to explain what happens when there are fires, when there are not fires, what animals and plants are good to eat, and all that stuff. But we're also just curious. We look up in the sky and we see the sun and the moon and the stars, and we see some of those move, and we're very curious about that. And I think we're just naturally curious. So we make, this is my version of how we work, we make up stories and explanations. And there are two things which I think are just true of being human. We make judgments fast because we have to, to survive. Is that a tiger or is that not a tiger? And we go. Act. We have to act fast on incomplete information. So we judge quickly and we're often wrong, or at least sometimes wrong, which is all I need for this. We're often wrong. So we fool ourselves and we fool other people readily. And so there's lots of stories that get told, and some of them result in a concrete benefit and some of them don't. So you said we're often wrong, but what does it mean to be right? Right, that's an excellent question. To be right, well, since I believe that there is a real world, I believe that to be, you can challenge me on this if you're not a realist. A realist is somebody who believes in this real objective world which is independent of our perception. If I'm a realist, I think that to be right is to come closer. I think, first of all, there's a relative scale. There's not right and wrong. There's right or more right and less right. And you're more right if you come closer to an exact true description of that real world. Now can we know that for sure? No. And the scientific method is ultimately what allows us to get a sense of how close we're getting to that real world? No, on two counts. First of all, I don't believe there's a scientific method. I was very influenced when I was in graduate school by the writings of Paul Feyerabend, who was an important philosopher of science who argued that there isn't a scientific method. There is or there is not? 
There is not. Can you elaborate, I'm sorry if you were going to, but can you elaborate on what does it mean for there not to be a scientific method, this notion that I think a lot of people believe in in this day and age? Sure. Paul Feyerabend, he was a student of Karl Popper. And Feyerabend argued, both by logic and by historical example, that you name anything that should be part of the practice of science. Say you should always make sure that your theories agree with all the data that's already been taken. And he'll prove to you that there have to be times when science contradicts, when some scientist contradicts, that advice for science to progress overall. So it's not a simple matter. I think that, I think of science as a community. Of people. Of people, and as a community of people bound by certain ethical precepts, whatever that is. So in that community, a set of ideas they operate under, meaning ethically, kind of the rules of the game they operate under. Don't lie. Report all your results, whether they agree or don't agree with your hypothesis. Check. The training of a scientist mostly consists of methods of checking, because again, we make lots of mistakes. We're very error prone. But there are tools, both on the mathematics side and the experimental side, to check and double check and triple check. And a scientist goes through a training, and I think this is part of it. You can't just walk off the street and say, yo, I'm a scientist. You have to go through the training, and the training, the test that lets you be done with the training, is can you form a convincing case for something that your colleagues will not be able to shout down, because they'll ask, did you check this? And did you check that? And did you check this? And what about the seeming contradiction with this? And you've got to have answers to all those things or you don't get taken seriously. And when you get to the point where you can produce that kind of defense and argument, then they give you a PhD. And you're kind of licensed. You're still gonna be questioned and you still may propose or publish mistakes. But the community is gonna have to waste less time fixing your mistakes. Yes, but if you can maybe linger on it a little longer, what's the gap between the thing that that community does and the ideal of the scientific method? The scientific method is you should be able to repeat an experiment. There's a lot of elements to what constitutes the scientific method, but the final result, the hope of it, is that you should be able to say with some confidence that a particular thing is close to the truth. Right, but there's not a simple relationship between experiment and hypothesis or theory. For example, Galileo did this experiment of dropping a ball from the top of a tower, and it falls right at the base of the tower. And an Aristotelian would say, wow, of course it falls right to the base of the tower. That shows that the earth isn't moving while the ball is falling. And Galileo says, no way, there's a principle of inertia, and the ball has an inertia in the direction the earth is moving, and the tower and the ball and the earth all move together. The principle of inertia tells you it hits the bottom, and it does, look, therefore my principle of inertia is right. And the Aristotelian says, no, our science is right. The earth is stationary. And so you gotta get an interconnected bunch of cases and work hard to line up and explain. 
It took centuries to make the transition from Aristotelian physics to the new physics. It wasn't done until Newton in 1680 something, 1687. So what do you think is the nature of the process that seems to lead to progress? If we at least look at the long arc of science, of all the community of scientists, they seem to do a better job of coming up with ideas that engineers can then take on and build rockets with or build computers with or build cool stuff with. I don't know, a better job than what? Than this previous century. So century by century, we'll talk about string theory and so on and kind of possible, what you might think of as dead ends and so on. Which is not the way I think of string theory. We'll straighten out, we'll get all the strings straight. But there is, nevertheless in science, very often, at least temporary dead ends. But if you look at the, through centuries, the century before Newton and the century after Newton, it seems like a lot of ideas came closer to the truth that then could be usable by our civilization to build the iPhone, right? To build cool things that improve our quality of life. That's the progress I'm kind of referring to. Let me, can I say that more precisely? Yes, well, it's a low bar. Because I think it's important to get the time places right. There was a scientific revolution that partly succeeded between about 1900 or late 1890s and into the 1930s, 1940s and so. And maybe some, if you stretched it, into the 1970s. And the technology, this was the discovery of relativity and that included a lot of developments of electromagnetism. The confirmation, which wasn't really well confirmed into the 20th century, that matter was made of atoms. And the whole picture of nuclei with electrons going around, this is early 20th century. And then quantum mechanics was from 1905, took a long time to develop, to the late 1920s. And then it was basically in final form. And the basis of this partial revolution, and we can come back to why it's only a partial revolution, is the basis of the technologies that you mentioned. All of, I mean, electrical technology was being developed slowly with this. And in fact, there's a close relation between the development of electricity and the electrification of cities in the United States and Europe and so forth. And the development of the science. The fundamental physics since the early 1970s doesn't have a story like that so far. There's not a series of triumphs and progresses and there's not any practical application. So just to linger briefly on the early 20th century and the revolutions in science that happened there, what was the method by which the scientific community kept each other in check about when you get something right, when you get something wrong? Is experimental validation ultimately the final test? It's absolutely necessary. And the key things were all validated. The key predictions of quantum mechanics and of the theory of electricity and magnetism. So before we talk about Einstein, your new book, before String Theory, Quantum Mechanics, so on, let's take a step back at a higher level question. What is that you mentioned? What is realism? What is anti realism? And maybe why do you find realism, as you mentioned, so compelling? Well, realism is the belief in an external world independent of our existence, our perception, our belief, our knowledge. 
A realist as a physicist is somebody who believes that there should be possible some completely objective description of each and every process at the fundamental level, which describes and explains exactly what happens and why it happens. That kind of implies that that system, in a realist view, is deterministic, meaning there's no fuzzy magic going on that you can never get to the bottom, or you can get to the bottom of anything and perfectly describe it. Some people would say that I'm not that interested in determinism, but I could live with the fundamental world, which had some chance in it. So do you, you said you could live with it, but do you think God plays dice in our universe? I think it's probably much worse than that. In which direction? I think that theories can change, and theories can change without warning. I think the future is open. You mean the fundamental laws of physics can change? Yeah. Oh, okay, we'll get there. I thought we would be able to find some solid ground, but apparently the entirety of it, temporarily so, probably. Okay, so realism is the idea that while the ground is solid, you can describe it. What's the role of the human being, our beautiful, complex human mind in realism? Do we have a, are we just another set of molecules connected together in a clever way, or the observer, does the observer, our human mind, consciousness, have a role in this realism view of the physical universe? There's two ways, there's two questions you could be asking. One, does our conscious mind, do our perceptions play a role in making things become, in making things real or things becoming? That's question one. Question two is, does this, we can call it a naturalist view of the world that is based on realism, allow a place to understand the existence of and the nature of perceptions and consciousness in mind, and that's question two. Question two, I do think a lot about, and my answer, which is not an answer, is I hope so, but it certainly doesn't yet. So what kind? Question one, I don't think so. But of course, the answer to question one depends on question two. Right. So I'm not up to question one yet. So question two is the thing that you can kind of struggle with at this time. Yes. That's, what about the anti realists? So what flavor, what are the different camps of anti realists that you've talked about? I think it would be nice if you can articulate for the people for whom there is not a very concrete real world, or there's divisions, or it's messier than the realist view of the universe, what are the different camps, what are the different views? I'm not sure I'm a good scholar and can talk about the different camps and analyze it, but some, many of the inventors of quantum physics were not realists, were anti realists. Their scholars, they lived in a very perilous time between the two world wars. And there were a lot of trends in culture which were going that way. But in any case, they said things like, the purpose of science is not to give an objective realist description of nature as it would be in our absence. This might be saying Niels Bohr. The purpose of science is as an extension of our conversations with each other to describe our interactions with nature. And we're free to invent and use terms like particle, or wave, or causality, or time, or space. If they're useful to us, and they carry some intuitive implication, but we shouldn't believe that they actually have to do with what nature would be like in our absence, which we have nothing to say about. 
Do you find any aspect of that, because you kind of said that we human beings tell stories, do you find aspects of that kind of anti realist view of Niels Bohr compelling? That we fundamentally are storytellers, and then we create tools of space, and time, and causality, and whatever this fun quantum mechanic stuff is to help us tell the story of our world. Sure, I just would like to believe that there's an aspiration for the other thing. The other thing being what? The realist point of view. Do you hope that the stories will eventually lead us to discovering the real world as it is? Yeah. Is perfection possible, by the way? Is it? No. Well that's, you mean will we ever get there and know that we're there? Yeah, exactly. That's not my, that's for people 5,000 years in the future. We're certainly nowhere near there yet. Do you think reality that exists outside of our mind, do you think there's a limit to our cognitive abilities? Is, again, descendants of apes, who are just biological systems, is there a limit to our mind's capability to actually understand reality? Sort of, there comes a point, even with the help of the tools of physics, that we just cannot grasp some fundamental aspects of that reality. Again, I think that's a question for 5,000 years in the future. We're not even close to that limit. I think there is a universality. Here, I don't agree with David Deutsch about everything, but I admire the way he put things in his last book. And he talked about the role of explanation. And he talked about the universality of certain languages or the universality of mathematics or of computing and so forth. And he believed that universality, which is something real, which somehow comes out of the fact that a symbolic system or a mathematical system can refer to itself and can, I forget what that's called, can reference back to itself and build, in which he argued for a universality of possibility for our understanding, whatever is out there. But I admire that argument, but it seems to me we're doing okay so far, but we'll have to see. Whether there is a limit or not. For now, we've got plenty to play with. Yeah. There are things which are right there in front of us which we miss. And I'll quote my friend, Eric Weinstein, in saying, look, Einstein carried his luggage. Freud carried his luggage. Marx carried his luggage. Martha Graham carried her luggage, et cetera. Edison carried his luggage. All these geniuses carried their luggage. And not once before relatively recently did it occur to anybody to put a wheel on luggage and pull it. And it was right there waiting to be invented for centuries. So this is Eric Weinstein. Yeah. What do the wheels represent? Are you basically saying that there's stuff right in front of our eyes? That once we, it just clicks, we put the wheels on the luggage, a lot of things will fall into place. Yes, I do, I do. And every day I wake up and think, why can't I be that guy who was walking through the airport? What do you think it takes to be that guy? Because like you said, a lot of really smart people carried their luggage. What, just psychologically speaking, so Eric Weinstein is a good example of a person who thinks outside the box. Yes. Who resists almost conventional thinking. You're an example of a person who by habit, by psychology, by upbringing, I don't know, but resists conventional thinking as well, just by nature. Thank you, that's a compliment. That's a compliment? Good. So what do you think it takes to do that? 
Is that something you were just born with? I doubt it. Well, from my studying some cases, because I'm curious about that, obviously, and just in a more concrete way, when I started out in physics, because I started a long way from physics, so it took me a long, not a long time, but a lot of work to get to study it and get into it, so I did wonder about that. And so I read the biographies, and in fact, I started with the autobiography of Einstein and Newton and Galileo and all those people. And I think there's a couple of things. Some of it is luck, being in the right place at the right time. Some of it is stubbornness and arrogance, which can easily go wrong. Yes. And I know all of these are doorways. If you go through them slightly at the wrong speed or in the wrong angle, they're ways to fail. But if you somehow have the right luck, the right confidence or arrogance, caring, I think Einstein cared to understand nature with ferocity and a commitment that exceeded other people of his time. So he asked more stubborn questions. He asked deeper questions. I think, and there's a level of ability and whether ability is born in or can be developed to the extent to which it can be developed, like any of these things like musical talent. So you mentioned ego. What's the role of ego in that process? Confidence. Confidence. But in your own life, have you found yourself walking that nice edge of too much or too little, so being overconfident and therefore leaning yourself astray or not sufficiently confident to throw away the conventional thinking of whatever the theory of the day, of theoretical physics? I don't know if I, I mean, I've contributed where I've contributed, whether if I had had more confidence in something, I would have gotten further. I don't know. Certainly, I'm sitting here at this moment with very much my own approach to nearly everything. And I'm calm, I'm happy about that. But on the other hand, I know people whose self confidence vastly exceeds mine. And sometimes I think it's justified and sometimes I think it's not justified. Your most recent book titled Einstein's Unfinished Revolution. So I have to ask, what is Einstein's unfinished revolution and also how do we finish it? Well, that's something I've been trying to do my whole life, but Einstein's unfinished revolution is the twin revolutions which invented relativity theory, special and especially general relativity, and quantum theory, which he was the first person to realize in 1905 that there would have to be a radically different theory which somehow realized or resolved the paradox of the duality of particle and wave for photons. And he was, I mean, people I think don't always associate Einstein with quantum mechanics because I think his connection with it, founding as one of the founders, I would say, of quantum mechanics, he kind of put it in the closet. Is it? Well, he didn't believe that the quantum mechanics as it was developed in the mid to late 1920s was completely correct. At first, he didn't believe it at all. Then he was convinced that it's consistent, but incomplete, and that also is my view. It needs, for various reasons, I can elucidate, to have additional degrees of freedom, particles, forces, something to reach the stage where it gives a complete description of each phenomenon, as I was saying, realism demands. So what aspect of quantum mechanics bothers you and Einstein the most? Is it some aspect of the wave function collapse discussions, the measurement problem? Is it the? The measurement problem. 
I'm not gonna speak for Einstein. But the measurement problem, basically, and the fact that. What is the measurement problem, sorry? The basic formulation of quantum mechanics gives you two ways to evolve situations in time. One of them is explicitly when no observer is observing and no measurement is taking place. And the other is when a measurement or an observation is taking place. And they basically contradict each other. But there's another reason why the revolution was incomplete, which is we don't understand the relationship between these two parts. General relativity, which became our best theory of space and time and gravitation and cosmology, and quantum theory. So for the most part, general relativity describes big things. Quantum theory describes little things. And that's the revolution, that we found really powerful tools to describe big things and little things. And it's unfinished because we have two totally separate things and we need to figure out how to connect them so we can describe everything. Right, and we either do that, if we believe quantum mechanics as understood now is correct, by bringing general relativity or some extension of general relativity that describes gravity and so forth into the quantum domain, that's called quantizing the theory of gravity. Or if you believe with Einstein that quantum mechanics needs to be completed, and this is my view, then part of the job of finding the right completion or extension of quantum mechanics would be one that incorporated space, time, and gravity. So, where do we begin? So first, let me ask, perhaps you can give me a chance, if I could, to ask you some just really basic questions. Well, they're not at all. The basic questions are the hardest, but you mentioned space-time. What is space-time? Space-time, you talked about a construction. So I believe that space-time is an intellectual construction that we make of the events in the universe. I believe the events are real, and the relationships between the events, which cause which, are real. But the idea that there's a four dimensional smooth geometry which has a metric and a connection and satisfies the equations that Einstein wrote, it's a good description to some scale. It's a good approximation, it captures some of what's really going on in nature. But I don't believe it for a minute is fundamental. So, okay, allow me to linger on that. So the universe has events, events cause other events. This is the idea of causality. Okay, so that's real. That's in my. In your view is real. Or hypothesis, or the theories that I have been working to develop make that assumption. So space-time, you said, four dimensional space is kind of the location of things, and time is whatever the heck time is. And you're saying that space-time is, both space and time are emergent and not fundamental? No. Sorry, before you correct me, what does it mean to be fundamental or emergent? Fundamental means it's part of the description as far down as you go. We have this notion. As real. Yes. As real as it could be. Yeah, so I think that time is fundamental and, quote, goes all the way down, and space does not, and the combination of them we use in general relativity that we call space-time also does not. But what is time then? I think that time, the activity of time, is a continual creation of events from existing events. So if there's no events, there's no time. Then there's not only no time, there's nothing. So I believe the universe has a history which goes to the past. 
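[Editor's note: to pin down the "two ways to evolve" that Smolin describes above, here is the standard textbook statement in conventional notation. This is a reference sketch of ordinary quantum mechanics, not Smolin's own formulation.]

Rule 1 (no measurement): the state evolves smoothly and deterministically under the Schrödinger equation,
\[
  i\hbar \,\frac{\partial}{\partial t}\,\lvert\psi(t)\rangle \;=\; \hat{H}\,\lvert\psi(t)\rangle .
\]
Rule 2 (measurement of an observable with eigenstates \(\lvert a_i\rangle\)): the state jumps discontinuously to one of those eigenstates,
\[
  \lvert\psi\rangle \;\longrightarrow\; \lvert a_i\rangle
  \quad\text{with probability}\quad
  P(a_i) = \bigl\lvert \langle a_i \mid \psi \rangle \bigr\rvert^{2},
\]
which is the Born rule. The measurement problem is that the formalism itself never says when Rule 1 stops applying and Rule 2 takes over.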
I believe the future does not exist. There's a notion of the present and a notion of the past, and the past consists of, is a story about, events that took place to our past. So you said the future doesn't exist. Yes. Could you say that again? Can you try to give me a chance to understand that one more time? So events cause other events. What is this universe? Cause we'll talk about locality and nonlocality. Good. Cause it's a crazy, I mean it's not crazy, it's a beautiful set of ideas that you propose. But, and if causality is fundamental, I'd just like to understand it better. What is the past? What is the future? What is the flow of time? Even the arrow of time in our universe, in your view. And maybe what's an event, right? Oh, an event is where something changes, or where two, it's hard to say because it's a primitive concept. An event is a moment of time within space. This is the view in general relativity, where two particles intersect in their paths, or something changes in the path of a particle. Now, we are postulating that there is, at the fundamental level, a notion, which is an elementary notion, so it doesn't have a definition in terms of other things, but it is something elementary happening. And it doesn't have a connection to energy, or matter, or exchange of energy? It does have a connection to energy and matter. So it's at that level. Yeah, it involves, and that's why the version of a theory of events that I've developed with Marina Cortês, and it's, by the way, I wanna mention my collaborators, because they've been at least as important in this work as I have. It's Marina Cortês in all the work since about 2012, 2013, about causality, causal sets. And in the period before that, Roberto Mangabeira Unger, who is a philosopher and a professor of law. And that's in your efforts, together with your collaborators, to finish the unfinished revolution. Yes. And focus on causality as a fundamental. Yes. As fundamental to physics. So. And there's certainly other people we've worked with, but those two people's thinking had a huge influence on my own thinking. So in the way you describe causality, that's what you mean by time being fundamental. That causality is fundamental. Yes. And what does it mean for space to not be fundamental, to be emergent? That's very good. There's a level of description in which there are events, and events create other events, but there's no space. They don't live in space. They have an order in which they caused each other. And that is part of the nature of time for us. But there is an emergent approximate description. And you asked me to define emergent. I didn't. An emergent property is a property that arises at some level of complexity, larger than and more complex than the fundamental level, which requires some property to describe it, which is not directly explicable, or derivable is the word I want, from the properties of the fundamental things. And space is one of those things in a sufficiently complex universe, space, three dimensional position of things, emerged. Yes, and we have this, we saw how this happens in detail in some models, both computationally and analytically. Okay, so connected to space is the idea of locality. Yes. So we've talked about realism. So I live in this world that, like, sort of. Locality is a thing that, you can affect things close to you and don't have an effect on things that are far away. It's the thing that bothers me about gravity in general, or action at a distance. 
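[Editor's note: purely as an illustration of the picture Smolin has just sketched, events created from existing events, carrying only a causal order and no background space, here is a minimal toy model in Python. The names and the random growth rule are invented for this example; they are not the actual dynamics studied by Smolin and Cortês.]

import random

def grow_causal_set(n_events, max_parents=2, seed=0):
    """Grow a toy causal set: event i is represented only by the set of earlier events that caused it."""
    rng = random.Random(seed)
    events = [set()]  # one initial event with no causes, to get things started
    for _ in range(1, n_events):
        # each new event is caused by a few already existing events;
        # there are no coordinates anywhere, only relations of causation
        k = rng.randint(1, min(max_parents, len(events)))
        events.append(set(rng.sample(range(len(events)), k)))
    return events

def causal_past(events, i):
    """Everything to the causal past of event i (the transitive closure of its causes)."""
    past, frontier = set(), set(events[i])
    while frontier:
        e = frontier.pop()
        if e not in past:
            past.add(e)
            frontier |= events[e]
    return past

universe = grow_causal_set(20)
print("direct causes of event 19:", sorted(universe[19]))
print("full causal past of event 19:", sorted(causal_past(universe, 19)))

Any geometry in such a model would have to be read off from the web of causal relations, which is one way to picture what "space is emergent" means here.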
Same thing that probably bothered Newton, or at least he said a little bit about it. Okay, so what do you think about locality? Is it just a construct? Is it that us humans just like this idea and are connected to it because we exist in it, we need it for our survival, but it's not fundamental? I mean, it seems crazy for it not to be a fundamental aspect of our reality. It does. Can you comfort me, sort of as a therapist, like how do I? I'm not a good therapist, but I'll do my best. Okay. There are several different definitions of locality when you come to talk about locality in physics. In quantum field theory, which is a mixture of special relativity and quantum mechanics, there is a precise definition of locality. Field operators corresponding to events in space-time which are space-like separated commute with each other as operators. So in quantum mechanics, you think about the nature of reality as fields, and things that are close in a field have an impact on each other more than farther away. That's, yes. That's very comforting. That makes sense. So that's a property of quantum field theory and it's well tested. Unfortunately, there's another definition of local, which was expressed by Einstein and expressed more precisely by John Bell, which has been tested experimentally and found to fail. And the setup is, you take two particles. So one thing that's really weird about quantum mechanics is a property called entanglement. You can have two particles interact and then share a property without it being a property of either one of the two particles. And if you take such a system and then you make a measurement on particle A, which is over here on my right side, and particle B, which is over here. Somebody else makes a measurement of particle B. You can ask that whatever is the real reality of particle B, it not be affected by the choice the observer at particle A makes about what to measure, not the outcome, just the choice of the different things they might measure. And that's a notion of locality because it assumes that these things are very far, space-like, separated. And it's gonna take a while for any information about the choice made by the people here at A to affect the reality at B. But you make that assumption, that's called Bell locality. And you derive a certain inequality that some correlations, functions of correlations, have to satisfy. And then you can test that pretty directly in experiments which create pairs of photons or other particles. And it's wrong by many sigma. In experiment, it doesn't match. So what does that mean? That means that that definition of locality I stated is false. The one that Einstein was playing with. Yeah, and the one that I stated, that is, it's not true that whatever is real about particle B is unaffected by the choice that the observer makes as to what to measure in particle A. No matter how long they've been propagating at almost the speed of light or the speed of light away from each other, it doesn't matter. So like the distance between them. Well, it's been tested, of course, if you want to have hope for quantum mechanics being incomplete or wrong and corrected by something that changes this. It's been tested over a number of kilometers. I don't remember whether it's 25 kilometers or a hundred and something kilometers, but. So in trying to solve the unfinished revolution, in trying to come up with the theory for everything, is causality fundamental and breaking away from locality? Absolutely. A crucial step. 
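[Editor's note: one standard form of the inequality Smolin refers to is the CHSH version of Bell's inequality, reproduced here for reference. This is the textbook expression, not necessarily the exact one he has in mind. Each side chooses between two measurement settings, (a, a') for particle A and (b, b') for particle B, and E(a, b) denotes the measured correlation of the outcomes.]

\[
  S \;=\; E(a,b) + E(a,b') + E(a',b) - E(a',b') .
\]

Any theory satisfying Bell's locality assumption obeys \( \lvert S \rvert \le 2 \), while quantum mechanics predicts, and experiments on entangled photon pairs confirm, values as large as \( 2\sqrt{2} \approx 2.83 \), which is the violation "by many sigma" described above.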
So in your book, essentially, those are the two things we really need to think about as a community. Especially the physics community has to think about this. I guess my question is, how do we solve? How do we finish the unfinished revolution? Well, that's, I can only tell you what I'm trying to do and what I've abandoned as not working. As one ant, smart ant in an ant colony. Yep. Or maybe dumb, that's why, who knows? But anyway, my view of the, we've had some nice theories invented. There's a bunch of different ones. Both relate to quantum mechanics, relate to quantum gravity. There's a lot to admire in many of these different approaches. But to my understanding, they, none of them completely solve the problems that I care about. And so we're in a situation which is either terrifying for a student or full of opportunity for the right student, in which we've got more than a dozen attempts. And I never thought, I don't think anybody anticipated it would work out this way. Which work partly and then at some point, they have an issue that nobody can figure out how to go around or how to solve. And that's the situation we're in. My reaction to that is twofold. One of them is to try to bring people, we evolved into this unfortunate sociological situation in which there are communities around some of these approaches. And to borrow again, a metaphor from Eric, they sit on top of hills in the landscape of theories and throw rocks at each other. And as Eric says, we need two things. We need people to get off their hills and come down into the valleys and party and talk and become friendly and learn to say, not no but, but yes and yes. Your idea goes this far, but maybe if we put it together with my idea, we can go further. Yes. So in that spirit, I've talked several times with Sean Carroll, who's also written an excellent book recently. And he kind of, he plays around, is a big fan of the many worlds interpretation of quantum mechanics. So I'm a troublemaker. So let me ask, what's your sense of Sean and the idea of many worlds interpretation? I've read many the commentary back and forth. You guys are friendly, respect each other, but have a lot of fun debating. I love Sean and he, no, I really, he's articulate and he's a great representative or ambassador of science to the public and for different fields of science to each other. He also, like I do, takes philosophy seriously. And unlike what I do in all cases, he has really done the homework. He's read a lot, he knows the people, he talks to them, he exposes his arguments to them. And I, there's this mysterious thing that we so often end up on the opposite sides of one of these issues. It's fun though. It's fun and I'd love to have a conversation about that, but I would want to include him. I see, about many worlds, well. No, I can tell you what I think about many worlds. I'd love to, but actually on that, let me pause. Sean has a podcast. You should definitely figure out how to talk to Sean. I would, I actually told Sean, I would love to hear you guys just going back and forth. So I hope you can make that happen eventually, you and Sean. I won't tell you what it is, but there's something that Sean said to me in June of 2016 that changed my whole approach to a problem. But I'll have to tell him first. Yes, and that, that'll be great to tell him on his podcast. So. I can't invite myself to his podcast. But I told him, yeah, okay, we'll make it happen. So many worlds. Anyway. What's your view? Many worlds, we talk about nonlocality. 
Many worlds is also a very uncomfortable idea or beautiful, depending on your perspective. It's very nice in terms of, I mean, there's a realist aspect to it. I think you called it magical realism. Yeah. It's just a beautiful line. But at the same time, it's very difficult for our limited human minds to comprehend. So what are your thoughts about it? Let me start with the easy and obvious and then go to the scientific. Okay. It doesn't appeal to me. It doesn't answer the questions that I want answered. And it does so to such an extent that when Roberto Mangabeira Unger and I began looking for principles, and I want to come back and talk about the use of principles in science, because that's the other thing I was going to say, and I don't want to lose that. When we started looking for principles, we made our first principle, there is just one world and it happens once. But so it's not helpful to my personal approach, to my personal agenda, but of course I'm part of a community. And my sense of the many worlds interpretation, I have thought a lot about it and struggled a lot with it, is the following. First of all, there's Everett himself, there's what's in Everett. And there are several issues there connected with the derivation of the Born Rule, which is the rule that gives probabilities to events. And the reason why there is a problem with probability is that I mentioned the two ways that physical systems can evolve. The many worlds interpretation cuts off one, the one having to do with measurement, and just has the other one, the Schrödinger evolution, which is this smooth evolution of the quantum state. But the notion of probability is only in the second rule, which we've thrown away. So where does probability come from? And you have to answer the question because experimentalists use probabilities to check the theory. Now, at first sight, you get very confused because there seems to be a real problem, because in the many worlds interpretation, this talk about branches is not quite precise, but I'll use it. There's a branch in which everything that might happen does happen with probability one in that branch. You might think you could count the number of branches in which things do and don't happen and get numbers that you can define as something like frequentist probabilities. And Everett did have an argument in that direction, but the argument gets very subtle when there are an infinite number of possibilities, as is the case in most quantum systems. And my understanding, although I'm not as much of an expert as some other people, is that Everett's own proposal failed, did not work. There are then, but it doesn't stop there. There is an important idea that Everett didn't know about, which is decoherence, and it is a phenomenon that might be very much relevant. And so a number of people post Everett have tried to make versions of what you might call many worlds quantum mechanics. And this is a big area and it's subtle, and it's not the kind of thing that I do well. So I consulted, that's why there's two chapters on this in the book I wrote. Chapter 10, which is about Everett's version, chapter 11, there's a very good group of philosophers of physics in Oxford, Simon Saunders, David Wallace, Harvey Brown, and a number of others. And of course there's David Deutsch, who is there. And those people have developed and put a lot of work into a very sophisticated set of ideas designed to come back and answer that question. 
They have the flavor of there are really no probabilities, we admit that, but imagine if the Everett story was true and you were living in that multiverse, how would you make bets? And so they use decision theory from the theory of probability and gambling and so forth to shape a story of how you would bet if you were inside an Everett in the universe and you knew that. And there's a debate among those experts as to whether they or somebody else has really succeeded. And when I checked in as I was finishing the book with some of those people, like Simon, who's a good friend of mine, and David Wallace, they told me that they weren't sure that any of them was yet correct. So that's what I put in my book. Now, to add to that, Sean has his own approach to that problem in what's called self referencing or self locating observers. And it doesn't, I tried to read it and it didn't make sense to me, but I didn't study it hard, I didn't communicate with Sean, I didn't do the things that I would do, so I had nothing to say about it in the book. I don't know whether it's right or not. Let's talk a little bit about science. You mentioned the use of principles in science. What does it mean to have a principle and why is that important? When I feel very frustrated about quantum gravity, I like to go back and read history. And of course, Einstein, his achievements are a huge lesson and hopefully something like a role model. And it's very clear that Einstein thought that the first job when you wanna enter a new domain of theoretical physics is to discover and invent principles and then make models of how those principles might be applied in some experimental situation, which is where the mathematics comes in. So for Einstein, there was no unified space and time. Minkowski invented this idea of space time. For Einstein, it was a model of his principles or his postulates. And I've taken the view that we don't know the principles of quantum gravity. I can think about candidates and I have some papers where I discuss different candidates and I'm happy to discuss them. But my belief now is that those partially successful approaches are all models, which might describe indeed some quantum gravity physics in some domain, in some aspect, but ultimately would be important because they model the principles and the first job is to tie down those principles. So that's the approach that I'm taking. So speaking of principles, in your 2006 book, The Trouble with Physics, you criticized a bit string theory for taking us away from the rigors of the scientific method or whatever you would call it. But what's the trouble with physics today and how do we fix it? Can I say how I read that book? Sure. Because I, and I'm not, this of course has to be my fault because you can't as an author claim after all the work you put in that you are misread. But I will say that many of the reviewers who are not personally involved and even many who were working on string theory or some other approach to quantum gravity told me, communicated with me and told me they thought that I was fair and balance was the word that was usually used. So let me tell you what my purpose was in writing that book, which clearly got diverted by, because there was already a rather hot argument going on. And this is. On which topic? On string theory specifically? Or in general in physics? No, more specifically than string theory. So since we're in Cambridge, can I say that? We're doing this in Cambridge. Yeah, yeah, of course. 
Cambridge, just to be clear, Massachusetts. And on Harvard's campus. Right. So Andy Strominger is a good friend of mine and has been for many, many years. And Andy, so originally there was this beautiful idea that there were five string theories and maybe they would be unified into one, and we would discover a way to break the symmetries of one of those string theories and discover the standard model and predict all the properties of standard model particles, like their masses and charges and so forth, coupling constants. And then there was a bunch of solutions to string theory found, which led each of them to a different version of particle physics with a different phenomenology. These are called the Calabi-Yau manifolds, named after Yau, who is also here. We've certainly been friends at some time in the past anyway. And then there were, nobody was sure, but hundreds of thousands of different versions of string theory. And then Andy found there was a way to put a certain kind of mathematical curvature called torsion into the solutions. And he wrote a paper, String Theory with Torsion, in which he found that the solutions were, not formally uncountable, but he was unable to invent any way to count the number of solutions or classify the diverse solutions. And he wrote that this is worrying, because doing phenomenology the old fashioned way, by solving the theory, is not going to work, because there are going to be loads of solutions for every proposed phenomenology, for anything the experiments discovered. And it hasn't quite worked out that way. But nonetheless, he took that worry to me. We spoke at least once, maybe two or three times, about that. And I got seriously worried about that. And this is just a little. So it's like an anecdote that inspired your worry about string theory in general? Well, I tried to solve the problem. I was reading at that time a lot of biology, a lot of evolutionary theory, like Lynn Margulis and Steve Gould and so forth. And I could take your time to go through the things, but it occurred to me, maybe physics was like evolutionary biology, and maybe the laws evolved. The biologists talk about a landscape, a fitness landscape of DNA sequences or protein sequences or species or something like that. And I took their concept and the word landscape from theoretical biology and made a scenario about how the universe as a whole could evolve to discover the parameters of the standard model. And I'm happy to discuss that; it's called cosmological natural selection. Cosmological natural selection. Yeah. Wow, so the parameters of the standard model, so the laws of physics are changing. This idea would say that the laws of physics are changing in some way that echoes that of natural selection, or it just adjusts in some way towards some goal. Yes. And I published that. I wrote the paper in 1988 or 89, and the paper was published in 92. My first book in 1997, The Life of the Cosmos, was explicitly about that. And I was very clear that what was important is that because you would develop an ensemble of universes, but they were related by descent through natural selection, almost every universe would share the property that its fitness was maximized to some extent, or at least close to maximum. And I could deduce predictions that could be tested from that. And I worked all of that out and I compared it to the anthropic principle, where you weren't able to make tests or make falsifications. All of this was in the late 80s and early 90s.
That's a really compelling notion, but how does that help you arrive? I'm coming to where the book came from. Yes. So what got me, I worked on string theory. I also worked on loop quantum gravity. And I was one of the inventors of loop quantum gravity. And because of my strong belief in some other principles, which led to this notion of wanting a quantum theory of gravity to be what we call relational or background independent, I tried very hard to make string theory background independent. And it ended up developing a bunch of tools which then could apply directly to general relativity and that became loop quantum gravity. So the things were very closely related and have always been very closely related in my mind. The idea that there were two communities, one devoted to strings and one devoted to loops is nuts and has always been nuts. Okay, so anyway, there's this nuts community of loops and strings that are all beautiful and compelling and mathematically speaking. And what's the trouble with all that? Why is that such a problem? So I was interested in developing that notion of how science works based on a community and ethics that I told you about. And I wrote a draft of a book about that, which had several chapters on methodology of science. And it was a rather academically oriented book. And those chapters were the first part of the book, the first third of it. And you didn't find their remnants in what's now the last part of the trouble with physics. And then I described a number of test cases, case studies. And one of them, which I knew was the search for quantum gravity and string theory and so forth. And I wasn't able to get that book published. So somebody made the suggestion of flipping it around and starting with a story of string theory, which was already controversial. This was 2004, 2005. But I was very careful to be detailed, to criticize papers and not people. You won't find me criticizing individuals. You'll find me criticizing certain writing. But in any case, here's what I regret. Let me make your program worthwhile. Yes. As far as I know, with the exception of not understanding how large the applications to condensed matter, say ADS CFT would get, I think largely my diagnosis of string theory as it was then has stood up since 2006. What I regret is that the same critique, I was using string theory as an example, and the same critique applies to many other communities in science and all of, including, and this is where I regret my own community, that is a community of people working on quantum gravity. Not science string theory. But, and I considered saying that explicitly. But to say that explicitly, since it's a small, intimate community, I would be telling stories and naming names and making a kind of history that I have no right to write. So I stayed away from that, but was misunderstood. But if I may ask, is there a hopeful message for theoretical physics that we can take from that book, sort of that looks at the community, not just your own work on, now with causality and nonlocality, but just broadly in understanding the fundamental nature of our reality, what's your hope for the 21st century in physics? Well, that we solve the problem. That we solve the unfinished problem of Einstein's. That's certainly the thing that I care about most in. Hope for most. Let me say one thing. Among the young people that I work with, I hear very often and sense a total disinterest in these arguments that we older scientists have. And an interest in what each other is doing. 
And this is starting to appear in conferences where the young people interested in quantum gravity make a conference, they invite loops and strings and causal dynamical triangulations and causal set people. And we're having a conference like this next week, a small workshop at perimeter. And I guess I'm advertising this. And then in the summer, we're having a big full on conference, which is just quantum gravity. It's not strings, it's not loops. But the organizers and the speakers will be from all the different communities. And this to me is very helpful. That the different ideas are coming together. At least people are expressing an interest in that. It's a huge honor talking to you, Lee. Thanks so much for your time today. Thank you. Thanks for listening to this conversation. And thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast. You'll get $10 and $10 will go to FIRST, an organization that inspires and educates young minds to become science and technology innovators of tomorrow. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast, follow on Spotify, support it on Patreon, or simply connect with me on Twitter at Lex Friedman. And now let me leave you with some words from Lee Smolin. One possibility is God is nothing but the power of the universe to organize itself. Thanks for listening and hope to see you next time.
Lee Smolin: Quantum Gravity and Einstein's Unfinished Revolution | Lex Fridman Podcast #79
The following is a conversation with Vitalik Buterin, co creator of and author of the white paper that launched Ethereum and Ether, which is a cryptocurrency that is currently the second largest digital currency after Bitcoin. Ethereum has a lot of interesting technical ideas that are defining the future of blockchain technology and Vitalik is one of the most brilliant people innovating in the space today. Unlike Satoshi Nakamoto, the unknown person or group that created Bitcoin, Vitalik is very well known and at a young age is thrust into the limelight as one of the main faces of the technology that may redefine the nature of money and all forms of digital transactions in the 21st century. Quick summary of the ads, two sponsors, Masterclass and ExpressVPN. Please consider supporting the podcast by signing up to Masterclass at masterclass.com slash lex and getting ExpressVPN at expressvpn.com slash lexpod. This show is sponsored by Masterclass sign up at masterclass.com slash lex to get a discount and to support this podcast. When I first heard about Masterclass, I honestly thought it was too good to be true. For $180 a year, you get an all access pass to watch courses from experts at the top of their field. To list some of my favorites, Chris Hatfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, the creator of SimCity and Sims on game design, Jane Goodall on conservation, Carlos Santana, one of my favorite guitarists on guitar, Garry Kasparov on chess, Daniel Negrano on poker, one of my favorite poker players, Phil Ivey gives a course as well, and many more. Chris Hatfield explaining how rockets work and the experience of being launched into space alone is worth the money. By way of advice, for me, the key is not to be overwhelmed by the abundance of choice. Pick three courses you want to complete. Watch each all the way through from start to finish. It's not that long, but it's an experience that will stick with you for a long time, I promise. It's easily worth the money. You can watch it on basically any device. Once again, sign up at masterclass.com slash lex to get a discount and to support this podcast. This show is sponsored by ExpressVPN. Download it at expressvpn.com slash lexpod to get a discount and to support this podcast. I've been using ExpressVPN for many years. I honestly love it. It's easy to use, press the big power on button, and your privacy is protected. And if you like, you can make it look like your location is anywhere else in the world. I might be in Boston now, but I can make it look like I'm in New York, London, Paris, or anywhere else. This has a large number of obvious benefits. For example, certainly it allows you to access international versions of streaming websites like the Japanese version of Netflix or the UK version of Hulu. As you probably know, I was born in the Soviet Union, so sadly, given my roots and appreciation of Russian history and culture, my website and the website for this podcast is blocked in Russia. So this is another example of where you can use ExpressVPN to access sites like the podcast that are not accessible in your country. ExpressVPN works on any device you can imagine. I use it on Linux, shout out to Ubuntu, Windows, Android, but it's available everywhere else too. Once again, download it at expressvpn.com slash lexpod to get a discount and to support this podcast. And now here's my conversation with Vitalik Buterin. 
So before we talk about the fundamental ideas behind Ethereum and cryptocurrency, perhaps it'd be nice to talk about the origin story of Bitcoin and the mystery of Satoshi Nakamoto. You gave a talk that started with sort of asking the question, what did Satoshi Nakamoto actually invent? Maybe you could say who is Satoshi Nakamoto and what did he invent? Sure. So Satoshi Nakamoto is the name by which we know the person who originally came up with Bitcoin. The reason why I say the name by which we know is that this is an anonymous fellow who has shown himself to us only over the internet, just by first publishing the white paper for Bitcoin, then releasing the original source code for Bitcoin, and then talking to the very early Bitcoin community on Bitcoin forums and interacting with them and helping the project along for a couple of years. And then at some point in late 2010 to early 2011, he disappeared. So Bitcoin is a fairly unique project in how it has this kind of mythical, quasi godlike founder who just popped in, did the thing, and then disappeared, and we've somehow just never heard from him again. So in 2008, the white paper, do you know if the white paper was the first time the name Satoshi Nakamoto would actually appear? I believe so. So how is it possible that the creator of such an impactful project remains anonymous? That's a tough question. There's no similarity to it in the history of technology as far as I'm aware. Yeah. So one possibility is that it's Hal Finney, because Hal Finney was also active in the Bitcoin community, as Hal Finney, in those two beginning years. Who is Hal Finney, maybe you could say? He is one of the people in the early cypherpunk community, which was a community of computer scientists, cryptographers, people interested in technology, internet freedom, those kinds of topics. Was it correct that I read that he seemed to have been involved in either the earliest or the first transaction of Bitcoin? Yes. The first transaction of Bitcoin was between Satoshi and Hal Finney. Do you think he knew who Satoshi was? If he wasn't Satoshi, probably no. How is it possible to work so closely with people and nevertheless not know anything about their fundamental identity? Is this like a natural sort of characteristic of the internet? Like if we were to think about it, because you and I just met now, there's a depth of knowledge that we now have about each other that's like physical, like my vision system is able to recognize you. I can also verify your identity of uniqueness, like it's very hard to fake you being you. So the internet has a fundamentally different quality to it, which is just fascinating. This is definitely interesting. I definitely just know a lot of people just by their internet handles. And to me, when I think of them, I see their internet handles, and one of them has a profile picture that's this face that's not quite human, with a bunch of psychedelic colors in it. And when I visualize him, I just visualize that. That's not an actual face. You are the creator of, well, what's currently the second most popular cryptocurrency, Ethereum. So on this topic, if we just stick on Satoshi Nakamoto for a little bit longer, you may be the most qualified person to speak to the psychology of this anonymity that we're talking about. Like your identity is known, like I've just verified it. But from your perspective, what are the benefits in creating a cryptocurrency and then remaining anonymous?
Like if it can psychoanalyze Satoshi Nakamoto, is there something interesting there? Or is it just a peculiar quirk of him? It definitely helps create this kind of image of this kind of neutral thing that doesn't belong to anyone. And that you created a project and because you're anonymous and because you also disappear or as unfortunately happened to Hal Finney, if that is him, he ended up dying of Lou Gehrig's disease and he's in the cryogenic freezer now. But if you pop in and you create it and you're gone and all that's remaining of that whole process is the thing itself, then no one can go and try to interpret any of your other behavior and try to understand like, oh, this person wrote this thing in some essay at age 16 where he expressed particular opinions about democracy. And so because of that, this project is a statement that's trying to do this specific thing. Instead, it creates this environment where the thing is what you make of it. It doesn't have the burden of your other ideas, political thought and so on. So now that we're sitting with you, do you feel the burden of being kind of the face of Ethereum? I mean, there's a very large community of developers, but nevertheless, is there like a burden associated with that? There definitely is. This is definitely a big reason why I've been trying to kind of push for the Ethereum ecosystem to become more decentralized in many ways, just encouraging a lot of core Ethereum work to happen outside of the Ethereum Foundation and of expanding the number of people that are making different kinds of decisions, having multiple software limitations instead of one and all of these things. There's a lot of things that I've tried to do to remove myself as a single point of failure because that is something that a lot of people criticize me for. So if you look at like the most fundamentally successful open source projects, it seems that it's like a sad reality when I think about it, is it seems to be that one person is a crucial contributor often, if you look at Linus for Linux for the kernel. That is possible and I'm definitely not planning to disappear. That's an interesting tension that projects like this kind of desire a single entity and yet they're fundamentally distributed. I don't know if there's something interesting to say about that kind of structure and thinking about the future of cryptocurrency, does there need to be a leader? There's different kinds of leaders. There's dictators who control all the money. There's people who control organizations. There's high priests that just have themselves other Twitter followers. What kind of leader are you, would you say? These days, actually a bit more in the high priest direction than before. I definitely actually don't do all that much of going around and ordering Ethereum Foundation people to do things because I think those things are important. If there's something that I do think is important, I do just usually kind of say it publicly or just kind of say it to people and quite often projects just going to start doing it. So let's ask the high philosophical question about money. What at the highest level is money? What is money? It's a kind of game and it's a game where we have points and if you have points, there's this one move where you can reduce your points by a number and increase someone else's points by the same number. So it's a fair game, hopefully. Well, it's one kind of fair game. For example, you can have other kinds of fair games. 
You're going to have a game where if I give someone a point and you give someone a point and instead of that person getting two points, that person gets four points and that's also fair. But, you know, money is easy to kind of set up and it serves a lot of useful functions and so it kind of just survives in society as a meme for thousands of years. It's useful for the storage of wealth, it's useful for the exchange of value. And it's also useful for denominating future payments, a unit of account. A unit of account. So what, if you look at the history of money in human civilization, just if you're a student of history, how has its role or just the mechanisms of money changed over time in your view? Even if we just look at the 20th century or before and then leading up to cryptocurrencies, that's something you think about? Yeah, and I think the big thing in the 20th century is kind of, we saw a lot more intermediation, I guess. The first part is kind of the move from adding more of different kinds of banking and then we saw the move from dollars being backed by gold to dollars being backed by gold that's only redeemable by certain people to dollars not being backed by anything to this system where you have a bunch of free floating currencies and then people getting kind of bank accounts and then those things becoming electronic, people getting accounts with payment processors that have bank accounts. So what do you make of that, that's such a fascinating philosophical idea that money might not be backed by anything. Is that like fascinating to you that money can exist without being backed by something physical? It definitely is. What do you make of that? How is that possible? Is that stable? If we look at the future of human civilization, is it possible to have money at the large scale at such a hugely productive and rich societies be able to operate successfully without money being backed by anything physical? I feel like the interesting thing about the 21st century especially is that a lot of the important valuable things are not backed by anything. If you look at tech companies for example, something like Twitter, you could theoretically imagine that if all of the employees wanted to, they could kind of come together, they would quit and start working on Twitter 2.0 and then the value of and just kind of build the exact same product or possibly build a better product and then just kind of continue on from there and the original Twitter would just not have people left anymore. There is theoretically kind of code and IP that's owned by the company but in reality like good programmers could probably rewrite all that stuff in three months. So the reason why the thing has value is just kind of network effects and coordination problems right like these employees in reality aren't going to switch all at once and also the users aren't all going to switch at once because it's just difficult for them to switch at once and so there's these kind of meta stable and of equilibrium in interactions between thousands of millions of people that are just actually quite sticky even though if you try to kind of assume that everyone's a perfectly rational and kind of perfectly slippery spherical cow they don't seem to exist at all. That stickiness. Do you have a sense, a grasp of the sort of the fundamental dynamic like the physics of that stickiness? It seems to work but and I think some of the cryptocurrency ideas kind of rely on it working. 
It's the sort of thing that's definitely been economically modeled a lot like one of the kind of analogy of something as similar that you often see in textbooks as like what is a government like if for example like 80% of people in a country just like tomorrow suddenly had the idea that like the laws that are currently the laws of the government that currently is the government are just people and some other thing is the government and they just kind of start acting like it then that would kind of become the new reality and then the question is well what happens if and if between zero and 80% of people start believing that and like what is the thing you also you see is that if there is one of these kind of switches happening is kind of revolution then if you're the first person to join then like you probably don't have the incentive to do that but then if you're the 55th percentile person to join then suddenly becomes quite safe too and so it's definitely is the sort of thing that you can kind of try to analyze and understand mathematically but one of the kind of results is that the sort of like when the switch happens definitely can be chaotic sometimes yeah but still like to me the idea that the network affects that the fact that human beings at a scale like millions billions can share even the idea of currency like yeah I'll agree that's just uh I know economists can model it I'm a skeptic on the economic and uh it's like uh so my my favorite sort of field maybe recreationally psychology is trying to understand human behavior and I think sometimes people just kind of pretend that they can have a grasp on human behavior even though we it's such a messy space that all the models that psychology or economics those different perspectives on human behavior can have are are difficult it's difficult to know how much that's wishful thinking and how much it is actually getting to the core of uh understanding human behavior but on that idea what do you think is the role of money in human motivation so do you think money from an economics perspective from a psychology perspective is core to like human desires money is definitely very far from the only motivator um it is a big motivator and that's uh one of the closest things you have to a universal motivator think because ultimately in like almost any person in the world if you ask them to do something like they'll be more inclined to do it if you also offer some uh offer them money right and that's uh like there's definitely many cases where people will do things other than things that maximize how much money they have and that happens all the time but like though a lot of those other things are kind of but much more specific to and of who that person is and of what their situation is the relationship between the motive and the action and these other things what do you think is the interplay of the other motivator from like Nietzsche perspective is power do you think money equals power do you think those are conflicting ideas do you think i mean that's the one of the ideas that decentralized currency decentralized applications are looking at is uh who holds the power yeah money is definitely a kind of power and there's definitely people who want money because it gives them power and even if my money doesn't seem to and if explicitly be about money a lot of things that people spend money on are ultimately about a social status of some kind um so i mean i definitely view those two things as an of interplaying and then there's also money as just a 
way of like measuring how successful you are like as a scoreboard right so this kind of gets back to the game i mean like if you have four billion dollars then the main benefit you get from going up one of the big benefits you get from going up to six million dollars is that now instead of being below the guy who has five you're above the guy who has five so you think money could be kind of uh in the game of life it's also a measure of self worth it's like how we it's definitely how uh how a lot of people perceive it define ourselves in the hierarchy of yeah and i'm not yeah not saying it's kind of a healthy thing that people uh define their self worth as money because it's definitely kind of far from a uh perfect indicator of how much you value you provide the society or anything like this but i definitely think that like as a matter of kind of current practice so much of people do feel that way so what does utopia from an economic perspective look like to you what does the perfect world look like i guess the economist's utopia would be one where kind of everything is incentive aligned in the in the sense that there aren't enough conflicts between what satisfies your goals and kind of what is uh good for everyone in the world in the world as a whole what do you think that would um look like does does that mean there's still poor people and rich people there's still income inequality do you think sort of uh marxist ideas are strong do you think sort of ideas of uh objectivism like where the market rules is strong like what is there is the different economic philosophies that just seem to be reflective what utopia would be so i definitely think that existing economic philosophies do end up kind of systematically kind of deviating from the utopia in a lot of ways yeah like one of the big things i talk about for example is public goods right and public goods are especially important on the internet right because like the idea is with kind of money as this game where you know i lose a few coin a few coins and you gain the same number of coins is that this usually happens in a trade where i lose some money you gain some money you lose a sandwich and i gain a sandwich and this kind of model works really well when the thing that we're using money to incentivize this kind of private goods right things that you provide to one person where the benefit comes to one person but the like on the internet especially but also many many contexts and if off the internet there's actions that kind of individuals or groups can take where instead of the benefit going to one person the benefit just goes to many people at the same time and you can't control where the benefit goes to right so for example this podcast you know we publish it and when it's published you don't have any fine grains control over like oh these 38 000 people can watch it and then like these other 29 000 people can't it's like once the number goes high enough then you know people just like copy it and then when i write articles on a blog then they're just like free for everyone and that stuff's even harder to prevent anyone from copying so and aside from that things like you know scientific research for example and even taking more pedestrian examples like climate change mitigation would be a big one so there's a lot of things in the world where you have these kind of individual actions with enough concentrated costs and distributed benefits and money as a point system does not do a good job of encouraging these things and one of the kind 
of other things, even tangentially connected to crypto but theoretically outside of it, that I work on is this mechanism called quadratic funding. And the way to think about it is, imagine a point system where, if one person gives coins to one other person, then it works the same way as money. But if multiple people give coins to one person, and they do so anonymously, so it's not in consideration for a specific service to that person themselves, then the number of coins received by that person is greater than just the sum of the coins given by those different people. The actual formula is: you take the square root of the amount that each person gave, then you add up all the square roots, and then you square the sum. Yeah, and then you give that. The idea here would basically be that if, let's say, you just start going off and planting a lot of trees, and there's a bunch of people that are really happy that you're planting trees, and so they all go and throw a coin your way, then the fact that you get more than the sum, you get this square of the sum of the square roots of these small contributions, actually compensates for the tragedy of the commons. There's even a mathematical proof that it sort of optimally compensates for it. What is the tragedy of the commons? This is just the idea that if there's a situation where there's some public good that lots of people benefit from, then no individual person wants to contribute to it, because if they contribute, they only get a small part of the benefit from their contribution, but they pay the full cost of their contribution. In which context is this, sorry, what is the term, quadratic funding, in which context is this mechanism useful? So obviously you said to combat the tragedy of the commons, but in which context do you see it as useful, actually, practically? Yeah, theoretically public goods in general. So like what services, what are we talking about, what's a public good? Yeah, so within the Ethereum ecosystem, for example, we've actually tried using this mechanism. I wrote a couple of articles about this on vitalik.ca where I go through some of the most recent rounds, and it's been really interesting. Some of the top things that people supported were online user interfaces that make it easier for people to interact with Ethereum, there was documentation, there were podcasts, there were software clients, implementations of the Ethereum protocol, privacy tools, just lots of things that are useful to lots of people, when a lot of people are contributing to funding a particular entity. Yeah, that's really interesting. Is there something special about the quadratic, the summing of the square roots? Yeah, so another way to think about it is, imagine if n people each give a dollar; then the person gets n squared. And so each individual person's contribution gets multiplied by n, because you have n people, and so that perfectly compensates for the n-to-one tragedy of the commons. I just wonder if the squared part is fundamental. It is. And I'd recommend you go on vitalik.ca; I have this article called Quadratic Payments: A Primer.
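To pin down the formula just described, here is a minimal sketch in Python, using the same "n people each give a dollar" intuition from above. The function name and toy numbers are mine; this is not Gitcoin's or any production matching code, and it ignores the fixed matching pool that real rounds scale their payouts to fit.

```python
import math

def quadratic_funding_amount(contributions):
    """Total received by one project: (sum of square roots of contributions) squared."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# n people each giving 1 coin -> the project receives n^2,
# so each individual coin is effectively amplified n-fold.
print(quadratic_funding_amount([1] * 10))   # 100.0
print(quadratic_funding_amount([10]))       # about 10: one big donor gets no boost
```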
And highly recommended. It's at least my attempt so far at explaining the intuition behind this. So if we could, can we go to the very basics: what is the blockchain? Or perhaps we might even start at the Byzantine generals problem, and Byzantine fault tolerance in general, that Bitcoin was taking steps toward providing a solution for. So the Byzantine generals problem: it's this paper that Leslie Lamport published in 1982, where he has this thought experiment where you have two generals that are camped out on opposite sides of a city, and they're planning when to attack the city. The question is, how could those generals coordinate with each other? They could send messengers between each other, but those messengers could get sniped by the enemy on the road, some of those messengers could end up being traitors, things like that could happen. With just two generals, it turns out that there's no solution in a finite number of rounds that guarantees that they will be able to coordinate on the same answer. But then, in the case where you have more than two generals, Leslie analyzes cases like: are the messages just oral messages, or are the messages signed messages, so I could give you a signed message and you can pass along that signed message and a third party can still verify that I originally made that message. And depending on those different cases, there are different bounds on, given how many generals and how many traitors among those generals, under what conditions you actually can or can't agree on when to launch an attack. So it's actually a big misconception that the Byzantine generals problem was unsolved; let's say Lamport solved it. The thing that was unsolved, though, is that all of these solutions assume that you've already agreed on and have fixed the list of who the generals are, and these generals have to be semi-trusted to some extent. They can't just be anonymous people, because if they're anonymous, then the enemy could just be 99 percent of the generals. Right. In the 1980s and the 1990s, the general use case for distributed systems stuff was more enterprisey stuff, where you could assume that you know who the nodes are that are running these computer networks. So if you want to have some kind of decentralized computer network that pretends to be a single computer, and that you can do operations on, then it's made out of, say, these 15 specific computers, and we know who and where they are, and so we have a good reason to believe that at least 11 of them would be fine. And it could also be within a single system. Exactly. Almost a network of devices, sensors and so on, like in airplanes, and I think flight systems in general still use these kinds of ideas. Yep, yep. So that's the 80s and 90s. Now, the cypherpunks had a different use case in mind, which is that they wanted to create a fully decentralized, global, permissionless currency. And the problem here is that they didn't want any authorities, and they didn't even want any kind of privileged list of people. And so now the question is, well, how do you use these techniques to create consensus when you have no way of measuring identities, no way of determining whether or not some 99 percent of participants are actually all the same guy?
And so the clever solution that Satoshi had, and this is going back to that presentation I made at Devcon a few months ago, where I said that the thing Satoshi invented with cryptoeconomics, is this really neat idea that you can use economic resources to limit how many identities you can get. And if there isn't any existing decentralized digital currency, then the only way to do this is with proof of work. So with proof of work, the solution is just: you publish a solution to a hard mathematical puzzle that takes some clearly calculable amount of computational power to solve, and you get an identity. If you solve five of those puzzles, you get five identities. And then these are the identities that we run the consensus algorithm between. So the proof of work mechanism you just described is like the fundamental idea proposed in the white paper that defines Bitcoin. What's the idea of consensus that we wish to reach? Why is consensus important here? What is consensus? So the goal here, in just simple technical terms, is to basically wire together a large number of computers in such a way that they pretend to the outside world to be a single computer, where that single computer keeps working even if a large portion of the constituents, the computers that make it up, break, and break in arbitrary ways: they could shut off, they could try to actively break the system, they could do lots of mean things. The reason why the cypherpunks wanted to do this is because they wanted to run one particular program on this virtual computer, and the one particular program that they wanted to run is just a currency system. It's a system that just processes a series of transactions, and for every transaction it verifies that the sender has enough coins to pay for the transaction, it verifies that the digital signature is correct, and if the checks pass, it subtracts the coins from one account and adds the coins to the other account, roughly. So first of all, the proof of work idea, I mean, at least to me, seems pretty fascinating. It is. I mean, it's kind of a revolutionary idea. Is it obvious to come up with the idea that you can exchange basically computational resources for identity? It actually has a pretty long history. It was first proposed in a paper by Cynthia Dwork and Moni Naor in 1994, I believe, and the original use case was combating email spam. So the idea is that if you send an email, you have to send it with a proof of work attached, and this makes it reasonably cheap to send emails to your friends, but it makes it really expensive to send spam to a million people. Yeah, that's a simple, brilliant idea. So maybe also taking a step back, what is the role of blockchain in this? What is the blockchain? Sure. So the blockchain, my way of thinking about it, is that it is this system where you have one virtual computer created by a bunch of these nodes in the network. And the reason why the term blockchain is used is because the data structure that these systems use, at least so far, is one where different nodes in the network periodically publish blocks, and a block is a list of transactions together with a pointer, like a hash, of a previous block that it builds on top of. And so you have a series of blocks that nodes in the network create, where each block points to the previous block, and so you have this chain of them.
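The "one particular program" described a moment ago really is this small once consensus is solved. Here is a minimal sketch with invented names; the signature check is a stub standing in for real public-key verification, so treat it only as an illustration of the state transition, not Bitcoin's actual code.

```python
def apply_transaction(balances, tx, verify_signature):
    """One step of the currency state machine: check the signature, check the
    sender's balance, then move the coins. `balances` maps account -> coins."""
    if not verify_signature(tx):
        raise ValueError("invalid signature")
    if balances.get(tx["sender"], 0) < tx["amount"]:
        raise ValueError("insufficient funds")
    balances[tx["sender"]] -= tx["amount"]
    balances[tx["receiver"]] = balances.get(tx["receiver"], 0) + tx["amount"]

ledger = {"alice": 50}
apply_transaction(ledger,
                  {"sender": "alice", "receiver": "bob", "amount": 20},
                  verify_signature=lambda tx: True)   # stub: always accepts
print(ledger)   # {'alice': 30, 'bob': 20}
```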
Is a fault tolerance mechanism built into the idea of blockchain, or are there a lot of possibilities for different ways to make sure there's no funny stuff going on? There are indeed a lot of possibilities. So in the kind of simple architecture I just described, the way the fault tolerance happens is like this. You have a bunch of nodes, and they're just happily, occasionally creating blocks, building on top of each other's blocks. Let's say you have one block, we'll call it block one, and then someone else builds another block on it, we'll call it block two. Then we have an attacker, and what the attacker tries to do is revert block two. And the way they revert block two is, instead of doing the thing they're supposed to do, which is build a block on top of block two, they're going to build another block on top of block one. So you have block one, which has two children: block two and block two prime. Now, this might sometimes even happen by random chance, if two nodes in the network just happen to create blocks at the same time and they don't hear about each other's blocks before they create their own, but it also could happen because of an attack. Now, if this happens, in the Bitcoin system the nodes follow the longest chain. So if this attack had happened when the original chain had more than two blocks on it, so if it was trying to revert more than two blocks, then everyone would just ignore it and keep following the regular chain. But here we have block two and we have block two prime, and so the two are even, and then whatever block the next block is created on top of, say block three is now created on top of block two prime, then everyone agrees that block three is the new head, block two is just forgotten, and then everyone just peacefully builds on top of block three and the thing continues. So how difficult is it to mess with the system? If we look at the general problem, what fraction of people who participate in the system have to be bad players in order to mess with it, truly? Is there a good number? There is. Well, depending on what your model of the participants is and what kind of attack we're talking about, it's anywhere between 23.2 percent and 50 percent of all of the computing power in the network. Sorry, so between 23.2 and 50 percent can be compromised? So once your portion of the total computing power in the network goes above the 23.2 percent level, then there are mean things that you can potentially do, and as your percentage of the network keeps going up, your ability to do mean things goes higher, and then if you have above 50 percent, then you can just break everything. So how hard is it to achieve that level? It seems that, so far, historically speaking, it's been exceptionally difficult. So this is a challenging question. The economic cost of acquiring that level of stuff from scratch is fairly high; I think it's somewhere in the low billions of dollars. And when you say that stuff, you mean computational resources? Yeah, specifically specialized hardware, ASICs, that people use to solve these puzzles to do the mining these days.
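Since the block-two versus block-two-prime story and the mining puzzles both just came up, here is a toy Python sketch of the two together. Everything is simplified and the names are mine, not from any real client: real Bitcoin hashes 80-byte headers, adjusts difficulty, and validates the transactions inside each block, none of which is modeled here.

```python
import hashlib, json

EASY_TARGET = 1 << 248   # absurdly easy difficulty so the demo runs instantly

def sha256_int(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def block_hash(block) -> int:
    return sha256_int(json.dumps(block, sort_keys=True).encode())

def mine_block(parent_hash, txs, target=EASY_TARGET):
    """Proof of work: try nonces until the block's hash falls below the target."""
    nonce = 0
    while True:
        block = {"parent": parent_hash, "txs": txs, "nonce": nonce}
        if block_hash(block) < target:
            return block
        nonce += 1

def chain_length(block, blocks_by_hash) -> int:
    """Walk parent pointers back to the genesis block."""
    n = 0
    while block is not None:
        n += 1
        block = blocks_by_hash.get(block["parent"])
    return n

def fork_choice(heads, blocks_by_hash):
    """Bitcoin's rule: the head of the longest chain wins."""
    return max(heads, key=lambda b: chain_length(b, blocks_by_hash))

# Reproduce the story above: block 2 and block 2' compete at the same height,
# block 3 extends block 2', so that branch becomes canonical and block 2 is forgotten.
genesis = mine_block(None, ["coinbase"])
index = {block_hash(genesis): genesis}
b1 = mine_block(block_hash(genesis), ["tx a"]); index[block_hash(b1)] = b1
b2 = mine_block(block_hash(b1), ["tx b"]);      index[block_hash(b2)] = b2
b2p = mine_block(block_hash(b1), ["tx b'"]);    index[block_hash(b2p)] = b2p
b3 = mine_block(block_hash(b2p), ["tx c"]);     index[block_hash(b3)] = b3
assert fork_choice([b2, b3], index) is b3
```

Run as-is, the final assert reproduces the outcome described above: once block three extends block two prime, that branch wins the fork choice.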
A small tangent: so obviously I work a lot in deep learning with GPUs and ASICs for that application, and I tangentially hear that sometimes NVIDIA GPUs are sold out because of this other application. If you can comment, I don't know if you're familiar or interested in this space: what kind of ASICs, what kind of hardware is generally used these days to do the actual computation for the proof of work? Sure. So the cases of Bitcoin and Ethereum are a bit different. In the case of Bitcoin, there is an algorithm called SHA-256, it's just a hash function, and so the puzzle is just coming up with a number where the hash of the number is below some threshold. And because the hashes are designed to be random, you just have to keep on trying different numbers until one works. And the ASICs are just specialized circuits that contain circuits for evaluating this hash over and over again, and you have millions or billions of these hash evaluators stacked on top of each other inside of a box, and you just keep running the box 24/7. So the ASICs are literally specialized hardware designed for this. Yes. Oh, this is a bit of an amazing world. Another tangent, and I'll come back to the basics, but does quantum computing throw a wrench into any of this? Very good question. So quantum computers have two main families of algorithms that are relevant to cryptography. One is Shor's algorithm, and Shor's algorithm is one that completely breaks the hardness of some specific kinds of mathematical problems. The one that you've probably heard of is that it makes it very easy to factor numbers, to figure out what prime factors you need to multiply together to get some number, even if that number is extremely big. Shor's algorithm can also be used to break elliptic curve cryptography, it can break any kind of hidden order groups, it breaks a lot of the cryptographic nice things that we're used to. But the good news is that for every major use of the things that Shor's algorithm breaks, we already know of quantum proof alternatives. Now, we don't use these quantum proof alternatives yet, because in many cases they're five to ten times less efficient, but the crypto industry in general knows that this is coming eventually and is ready to take the hit and switch to that stuff when we have to. The second algorithm that is relevant to cryptography is Grover's algorithm, and Grover's algorithm might even be more familiar to AI people; it's usually described as solving search problems. The idea here is that if you have a problem of the form, find a number that satisfies some property, then if with a classical computer you need to try n times before you find the number, with a quantum computer you only need to do square root of n computations. And Grover's could potentially be used for mining, but there are two possibilities here. One is that Grover's could be used for mining, and whoever creates the first working quantum computer that can do Grover's will just mine way faster than everyone else, and we'll see another round of what we saw when ASICs came out, which is that the new hardware just dominated the old stuff and then eventually it switched to a new equilibrium. By the way, way faster, but not exponentially faster? Quadratically faster. Quadratically faster, which is not, sort of, it's not game changing, I would say. It's like ASICs, like you said it would be. Exactly, yeah. So it would not necessarily break proof
of work as a that's right yeah now the other kind of possible world right is that quantum computers have a lot of overhead there's a lot of a complexity involved in maintaining quantum states and there's also as we've been realizing recently making quantum computers actually work requires kind of quantum error correction which requires kind of a thousand real qubits per logical qubit and so there's the very real possibility that the overhead of running a quantum computer will be higher than the speed up you get with grover's which would be kind of sad but which would also mean that the given proof of work will just keep working fine so beautifully put so so proof of work is the core idea of bitcoin is there other core ideas before we kind of take a step towards the origin story and ideas of ethereum is there other stuff that were key to the white paper of bitcoin there is proof of work and then there's just the cryptography is just kind of public keys and signatures that are used to verify transactions those two are the big things so then what is the origin story maybe the human side but also the technical side of ethereum sure so i joined the bitcoin community in 2011 and i started by just writing i first wrote for this sort of online thing called bitcoin weekly then i started writing for bitcoin magazine um and uh sorry to interrupt you have this funny kind of uh story true or not is uh that you were disillusioned by the downsides of centralized control from your experience with wow world of warcraft is this true or you're just being witty uh i mean the event is true the fact that that's the reason i do decentralization is witty maybe just a small tangent do have you always had a skepticism of centralized control is that sort of degree yeah has that feeling evolved over time or has that just always been a core feeling that decentralized control is the future of a human society it's definitely been something that felt very attractive to me ever since i could have learned that such a thing is possible it's possible even yeah so great so you're you joined the bitcoin community in 2011 you said you began writing so what's next started writing uh moved from high school to university halfway in between that and spent a year in university then at the end of that year i dropped out to do bitcoin things full time and this was a combination of continuing to write bitcoin magazine but also increasingly work on software projects and i traveled around the world for about six months and just going to different bitcoin communities like i went to first in new hampshire then spain other european places israel and then san francisco and along the way i've met a lot of other people that are working on different bitcoin projects and when i was in israel there were some very smart teams there that were working on ideas that people were starting to kind of call bitcoin 2.0 so one of these were colored coins which is basically saying that hey let's not just use the blockchain for bitcoin but let's also like kind of issue other kinds of assets on it and then there was a protocol called master coin that supported issuing assets but also supported many other things like financial contracts like domain name registration and a lot of different things together and i spent some time working with these teams and i quickly kind of realized that this master coin protocol could be improved by kind of generalizing it more right so the best the analogy i use is that the master coin protocol was like this swiss army knife 
You have 25 different transaction types for 25 different applications, but what I realized is that you could replace a bunch of them with things that are more general purpose. One of them was that you could replace, say, three transaction types for three types of financial contracts with a generic transaction type for a financial contract that just lets you specify a mathematical formula for how much money each side gets. By the way, a small pause: you say financial contract, just the terminology, what is a contract, what's a financial contract? So this is just generally an agreement where either one or two parties put collateral in, and then, depending on certain conditions, this could involve prices of assets, this could involve the actions of the two parties, it could involve other things, they get different amounts of assets out that just depend on things that happened. So a financial contract, at the core, is the core interactive element of a financial system? Yeah, and there are many different kinds of financial contracts. There are things like options, where you give someone the right to buy a thing that you have for some specific price for some period of time. There are contracts for difference, where you're basically making a bet that says, for every dollar this thing goes up, I'll give you seven dollars, or for every dollar that thing goes down, you give me seven dollars, or something like that. But the main idea is that these contracts have to be enforced and trusted. Yes, exactly. You have to trust that they will work out, in a system where nobody can be trusted. Yes. This is such a beautiful, complicated system. Okay, so you were seeking to generalize this basic framework of contracts. So what does that entail? What, technically, are the steps to creating Ethereum? Sure. So I guess just to continue a bit with this Mastercoin story, I started by giving ideas for how to generalize the thing, and eventually this turned into a much more fully fledged proposal that just says, hey, how about you scrap all your features, and instead you just put in this programming language. And I gave this idea to them, and their response was something like, hey, this is great, but this seems complicated, and it seems like something that we're not going to be able to put onto our roadmap for a while. And my response to this was like, wait, do you not realize how revolutionary this is? Well, I'll just go do it myself. What was the name of the programming language? I just called it Ultimate Scripting. Great. So then I went through a couple more rounds of iteration, and then the idea for Ethereum itself started to form. And the idea here is that you just have a blockchain where the core unit of the thing is what we call contracts: these are accounts that can hold assets, and they have their own internal memory, but they are controlled by a piece of code. And so if I send some ether, the currency inside Ethereum, to a contract, the only thing that can determine where that ether goes after that is the code of that contract itself. And so sending assets to computer programs becomes this paradigm for creating these sorts of agreements. Self executing agreements. Self executing. It's so cool that code is sort of part of this contract. So that's what's meant by smart contracts.
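To make "accounts controlled by a piece of code" concrete, here is a deliberately tiny caricature in Python: a time-locked vault whose code, not any administrator, decides whether funds can leave. Real Ethereum contracts are written in languages like Solidity and executed as EVM bytecode, so treat this only as an illustration of the idea, with invented names throughout.

```python
class ToyTimelockContract:
    """An account whose behavior is fully determined by its code: anyone can
    deposit, but withdrawals succeed only for the beneficiary after unlock_time."""

    def __init__(self, beneficiary: str, unlock_time: int):
        self.balance = 0
        self.beneficiary = beneficiary
        self.unlock_time = unlock_time

    def deposit(self, amount: int) -> None:
        self.balance += amount          # anyone may fund the contract

    def withdraw(self, caller: str, now: int) -> int:
        # the code itself is the only arbiter of where the funds go
        if caller != self.beneficiary or now < self.unlock_time:
            raise PermissionError("contract conditions not met")
        paid, self.balance = self.balance, 0
        return paid

vault = ToyTimelockContract(beneficiary="alice", unlock_time=1_700_000_000)
vault.deposit(10)
# vault.withdraw("bob", now=1_800_000_000)    # would raise: wrong caller
print(vault.withdraw("alice", now=1_800_000_000))   # 10
```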
yeah so how hard was it to build this kind of thing harder than expected um and originally i actually thought that this would be a thing that i would kind of casually work on for a couple of months publish and then go back to university um then i released it and a bunch of people or i released the white paper the white paper the idea is there the idea the white paper um a whole bunch of people came in offering the help a huge number of people and have expressed interest and this was something i was totally not expecting and then i kind of realized that this would be something that's kind of much bigger than i had ever um thought that it would be and then we started on this kind of much longer development slog of making something that lives up to this sort of much higher level of expectations what are the some of the is it fundamentally like software engineering challenges it was their social okay so there's social so so what are the biggest interesting challenges that you've learned about human civilization and in in software engineering through this process so i guess one of the challenges for me is that like i'm one of the kind of apparently unusual geek schoolers and i've never treated with anything but kindness in school yes um and so when i got into crypto i kind of expected everyone would just kind of be the same kind of altruistic and nice in that same way um but the um kind of the algorithm that i used for finding cofounders for this thing was not very good it was sort of literally one computer scientist called the greedy algorithm it's sort of the first 15 people who replied back offering to help kind of are the cofounders oh you mean like literally the the the people that for will form to be the the founders cofounders of the community the algorithm i like how you call it the algorithm yeah um and so what happened was that uh these um like especially as the project got really big like there started to be a lot of this kind of infighting and there were a lot of like i wanted the thing to be a non profit and some of them wanted to be a for profit uh and then there started to be people who were just kind of totally unable to work with each other there were people that were kind of trying to get an advantage for themselves in a lot of different ways and this uh just about six months later led to this big governance crisis and then we kind of reshuffled leadership a bit and then uh the project kept on going then nine months later there was another governance crisis and then there was a third governance crisis and so is there a way to if you're looking at the human side of things is there a way to optimize this aspect of the cryptocurrency world it seems that there is uh from my perspective there's a lot of different characters and personalities and egos and like you said uh i don't know if you know i also like to think that most of the world most of the people in the world are well intentioned but the way those intentions are realized may perhaps come off as uh yeah as as negative like what uh is there is there a hopeful message here about creating a governance structure for cryptocurrency that uh where everyone gets along and after about four rounds of reshuffle like i think we've actually come up with something that seems to be pretty stable and happy um i think uh i mean i definitely do think that most people are well intentioned i just think that like one of the reasons why i like decentralization is just because there's like this thing about power where power attracts people with 
You think ego has a use? Is ego always bad? It seems like it sometimes does, but then with the Ethereum research team, I feel like we've found a lot of very good people who are primarily interested in things for the technology, and things seem to generally be going quite well. Yeah, when the focus and the passion are in the tech. So that's the human side of things, but on the technology side, what have you learned, what have been the biggest challenges of bringing Ethereum to life? I think, first of all, there's the first law of software development, which is that when someone gives you a timetable, you switch the unit of time to the next largest unit of time and add one. And we basically fell victim to that. So instead of taking three months, it ended up taking about 20 months to launch the thing, and that was just underestimating the sheer technical complexity. There are research challenges too. For example, one of the things that we've been saying from the start that we would do is a switch from proof of work to proof of stake, where proof of stake is this alternative consensus mechanism in which, instead of having to waste a lot of computing power on solving mathematical puzzles that don't mean anything, you prove that you have access to coins inside the system, and that gives you some level of participation in the consensus. Can you maybe elaborate on that a little bit? I understand the idea of proof of work, and I know that a lot of people say the idea of proof of stake is really appealing. Can you linger on it and explain what it is? Sure. So basically the idea is, if I lock up a hundred coins, then I turn that into a kind of, quote, virtual miner, and the system itself automatically and randomly assigns that virtual miner the right to create blocks at particular intervals. And if someone else has 200 coins and they lock up their 200 coins, then they get a virtual miner that's twice as big, and they'll be able to create blocks twice as often. So it tries to do similar things to proof of work, except instead of the thing rate limiting your participation being your ability to crank out solutions to hash challenges, the thing that limits your participation is how many coins you're locking into the mechanism. Okay, interesting. So that limited participation doesn't require you to run a lot of compute. Does that mean that the richer you are, the more of a say you have, or whatever the right terminology is? Right, and this is definitely a common critique. I think my usual answer is that proof of work is even more of that kind of system. Exactly, yeah, and I didn't mean that statement as a criticism. I think you're exactly right that it's equivalent, proof of work is the same kind of thing, but in proof of work you have to also use physical resources. Yes, and burn computers and burn trees and all of that. Is there a way to mess with the system in proof of stake? There is, but you would once again need to have a very large portion of all the coins that are locked in the system to do anything bad. Got it.
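To make the virtual miner idea concrete, here is a minimal Python sketch of stake-weighted proposer selection. It is a toy illustration of the proportionality Vitalik describes, not Ethereum's actual validator-selection algorithm; the names and numbers are invented.

```python
import random

# Toy stake-weighted block-proposer selection -- an illustration of the
# "virtual miner" idea, not Ethereum's actual algorithm.
stakes = {"alice": 100, "bob": 200, "carol": 50}  # coins locked up by each validator

def pick_proposer(stakes):
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    # Probability of being picked is proportional to stake, so bob
    # (200 coins) is chosen roughly twice as often as alice (100 coins).
    return random.choices(validators, weights=weights, k=1)[0]

counts = {v: 0 for v in stakes}
for _ in range(10_000):
    counts[pick_proposer(stakes)] += 1
print(counts)  # roughly in the ratio 100 : 200 : 50
```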
Yeah, and maybe to take a small tangent: one of the criticisms of cryptocurrency is that, with the proof of work mechanism, you have to use so much energy. Yes. Is one of the motivations of proof of stake to move away from this? Definitely. What's your sense, and maybe I'm just underinformed, is there legitimately an environmental impact from this? Yeah, the latest figure was that Bitcoin consumes as much energy as the country of Austria, or something like that, and Ethereum right now is maybe only half an order of magnitude smaller than Bitcoin. I've heard you talk about Ethereum 2.0. So what's the dream of Ethereum 2.0? What's the status of proof of stake as the mechanism that Ethereum moves towards, and how do you move to a different consensus mechanism within a cryptocurrency? So Ethereum 2.0 is a collection of major upgrades that we've wanted to do to Ethereum for quite some time. The two big ones: one is proof of stake, and the other is what we call sharding. Sharding solves another problem with blockchains, which is scalability. What sharding does is it basically says, instead of every participant in the network having to personally download and verify every transaction, every participant only downloads and verifies a small portion of transactions, and you randomly distribute who gets how much work. Because the distribution is random, it still has the property that you need a large portion of the entire network to corrupt what's going on inside of any shard, but the system is still very redundant and very secure. That's brilliant. How hard is that to implement, and how hard is proof of stake to implement, on the technical level, the software level? Proof of stake and sharding are both challenging. I'd say sharding is a bit more challenging. The reason is that proof of stake is just a change to how the consensus layer works. Sharding does that too, but it's also a change to the networking layer. The reason is that sharding is kind of pointless if, at the networking layer, you still do what you do today, which is to gossip everything, which means that if someone publishes something, every other node in the network hears it. Instead we have to have subnetworks, the ability to quickly switch between subnetworks, and the ability for the subnetworks to talk to each other. This is all doable, but it's a more complex architecture, and it's definitely the sort of thing that has not yet been done in cryptocurrency. So most of the networking layer in cryptocurrency today is, you're shouting, you're broadcasting messages, and this is more like ad hoc networks, you're shouting within smaller groups. Smaller groups, but you have a bunch of subnets. Exactly, and you have to switch between them.
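The reason random assignment keeps shards hard to corrupt can be shown with a toy simulation: even if an attacker controls a large minority of all validators, the chance that they dominate any one randomly sampled committee falls off very quickly with committee size. The committee size and attacker share below are illustrative, not Ethereum's actual parameters.

```python
import random

# Toy model of random shard-committee sampling -- illustrative only.
NUM_VALIDATORS = 10_000
ATTACKER_SHARE = 0.30     # attacker controls 30% of all validators
COMMITTEE_SIZE = 128

# True marks an attacker-controlled validator.
validators = [i < NUM_VALIDATORS * ATTACKER_SHARE for i in range(NUM_VALIDATORS)]

def attacker_controls(committee, threshold=2 / 3):
    # A committee is "captured" if the attacker holds a 2/3 supermajority.
    return sum(committee) / len(committee) >= threshold

trials, captured = 10_000, 0
for _ in range(trials):
    committee = random.sample(validators, COMMITTEE_SIZE)
    captured += attacker_controls(committee)

print(f"captured committees: {captured} out of {trials}")  # essentially always 0
```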
Oh man, I'd love to see that. It's a beautiful idea from a graph theoretical perspective. But just the software side, who's responsible? The Ethereum project, the people involved, would they be implementing it? This is legit software engineering. How does that work, how do people collaborate to build that kind of project? Is there a software engineering lead? Is it a legit, almost large scale open source project? There is, yeah. We have someone named Danny Ryan on our team who's just been brilliant and great all around, and he is a kind of de facto development coordinator, I guess. You have to invent job titles for this stuff. The reason is that we also have this unique organizational structure where the Ethereum Foundation itself does research in house, but the actual implementation is done by independent teams that are separate companies, located all around the world, in fun places like Australia. So you just need a bunch of almost nonstop cat herding to keep getting these people to talk to each other, to implement the spec, and to make sure that everyone agrees on what's going on and how to interpret different things. So how far into the future are we from these two mechanisms in Ethereum 2.0? What's your sense of the timeline, keeping in mind the previous comment you made about the general curse of software projects? Ethereum 2.0 is split into three phases. Phase zero just creates a proof of stake network, and it's actually separate from the proof of work network at the beginning, just to give it time to grow and improve itself. Do people get to choose, sorry to interrupt, do people get to choose? I guess, yes, they get to choose to move over if they want to. Then phase one adds sharding, but it only adds sharding of data storage and not sharding of computation. And after that there is the merger phase, which is where the accounts, the smart contracts, all of the activity in the existing Eth1 system just gets cut and pasted into Eth2, the proof of work chain gets forgotten, and all the things that were living there before just continue living inside of the proof of stake system. For timelines, phase zero has been almost fully implemented, and now it's just a matter of a whole bunch of security auditing and testing. My own experience is that right now it feels like we're at a point comparable to when we were doing the original Ethereum launch and were maybe about four months away from launch. But that's just a hunch. That's just a hunch, yeah. So, you know, it took over a decade for people to move from Python 2 to Python 3. How do you see the move to this phase zero, to a different consensus mechanism? Do you see there being a drastic phase shift, with people just jumping to this better mechanism? In phase zero I don't expect too many people to do much, because in phase zero and phase one the new chain deliberately doesn't have too much functionality turned on. It's there so that if you want to be a proof of stake validator, you can get started, and if you want to store data for other blockchain applications, you can get started, but existing applications will largely keep living on Eth1. Then when the merger happens, the merger is an operation that happens all at once. That's one of the benefits of a consensus system: on the one hand you have to coordinate the upgrade, but on the other hand the upgrade can be coordinated.
So what's Casper FFG, by the way? Casper FFG is the consensus algorithm that we are using for proof of stake. Is there something interesting, something specific about Casper FFG, some beautiful aspect of it? There is. Casper FFG combines together two different schools of consensus algorithm design. The two schools are: one is 50 percent fault tolerant but dependent on network synchrony, so it tolerates up to 50 percent of faults but not more, and it depends on the assumption that all of the nodes can talk to each other within some limited period of time; if I send a message, you'll receive it within a few seconds. The second school is 33 percent fault tolerant but safe under asynchrony, which means that if we agree on something, then that thing is finalized, and even if the network goes horribly wonky the second after that thing is finalized, there's no way to revert it. That's fascinating, how you would make that happen. It's definitely quite clever. I'd recommend the Casper FFG paper; just search arXiv, as in a r x i v, Casper FFG. So the paper is on arXiv? Yeah. Who are the authors? Myself and Virgil Griffith. That's awesome. To take a small tangent: this idea of just putting out white papers and papers, putting them on arXiv, putting them out publicly, is that at the core, is that a necessary component of the currency? The tradition started with Satoshi Nakamoto's white paper. What do you make of it, and of the future of that kind of sharing of ideas? Yeah, it's definitely something that's mandatory for crypto, because crypto is all about making systems where you don't have to trust the operators in order to trust that the thing works, and if anything behind how the system works is closed source, that kind of kills the point. So there is a sense in which the fundamental properties of the category of thing we're trying to build just force openness. But openness has also proven to be a really great way to collaborate, and there's actually been a lot of innovation and academic collaboration that's just happened ad hoc in the crypto space over the last few years. For example, we have this forum called ethresear.ch, that's e t h r e s e a r, then dot ch, where we publish ideas in a form that's half formal. It's halfway in between: it's a text write up, and you can have math in it, but it's often much shorter than a paper. It turns out that the great majority of new ideas are fairly small nuggets that you can explain in five to ten lines, and they don't really need the whole formality of a paper. Exactly, they don't require the ten pages of filler. The introduction and conclusion are not needed. Yeah, so instead you just publish the idea and people can comment on it. That's brilliant. Yeah, this has been great for us. I think I interrupted you, was there something else on Casper FFG? That's just it: Casper FFG just combines together these two schools. So basically it creates a system where, if you have more than 50 percent that are honest and you have network synchrony, then the thing keeps growing as a chain, but if network synchrony fails, then the last few blocks in the chain might get replaced, while anything that was finalized by the more asynchronous process can't be reverted. So you essentially get the best of both worlds between those two models.
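At a very high level, the finality rule in Casper FFG is a two-thirds supermajority vote on links between checkpoints. The sketch below is a heavily simplified paraphrase of that rule, not the real protocol: it ignores epochs, slashing conditions, and the requirement that the finalizing link go to a direct child checkpoint, all of which the paper spells out precisely.

```python
# Heavily simplified sketch of Casper FFG justification / finalization.
# Validators vote for (source -> target) checkpoint links; a link is a
# "supermajority link" when validators holding >= 2/3 of stake vote for it.
TOTAL_STAKE = 300

votes = {
    # (source_checkpoint, target_checkpoint): stake voting for this link
    ("genesis", "A"): 210,
    ("A", "B"): 205,
}

justified = {"genesis"}   # genesis is justified by definition
finalized = set()

def supermajority(stake):
    return stake * 3 >= TOTAL_STAKE * 2

for (source, target), stake in votes.items():
    if source in justified and supermajority(stake):
        justified.add(target)
        # In the real rule the source is finalized only when the target is
        # its direct child checkpoint; we simply assume that here.
        finalized.add(source)

print(justified)  # genesis, A and B are justified
print(finalized)  # genesis and A are finalized, and can no longer be reverted
```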
Okay, so I know what I'm doing tonight: I'm going to be reading the Casper FFG paper. I apologize for the romanticized question, but what to you are some of the most beautiful ideas in the world of Ethereum, something surprising, something beautiful, something powerful? I think the fact that money can just emerge out of a database, if enough people believe in it, is definitely one of those things that's up there. One of the things that I really love about Ethereum is also this concept of composability. This is the idea that if I build an application on top of Ethereum, then you can build an application that talks to my application, and you don't even need my permission, you don't even need to talk to me. One really fun example of this: there was a game on Ethereum called CryptoKitties that just involved breeding digital cats. Yes. And someone else created a game called CryptoDragons, where the way you play CryptoDragons is you have a dragon and you have to feed it CryptoKitties. They created the whole thing just as an Ethereum contract, to which you would send these tokens that are defined by this other Ethereum contract. For the interoperability to happen, the projects don't really need to, the teams don't really need to talk to each other; you just interface with the existing program. So it's arbitrarily composable in this kind of way. So you have different groups that could be working, and you could see it scaling beyond dragons and kitties; you could build entire ecosystems of software. Yeah. And especially in the decentralized finance space that's been popping up over the last two years, there has been a huge amount of really interesting things happening as a result of this. Is that a particular kind of financial applications thing? Yeah. There are stablecoins, which are tokens that retain a value equal to one dollar but are backed by cryptocurrency. Then there are decentralized exchanges. As far as decentralized exchanges go, there's a really interesting construction that has existed for about one and a half years now called Uniswap. What Uniswap is, is a smart contract that holds balances of two tokens, call them token A and token B, and it maintains an invariant that the balance of token A multiplied by the balance of token B has to equal the same value. The way that you trade against it is basically, you have this curve, x times y equals k. Before you trade, you're at some point on the curve; after you trade, you pick any other point on the curve, and whatever the delta x is, that's the amount of A tokens you provide, and whatever the delta y is, that's the amount of B tokens you get, or vice versa. And the slope at the current point on the curve is the price. That just is the whole thing, and it allows you to have an exchange for tokens even if there are very few participants. The whole thing is just so simple, very easy to set up, very easy to participate in, and it provides so much value to people. And the distributed application infrastructure allows that somehow. Yes, this is a smart contract, meaning this is all a computer program that's just running on Ethereum. Smart contracts are just fascinating. They are.
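The constant-product rule he describes fits in a few lines of code. This is a toy sketch of the x times y equals k idea only, not the actual Uniswap contract; it ignores fees, liquidity provision, and rounding, and the class and numbers are invented for the example.

```python
# Bare-bones sketch of a constant-product market maker (x * y = k).
class ConstantProductPool:
    def __init__(self, reserve_a, reserve_b):
        self.reserve_a = reserve_a
        self.reserve_b = reserve_b
        self.k = reserve_a * reserve_b  # the invariant

    def swap_a_for_b(self, amount_a_in):
        """Deposit token A, receive the amount of token B that keeps
        the product of the reserves equal to k."""
        new_reserve_a = self.reserve_a + amount_a_in
        new_reserve_b = self.k / new_reserve_a
        amount_b_out = self.reserve_b - new_reserve_b
        self.reserve_a, self.reserve_b = new_reserve_a, new_reserve_b
        return amount_b_out

    def spot_price_a_in_b(self):
        # The slope of the curve at the current point acts as the price.
        return self.reserve_b / self.reserve_a

pool = ConstantProductPool(reserve_a=1_000, reserve_b=1_000)
print(pool.spot_price_a_in_b())   # 1.0
print(pool.swap_a_for_b(100))     # ~90.9 B out for 100 A in
print(pool.spot_price_a_in_b())   # A is now cheaper relative to B
```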
Okay. Do you think cryptocurrency may become the main currency in the world one day? Where do you think we're headed in terms of the role of currency, the structure and type of currency in the world? I definitely expect fiat currencies to continue to exist and continue to be strong, and I definitely expect fiat currencies to digitize in their own way over the next couple of decades. What's a fiat currency, by the way? Oh, just things like US dollars and euros and yen. And they're backed by governments. Yes. But I also expect cryptocurrencies to play an important role in making sure that people always have an alternative if fiat currencies start breaking. So if you're in some very high inflation place like Venezuela, for example, or if your country gets cut off from other financial systems because of something the banks do, or if there's some major trade disruption or something worse happens, then cryptocurrencies, because of their global neutrality, are the sort of thing that's just always there, and you can keep using them. It's interesting that you're quite humble about the possibilities of the future of cryptocurrency. You don't think there's a possible future where it becomes the main set of currencies? It feels like the centralized control of currency by governments is limiting somehow; maybe that's my naive utopian view of the world. I mean, it's definitely very possible. I think that for cryptocurrencies to work well as the main form of value, you do need much more price stability than they have today. There are now stablecoins, and there are cryptocurrencies that try to be more stable than existing things like Bitcoin and ether, but that, to me, is the main challenge. Do you think that's a characteristic of this just being the early days? It's such a young concept; ten years is nothing in the history of money. Yeah, and I think it's a combination of two things. One is that it's still early days, but the other is a more durable economic problem, which is that demand for currency is volatile, because of recessions, booms, changes to technology, lots of things. If people's demand for how much currency they want to hold changes, and you have a currency with a fixed supply, then the change in demand has to be entirely expressed as a change in the value of the currency. What that means is that the volatility of demand gets entirely translated into volatility in the prices of things denominated in that currency. But if you have a currency where the supply can change, where the supply can go up when there's more demand, then the supply absorbs more of that volatility, and the price of the currency absorbs less of it. On that topic, Bitcoin does have a limited, specific fixed supply. Yes. And Ethereum doesn't. Given the comments you just made, can you clarify whether Ethereum qualifies as the kind of currency you're talking about, being flexible in its supply? It's a bit more flexible, but the thing that you would really want is something that's specifically flexible in response to how valuable the currency is.
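A toy numerical illustration of that point, with made-up numbers and an intentionally crude model in which the price of the currency is simply taken as demand divided by supply:

```python
# Toy illustration of fixed vs. elastic supply -- not a real economic model.
demand_path = [100, 140, 80, 120]   # hypothetical swings in demand to hold the currency

fixed_supply = 100
print([d / fixed_supply for d in demand_path])
# [1.0, 1.4, 0.8, 1.2] -> every swing in demand shows up fully in the price

def elastic_supply(demand, base=100, sensitivity=0.5):
    # Supply expands or contracts partway toward demand.
    return base + sensitivity * (demand - base)

print([round(d / elastic_supply(d), 3) for d in demand_path])
# [1.0, 1.167, 0.889, 1.091] -> the price still moves, but the supply absorbs part of it
```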
And I'd recommend you look at stablecoins as well, things like Dai, for example. How do you spell that? D a i. And what is a stablecoin, is it a type of cryptocurrency? It is a type of cryptocurrency, one that's issued by a smart contract, one of these Ethereum computer programs. The smart contract holds a bunch of ether that people deposit, and then it issues Dai. The reason people deposit is that they want to go high leverage on their ether. So it pairs these two sets of users, one that wants stability and one that wants extra risk, together with each other, and it gives one set of participants a guarantee that they have an asset that can later be converted back into ether, specifically at the one dollar rate. And it has some kind of stabilizing network effects? Yeah, it has many kinds of stabilizing mechanisms in it. That's fascinating. This world is awesome; technically, from a scientific perspective, it's an awesome world that I often don't see from an outsider's perspective. What I often see is hype and, if I may say so, a little bit of charlatanism, and you don't often see, at least from my outsider's perspective, the beautiful science and engineering of it. Is there a comment you can make about who to follow, how to learn about this world without being interrupted by the charlatans and the hype people in this space? I think you do need to know the specific people to follow. There are the cryptographers and the researchers, even just the Ethereum research crew, myself, Dankrad, Danny, Justin, and the other people, and then the academic cryptographers. Before this, today, I was at Stanford, and Stanford has the Center for Blockchain Research. Dan Boneh, a really famous and great cryptographer, is running it, and there are a lot of other people there. There are people working on zero knowledge proofs, for example, and Zooko from Zcash is another person that I respect. So if you follow the technical people, you can crawl along that graph. Yeah, you start with the Ethereum research group, then look at the academics, Dan Boneh and so on, and then cautiously expand the network of people you follow. Yeah, exactly. And if someone seems too self promotional, then just remove them. Are there books, so there are these white papers, and we just discussed ideas being condensed into really small posts, but are there books emerging that are good introductory material? For historical ones, there's Nathaniel Popper's Digital Gold, which is about the history of Bitcoin, and Matthew Leising has announced one about the history of Ethereum. For technical ones, there's Andreas Antonopoulos's Mastering Ethereum. Great. So let me ask, sorry to pull back to the idea of governments and decentralized currency. There's a tension between the decentralization of currency and the power of nations, the power of governments. What's your sense about that tension? Is there some role for regulation of currency? Is the government the enemy of digital currency, of distributed currency, or can they be cautious friends?
I think the one thing that people forget is that it's clearly not entirely an enemy, because if there hadn't been so much government regulation on issuing centralized digital currencies, then we'd be seeing people like Google and Facebook and Twitter issuing them left and right, and if that were the case, decentralized currencies would still appeal to some people, but they would definitely appeal to fewer people than today. So even in that sense, I think government has clearly been more of a help, in that it set the stage for the existence of the sector in some ways. But it's some of both. There are definitely things that governments can do, and in some cases have done, that have hurt the spread and growth of blockchains, and there are things they've done to help. In some cases they've definitely done a good job of going after fraudulent projects and after some of the projects with the craziest and most misleading marketing. There's also the possibility that governments will end up using blockchains for a lot of different things. Governments do a lot more than just regulating, right? They have identity records, they have property registries, even just their own currency, lots of different kinds of things that they operate, and there are blockchain applications in a lot of those. Yeah, and they can leverage technology to do a lot of good for our societies. It is a little unfortunate that governments often lag behind in terms of their acceptance and leverage of technology. If you look at the autonomous vehicle space, or AI in general, they're a few years behind. It'd be nice to help them catch up. That's an always ongoing problem. You met Vladimir Putin to discuss decentralized currency. You were born in, where were you born? Kolomna. It's a city about 115 kilometers south of Moscow, in Russia. Yes. Yeah, I grew up in Moscow. Vladimir Putin is a central figure in this part of the world, so what was it like meeting him? What was that experience like? He's taller in photos than in person. Yeah, he's five seven, I think, five eight maybe. Unfortunately, we didn't actually have much of a chance to talk. I managed to see him for about one minute at the end of the meeting, and I did get a chance to see some of the other government ministers, and some of them are actually interested in trying to use blockchains for various government use cases, to limit corruption and other things. It's hard to tell from one conversation which things are genuine and which are just, oh, blockchain is cool, let's do blockchain. Right, but when I listen to, say, Barack Obama talk about artificial intelligence, there are certain things I hear where, okay, he might not be an expert in AI, but he actually studied it carefully enough to think about it. Even if he's just reading a Wikipedia page, he really thought about what the technology means. Did you get a sense that Putin or some of the ministers thought about blockchain, thought about the fundamentals of the technology, understand it intuitively, or are they too old school to try to grasp it?
Some are old school, some are more new school; it definitely depends on who you talk to. I mean, that's an open question for me with Putin, because I've only talked to him for about one minute. But sometimes you can pick up insights. A quick comment, and maybe you can correct me on this, but there are about 3,000 cryptocurrencies being actively traded. Yes. And Ethereum is one of, you know, a lot of people believe it may become the main cryptocurrency. I think Bitcoin is currently still the main cryptocurrency, but Ethereum very likely might become that, the main one. Is this kind of diversity good in the crypto world? Do you see it sticking around? Should there be a winner, should there be some consensus globally around Bitcoin or around Ethereum? What's your sense? I definitely think the diversity is good, and I also definitely think that there are probably too many people trying to make separate blockchains right now. The number should definitely be greater than one, and probably greater than two or even five. But not 3,000. Not 3,000, yeah, and also not even 40 high quality platforms that try to do the same thing. There's definitely a range, from one person who just wrongly thinks you can create a cryptocurrency in 12 hours and doesn't even think about the community aspects of maintaining it, to people actually trying but only creating a really tiny one, to scammers, to people making something that's actually successful. And there are a lot of different categories of blockchain project in terms of what it's trying to do and what applications it's for, and I think the experimentation is definitely healthy. If you look at the two worlds, they might be a little bit disjoint, but the world of distributed applications and cryptocurrency, and then the world of artificial intelligence: do you see some overlap between these worlds, which both worry about centralized control? Is there some overlap that's interesting to you? Do you think about AI much? Yeah, I've definitely thought about things like the AI control problems and alignment problems and all of those things. Do you worry about the existential threat of AI? It's definitely one of the things I worry about. I think there are a lot of common challenges, because in both cases what you're ultimately trying to do is get a simple system to direct a more complex system. In the case of AI, the idea would be that the simple system is people, and the complex system is whatever thing the people end up unleashing on the universe, which will hopefully be a great thing. In the case of blockchains, the simple thing is the algorithm, which is a piece of static and fully open source code, and the more complex thing is all of the different possible human actors and the strategies they might end up using to participate in the network. Do you think about your own mortality, about what you hope to accomplish in your life? Oh, I definitely think about ending my own mortality. So if I gave you the option to live forever, would you?
It depends a lot on what the fine print is. I mean, if it's one of those things where I'm going to be floating through empty space for 10 to the 75 years, then no. If it's forever's worth of having a fulfilling life with meaning, with friends to spend the time with, with meaningful challenges to explore and interesting things to be working on, then I think absolutely. That's beautifully put: live forever, but you'd have to check the fine print. I think there's no better way to end it. Vitalik, thank you so much for talking to us. It's so exciting to follow your work from a distance, and thank you for creating a revolutionary idea, sticking with it, building it out, and doing some incredible engineering work. Thanks for talking today. Yeah, thank you. Thanks for listening to this conversation with Vitalik Buterin, and thank you to our sponsors, ExpressVPN and Masterclass. Please consider supporting the podcast by signing up to Masterclass at masterclass.com slash lex and getting ExpressVPN at expressvpn.com slash lexpod. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman. And now, let me leave you with some words from Vitalik Buterin: the thing that I often ask startups building on top of Ethereum is, can you please tell me why using the Ethereum blockchain is better than using Excel? If they can come up with a good answer, that's when you know you've got something really interesting. Thank you for listening, and hope to see you next time.
Vitalik Buterin: Ethereum, Cryptocurrency, and the Future of Money | Lex Fridman Podcast #80
The following is a conversation with Anca Dragan, a professor at Berkeley working on human robot interaction, algorithms that look beyond the robot's function in isolation and generate robot behavior that accounts for interaction and coordination with human beings. She also consults at Waymo, the autonomous vehicle company, but in this conversation, she is 100% wearing her Berkeley hat. She is one of the most brilliant and fun roboticists in the world to talk with. I had a tough and crazy day leading up to this conversation, so I was a bit tired, even more so than usual, but almost immediately as she walked in, her energy, passion, and excitement for human robot interaction was contagious. So I had a lot of fun and really enjoyed this conversation. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar. Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of fractional orders is an algorithmic marvel. So big props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up to the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier. So again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you get $10 and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Anca Dragan. When did you first fall in love with robotics? I think it was a very gradual process and it was somewhat accidental actually because I first started getting into programming when I was a kid and then into math and then I decided computer science was the thing I was gonna do and then in college I got into AI and then I applied to the Robotics Institute at Carnegie Mellon and I was coming from this little school in Germany that nobody had heard of, but I had spent an exchange semester at Carnegie Mellon so I had letters from Carnegie Mellon. So that was the only, you know, MIT said no, Berkeley said no, Stanford said no. That was the only place I got into, so I went there to the Robotics Institute and I thought that robotics is a really cool way to actually apply the stuff that I knew and loved, like optimization, so that's how I got into robotics. I have a better story of how I got into cars, which is I used to do mostly manipulation in my PhD, but now I do kind of a bit of everything application wise, including cars, and I got into cars because I was here in Berkeley while I was a PhD student still, for RSS 2014. Pieter Abbeel organized it and he arranged for, it was Google at the time, to give us rides in self driving cars, and I was in this robot and it was just making decision after decision, the right call, and it was so amazing.
So it was a whole different experience, right? Just I mean manipulation is so hard you can't do anything and there it was. Was it the most magical robot you've ever met? So like for me to meet a Google self driving car for the first time was like a transformative moment. Like I had two moments like that, that and Spot Mini, I don't know if you met Spot Mini from Boston Dynamics. I felt like I fell in love or something like it, cause I know how a Spot Mini works, right? It's just, I mean there's nothing truly special, it's great engineering work but the anthropomorphism that went on into my brain that came to life like it had a little arm and it looked at me, he, she looked at me, I don't know, there's a magical connection there and it made me realize, wow, robots can be so much more than things that manipulate objects. They can be things that have a human connection. Do you have, was the self driving car the moment like, was there a robot that truly sort of inspired you? That was, I remember that experience very viscerally, riding in that car and being just wowed. I had the, they gave us a sticker that said, I rode in a self driving car and it had this cute little firefly on and, or logo or something like that. Oh, that was like the smaller one, like the firefly. Yeah, the really cute one, yeah. And I put it on my laptop and I had that for years until I finally changed my laptop out and you know. What about if we walk back, you mentioned optimization, like what beautiful ideas inspired you in math, computer science early on? Like why get into this field? It seems like a cold and boring field of math. Like what was exciting to you about it? The thing is I liked math from very early on, from fifth grade is when I got into the math Olympiad and all of that. Oh, you competed too? Yeah, this, it Romania is like our national sport too, you gotta understand. So I got into that fairly early and it was a little, maybe too just theory with no kind of, I didn't kind of had a, didn't really have a goal. And other than understanding, which was cool, I always liked learning and understanding, but there was no, okay, what am I applying this understanding to? And so I think that's how I got into, more heavily into computer science because it was kind of math meets something you can do tangibly in the world. Do you remember like the first program you've written? Okay, the first program I've written with, I kind of do, it was in Cubasic in fourth grade. Wow. And it was drawing like a circle. Graphics. Yeah, that was, I don't know how to do that anymore, but in fourth grade, that's the first thing that they taught me. I was like, you could take a special, I wouldn't say it was an extracurricular, it's in the sense an extracurricular, so you could sign up for dance or music or programming. And I did the programming thing and my mom was like, what, why? Did you compete in programming? Like these days, Romania probably, that's like a big thing. There's a programming competition. Was that, did that touch you at all? I did a little bit of the computer science Olympian, but not as seriously as I did the math Olympian. So it was programming. Yeah, it's basically, here's a hard math problem, solve it with a computer is kind of the deal. Yeah, it's more like algorithm. Exactly, it's always algorithmic. So again, you kind of mentioned the Google self driving car, but outside of that, what's like who or what is your favorite robot, real or fictional that like captivated your imagination throughout? 
I mean, I guess you kind of alluded to the Google self drive, the Firefly was a magical moment, but is there something else? It wasn't the Firefly there, I think there was the Lexus by the way. This was back then. But yeah, so good question. Okay, my favorite fictional robot is WALLI. And I love how amazingly expressive it is. I'm personally thinks a little bit about expressive motion kinds of things you're saying with, you can do this and it's a head and it's the manipulator and what does it all mean? I like to think about that stuff. I love Pixar, I love animation. WALLI has two big eyes, I think, or no? Yeah, it has these cameras and they move. So yeah, it goes and then it's super cute. Yeah, the way it moves is just so expressive, the timing of that motion, what it's doing with its arms and what it's doing with these lenses is amazing. And so I've really liked that from the start. And then on top of that, sometimes I share this, it's a personal story I share with people or when I teach about AI or whatnot. My husband proposed to me by building a WALLI and he actuated it. So it's seven degrees of freedom, including the lens thing. And it kind of came in and it had the, he made it have like the belly box opening thing. So it just did that. And then it spewed out this box made out of Legos that open slowly and then bam, yeah. Yeah, it was quite, it set a bar. That could be like the most impressive thing I've ever heard. Okay. That was special connection to WALLI, long story short. I like WALLI because I like animation and I like robots and I like the fact that this was, we still have this robot to this day. How hard is that problem, do you think of the expressivity of robots? Like with the Boston Dynamics, I never talked to those folks about this particular element. I've talked to them a lot, but it seems to be like almost an accidental side effect for them that they weren't, I don't know if they're faking it. They weren't trying to, okay. They do say that the gripper, it was not intended to be a face. I don't know if that's a honest statement, but I think they're legitimate. Probably yes. And so do we automatically just anthropomorphize anything we can see about a robot? So like the question is, how hard is it to create a WALLI type robot that connects so deeply with us humans? What do you think? It's really hard, right? So it depends on what setting. So if you wanna do it in this very particular narrow setting where it does only one thing and it's expressive, then you can get an animator, you know, you can have Pixar on call come in, design some trajectories. There was a, Anki had a robot called Cosmo where they put in some of these animations. That part is easy, right? The hard part is doing it not via these kind of handcrafted behaviors, but doing it generally autonomously. Like I want robots, I don't work on, just to clarify, I don't, I used to work a lot on this. I don't work on that quite as much these days, but the notion of having robots that, you know, when they pick something up and put it in a place, they can do that with various forms of style, or you can say, well, this robot is, you know, succeeding at this task and is confident versus it's hesitant versus, you know, maybe it's happy or it's, you know, disappointed about something, some failure that it had. I think that when robots move, they can communicate so much about internal states or perceived internal states that they have. 
And I think that's really useful and an element that we'll want in the future because I was reading this article about how kids are, kids are being rude to Alexa because they can be rude to it and it doesn't really get angry, right? It doesn't reply in any way, it just says the same thing. So I think there's, at least for that, for the correct development of children, it's important that these things, you kind of react differently. I also think, you know, you walk in your home and you have a personal robot and if you're really pissed, presumably the robot should kind of behave slightly differently than when you're super happy and excited, but it's really hard because it's, I don't know, you know, the way I would think about it and the way I thought about it when it came to expressing goals or intentions for robots, it's, well, what's really happening is that instead of doing robotics where you have your state and you have your action space and you have your space, the reward function that you're trying to optimize, now you kind of have to expand the notion of state to include this human internal state. What is the person actually perceiving? What do they think about the robots? Something or rather, and then you have to optimize in that system. And so that means that you have to understand how your motion, your actions end up sort of influencing the observer's kind of perception of you. And it's very hard to write math about that. Right, so when you start to think about incorporating the human into the state model, apologize for the philosophical question, but how complicated are human beings, do you think? Like, can they be reduced to a kind of almost like an object that moves and maybe has some basic intents? Or is there something, do we have to model things like mood and general aggressiveness and time? I mean, all these kinds of human qualities or like game theoretic qualities, like what's your sense? How complicated is... How hard is the problem of human robot interaction? Yeah, should we talk about what the problem of human robot interaction is? Yeah, what is human robot interaction? And then talk about how that, yeah. So, and by the way, I'm gonna talk about this very particular view of human robot interaction, right? Which is not so much on the social side or on the side of how do you have a good conversation with the robot, what should the robot's appearance be? It turns out that if you make robots taller versus shorter, this has an effect on how people act with them. So I'm not talking about that. But I'm talking about this very kind of narrow thing, which is you take, if you wanna take a task that a robot can do in isolation, in a lab out there in the world, but in isolation, and now you're asking what does it mean for the robot to be able to do this task for, presumably what its actually end goal is, which is to help some person. That ends up changing the problem in two ways. The first way it changes the problem is that the robot is no longer the single agent acting. That you have humans who also take actions in that same space. Cars navigating around people, robots around an office, navigating around the people in that office. If I send the robot over there in the cafeteria to get me a coffee, then there's probably other people reaching for stuff in the same space. And so now you have your robot and you're in charge of the actions that the robot is taking. Then you have these people who are also making decisions and taking actions in that same space. 
And even if, you know, the robot knows what it should do and all of that, just coexisting with these people, right? Kind of getting the actions to gel well, to mesh well together. That's sort of the kind of problem number one. And then there's problem number two, which is, goes back to this notion of if I'm a programmer, I can specify some objective for the robot to go off and optimize and specify the task. But if I put the robot in your home, presumably you might have your own opinions about, well, okay, I want my house clean, but how do I want it cleaned? And how should robot move, how close to me it should come and all of that. And so I think those are the two differences that you have. You're acting around people and what you should be optimizing for should satisfy the preferences of that end user, not of your programmer who programmed you. Yeah, and the preferences thing is tricky. So figuring out those preferences, be able to interactively adjust to understand what the human is doing. So really it boils down to understand the humans in order to interact with them and in order to please them. Right. So why is this hard? Yeah, why is understanding humans hard? So I think there's two tasks about understanding humans that in my mind are very, very similar, but not everyone agrees. So there's the task of being able to just anticipate what people will do. We all know that cars need to do this, right? We all know that, well, if I navigate around some people, the robot has to get some notion of, okay, where is this person gonna be? So that's kind of the prediction side. And then there's what you were saying, satisfying the preferences, right? So adapting to the person's preferences, knowing what to optimize for, which is more this inference side, this what does this person want? What is their intent? What are their preferences? And to me, those kind of go together because I think that at the very least, if you can understand, if you can look at human behavior and understand what it is that they want, then that's sort of the key enabler to being able to anticipate what they'll do in the future. Because I think that we're not arbitrary. We make these decisions that we make, we act in the way we do because we're trying to achieve certain things. And so I think that's the relationship between them. Now, how complicated do these models need to be in order to be able to understand what people want? So we've gotten a long way in robotics with something called inverse reinforcement learning, which is the notion of if someone acts, demonstrates how they want the thing done. What is inverse reinforcement learning? You just briefly said it. Right, so it's the problem of take human behavior and infer reward function from this. So figure out what it is that that behavior is optimal with respect to. And it's a great way to think about learning human preferences in the sense of you have a car and the person can drive it and then you can say, well, okay, I can actually learn what the person is optimizing for. I can learn their driving style, or you can have people demonstrate how they want the house clean. And then you can say, okay, this is, I'm getting the trade offs that they're making. I'm getting the preferences that they want out of this. And so we've been successful in robotics somewhat with this. And it's based on a very simple model of human behavior. It was remarkably simple, which is that human behavior is optimal with respect to whatever it is that people want, right? 
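As a rough illustration of what that optimality assumption buys you, here is a minimal inverse-reinforcement-learning-style sketch: given a demonstrated trajectory assumed to be the best available one, search for linear reward weights under which it beats the alternatives. The features and trajectories are made up for the example, and this is a crude random search, not the actual algorithms from her papers.

```python
import numpy as np

# Each candidate trajectory is summarized by a feature vector, e.g.
# [speed, distance_kept_from_other_cars, smoothness] -- toy numbers.
features = np.array([
    [0.9, 0.2, 0.8],   # fast, close to others, jerky
    [0.5, 0.7, 0.2],   # moderate speed, keeps distance, fairly smooth
    [0.3, 0.9, 0.1],   # slow and very cautious
])
demo_idx = 1                        # the trajectory the human actually chose
demo = features[demo_idx]
others = np.delete(features, demo_idx, axis=0)

rng = np.random.default_rng(0)
best_w, best_margin = None, -np.inf
for _ in range(20_000):             # crude random search over unit-norm reward weights
    w = rng.normal(size=3)
    w /= np.linalg.norm(w)
    # Margin by which the demonstration beats the best alternative under w.
    margin = w @ demo - np.max(others @ w)
    if margin > best_margin:
        best_w, best_margin = w, margin

print(best_w)   # reward weights under which the human's choice looks optimal
```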
So you make that assumption and now you can kind of inverse through. That's why it's called inverse, well, really optimal control, but also inverse reinforcement learning. So this is based on utility maximization in economics. Back in the forties, von Neumann and Morgenstern were like, okay, people are making choices by maximizing utility, go. And then in the late fifties, we had Luce and Shepherd come in and say, people are a little bit noisy and approximate in that process. So they might choose something kind of stochastically with probability proportional to how much utility something has. So there's a bit of noise in there. This has translated into robotics and something that we call Boltzmann rationality. So it's a kind of an evolution of inverse reinforcement learning that accounts for human noise. And we've had some success with that too, for these tasks where it turns out people act noisily enough that you can't just do vanilla, the vanilla version. You can account for noise and still infer what they seem to want based on this. Then now we're hitting tasks where that's not enough. And because... What are examples of spatial tasks? So imagine you're trying to control some robot, that's fairly complicated. You're trying to control a robot arm because maybe you're a patient with a motor impairment and you have this wheelchair mounted arm and you're trying to control it around. Or one task that we've looked at with Sergei is, and our students did, is a lunar lander. So I don't know if you know this Atari game, it's called Lunar Lander. It's really hard. People really suck at landing the thing. Mostly they just crash it left and right. Okay, so this is the kind of task we imagine you're trying to provide some assistance to a person operating such a robot where you want the kind of the autonomy to kick in, figure out what it is that you're trying to do and help you do it. It's really hard to do that for, say, Lunar Lander because people are all over the place. And so they seem much more noisy than really irrational. That's an example of a task where these models are kind of failing us. And it's not surprising because we're talking about the 40s, utility, late 50s, sort of noisy. Then the 70s came and behavioral economics started being a thing where people were like, no, no, no, no, no, people are not rational. People are messy and emotional and irrational and have all sorts of heuristics that might be domain specific. And they're just a mess. The mess. So what does my robot do to understand what you want? And it's a very, it's very, that's why it's complicated. It's, you know, for the most part, we get away with pretty simple models until we don't. And then the question is, what do you do then? And I had days when I wanted to, you know, pack my bags and go home and switch jobs because it's just, it feels really daunting to make sense of human behavior enough that you can reliably understand what people want, especially as, you know, robot capabilities will continue to get developed. You'll get these systems that are more and more capable of all sorts of things. And then you really want to make sure that you're telling them the right thing to do. What is that thing? Well, read it in human behavior. So if I just sat here quietly and tried to understand something about you by listening to you talk, it would be harder than if I got to say something and ask you and interact and control. Can you, can the robot help its understanding of the human by influencing the behavior by actually acting? 
Yeah, absolutely. So one of the things that's been exciting to me lately is this notion that when you try to, that when you try to think of the robotics problem as, okay, I have a robot and it needs to optimize for whatever it is that a person wants it to optimize as opposed to maybe what a programmer said. That problem we think of as a human robot collaboration problem in which both agents get to act in which the robot knows less than the human because the human actually has access to, you know, at least implicitly to what it is that they want. They can't write it down, but they can talk about it. They can give all sorts of signals. They can demonstrate and, but the robot doesn't need to sit there and passively observe human behavior and try to make sense of it. The robot can act too. And so there's these information gathering actions that the robot can take to sort of solicit responses that are actually informative. So for instance, this is not for the purpose of assisting people, but with kind of back to coordinating with people in cars and all of that. One thing that Dorsa did was, so we were looking at cars being able to navigate around people and you might not know exactly the driving style of a particular individual that's next to you, but you wanna change lanes in front of them. Navigating around other humans inside cars. Yeah, good, good clarification question. So you have an autonomous car and it's trying to navigate the road around human driven vehicles. Similar things ideas apply to pedestrians as well, but let's just take human driven vehicles. So now you're trying to change a lane. Well, you could be trying to infer the driving style of this person next to you. You'd like to know if they're in particular, if they're sort of aggressive or defensive, if they're gonna let you kind of go in or if they're gonna not. And it's very difficult to just, if you think that if you wanna hedge your bets and say, ah, maybe they're actually pretty aggressive, I shouldn't try this. You kind of end up driving next to them and driving next to them, right? And then you don't know because you're not actually getting the observations that you're getting away. Someone drives when they're next to you and they just need to go straight. It's kind of the same regardless if they're aggressive or defensive. And so you need to enable the robot to reason about how it might actually be able to gather information by changing the actions that it's taking. And then the robot comes up with these cool things where it kind of nudges towards you and then sees if you're gonna slow down or not. Then if you slow down, it sort of updates its model of you and says, oh, okay, you're more on the defensive side. So now I can actually like. That's a fascinating dance. That's so cool that you could use your own actions to gather information. That feels like a totally open, exciting new world of robotics. I mean, how many people are even thinking about that kind of thing? A handful of us, I'd say. It's rare because it's actually leveraging human. I mean, most roboticists, I've talked to a lot of colleagues and so on, are kind of, being honest, kind of afraid of humans. Because they're messy and complicated, right? I understand. Going back to what we were talking about earlier, right now we're kind of in this dilemma of, okay, there are tasks that we can just assume people are approximately rational for and we can figure out what they want. We can figure out their goals. We can figure out their driving styles, whatever. Cool. 
There are these tasks that we can't. So what do we do, right? Do we pack our bags and go home? And on this one, I've had a little bit of hope recently. And I'm kind of doubting myself, because what do I know that, you know, 50 years of behavioral economics hasn't figured out? But maybe it's not really in contradiction with the way that field is headed. But basically, one thing that we've been thinking about is, instead of kind of giving up and saying people are too crazy and irrational for us to make sense of them, maybe we can give them a bit of the benefit of the doubt. And maybe we can think of them as actually being relatively rational, but just under different assumptions about the world, about how the world works. When we think about rationality, the implicit assumption is, oh, they're rational under all the same assumptions and constraints as the robot, right? If this is the state of the world, that's what they know. This is the transition function, that's what they know. This is the horizon, that's what they know. But maybe the reason they can seem a little messy and hectic, especially to robots, is that perhaps they just make different assumptions or have different beliefs. Yeah, I mean, that's another fascinating idea, that our kind of anecdotal desire to say that humans are irrational, perhaps grounded in behavioral economics, comes from the fact that we just don't understand the constraints and the rewards under which they operate. And so our goal shouldn't be to throw our hands up and say they're irrational, it's to say, let's try to understand what the constraints are, what it is that they must be assuming that makes this behavior make sense. Good life lesson, right? Good life lesson. That's true, even outside of robotics. That's just good for communicating with humans. That's just a good assumption to make, sort of empathy, right? It's a... Maybe there's something you're missing, and, you know, it especially happens to robots cause they're kind of dumb and they don't know things. And oftentimes people are sort of suprarational, in that they actually know a lot of things that robots don't. Sometimes, like with the lunar lander, the robot, you know, knows much more. So it turns out that if you try to say, look, maybe people are operating this thing but assuming a much more simplified physics model, cause they don't get the complexity of this kind of craft, or the robot arm with seven degrees of freedom with these inertias and whatever. So maybe they have this intuitive physics model, which is not the real physics. You know, this notion of intuitive physics is something that's actually studied in cognitive science, like Josh Tenenbaum and Tom Griffiths' work on this stuff. And what we found is that you can actually try to figure out what physics model kind of best explains human actions. And then you can use that to sort of correct what it is that they're commanding the craft to do. So they might, you know, be sending the craft somewhere, but instead of executing that action, you can sort of take a step back and say, if the world worked according to their intuitive physics model, where do they think that the craft is going? Where are they trying to send it to? And then you can use the real physics, right? The inverse of that, to actually figure out what you should do so that you do that, instead of where they were actually sending you in the real world.
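As a rough illustration of the intuitive physics idea, here is a minimal sketch with a one-dimensional toy lander. The assumption, which is mine and not from the actual study, is that the person commands the craft as if it simply moved in the direction they push, while the true dynamics include gravity and momentum; the robot reinterprets the command through the person's model and then solves the true dynamics for the control that realizes what they meant. All numbers are made up.

```python
# Toy 1-D lander: the true dynamics have gravity and momentum, but we assume the
# person commands as if the craft just moved where they point it. Everything
# here is an invented stand-in for the real system described in the conversation.

GRAVITY = -1.0
DT = 0.1

def true_step(pos, vel, thrust):
    """Real physics: thrust fights gravity, and the craft carries momentum."""
    vel = vel + (thrust + GRAVITY) * DT
    pos = pos + vel * DT
    return pos, vel

def intuitive_step(pos, command):
    """The person's assumed physics: position simply moves with the command."""
    return pos + command * DT

def corrected_thrust(pos, vel, command):
    """Infer where the person thinks the craft will go under their intuitive
    model, then solve the true dynamics for the thrust that actually gets there."""
    intended_pos = intuitive_step(pos, command)
    # Invert: intended_pos = pos + (vel + (thrust + GRAVITY) * DT) * DT
    needed_vel = (intended_pos - pos) / DT
    return (needed_vel - vel) / DT - GRAVITY

pos, vel = 10.0, -2.0          # falling toward the pad
command = 0.0                  # the person wants to hover right here

# Executing the command literally as thrust lets the craft keep sinking:
print("literal:", true_step(pos, vel, command))
# Interpreting it through their intuitive model holds position instead:
print("corrected:", true_step(pos, vel, corrected_thrust(pos, vel, command)))
```

The literal command keeps the craft dropping; the corrected thrust holds the position the person was evidently asking for, which is the flavor of correction being described.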
And I kid you not, it worked. People land the damn thing, you know, in between the two flags and all that. So it's not conclusive in any way, but I'd say it's evidence that, yeah, maybe we're kind of underestimating humans in some ways when we're giving up and saying, yeah, they're just crazy noisy. So then you explicitly try to model the kind of worldview that they have. That they have, that's right. That's right. And it's not, I mean, there's things in behavioral economics too that, for instance, have touched upon the planning horizon. So there's this idea that there's bounded rationality, essentially, and the idea that, well, maybe we work under computational constraints. And I think kind of our view recently has been, take the Bellman update in AI and just break it in all sorts of ways by saying state, no, no, no, the person doesn't get to see the real state. Maybe they're estimating it somehow. Transition function, no, no, no, no, no. Even the actual reward evaluation, maybe they're still learning about what it is that they want. Like, you know, when you watch Netflix and you have all the things and then you have to pick something, imagine that the AI system interpreted that choice as, this is the thing you prefer to see. Like, how are you going to know? You're still trying to figure out what you like, what you don't like, et cetera. So I think it's important to also account for that. So it's not irrationality, because they're doing the right thing under the things that they know. Yeah, that's brilliant. You mentioned recommender systems. And we were talking about human robot interaction, so what kind of problem spaces are you thinking about? Is it robots, like wheeled robots, autonomous vehicles? Is it object manipulation? Like, when you think about human robot interaction in your mind, and maybe you can speak for the entire community of human robot interaction, what are the problems of interest here? And, you know, I kind of think of open domain dialogue as human robot interaction, and that happens not in the physical space, but it could just happen in the virtual space. So where are the boundaries of this field for you when you're thinking about the things we've been talking about? Yeah, so I try to find kind of the underlying principles, I don't know what to even call them. I might call what I do working on the foundations of algorithmic human robot interaction, and trying to make contributions there. And it's important to me that whatever we do is actually somewhat domain agnostic. When it comes to, is it about, you know, autonomous cars, or is it about quadrotors, or is it about something else, the same underlying principles apply. Of course, when you're trying to get a particular domain to work, you usually have to do some extra work to adapt to that particular domain. But these things that we were talking about around, well, you know, how do you model humans? It turns out that a lot of systems could benefit from a better understanding of how human behavior relates to what people want, and need to predict human behavior: physical robots of all sorts and beyond that. And so I used to do manipulation. I used to be, you know, picking up stuff, and then I was picking up stuff with people around. And now it's sort of very broad when it comes to the application level, but in a sense, very focused on, okay, how does the problem need to change?
How do the algorithms need to change when we're not doing a robot by itself? You know, emptying the dishwasher, but we're stepping outside of that. I thought that popped into my head just now. On the game theoretic side, I think you said this really interesting idea of using actions to gain more information. But if we think of sort of game theory, the humans that are interacting with you, with you, the robot? Wow, I'm thinking the identity of the robot. Yeah, I do that all the time. Yeah, is they also have a world model of you and you can manipulate that. I mean, if we look at autonomous vehicles, people have a certain viewpoint. You said with the kids, people see Alexa in a certain way. Is there some value in trying to also optimize how people see you as a robot? Or is that a little too far away from the specifics of what we can solve right now? So, well, both, right? So it's really interesting. And we've seen a little bit of progress on this problem, on pieces of this problem. So you can, again, it kind of comes down to how complicated does the human model need to be? But in one piece of work that we were looking at, we just said, okay, there's these parameters that are internal to the robot and what the robot is about to do, or maybe what objective, what driving style the robot has or something like that. And what we're gonna do is we're gonna set up a system where part of the state is the person's belief over those parameters. And now when the robot acts, that the person gets new evidence about this robot internal state. And so they're updating their mental model of the robot. So if they see a car that sort of cuts someone off, they're like, oh, that's an aggressive car. They know more. If they see sort of a robot head towards a particular door, they're like, oh yeah, the robot's trying to get to that door. So this thing that we have to do with humans to try and understand their goals and intentions, humans are inevitably gonna do that to robots. And then that raises this interesting question that you asked, which is, can we do something about that? This is gonna happen inevitably, but we can sort of be more confusing or less confusing to people. And it turns out you can optimize for being more informative and less confusing if you have an understanding of how your actions are being interpreted by the human, and how they're using these actions to update their belief. And honestly, all we did is just Bayes rule. Basically, okay, the person has a belief, they see an action, they make some assumptions about how the robot generates its actions, presumably as being rational, because robots are rational. It's reasonable to assume that about them. And then they incorporate that new piece of evidence in the Bayesian sense in their belief, and they obtain a posterior. And now the robot is trying to figure out what actions to take such that it steers the person's belief to put as much probability mass as possible on the correct parameters. So that's kind of a mathematical formalization of that. But my worry, and I don't know if you wanna go there with me, but I talk about this quite a bit. The kids talking to Alexa disrespectfully worries me. I worry in general about human nature. Like I said, I grew up in Soviet Union, World War II, I'm a Jew too, so with the Holocaust and everything. I just worry about how we humans sometimes treat the other, the group that we call the other, whatever it is. Through human history, the group that's the other has been changed faces. 
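Here is a minimal sketch of the Bayes rule formalization from a few exchanges back: the robot has a true internal parameter, in this case which door it is heading to, the observer models the robot as noisily rational, and the robot scores candidate actions by how much posterior probability they put on the correct parameter. The goals, actions, and rationality constant are toy assumptions chosen only for illustration.

```python
# Minimal sketch of steering an observer's belief about the robot's goal.
# The observer assumes a Boltzmann-rational robot; the robot picks the action
# that puts the most posterior mass on its true goal. All values are invented.

import math

GOALS = {"door_A": (5.0, 0.0), "door_B": (5.0, 4.0)}
TRUE_GOAL = "door_A"
ACTIONS = {"straight": (1.0, 0.0), "up": (1.0, 1.0), "down": (1.0, -1.0)}
BETA = 2.0  # how rational the observer assumes the robot is

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def action_likelihood(action, goal, pos):
    """Observer's model: actions that make more progress toward a goal are
    exponentially more likely under that goal."""
    scores = {}
    for name, delta in ACTIONS.items():
        nxt = (pos[0] + delta[0], pos[1] + delta[1])
        scores[name] = math.exp(-BETA * dist(nxt, GOALS[goal]))
    return scores[action] / sum(scores.values())

def posterior_over_goals(action, pos, prior):
    post = {g: prior[g] * action_likelihood(action, g, pos) for g in GOALS}
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

pos = (0.0, 0.0)
prior = {g: 1.0 / len(GOALS) for g in GOALS}

# Pick the action that most clearly communicates the true goal to the observer.
best = max(ACTIONS, key=lambda a: posterior_over_goals(a, pos, prior)[TRUE_GOAL])
for a in ACTIONS:
    print(a, posterior_over_goals(a, pos, prior))
print("most legible action toward", TRUE_GOAL, "is:", best)
```

With these numbers, the most informative action is not the most efficient one: the robot exaggerates its motion away from the other door, because that is what makes its intent unambiguous to the person watching.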
But it seems like the robot will be the other, the other, the next other. And one thing is it feels to me that robots don't get no respect. They get shoved around. Shoved around, and is there, one, at the shallow level, for a better experience, it seems that robots need to talk back a little bit. Like my intuition says, I mean, most companies from sort of Roomba, autonomous vehicle companies might not be so happy with the idea that a robot has a little bit of an attitude. But I feel, it feels to me that that's necessary to create a compelling experience. Like we humans don't seem to respect anything that doesn't give us some attitude. That, or like a mix of mystery and attitude and anger and that threatens us subtly, maybe passive aggressively. I don't know. It seems like we humans, yeah, need that. Do you, what are your, is there something, you have thoughts on this? All right, I'll give you two thoughts on this. Okay, sure. One is, one is, it's, we respond to, you know, someone being assertive, but we also respond to someone being vulnerable. So I think robots, my first thought is that robots get shoved around and bullied a lot because they're sort of, you know, tempting and they're sort of showing off or they appear to be showing off. And so I think going back to these things we were talking about in the beginning of making robots a little more, a little more expressive, a little bit more like, eh, that wasn't cool to do. And now I'm bummed, right? I think that that can actually help because people can't help but anthropomorphize and respond to that. Even that though, the emotion being communicated is not in any way a real thing. And people know that it's not a real thing because they know it's just a machine. We're still interpreting, you know, we watch, there's this famous psychology experiment with little triangles and kind of dots on a screen and a triangle is chasing the square and you get really angry at the darn triangle because why is it not leaving the square alone? So that's, yeah, we can't help. So that was the first thought. The vulnerability, that's really interesting that, I think of like being, pushing back, being assertive as the only mechanism of getting, of forming a connection, of getting respect, but perhaps vulnerability, perhaps there's other mechanisms that are less threatening. Yeah. Is there? Well, I think, well, a little bit, yes, but then this other thing that we can think about is, it goes back to what you were saying, that interaction is really game theoretic, right? So the moment you're taking actions in a space, the humans are taking actions in that same space, but you have your own objective, which is, you know, you're a car, you need to get your passenger to the destination. And then the human nearby has their own objective, which somewhat overlaps with you, but not entirely. You're not interested in getting into an accident with each other, but you have different destinations and you wanna get home faster and they wanna get home faster. And that's a general sum game at that point. And so that's, I think that's what, treating it as such is kind of a way we can step outside of this kind of mode that, where you try to anticipate what people do and you don't realize you have any influence over it while still protecting yourself because you're understanding that people also understand that they can influence you. And it's just kind of back and forth is this negotiation, which is really talking about different equilibria of a game. 
The very basic way to solve coordination is to just make predictions about what people will do and then stay out of their way. And that's hard for the reasons we talked about, which is how you have to understand people's intentions implicitly, explicitly, who knows, but somehow you have to get enough of an understanding of that to be able to anticipate what happens next. And so that's challenging. But then it's further challenged by the fact that people change what they do based on what you do because they don't plan in isolation either, right? So when you see cars trying to merge on a highway and not succeeding, one of the reasons this can be is because they look at traffic that keeps coming, they predict what these people are planning on doing, which is to just keep going, and then they stay out of the way because there's no feasible plan, right? Any plan would actually intersect with one of these other people. So that's bad, so you get stuck there. So now kind of if you start thinking about it as no, no, no, actually these people change what they do depending on what the car does. Like if the car actually tries to kind of inch itself forward, they might actually slow down and let the car in. And now taking advantage of that, well, that's kind of the next level. We call this like this underactuated system idea where it's kind of underactuated system robotics, but it's kind of, you're influenced these other degrees of freedom, but you don't get to decide what they do. I've somewhere seen you mention it, the human element in this picture as underactuated. So you understand underactuated robotics is that you can't fully control the system. You can't go in arbitrary directions in the configuration space. Under your control. Yeah, it's a very simple way of underactuation where basically there's literally these degrees of freedom that you can control, and these degrees of freedom that you can't, but you influence them. And I think that's the important part is that they don't do whatever, regardless of what you do, that what you do influences what they end up doing. I just also like the poetry of calling human robot interaction an underactuated robotics problem. And you also mentioned sort of nudging. It seems that they're, I don't know. I think about this a lot in the case of pedestrians I've collected hundreds of hours of videos. I like to just watch pedestrians. And it seems that. It's a funny hobby. Yeah, it's weird. Cause I learn a lot. I learned a lot about myself, about our human behavior, from watching pedestrians, watching people in their environment. Basically crossing the street is like you're putting your life on the line. I don't know, tens of millions of time in America every day is people are just like playing this weird game of chicken when they cross the street, especially when there's some ambiguity about the right of way. That has to do either with the rules of the road or with the general personality of the intersection based on the time of day and so on. And this nudging idea, it seems that people don't even nudge. They just aggressively take, make a decision. Somebody, there's a runner that gave me this advice. I sometimes run in the street, not in the street, on the sidewalk. And he said that if you don't make eye contact with people when you're running, they will all move out of your way. It's called civil inattention. Civil inattention, that's a thing. Oh wow, I need to look this up, but it works. What is that? 
My sense was if you communicate like confidence in your actions that you're unlikely to deviate from the action that you're following, that's a really powerful signal to others that they need to plan around your actions. As opposed to nudging where you're sort of hesitantly, then the hesitation might communicate that you're still in the dance and the game that they can influence with their own actions. I've recently had a conversation with Jim Keller, who's a sort of this legendary chip architect, but he also led the autopilot team for a while. And his intuition that driving is fundamentally still like a ballistics problem. Like you can ignore the human element that is just not hitting things. And you can kind of learn the right dynamics required to do the merger and all those kinds of things. And then my sense is, and I don't know if I can provide sort of definitive proof of this, but my sense is like an order of magnitude are more difficult when humans are involved. Like it's not simply object collision avoidance problem. Where does your intuition, of course, nobody knows the right answer here, but where does your intuition fall on the difficulty, fundamental difficulty of the driving problem when humans are involved? Yeah, good question. I have many opinions on this. Imagine downtown San Francisco. Yeah, it's crazy, busy, everything. Okay, now take all the humans out. No pedestrians, no human driven vehicles, no cyclists, no people on little electric scooters zipping around, nothing. I think we're done. I think driving at that point is done. We're done. There's nothing really that still needs to be solved about that. Well, let's pause there. I think I agree with you and I think a lot of people that will hear will agree with that, but we need to sort of internalize that idea. So what's the problem there? Cause we might not quite yet be done with that. Cause a lot of people kind of focus on the perception problem. A lot of people kind of map autonomous driving into how close are we to solving, being able to detect all the, you know, the drivable area, the objects in the scene. Do you see that as a, how hard is that problem? So your intuition there behind your statement was we might have not solved it yet, but we're close to solving basically the perception problem. I think the perception problem, I mean, and by the way, a bunch of years ago, this would not have been true. And a lot of issues in the space were coming from the fact that, oh, we don't really, you know, we don't know what's where. But I think it's fairly safe to say that at this point, although you could always improve on things and all of that, you can drive through downtown San Francisco if there are no people around. There's no really perception issues standing in your way there. I think perception is hard, but yeah, it's, we've made a lot of progress on the perception, so I had to undermine the difficulty of the problem. I think everything about robotics is really difficult, of course, I think that, you know, the planning problem, the control problem, all very difficult, but I think what's, what makes it really kind of, yeah. It might be, I mean, you know, and I picked downtown San Francisco, it's adapting to, well, now it's snowing, now it's no longer snowing, now it's slippery in this way, now it's the dynamics part could, I could imagine being still somewhat challenging, but. No, the thing that I think worries us, and our intuition's not good there, is the perception problem at the edge cases. 
Sort of downtown San Francisco, the nice thing, it's not actually, it may not be a good example because. Because you know what you're getting from, well, there's like crazy construction zones and all of that. Yeah, but the thing is, you're traveling at slow speeds, so like it doesn't feel dangerous. To me, what feels dangerous is highway speeds, when everything is, to us humans, super clear. Yeah, I'm assuming LiDAR here, by the way. I think it's kind of irresponsible to not use LiDAR. That's just my personal opinion. That's, I mean, depending on your use case, but I think like, you know, if you have the opportunity to use LiDAR, in a lot of cases, you might not. Good, your intuition makes more sense now. So you don't think vision. I really just don't know enough to say, well, vision alone, what, you know, what's like, there's a lot of, how many cameras do you have? Is it, how are you using them? I don't know. There's details. There's all, there's all sorts of details. I imagine there's stuff that's really hard to actually see, you know, how do you deal with glare, exactly what you were saying, stuff that people would see that you don't. I think I have, more of my intuition comes from systems that can actually use LiDAR as well. Yeah, and until we know for sure, it makes sense to be using LiDAR. That's kind of the safety focus. But then the sort of the, I also sympathize with the Elon Musk statement of LiDAR is a crutch. It's a fun notion to think that the things that work today is a crutch for the invention of the things that will work tomorrow, right? Like it, it's kind of true in the sense that if, you know, we want to stick to the comfort zone, you see this in academic and research settings all the time, the things that work force you to not explore outside, think outside the box. I mean, that happens all the time. The problem is in the safety critical systems, you kind of want to stick with the things that work. So it's an interesting and difficult trade off in the case of real world sort of safety critical robotic systems, but so your intuition is, just to clarify, how, I mean, how hard is this human element for, like how hard is driving when this human element is involved? Are we years, decades away from solving it? But perhaps actually the year isn't the thing I'm asking. It doesn't matter what the timeline is, but do you think we're, how many breakthroughs are we away from in solving the human robotic interaction problem to get this, to get this right? I think it, in a sense, it really depends. I think that, you know, we were talking about how, well, look, it's really hard because anticipate what people do is hard. And on top of that, playing the game is hard. But I think we sort of have the fundamental, some of the fundamental understanding for that. And then you already see that these systems are being deployed in the real world, you know, even driverless. Like there's, I think now a few companies that don't have a driver in the car in some small areas. I got a chance to, I went to Phoenix and I, I shot a video with Waymo and I needed to get that video out. People have been giving me slack, but there's incredible engineering work being done there. And it's one of those other seminal moments for me in my life to be able to, it sounds silly, but to be able to drive without a ride, sorry, without a driver in the seat. I mean, that was an incredible robotics. I was driven by a robot without being able to take over, without being able to take the steering wheel. 
That's a magical, that's a magical moment. So in that regard, in those domains, at least for like Waymo, they're solving that human, there's, I mean, they're going, I mean, it felt fast because you're like freaking out at first. That was, this is my first experience, but it's going like the speed limit, right? 30, 40, whatever it is. And there's humans and it deals with them quite well. It detects them, it negotiates the intersections, the left turns and all of that. So at least in those domains, it's solving them. The open question for me is like, how quickly can we expand? You know, that's the, you know, outside of the weather conditions, all of those kinds of things, how quickly can we expand to like cities like San Francisco? Yeah, and I wouldn't say that it's just, you know, now it's just pure engineering and it's probably the, I mean, and by the way, I'm speaking kind of very generally here as hypothesizing, but I think that there are successes and yet no one is everywhere out there. So that seems to suggest that things can be expanded and can be scaled and we know how to do a lot of things, but there's still probably, you know, new algorithms or modified algorithms that you still need to put in there as you learn more and more about new challenges that you get faced with. How much of this problem do you think can be learned through end to end? Is it the success of machine learning and reinforcement learning? How much of it can be learned from sort of data from scratch and how much, which most of the success of autonomous vehicle systems have a lot of heuristics and rule based stuff on top, like human expertise injected forced into the system to make it work. What's your sense? How much, what will be the role of learning in the near term and long term? I think on the one hand that learning is inevitable here, right? I think on the other hand that when people characterize the problem as it's a bunch of rules that some people wrote down, versus it's an end to end RL system or imitation learning, then maybe there's kind of something missing from maybe that's more. So for instance, I think a very, very useful tool in this sort of problem, both in how to generate the car's behavior and robots in general and how to model human beings is actually planning, search optimization, right? So robotics is the sequential decision making problem. And when a robot can figure out on its own how to achieve its goal without hitting stuff and all that stuff, right? All the good stuff for motion planning 101, I think of that as very much AI, not this is some rule or something. There's nothing rule based around that, right? It's just you're searching through a space and figuring out are you optimizing through a space and figure out what seems to be the right thing to do. And I think it's hard to just do that because you need to learn models of the world. And I think it's hard to just do the learning part where you don't bother with any of that, because then you're saying, well, I could do imitation, but then when I go off distribution, I'm really screwed. Or you can say, I can do reinforcement learning, which adds a lot of robustness, but then you have to do either reinforcement learning in the real world, which sounds a little challenging or that trial and error, you know, or you have to do reinforcement learning in simulation. And then that means, well, guess what? 
You need to model things, at least to model people, model the world enough that whatever policy you get out of that is actually fine to roll out in the world, and do some additional learning there. So. Do you think simulation, by the way, just as a quick tangent, has a role in the human robot interaction space? Like, is it useful? It seems like humans, everything we've been talking about, are difficult to model and simulate. Do you think simulation has a role in this space? I do. I think so, because you can take models and train with them ahead of time, for instance. You can. But the models, sorry to interrupt, the models are sort of human constructed or learned? I think they have to be a combination, because if you get some human data and then you say, this is how, this is gonna be my model of the person. What, for simulation and training, or for just deployment time? And that's what I'm planning with as my model of how people work. Regardless, if you take some data and you don't assume anything else, and you just say, okay, this is some data that I've collected, let me fit a policy to how people work based on that. What tends to happen is you collected some data from some distribution, and then now your robot sort of computes a best response to that, right? It's sort of like, what should I do if this is how people work? And it easily goes off of distribution, where that model that you've built of the human completely sucks, because out of distribution, you have no idea, right? If you think of all the possible policies and then you take only the ones that are consistent with the human data that you've observed, that still leaves a lot of things that could happen outside of that distribution, where you're confident that you know what's going on. By the way, I've gotten used to this terminology of out of distribution, but it's such machine learning terminology, because it kind of assumes, so, distribution is referring to the data that you've seen. The set of states that you encountered at training time. That you've encountered so far at training time, yeah. But it kind of also implies that there's a nice statistical model that represents that data. So out of distribution feels like, I don't know, it raises to me philosophical questions of how we humans reason out of distribution, reason about things that are completely new, that we haven't seen before. And what we're talking about here is how do we reason about what other people do in situations where we haven't seen them? And somehow we just magically navigate that. I can anticipate what will happen in situations that are even novel in many ways. And I have a pretty good intuition for it, I don't always get it right, but, you know, and I might be a little uncertain and so on. But I think the thing is that if you just rely on data, you know, there's just too many possibilities, there's too many policies out there that fit the data. And by the way, it's not just state, it's really kind of history of state, cause to really be able to anticipate what the person will do, it kind of depends on what they've been doing so far, cause that's the information you need to kind of, at least implicitly, sort of say, oh, this is the kind of person this is, this is probably what they're trying to do. So anyway, it's like you're trying to map history of states to actions, and there's many mappings. And history meaning like the last few seconds or the last few minutes or the last few months. Who knows, who knows how much you need, right?
In terms of if your state is really like the positions of everything or whatnot and velocities, who knows how much you need. And then there's so many mappings. And so now you're talking about how do you regularize that space? What priors do you impose or what's the inductive bias? So, you know, there's all very related things to think about it. Basically, what are assumptions that we should be making such that these models actually generalize outside of the data that we've seen? And now you're talking about, well, I don't know, what can you assume? Maybe you can assume that people like actually have intentions and that's what drives their actions. Maybe that's, you know, the right thing to do when you haven't seen data very nearby that tells you otherwise. I don't know, it's a very open question. Do you think sort of that one of the dreams of artificial intelligence was to solve common sense reasoning, whatever the heck that means. Do you think something like common sense reasoning has to be solved in part to be able to solve this dance of human robot interaction, the driving space or human robot interaction in general? Do you have to be able to reason about these kinds of common sense concepts of physics, of, you know, all the things we've been talking about humans, I don't even know how to express them with words, but the basics of human behavior, a fear of death. So like, to me, it's really important to encode in some kind of sense, maybe not, maybe it's implicit, but it feels that it's important to explicitly encode the fear of death, that people don't wanna die. Because it seems silly, but like the game of chicken that involves with the pedestrian crossing the street is playing with the idea of mortality. Like we really don't wanna die. It's not just like a negative reward. I don't know, it just feels like all these human concepts have to be encoded. Do you share that sense or is this a lot simpler than I'm making out to be? I think it might be simpler. And I'm the person who likes to complicate things. I think it might be simpler than that. Because it turns out, for instance, if you say model people in the very, I'll call it traditional, I don't know if it's fair to look at it as a traditional way, but you know, calling people as, okay, they're rational somehow, the utilitarian perspective. Well, in that, once you say that, you automatically capture that they have an incentive to keep on being. You know, Stuart likes to say, you can't fetch the coffee if you're dead. Stuart Russell, by the way. That's a good line. So when you're sort of treating agents as having these objectives, these incentives, humans or artificial, you're kind of implicitly modeling that they'd like to stick around so that they can accomplish those goals. So I think in a sense, maybe that's what draws me so much to the rationality framework, even though it's so broken, we've been able to, it's been such a useful perspective. And like we were talking about earlier, what's the alternative? I give up and go home or, you know, I just use complete black boxes, but then I don't know what to assume out of distribution that come back to this. It's just, it's been a very fruitful way to think about the problem in a very more positive way, right? People aren't just crazy. Maybe they make more sense than we think. But I think we also have to somehow be ready for it to be wrong, be able to detect when these assumptions aren't holding, be all of that stuff. 
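One way to picture the point about intentions as the assumption that lets you generalize: compare a raw lookup of observed state-action data with a model that first infers a goal and then predicts goal-directed behavior. This is a toy sketch; the goals, the Boltzmann-style rationality model, and all the numbers are made up for illustration.

```python
# Minimal sketch of "assume people have intentions" as an inductive bias.
# We observe a few state->action pairs, then predict the person's action in a
# state we have never seen. A raw lookup of the data has nothing to say there;
# an intent-based model does. Goals, states, and numbers are toy assumptions.

import math

GOALS = {"coffee": (8, 0), "exit": (0, 8)}
ACTIONS = {"right": (1, 0), "up": (0, 1)}
BETA = 1.5

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def boltzmann_action_prob(action, state, goal):
    """Actions that make more progress toward the goal are exponentially more likely."""
    scores = {a: math.exp(-BETA * dist((state[0] + d[0], state[1] + d[1]), GOALS[goal]))
              for a, d in ACTIONS.items()}
    return scores[action] / sum(scores.values())

def infer_goal(demonstrations):
    """Posterior over goals given observed state-action pairs (uniform prior)."""
    post = {g: 1.0 for g in GOALS}
    for state, action in demonstrations:
        for g in GOALS:
            post[g] *= boltzmann_action_prob(action, state, g)
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

def predict_with_intent(state, goal_posterior):
    """Predict the action by marginalizing over the inferred goal."""
    probs = {a: sum(goal_posterior[g] * boltzmann_action_prob(a, state, g)
                    for g in GOALS) for a in ACTIONS}
    return max(probs, key=probs.get)

# A short demonstration: the person keeps moving right, away from the exit.
demos = [((0, 0), "right"), ((1, 0), "right"), ((2, 0), "right")]

# Raw lookup "policy": only defined on states we have actually seen.
lookup = dict(demos)
novel_state = (5, 3)
print("lookup policy at", novel_state, "->", lookup.get(novel_state, "no idea"))

# Intent-based prediction still has an answer in the novel state.
posterior = infer_goal(demos)
print("inferred goals:", {g: round(p, 3) for g, p in posterior.items()})
print("intent-based prediction at", novel_state, "->",
      predict_with_intent(novel_state, posterior))
```

The lookup has nothing to say in the unseen state, while the intent-based model commits to a sensible prediction, which is the kind of inductive bias being discussed here: people probably have goals, and their behavior is probably driven by them.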
Let me ask about sort of another small side of this. We've been talking about the pure autonomous driving problem, but there's also relatively successful systems already deployed out there in what you may call level two autonomy, or semi autonomous vehicles, whether that's Tesla Autopilot, or the Cadillac Super Cruise system, which I've worked with quite a bit, which has a driver facing camera that detects your state. There's a bunch of basically lane centering systems. What's your sense about this kind of way of dealing with the human robot interaction problem, by having a really dumb robot and relying on the human to help the robot out to keep them both alive? From the research perspective, how difficult is that problem? And from a practical deployment perspective, is that a fruitful way to approach this human robot interaction problem? I think what we have to be careful about there is to not, it seems like some of these systems, not all, are making this underlying assumption that, so, I'm a driver and I'm now really not driving, but supervising, and my job is to intervene, right? And so we have to be careful with this assumption that if I'm supervising, I will be just as safe as when I'm driving. That if I wouldn't get into some kind of accident if I'm driving, I will be able to avoid that accident when I'm supervising too. And I think I'm concerned about this assumption from a few perspectives. So from a technical perspective, it's that when you let something kind of take control and do its thing, and it depends on what that thing is, obviously, and how much it's taking control and what things you're trusting it to do. But if you let it do its thing and take control, it will go to what we might call, from the person's perspective, off policy states. So states that the person wouldn't actually find themselves in if they were the ones driving. And the assumption that the person functions just as well there as they function in the states that they would normally encounter is a little questionable. Now, another part is kind of the human factors side of this, which is that, I don't know about you, but I think I definitely feel like I'm experiencing things very differently when I'm actively engaged in the task versus when I'm a passive observer. Like, even if I try to stay engaged, right? It's very different than when I'm actually actively making decisions. And you see this in life in general. Like, you see students who are actively trying to come up with the answer learn this thing better than when they're passively told the answer. I think that's somewhat related. And I think people have studied this in human factors for airplanes. And I think it's actually fairly established that these two are not the same. So. On that point, because I've gotten a huge amount of heat on this and I stand by it. Okay. Because I know the human factors community well, and the work here is really strong. And there's many decades of work showing exactly what you're saying. Nevertheless, I've been continuously surprised that many of the predictions of that work have been wrong in what I've seen. So what we have to do, I still agree with everything you said, but we have to be a little bit more open minded. So I'll tell you, there's a few surprising things about supervision. Like, everything you said, to the word, is actually exactly correct.
But it doesn't say, what you didn't say is that these systems are, you said you can't assume a bunch of things, but we don't know if these systems are fundamentally unsafe. That's still unknown. There's a lot of interesting things, like I'm surprised by the fact, not the fact, that what seems to be anecdotally from, well, from large data collection that we've done, but also from just talking to a lot of people, when in the supervisory role of semi autonomous systems that are sufficiently dumb, at least, which is, that might be the key element, is the systems have to be dumb. The people are actually more energized as observers. So they're actually better, they're better at observing the situation. So there might be cases in systems, if you get the interaction right, where you, as a supervisor, will do a better job with the system together. I agree, I think that is actually really possible. I guess mainly I'm pointing out that if you do it naively, you're implicitly assuming something, that assumption might actually really be wrong. But I do think that if you explicitly think about what the agent should do so that the person still stays engaged. What the, so that you essentially empower the person to do more than they could, that's really the goal, right? Is you still have a driver, so you wanna empower them to be so much better than they would be by themselves. And that's different, it's a very different mindset than I want them to basically not drive, right? And, but be ready to sort of take over. So one of the interesting things we've been talking about is the rewards, that they seem to be fundamental too, the way robots behaves. So broadly speaking, we've been talking about utility functions and so on, but could you comment on how do we approach the design of reward functions? Like, how do we come up with good reward functions? Well, really good question, because the answer is we don't. This was, you know, I used to think, I used to think about how, well, it's actually really hard to specify rewards for interaction because it's really supposed to be what the people want, and then you really, you know, we talked about how you have to customize what you wanna do to the end user. But I kind of realized that even if you take the interactive component away, it's still really hard to design reward functions. So what do I mean by that? I mean, if we assume this sort of AI paradigm in which there's an agent and his job is to optimize some objectives, some reward, utility, loss, whatever, cost, if you write it out, maybe it's a set, depending on the situation or whatever it is, if you write that out and then you deploy the agent, you'd wanna make sure that whatever you specified incentivizes the behavior you want from the agent in any situation that the agent will be faced with, right? So I do motion planning on my robot arm, I specify some cost function like, you know, this is how far away you should try to stay, so much it matters to stay away from people, and this is how much it matters to be able to be efficient and blah, blah, blah, right? I need to make sure that whatever I specified, those constraints or trade offs or whatever they are, that when the robot goes and solves that problem in every new situation, that behavior is the behavior that I wanna see. And what I've been finding is that we have no idea how to do that. 
Basically, what I can do is I can sample, I can think of some situations that I think are representative of what the robot will face, and I can tune and add and tune some reward function until the optimal behavior is what I want on those situations, which first of all is super frustrating because, you know, through the miracle of AI, we've taken, we don't have to specify rules for behavior anymore, right? The, who were saying before, the robot comes up with the right thing to do, you plug in this situation, it optimizes right in that situation, it optimizes, but you have to spend still a lot of time on actually defining what it is that that criteria should be, making sure you didn't forget about 50 bazillion things that are important and how they all should be combining together to tell the robot what's good and what's bad and how good and how bad. And so I think this is a lesson that I don't know, kind of, I guess I close my eyes to it for a while cause I've been, you know, tuning cost functions for 10 years now, but it's really strikes me that, yeah, we've moved the tuning and the like designing of features or whatever from the behavior side into the reward side. And yes, I agree that there's way less of it, but it still seems really hard to anticipate any possible situation and make sure you specify a reward function that when optimized will work well in every possible situation. So you're kind of referring to unintended consequences or just in general, any kind of suboptimal behavior that emerges outside of the things you said, out of distribution. Suboptimal behavior that is, you know, actually optimal. I mean, this, I guess the idea of unintended consequences, you know, it's optimal respect to what you specified, but it's not what you want. And there's a difference between those. But that's not fundamentally a robotics problem, right? That's a human problem. So like. That's the thing, right? So there's this thing called Goodhart's law, which is you set a metric for an organization and the moment it becomes a target that people actually optimize for, it's no longer a good metric. What's it called? Goodhart's law. Goodhart's law. So the moment you specify a metric, it stops doing its job. Yeah, it stops doing its job. So there's, yeah, there's such a thing as optimizing for things and, you know, failing to think ahead of time of all the possible things that might be important. And so that's, so that's interesting because Historia works a lot on reward learning from the perspective of customizing to the end user, but it really seems like it's not just the interaction with the end user that's a problem of the human and the robot collaborating so that the robot can do what the human wants, right? This kind of back and forth, the robot probing, the person being informative, all of that stuff might be actually just as applicable to this kind of maybe new form of human robot interaction, which is the interaction between the robot and the expert programmer, roboticist designer in charge of actually specifying what the heck the robot should do, specifying the task for the robot. That's fascinating. That's so cool, like collaborating on the reward design. Right, collaborating on the reward design. And so what does it mean, right? What does it, when we think about the problem, not as someone specifies all of your job is to optimize, and we start thinking about you're in this interaction and this collaboration. 
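A tiny illustration of why tuning a reward on representative situations is so brittle. Here a single trade-off weight between speed and clearance is tuned so the robot picks the comfortable trajectory in the situation the designer thought about, and the very same weight then picks an uncomfortably close trajectory in a situation they did not. All trajectories and numbers are invented for illustration.

```python
# Minimal sketch of reward misspecification: the reward is tuned on the
# situations we imagined, then optimized in one we did not. Toy values only.

def reward(traj, w_clearance):
    """Toy reward: be fast, but get credit for keeping distance from the person."""
    return -traj["time"] + w_clearance * traj["clearance"]

def best(trajectories, w_clearance):
    return max(trajectories, key=lambda t: reward(t, w_clearance))

# Training situation: we look at these two options and tune the weight until the
# robot picks the comfortable one (B), which happens for any weight above 6.
training = [
    {"name": "A: fast, brushes past the person", "time": 5.0, "clearance": 0.5},
    {"name": "B: slower, keeps a comfortable distance", "time": 8.0, "clearance": 1.0},
]
W = 7.0
print("training choice:", best(training, W)["name"])   # -> B, as intended

# Deployment situation we never thought about: now the fast option squeezes by
# at 0.2 m. The tuned reward happily trades that closeness for the time saved.
deployment = [
    {"name": "C: very fast, 0.2 m from the person", "time": 4.0, "clearance": 0.2},
    {"name": "D: much slower, keeps 1.0 m away", "time": 12.0, "clearance": 1.0},
]
print("deployment choice:", best(deployment, W)["name"])  # -> C, not intended
```

The deployment choice is optimal with respect to the specified reward and still not what anyone wanted, which is exactly the Goodhart-style failure being described.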
And the first thing that comes up is when the person specifies a reward, it's not, you know, gospel, it's not like the letter of the law. It's not the definition of the reward function you should be optimizing, because they're doing their best, but they're not some magic perfect oracle. And the sooner we start understanding that, I think the sooner we'll get to more robust robots that function better in different situations. And then you have kind of say, okay, well, it's almost like robots are over learning, over putting too much weight on the reward specified by definition, and maybe leaving a lot of other information on the table, like what are other things we could do to actually communicate to the robot about what we want them to do besides attempting to specify a reward function. Yeah, you have this awesome, and again, I love the poetry of it, of leaked information. So you mentioned humans leak information about what they want, you know, leak reward signal for the robot. So how do we detect these leaks? What is that? Yeah, what are these leaks? Whether it just, I don't know, those were just recently saw it, read it, I don't know where from you, and it's gonna stick with me for a while for some reason, because it's not explicitly expressed. It kind of leaks indirectly from our behavior. From what we do, yeah, absolutely. So I think maybe some surprising bits, right? So we were talking before about, I'm a robot arm, it needs to move around people, carry stuff, put stuff away, all of that. And now imagine that, you know, the robot has some initial objective that the programmer gave it so they can do all these things functionally. It's capable of doing that. And now I noticed that it's doing something and maybe it's coming too close to me, right? And maybe I'm the designer, maybe I'm the end user and this robot is now in my home. And I push it away. So I push away because, you know, it's a reaction to what the robot is currently doing. And this is what we call physical human robot interaction. And now there's a lot of interesting work on how the heck do you respond to physical human robot interaction? What should the robot do if such an event occurs? And there's sort of different schools of thought. Well, you know, you can sort of treat it the control theoretic way and say, this is a disturbance that you must reject. You can sort of treat it more kind of heuristically and say, I'm gonna go into some like gravity compensation mode so that I'm easily maneuverable around. I'm gonna go in the direction that the person pushed me. And to us, part of realization has been that that is signal that communicates about the reward. Because if my robot was moving in an optimal way and I intervened, that means that I disagree with his notion of optimality, right? Whatever it thinks is optimal is not actually optimal. And sort of optimization problems aside, that means that the cost function, the reward function is incorrect, or at least is not what I want it to be. How difficult is that signal to interpret and make actionable? So like, cause this connects to our autonomous vehicle discussion where they're in the semi autonomous vehicle or autonomous vehicle when a safety driver disengages the car, like, but they could have disengaged it for a million reasons. Yeah, so that's true. Again, it comes back to, can you structure a little bit your assumptions about how human behavior relates to what they want? 
And you can, one thing that we've done is literally just treated this external torque that they applied as, when you take that and you add it with what the torque the robot was already applying, that overall action is probably relatively optimal in respect to whatever it is that the person wants. And then that gives you information about what it is that they want. So you can learn that people want you to stay further away from them. Now you're right that there might be many things that explain just that one signal and that you might need much more data than that for the person to be able to shape your reward function over time. You can also do this info gathering stuff that we were talking about. Not that we've done that in that context, just to clarify, but it's definitely something we thought about where you can have the robot start acting in a way, like if there's a bunch of different explanations, right? It moves in a way where it sees if you correct it in some other way or not, and then kind of actually plans its motion so that it can disambiguate and collect information about what you want. Anyway, so that's one way, that's kind of sort of leaked information, maybe even more subtle leaked information is if I just press the E stop, right? I just, I'm doing it out of panic because the robot is about to do something bad. There's again, information there, right? Okay, the robot should definitely stop, but it should also figure out that whatever it was about to do was not good. And in fact, it was so not good that stopping and remaining stopped for a while was a better trajectory for it than whatever it is that it was about to do. And that again is information about what are my preferences, what do I want? Speaking of E stops, what are your expert opinions on the three laws of robotics from Isaac Asimov that don't harm humans, obey orders, protect yourself? I mean, it's such a silly notion, but I speak to so many people these days, just regular folks, just, I don't know, my parents and so on about robotics. And they kind of operate in that space of, you know, imagining our future with robots and thinking what are the ethical, how do we get that dance right? I know the three laws might be a silly notion, but do you think about like what universal reward functions that might be that we should enforce on the robots of the future? Or is that a little too far out and it doesn't, or is the mechanism that you just described, it shouldn't be three laws, it should be constantly adjusting kind of thing. I think it should constantly be adjusting kind of thing. You know, the issue with the laws is, I don't even, you know, they're words and I have to write math and have to translate them into math. What does it mean to? What does harm mean? What is, it's not math. Obey what, right? Cause we just talked about how you try to say what you want, but you don't always get it right. And you want these machines to do what you want, not necessarily exactly what you literally, so you don't want them to take you literally. You wanna take what you say and interpret it in context. And that's what we do with the specified rewards. We don't take them literally anymore from the designer. 
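To make the physical correction idea a bit more concrete, here is a minimal sketch of one way to turn a push into reward information: compare the features of the robot's planned motion with the features of the human-corrected motion, and shift the reward weights so the corrected one scores better. This is an online-gradient-style update on invented features and numbers, a sketch of the general idea rather than the exact method from any particular paper.

```python
# Minimal sketch of treating a physical push as evidence about the reward.
# Features, trajectories, and the learning rate are all toy assumptions.

def features(traj):
    """Toy features: average clearance from the person (m) and path length (m)."""
    return {"clearance": traj["clearance"], "length": traj["length"]}

def update_weights(weights, planned, corrected, lr=0.5):
    """Shift weights so the human-corrected trajectory scores better than the
    robot's original plan: w <- w + lr * (phi(corrected) - phi(planned))."""
    phi_p, phi_c = features(planned), features(corrected)
    return {k: weights[k] + lr * (phi_c[k] - phi_p[k]) for k in weights}

# The robot planned to pass 0.3 m from the person; the push deformed the motion
# into something 0.8 m away (and slightly longer).
planned   = {"clearance": 0.3, "length": 2.0}
corrected = {"clearance": 0.8, "length": 2.3}

weights = {"clearance": 1.0, "length": -1.0}   # initial guess at what matters
weights = update_weights(weights, planned, corrected)
print(weights)
# The clearance weight goes up, and length is penalized slightly less harshly,
# so future plans keep more distance without the person having to push again.
```

After a single correction, the robot's notion of what matters shifts in the direction the person pushed, which is the spirit of treating the combined action as approximately optimal for what they actually want.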
We, not we as a community, we as, you know, some members of my group and some of our collaborators, like Pieter Abbeel and Stuart Russell, we sort of say, okay, the designer specified this thing, but I'm gonna interpret it not as, this is the universal reward function that I shall optimize always and forever, but as, this is good evidence about what the person wants, and I should interpret that evidence in the context of these situations that it was specified for. Cause ultimately that's what the designer thought about. That's what they had in mind. And really, them specifying a reward function that works for me in all these situations is really kind of telling me that whatever behavior that incentivizes must be good behavior with respect to the thing that I should actually be optimizing for. And so now the robot kind of has uncertainty about what it is that it should be, what its reward function is. And then there's all these additional signals that we've been finding that it can kind of continually learn from and adapt its understanding of what people want. Every time the person corrects it, maybe they demonstrate, maybe they stop it, hopefully not, right? One really, really crazy one is the environment itself. Like, our world, you know, you observe our world and the state of it. And it's not that you're seeing behavior and you're saying, oh, people are making decisions that are rational, blah, blah, blah. But our world is something that we've been acting on according to our preferences. So I have this example where, like, the robot walks into my home and my shoes are laid down on the floor kind of in a line, right? It took effort to do that. So even though the robot doesn't see me doing this, you know, actually aligning the shoes, it should still be able to figure out that I want the shoes aligned, because there's no way for them to have magically, you know, instantiated themselves in that way. Someone must have actually taken the time to do that. So it must be important. So the environment actually tells, the environment is. Leaks information. It leaks information. I mean, the environment is the way it is because humans somehow manipulated it. So you have to kind of reverse engineer the narrative that happened to create the environment as it is, and that leaks the preference information. Yeah, and you have to be careful, right? Because people don't have the bandwidth to do everything. So just because, you know, my house is messy doesn't mean that I want it to be messy, right? It's just, you know, I didn't put the effort into that. I put the effort into something else. So the robot should figure out, well, that something else was more important, but it doesn't mean that, you know, the house being messy is not. So it's a little subtle, but yeah, we really think of it as the state itself is kind of like a choice that people implicitly made about how they want their world. What book or books, technical or fiction or philosophical, when you look back at your life, had a big impact on you? Maybe it was a turning point, maybe it was inspiring in some way. Maybe we're talking about some silly book that nobody in their right mind would want to read. Or maybe it's a book that you would recommend to others to read. Or maybe those could be two different recommendations of books that could be useful for people on their journey. When I was in, it's kind of a personal story.
When I was in 12th grade, I got my hands on a PDF copy, in Romania, of Russell and Norvig's AI: A Modern Approach. I didn't know anything about AI at that point. You know, I had watched the movie, The Matrix, that was my exposure. And so I started going through this thing and, you know, you were asking in the beginning, it's math and it's algorithms, what's interesting about it. It was so captivating, this notion that you could just have a goal and figure out your way through kind of a messy, complicated situation. What sequence of decisions you should make, autonomously, to achieve that goal. That was so cool. I'm, you know, I'm biased, but that's a cool book to look at. You can convert, you know, the goal of intelligence, the process of intelligence, and mechanize it. I had the same experience. I was really interested in psychiatry and trying to understand human behavior. And then AI: A Modern Approach is like, wait, you can just reduce it all to... You can write math about human behavior, right? Yeah. So that's, and I think that stuck with me, because, you know, a lot of what I do, a lot of what we do in my lab, is write math about human behavior, combine it with data and learning, put it all together, give it to robots to plan with, and, you know, hope that instead of writing rules for the robots, writing heuristics, designing behavior, they can actually autonomously come up with the right thing to do around people. That's kind of our, you know, that's our signature move. We write some math, and then instead of kind of hand crafting this and that and that, the robot figures stuff out, and isn't that cool? And I think that is the same enthusiasm that I got from the robot figuring out how to reach that goal in that graph. Isn't that cool? So I apologize for the romanticized questions, and the silly ones, but if a doctor gave you five years to live, sort of emphasizing the finiteness of our existence, what would you try to accomplish? It's like my biggest nightmare, by the way. I really like living. So I really don't like the idea of being told that I'm going to die. Sorry to linger on that for a second. Do you meditate or ponder on your mortality, on the fact that this thing ends? It seems to be a fundamental feature. Do you think of it as a feature or a bug? You said you don't like the idea of dying, but if I were to give you a choice of living forever, like, you're not allowed to die. Now, I'll say that I want to live forever, but I watched this show, it's very silly, it's called The Good Place, and they reflect a lot on this. And, you know, the moral of the story is that you have to make the afterlife be finite too. Cause otherwise people just kind of, it's like WALL-E. It's like, ah, whatever. So I think the finiteness helps, but yeah, it's just, you know, I'm not a religious person. I don't think that there's something after. And so I think it just ends, and you stop existing. And I really like existing. It's just such a great privilege to exist that, yeah, I think that's the scary part. I still think that we like existing so much because it ends. And that's so sad. Like, it's so sad to me every time. Like, I find almost everything about this life beautiful. Like, the silliest, most mundane things are just beautiful. And I think I'm cognizant of the fact that I find it beautiful because it ends. And it's so, I don't know. I don't know how to feel about that.
I also feel like there's a lesson in there for robotics and AI that is not like the finiteness of things seems to be a fundamental nature of human existence. I think some people sort of accuse me of just being Russian and melancholic and romantic or something, but that seems to be a fundamental nature of our existence that should be incorporated in our reward functions. But anyway, if you were speaking of reward functions, if you only had five years, what would you try to accomplish? This is the thing. I'm thinking about this question and have a pretty joyous moment because I don't know that I would change much. I'm trying to make some contributions to how we understand human AI interaction. I don't think I would change that. Maybe I'll take more trips to the Caribbean or something, but I tried some of that already from time to time. So, yeah, I try to do the things that bring me joy and thinking about these things bring me joy is the Marie Kondo thing. Don't do stuff that doesn't spark joy. For the most part, I do things that spark joy. Maybe I'll do less service in the department or something. I'm not dealing with admissions anymore. But no, I think I have amazing colleagues and amazing students and amazing family and friends and spending time in some balance with all of them is what I do and that's what I'm doing already. So, I don't know that I would really change anything. So, on the spirit of positiveness, what small act of kindness, if one pops to mind, were you once shown that you will never forget? When I was in high school, my friends, my classmates did some tutoring. We were gearing up for our baccalaureate exam and they did some tutoring on, well, some on math, some on whatever. I was comfortable enough with some of those subjects, but physics was something that I hadn't focused on in a while. And so, they were all working with this one teacher and I started working with that teacher. Her name is Nicole Beccano. And she was the one who kind of opened up this whole world for me because she sort of told me that I should take the SATs and apply to go to college abroad and do better on my English and all of that. And when it came to, well, financially I couldn't, my parents couldn't really afford to do all these things, she started tutoring me on physics for free and on top of that sitting down with me to kind of train me for SATs and all that jazz that she had experience with. Wow. And obviously that has taken you to be here today, sort of one of the world experts in robotics. It's funny those little... For no reason really. Just out of karma. Wanting to support someone, yeah. Yeah. So, we talked a ton about reward functions. Let me talk about the most ridiculous big question. What is the meaning of life? What's the reward function under which we humans operate? Like what, maybe to your life, maybe broader to human life in general, what do you think... What gives life fulfillment, purpose, happiness, meaning? You can't even ask that question with a straight face. That's how ridiculous this is. I can't, I can't. Okay. So, you know... You're going to try to answer it anyway, aren't you? So, I was in a planetarium once. Yes. And, you know, they show you the thing and then they zoom out and zoom out and this whole, like, you're a speck of dust kind of thing. I think I was conceptualizing that we're kind of, you know, what are humans? We're just on this little planet, whatever. We don't matter much in the grand scheme of things. 
And then my mind got really blown because they talked about this multiverse theory where they kind of zoomed out and were like, this is our universe. And then, like, there's a bazillion other ones and they just pop in and out of existence. So, like, our whole thing that we can't even fathom how big it is was like a blip that went in and out. And at that point, I was like, okay, like, I'm done. This is not, there is no meaning. And clearly what we should be doing is try to impact whatever local thing we can impact, our communities, leave a little bit behind there, our friends, our family, our local communities, and just try to be there for other humans because I just, everything beyond that seems ridiculous. I mean, are you, like, how do you make sense of these multiverses? Like, are you inspired by the immensity of it? Do you, I mean, is there, like, is it amazing to you or is it almost paralyzing in the mystery of it? It's frustrating. I'm frustrated by my inability to comprehend. It just feels very frustrating. It's like there's some stuff, you know, time, blah, blah, blah, that we should really be understanding. And I definitely don't understand it. But, you know, the amazing physicists of the world have a much better understanding than me. But it still seems epsilon in the grand scheme of things. So, it's very frustrating. It just, it sort of feels like our brains don't have some fundamental capacity yet, well, yet or ever. I don't know. Well, that's one of the dreams of artificial intelligence is to create systems that will aid, expand our cognitive capacity in order to understand, build the theory of everything with the physics and understand what the heck these multiverses are. So, I think there's no better way to end it than talking about the meaning of life and the fundamental nature of the universe and the multiverses. And the multiverse. So, Anca, it is a huge honor. One of my favorite conversations I've had. I really, really appreciate your time. Thank you for talking today. Thank you for coming. Come back again. Thanks for listening to this conversation with Anca Dragan. And thank you to our presenting sponsor, Cash App. Please consider supporting the podcast by downloading Cash App and using code LexPodcast. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman. And now, let me leave you with some words from Isaac Asimov. Your assumptions are your windows on the world. Scrub them off every once in a while or the light won't come in. Thank you for listening and hope to see you next time.
Anca Dragan: Human-Robot Interaction and Reward Engineering | Lex Fridman Podcast #81
The following is a conversation with Simon Sinek, author of several books, including Start With Why, Leaders Eat Last, and his latest, The Infinite Game. He's one of the best communicators of what it takes to be a good leader, to inspire, and to build businesses that solve big, difficult challenges. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter, at Lex Fridman, spelled F R I D M A N. As usual, I'll do one or two minutes of ads now, and never any ads in the middle that can break the flow of the conversation. I hope that works for you, and doesn't hurt the listening experience. Quick summary of the ads. Two sponsors, Cash App and Masterclass. Please consider supporting the podcast by downloading Cash App and using code LexPodcast, and signing up to Masterclass at Masterclass.com slash Lex. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LexPodcast. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar. Since Cash App allows you to buy Bitcoin, let me mention that cryptocurrency, in the context of the history of money, is fascinating. I recommend The Ascent of Money as a great book on this history. Debits and credits on ledgers started around 30,000 years ago. The US dollar was created over 200 years ago, and Bitcoin, the first decentralized cryptocurrency, was released just over 10 years ago. So given that history, cryptocurrency is still very much in its early days of development, but it's still aiming to, and just might, redefine the nature of money. So again, if you get Cash App from the App Store or Google Play, and use the code LexPodcast, you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. This show is sponsored by Masterclass. Sign up at Masterclass.com slash Lex to get a discount and to support this podcast. When I first heard about Masterclass, I honestly thought it was too good to be true. For $180 a year, you get an all access pass to watch courses from experts at the top of their field. To list some of my favorites: Chris Hadfield on Space Exploration, Neil deGrasse Tyson on Scientific Thinking and Communication, Will Wright, the creator of SimCity and The Sims, on Game Design. I love that game. Jane Goodall on Conservation, Carlos Santana, one of my favorite guitarists, on guitar, Garry Kasparov on Chess, obviously I'm Russian, I love Garry, Daniel Negreanu on Poker, one of my favorite poker players, also Phil Ivey gives a course as well, and many, many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money. By way of advice, for me, the key is not to be overwhelmed by the abundance of choice. Pick three courses you want to complete, watch each all the way through from start to finish, it's not that long, but it's an experience that will stick with you for a long time, I promise. It's easily worth the money, you can watch it on basically any device. Once again, sign up at masterclass.com slash Lex to get a discount and to support this podcast. And now, here's my conversation with Simon Sinek.
In The Infinite Game, your most recent book, you describe the finite game and the infinite game, so from my perspective of artificial intelligence and game theory in general, I'm a huge fan of finite games from the broad philosophical sense, it's something that in the robotics, artificial intelligence space, we know how to deal with, and then you describe the infinite game, which has no exact static rules, has no well defined static objective, the players are known, unknown, they change, there's the dynamic element, so this is something that applies to business, politics, life itself, so can you try to articulate the objective function here of the infinite game, or in the cliche, broad philosophical sense, what is the meaning of life? Go for the, start with a softball. Yep, easy question first. So James Carse was the philosopher who originally articulated this concept of finite and infinite games, and when I learned about it, it really challenged my view of how the world works, right? Because I think we all think about winning and being the best and being number one, but if you think about it, only in a finite game can that exist, a game that has fixed rules, agreed upon objectives, and known players, like football or baseball, there's always a beginning, middle, and end, and if there's a winner, there has to be a loser. Infinite games, as Carse describes them, as you said, have known and unknown players, which means anyone can join, it has changeable rules, which means you can play however you want, and the objective is to perpetuate the game, to stay in the game as long as possible. In other words, there's no such thing as being number one or winning in a game that has no finish line. And what I learned is that when we try to win in a game that has no finish line, we try to be number one, we try to be the best in a game that has no agreed upon objectives or agreed upon metrics or timeframes, there are a few consistent and predictable outcomes, the decline of trust, the decline of cooperation, the decline of innovation. And I find this fascinating because so many of the ways that we run most organizations is with a finite mindset. So trying to reduce the beautiful complex thing that is life or whatever, politics or business, into something very narrow, and in that process, the reductionist process, you lose something fundamental that makes the whole thing work in the long term. So returning, not gonna let you off the hook easy, what is the meaning of life? So what is the objective function that is worthwhile to pursue? Well, if you think about our tombstones, right? They have the date we were born and the date we died, but really it's what we do with the gap in between. There's a poem called The Dash. You know, it's the dash that matters. It's what we do between the time we're born and the time we die that gives our life meaning. And if we live our lives with a finite mindset, which means to accumulate more power or money than anybody else, to outdo everyone else, to be number one, to be the best, we don't take any of it with us. We just die. The people who get remembered, the way we wanna be remembered, is what kind of people we were, right? Devoted mother, loving father, what kind of person we were to other people. Jack Welch just died recently, and the Washington Post, when it wrote the headline for his obit, it wrote, he pleased Wall Street and distressed employees. And that's his legacy.
A finite player who is obsessed with winning, who leaves behind a legacy of short term gains for a few and distress for many. That's his legacy. And every single one of us gets the choice of the kind of legacy we wanna have. Do we wanna be remembered for our contributions or our detractions? To live with a finite mindset, to live a career with a finite mindset, to be number one, be the best, be the most famous, you live a life like Jack Welch, you know? To live a life of service, to see those around us rise, to contribute to our communities, to our organizations, to leave them in better shape than we found them, that's the kind of legacy most of us would like to have. So day to day, when you think about what is the fundamental goals, dreams, motivations of an infinite game, of seeing your life, your career as an infinite game, what does that look like? I mean, I guess I'm sort of trying to stick on this personal ego, personal drive, the thing that the fire, the reason we wanna wake up in the morning and the reason we can't go to bed because we're so excited, what is that? So for me, it's about having a just cause. It's about a vision that's bigger than me, that my work gets to contribute to something larger than myself, you know? That's what drives me every day. I wake up every morning with a vision of a world that does not yet exist, a world in which the vast majority of people wake up every single morning inspired, feel safe at work and return home fulfilled at the end of the day. It is not the world we live in. And so that we still have work to do is the thing that drives me. You know, I know what my underlying values are. You know, I wake up to inspire people to do the things that inspire them. And these are the things that, these are the things that I, these are my go tos, my touch points that inspire me to keep working. You know, I think of a career like an iceberg. You know, if you have a vision for something, you're the only one who can see the iceberg underneath the ocean. But if you start working at it, a little bit shows up. And now a few other people can see what you imagine, be like, oh, right, yeah, no, I wanna help build that as well. And if you have a lot of success, then you have a lot of iceberg and people can see this huge iceberg and they say, you've accomplished so much. But what I see is all the work still yet to be done. You know, I still see the huge iceberg underneath the ocean. And so the growth, you talk about momentum. So the incremental revealing of the iceberg is what drives you. Well, it necessarily is incremental. What drives me is that, is the realization, is realizing the iceberg, bringing more of the iceberg from the unknown to the known, bringing more of the vision from the imagination to reality. And you have this fundamental vision of optimism. You call yourself an optimist. I mean, in this world, I have a sort of, I see myself a little bit as the main character from The Idiot by Dostoevsky, who is also kind of seen by society as a fool because he was optimistic. So one, can you maybe articulate where that sense of optimism comes from? And maybe also try to articulate your vision of the future where people are inspired, where optimism drives us. It's easy to forget that when you look at social media and so on, where the word toxicity and negativity can often get more likes, that optimism has a sort of a beauty to it. And I do hope it's out there. So can you try to articulate that vision? 
Yeah, so I mean, for me, optimism and being an optimist is just seeing the silver lining in every cloud. Even in tragedy, it brings people together. And the question is, can we see that? Can you see the beauty that is in everything? And I don't think optimism is foolishness. I don't think optimism is blindness, though it probably involves some naivete, the belief that things will get better, the belief that we tend towards the good, even in times of struggle or bad. You can't sustain war, but you can sustain peace. I think things that are stable are more sustainable, things that are optimistic are more sustainable than things that are chaotic. So you see people as fundamentally good. I mean, some people may disagree that you can't sustain peace, you can't sustain war. I mean, I think war is costly. It involves life and money, and peace does not involve those things. It requires work. I'm not saying it doesn't require work, but it doesn't drain resources, I think the same way that war does. The people that would say that we will always have war, and I just talked to the historian of Stalin, would say that conflict and the desire for power and conflict is central to human nature. I concur. But something in your words also, perhaps it's the naive aspect that I also share, is that you have an optimism that people are fundamentally good. I'm an idealist, and I think idealism is good. I'm not a fool to believe that the ideals that I imagine can come true. Of course, there'll never be world peace, but shouldn't we die trying? I think that's the whole point. That's the whole point of vision. Vision should be idealistic, and it should be, for all practical purposes, impossible. But that doesn't mean we shouldn't try, and it's the milestones that we reach that take us closer to that ideal that make us feel that our life and our work have meaning, and we're contributing to something bigger than ourselves. You know, just because it's impossible doesn't mean we shouldn't try. As I said, we're still moving the ball down the field. We're still making progress. Things are still getting better, even if we never get to that ideal state. So I think idealism is a good thing. You know, in the word infinite game, one of the beautiful and tragic aspects of life, human life at least, at least from the biological perspective, is that it ends. So sadly, it's. To some people, yeah. Fine, it's tragic to some people, or is it ends, it ends? I think some people believe that it ends on the day you die, and some people think it continues on. There's, and there's a lot of different ways to think what continues on even looks like. But let me drag it back to the personal. Sure. Which is, how do you think about your own mortality? Are you afraid of death? How do you think about your own death? I definitely haven't accomplished everything I want to contribute to. I would like more time on this earth to keep working towards that vision. Do you think about the fact that it ends for you? Are you cognizant of it? Of course I'm cognizant of it. I mean, aren't we all? I don't dwell on it. I'm aware of it. I know that my life is finite, and I know that I have a certain amount of time left on this planet, and I'd like to make that time be valuable. You know, some people would think that ideas kind of allow you to have a certain kind of immortality. Yeah. Maybe to linger on this kind of question. So first to push back on the, you said that everyone's cognizant of their mortality. 
There's a guy named Ernest Becker who would disagree, that you basically say that most of human cognition is created by us trying to create an illusion and try to hide the fact from ourselves, the fact that we're gonna die, to try to think that it's all gonna go on forever. But the fact that we know that it doesn't. Yes, but this mix of denial. I mean, I think the book's called Denial of Death. It's this constant denial that we're running away from. In fact, some would argue that the inspiration, the incredible ideas you've put out there, your TED Talk has been seen by millions and millions of people, right? It's just you trying to desperately fight the fact that you are biologically mortal. Your creative genius comes from the fact that you're trying to create ideas that live on long past you. Well, that's very nice of you. I mean, I would like my ideas to live on beyond me because I think that is a good test that those ideas have value in the lives of others. I think that's a good test. That others would continue to talk about or share the ideas long after I'm gone, I think is perhaps the greatest compliment one can get for one's own work. But I don't think it's my awareness of my mortality that drives me to do it. It's my desire to contribute that drives me to do it. It's the optimist vision. It's the pleasure and the fulfillment you get from inspiring others. It's as pure as that. Let me ask, listen, I'm rushing. I'm trying to get you to get you into these dark areas. Is the ego tied up into it somehow? So your name is extremely well known. If your name wasn't attached to it, do you think you would act differently? I mean, for years, I hated that my name was attached to it. I had a rule for years that I wouldn't have my face on the front page of the website. I had a fight with the publisher because I didn't want my name big on the book. I wanted it tiny on the book. Because I kept telling them it's not about me, it's about the ideas. They wanted to put my name on the top of my book, I refused. None of my books have my names on the top because I won't let them. They would like very much to put my name on the top of the book, but the idea has to be bigger than me. I'm not bigger than the idea. That's beautifully put. Do you think ego? But I also am aware that I've become recognized as the messenger. And even though I still think the message is bigger than me, I recognize that I have a responsibility as the messenger. And whether I like it or not is irrelevant. I accept the responsibility, I'm happy to do it. I'm not sure how to phrase this, but there's a large part of the culture right now that emphasizes all the things that nobody disagrees with, which is health, sleep, diet, relaxation, meditation, vacation, are really important. And there's no, it's like, you can't really argue against that. In fact, people. Less sleep. Less. Just, I'm joking. Yes, well, that's the thing. I often speak to the fact that passion and love for what you're doing and the two words hard work, especially in the engineering fields, are more important than, are more important to prioritize than sleep. Even though sleep is really important, your mind should be obsessed with the hard work, with the passion, and so on. And then I get some pushback, of course, from people. What do you make sense of that? Is that just me, the crazy Russian engineer, really pushing hard work? Probably. I think that that's a short term strategy. I think if you sacrifice your health for the work, at some point, it catches up with you. 
And at some point, it's like going, going, going, and you get sick. Your body will shut down for you if you refuse to take care of yourself. You get sick. It's what happens. Sometimes, more severe illness than something that just slows you down. So I think taking, getting sleep, I mean, there have been studies on this that, executives, for example, who get a full night's sleep and stop at a reasonable hour, actually accomplish more, are more productive than people who work and burn the midnight oil because their brains are working better because they're well rested. So, you know, working hard, yes, but why not work smart? I think that giving our minds and our bodies rest makes us more efficient. I think just driving, driving, driving, driving is a short term, it's a short term strategy. So, but to push back on that a little bit, the annoying thing is you're like 100% right in terms of science, right? But the thing is, because you're 100% right, that weak part of your mind uses that fact to convince you, like what, so, you know, I get all kinds of, my mind comes up with all kinds of excuses to try to convince me that I shouldn't be doing what I'm doing. To rationalize. To rationalize. And so what I have a sense, I think what you said about executives and leaders is absolutely right, but there's the early days. The early days of madness and passion. For sure. Then I feel like emphasizing sleep, thinking about sleep is giving yourself a way out from the fact that those early days, especially, can be suffering. As long, it's not sustainable. You know, it's not sustainable. Sure, if you're investing all that energy in something at the beginning to get it up and running, then at some point you're gonna have to slow down. Or your body will slow you down for you. Like, you can choose or your body can choose. I mean. So, okay, so you don't think, from my perspective, it feels like people have gotten a little bit soft. But you're saying, no. I think that there seems evidence that working harder and later have taken a back seat. I've taken a back seat. I think we have to be careful with broad generalizations. But I think if you go into the workplace, there are people who would complain that more people now than before, you know, look at their watches and say, oops, five o clock, goodbye. Right? Now, is that a problem with the people? You're saying it's the people giving themselves excuses and people who don't work hard. Or is it the organizations aren't giving them something to believe in, something to be passionate about? We can't manufacture passion. You can't just tell someone, be passionate. You know, that's not how it works. Passion's an output, not an input. Like if I believe in something and I wanna contribute all that energy to do it, we call that passion. You know, working hard for something we love is passion. Working hard for something we don't care about is called stress. But we're working hard either way. So I think the organizations bear some accountability and our leaders bear some accountability, which is if they're not offering a sense of purpose, if they're not offering us a sense of cause, if they're not telling us that our work is worth more than simply the money it makes, then yeah, I'm gonna come at five o clock because I don't really care about making you money. Remember, we live in a world right now where a lot of people, rather a few people, are getting rich on the hard work of others. And so I think when people look up and say, well, why would I do that? 
I'll just, if you're not gonna look after me, and you're gonna lay me off at the end of the year because you missed your arbitrary projections, then why would I offer my hard work and loyalty to you? So I think, I don't think we can immediately blame people for going soft. I think we can blame leaders for their inability or failure to offer their people something bigger than making a product or making money. Yeah, so that's brilliant. And Start With Why, Leaders Eat Last, your books. You basically talk about what it takes to be a good leader. And so some of the blame should go on the leader, but how much of it is on finding your passion? How much is it on the individual? And allowing yourself to pursue that passion, pushing yourself to your limits, to really take concrete steps along your path towards that passion. Yeah, there's mutual responsibility. There's mutual accountability. I mean, we're responsible as individuals to find the organizations and find the leaders that inspire us. And organizations are responsible for maintaining that flame and giving people who believe what they believe, you know, a chance to contribute. Sort of to linger on it, have you by chance seen the movie Whiplash? Yes. Again, maybe I'm romanticizing suffering. Again. It's the Russian in you. It's the Russian. Yeah. The Russians love suffering. But for people who haven't seen it, the movie Whiplash has a drum instructor that pushes the drummer to his limits to bring out the best in him. And there's a toxic nature to it. There's suffering in it. Like you've worked with a lot of great leaders, a lot of great individuals. So is that toxic relationship as toxic as it appears in the movie? Or is that fundamental? I've seen that relationship, especially in the past with Olympic athletes, especially in athletics, extreme performers seem to do wonders. It does wonders for me. There's some of my best relationships, now I'm not representative of everyone certainly, but some of my best mentee and mentor relationships have been toxic from an external perspective. What do you make of that movie? What do you make of that kind of relationship? That's not my favorite movie. Okay, so you don't think that's a healthy, you don't think that kind of relationship is a great example of a great leader? No, I think it's a short term strategy. I mean, short term. I mean, look, being hard on someone is not the same as toxicity. If you go to the Marine Corps, a drill instructor will be very hard on their Marines. And then, but still, even on the last day of bootcamp, they'll take their hat off and they'll become a human. But of all the drill instructors, you know, the three or four main drill instructors assigned to a group of recruits, the one that they all want the respect of is the one that's the hardest on them. That's true. And you hear, you know, there's plenty of stories of people who want to earn the respect of a hard parent or a hard teacher. But fundamentally, that parent, that teacher, that drill instructor has to believe in that person, has to see potential in them. It's not a formula, which is, if I'm hard on people, they'll do well. There still has to be love. It has to be done with absolute love. And it has to be done responsibly. I mean, some people can take a little more pressure than others, but it's not, I think it's irresponsible to think of it as a formula that if I'm just toxic at people, they will do well.
It depends on their personalities. First of all, it works for some, but not all. And second of all, it can't be done willy nilly. It has to still be done with care and love. And sometimes you can get equal or better results without all of the toxicity. So one of the, I guess toxicity on my part was a really bad word to use, but if we talk about what makes a good leader and just look at an example in particular, looking at Elon Musk, he's known to push people to their limits in a way that I think really challenges people in a way they've never been challenged before to do the impossible. But it can really break people. And Jobs was hard, and Amazon is hard. But the thing that's important is none of them lie about it. People ask me about Amazon all the time. Like Jeff Bezos never lied about it. Even the ones who like Amazon don't last more than a couple of years before they burn out. But when we're honest about the culture, then it gives people the opportunity who like to work in that kind of culture to choose to work in that kind of culture, as opposed to pretending and saying, oh no, it's all lovey dovey here. And then you show up and it's the furthest thing from it. So, I mean, I think the reputations of putting a lot of pressure on people, Jobs was not an easy man to work for. He pushed people, but everyone who worked there was given the space to create and do things that they would not have been able to do anywhere else and work at a level that they didn't work anywhere else. And Jobs didn't have all the answers. I mean, he pushed his people to come up with answers. He wasn't just looking for people to execute his ideas. And people did, people accomplished more than they thought they were capable of, which is wonderful. How do you, you're talking about the infinite game and not thinking about too short term. And yet you see some of the most brilliant people in the world being pushed by Elon Musk to accomplish some of the most incredible things. When we're talking about Autopilot, when we're talking about some of the hardware engineering, and they do some of the best work of their life and then leave. How do you balance that in terms of what it takes to be a good leader, what it takes to accomplish great things in your life? So I think there's a difference between someone who can get a lot out of people in the short term and building an organization that can sustain beyond any individual. There's a difference. When you say beyond any individual, you mean beyond like if the leader dies. Correct. Like could Tesla continue to do what it's doing without Elon Musk? And you're perhaps implying, which is a very interesting question, that it cannot. I don't know. You know, the argument you're making of this person who pushes everyone arguably is not a repeatable model, right? You know, is Apple the same without Steve Jobs or is it slowly moving in a different direction? Or has he established something that could be resurrected with the right leader? That was his dream, I think, is to build an organization that lives on beyond him. At least I remember reading that somewhere. I think that's what a lot of leaders desire, which is to create something that was bigger than them. You know, most businesses, most entrepreneurial ventures could not pass the school bus test, which is if the founder was hit by a school bus, would everyone continue the business without them or would they all just go find jobs?
And the vast majority of companies would fail that test, you know, especially in the entrepreneurial world, that if you take the inspired visionary leader away, the whole thing collapses. So is that a business or is that just a force of personality? And a lot of entrepreneurs, you know, face that reality, which is they have to be in every meeting, make every decision, you know, come up with every idea, because if they don't, who will? And the question is, well, what have you done to build your bench? Sometimes it's ego, the belief that only I can. Sometimes it's just that things went so well for so long that they just forgot. And sometimes it's a failure to build the training programs or hire the right people that could replace you, who are maybe smarter and better. And browbeating people is only one strategy. I don't think it's necessarily the only strategy, nor is it always the best strategy. I think people get to choose the cultures they wanna work in. This is why I think companies should be honest about the kind of culture that they've created. You know, I heard a story about Apple where somebody came in from a big company, you know, he had accomplished a lot and his ego was very large and he was going on about how he did this and he did that and he did this and he did that. And somebody from Apple said, we don't care what you've done. The question is, what are you gonna do? And that's, you know, for somebody who wants to be pushed, that's the place you go because you choose to be pushed. Now, we all wanna be pushed to some degree, you know, anybody who wants to, you know, accomplish anything in this world wants to be pushed to some degree, whether it's through self pressure or external pressure or, you know, public pressure, whatever it is. But I think this whole idea of one size fits all is a false narrative of how leadership works, but what all leadership requires is creating an environment in which people can work at their natural best. But you have a sense that it's possible to create a business where it lives on beyond you. So if we look at now, if we just look at this current moment, I just recently talked to Jack Dorsey, CEO of Twitter, and he's under a lot of pressure now. I don't know if you're aware of the news that there's a push to remove him as CEO of Twitter because he's already the CEO of another incredibly successful company, plus he wants to go to Africa, to live a few months in Africa, to connect with the world that's outside of Silicon Valley, and sort of, there's this idea, well, can Twitter live without Jack? We'll find out. But you have a general, as a student of great leadership, you have a general sense that it's possible. Yeah, of course it's possible. I mean, what Bill Gates built with Microsoft may not have survived Steve Ballmer if the company weren't so rich, but Satya Nadella is putting it back on track again. It's become a visionary company again. It's attracting great talent again. It went through a period where they couldn't get the best talent and the best talent was leaving. Now people wanna work for Microsoft again. Well, that's not because of pressure. Ballmer put more pressure on people mainly to hit numbers than anything else. That didn't work. Yes. Right? And so the question is, what kind of pressure are we putting on people? Are we putting pressure on people to hit numbers or hit arbitrary deadlines, or are we putting pressure on people because we believe that they can do better work?
And the work that we're trying to do is to advance a vision that's bigger than all of us. And if you're gonna put pressure on people, it better be for the right reason. Like if you're gonna put pressure on me, it better be for a worthwhile reason. If it's just to hit a goal, if it's just to hit some arbitrary date or some arbitrary number or make a stock price hit some target, you can keep it, I'm outta here. But if you wanna put pressure on me because we are brothers and sisters in arms working to advance a cause bigger than ourselves, that we believe whatever we're gonna build will significantly contribute to the greater good of society, then go ahead, I'll take the pressure. And if you look at the Apples and if you look at the Elon Musk's, the Jobs and the Elon Musk, they fundamentally believed that what they were doing would improve society. And it was for the good of humankind. And so the pressure, in other words, what they were doing was more important, more valuable than any individual on the team. And so the pressure they put on people served a greater good. And so we looked to the left and we looked to the right to each other and said, we're in this together. We accept this, we want this. But if it's just pressure to hit a number or make the widget move a little faster, that's soul sucking. That's not passion, that's stress. And I think a lot of leaders confuse that making people work hard is not what makes them passionate. Giving to them something to believe in and work on is what drives passion. And when you have that, then turning up the pressure only brings people together, drives them further. If done the right way. If done the right way. Speaking of pressure, I'm gonna give you 90 seconds to answer the last question, which is if I told you that tomorrow was your last day to live, we talked about mortality, sunrise to sunset, can you tell me, can you take me through the day? What do you think that day would involve? You can't spend it with your family, I told you as well. I would probably want to fill all of my senses with things that excite my senses. I'd want to look at beautiful art. I'd want to listen to beautiful music. I'd want to taste incredible food. I'd want to smell amazing tastes. I'd want to touch something that's beautiful to touch. I'd want all of my senses to just be consumed with things that I find beautiful. And you talked about this idea of we don't do it often these days, of just listening to music, turning off all the devices and actually taking in and listening to music. So as an addendum, if we were to talk about music, what song would you be blasting on this last day you're alive? Is it Led Zeppelin? What are we talking about? That I love. No, no. There's probably gonna be a Beatles song in there. There'll definitely be some Beethoven in there. The classics. The classics. Yeah, exactly. Well, thank you so much for talking today. Thank you for making time for it. Under pressure, we made it happen. It was great. Thanks for listening to this conversation with Simon Sinek. And thank you to our sponsors, Cash App and Masterclass. Please consider supporting the podcast by downloading Cash App and using code LexPodcast and signing up to Masterclass at masterclass.com slash Lex. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Friedman. And now let me leave you with some words from Simon Sinek. There are only two ways to influence human behavior. 
You can manipulate it or you can inspire it. Thank you for listening and hope to see you next time.
Simon Sinek: Leadership, Hard Work, Optimism and the Infinite Game | Lex Fridman Podcast #82
The following is a conversation with Nick Bostrom, a philosopher at University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book, Superintelligence. I can see talking to Nick multiple times in this podcast, many hours each time, because he has done some incredible work in artificial intelligence, in technology, space, science, and really philosophy in general, but we have to start somewhere. This conversation was recorded before the outbreak of the coronavirus pandemic that both Nick and I, I'm sure, will have a lot to say about next time we speak, and perhaps that is for the best, because the deepest lessons can be learned only in retrospect when the storm has passed. I do recommend you read many of his papers on the topic of existential risk, including the technical report titled Global Catastrophic Risks Survey that he coauthored with Anders Sandberg. For everyone feeling the medical, psychological, and financial burden of this crisis, I'm sending love your way. Stay strong. We're in this together. We'll beat this thing. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar. Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of fractional orders is an algorithmic marvel. So big props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up to the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier. So again, if you get Cash App from the App Store, Google Play, and use the code LEXPODCAST, you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Nick Bostrom. At the risk of asking the Beatles to play yesterday or the Rolling Stones to play Satisfaction, let me ask you the basics. What is the simulation hypothesis? That we are living in a computer simulation. What is a computer simulation? How are we supposed to even think about that? Well, so the hypothesis is meant to be understood in a literal sense, not that we can kind of metaphorically view the universe as an information processing physical system, but that there is some advanced civilization who built a lot of computers and that what we experience is an effect of what's going on inside one of those computers so that the world around us, our own brains, everything we see and perceive and think and feel would exist because this computer is running certain programs. 
So do you think of this computer as something similar to the computers of today, these deterministic sort of Turing machine type things? Is that what we're supposed to imagine or we're supposed to think of something more like a quantum mechanical system? Something much bigger, something much more complicated, something much more mysterious from our current perspective? The ones we have today would do fine, I mean, bigger, certainly. You'd need more memory and more processing power. I don't think anything else would be required. Now, it might well be that they do have additional, maybe they have quantum computers and other things that would give them even more of, it seems kind of plausible, but I don't think it's a necessary assumption in order to get to the conclusion that a technologically mature civilization would be able to create these kinds of computer simulations with conscious beings inside them. So do you think the simulation hypothesis is an idea that's most useful in philosophy, computer science, physics, sort of where do you see it having valuable kind of starting point in terms of a thought experiment of it? Is it useful? I guess it's more informative and interesting and maybe important, but it's not designed to be useful for something else. Okay, interesting, sure. But is it philosophically interesting or is there some kind of implications of computer science and physics? I think not so much for computer science or physics per se. Certainly it would be of interest in philosophy, I think also to say cosmology or physics in as much as you're interested in the fundamental building blocks of the world and the rules that govern it. If we are in a simulation, there is then the possibility that say physics at the level where the computer running the simulation could be different from the physics governing phenomena in the simulation. So I think it might be interesting from point of view of religion or just for kind of trying to figure out what the heck is going on. So we mentioned the simulation hypothesis so far. There is also the simulation argument, which I tend to make a distinction. So simulation hypothesis, we are living in a computer simulation. Simulation argument, this argument that tries to show that one of three propositions is true, one of which is the simulation hypothesis, but there are two alternatives in the original simulation argument, which we can get to. Yeah, let's go there. By the way, confusing terms because people will, I think, probably naturally think simulation argument equals simulation hypothesis, just terminology wise. But let's go there. So simulation hypothesis means that we are living in a simulations, the hypothesis that we're living in a simulation, simulation argument has these three complete possibilities that cover all possibilities. So what are they? Yeah. So it's like a disjunction. It says at least one of these three is true, although it doesn't on its own tell us which one. So the first one is that almost all civilizations that are current stage of technological development go extinct before they reach technological maturity. So there is some great filter that makes it so that basically none of the civilizations throughout maybe a vast cosmos will ever get to realize the full potential of technological development. And this could be, theoretically speaking, this could be because most civilizations kill themselves too eagerly or destroy themselves too eagerly, or it might be super difficult to build a simulation. So the span of time. 
Theoretically it could be both. Now I think it looks like we would technologically be able to get there in a time span that is short compared to, say, the lifetime of planets and other sort of astronomical processes. So your intuition is to build a simulation is not... Well, so this is interesting concept of technological maturity. It's kind of an interesting concept to have other purposes as well. We can see even based on our current limited understanding what some lower bound would be on the capabilities that you could realize by just developing technologies that we already see are possible. So for example, one of my research fellows here, Eric Drexler, back in the 80s, studied molecular manufacturing. That is you could analyze using theoretical tools and computer modeling the performance of various molecularly precise structures that we didn't then and still don't today have the ability to actually fabricate. But you could say that, well, if we could put these atoms together in this way, then the system would be stable and it would rotate at this speed and have all these computational characteristics. And he also outlined some pathways that would enable us to get to this kind of molecularly manufacturing in the fullness of time. And you could do other studies we've done. You could look at the speed at which, say, it would be possible to colonize the galaxy if you had mature technology. We have an upper limit, which is the speed of light. We have sort of a lower current limit, which is how fast current rockets go. We know we can go faster than that by just making them bigger and have more fuel and stuff. We can then start to describe the technological affordances that would exist once a civilization has had enough time to develop, at least those technologies we already know are possible. Then maybe they would discover other new physical phenomena as well that we haven't realized that would enable them to do even more. But at least there is this kind of basic set of capabilities. Can you just link on that, how do we jump from molecular manufacturing to deep space exploration to mature technology? What's the connection there? Well, so these would be two examples of technological capability sets that we can have a high degree of confidence are physically possible in our universe and that a civilization that was allowed to continue to develop its science and technology would eventually attain. You can intuit like, we can kind of see the set of breakthroughs that are likely to happen. So you can see like, what did you call it, the technological set? With computers, maybe it's easiest. One is we could just imagine bigger computers using exactly the same parts that we have. So you can kind of scale things that way, right? But you could also make processors a bit faster. If you had this molecular nanotechnology that Eric Drexler described, he characterized a kind of crude computer built with these parts that would perform at a million times the human brain while being significantly smaller, the size of a sugar cube. And he made no claim that that's the optimum computing structure, like for all you know, we could build faster computers that would be more efficient, but at least you could do that if you had the ability to do things that were atomically precise. I mean, so you can then combine these two. You could have this kind of nanomolecular ability to build things atom by atom and then say at this as a spatial scale that would be attainable through space colonizing technology. 
You could then start, for example, to characterize a lower bound on the amount of computing power that a technologically mature civilization would have. If it could grab resources, you know, planets and so forth, and then use this molecular nanotechnology to optimize them for computing, you'd get a very, very high lower bound on the amount of compute. So sorry, just to define some terms, so technologically mature civilization is one that took that piece of technology to its lower bound. What is a technologically mature civilization? So that means it's a stronger concept than we really need for the simulation hypothesis. I just think it's interesting in its own right. So it would be the idea that there is some stage of technological development where you've basically maxed out, that you developed all those general purpose, widely useful technologies that could be developed, or at least kind of come very close to the, you know, 99.9% there or something. So that's an independent question. You can think either that there is such a ceiling, or you might think it just goes, the technology tree just goes on forever. Where does your sense fall? I would guess that there is a maximum that you would start to asymptote towards. So new things won't keep springing up, new ceilings. In terms of basic technological capabilities, I think that, yeah, there is like a finite set of laws that can exist in this universe. Moreover, I mean, I wouldn't be that surprised if we actually reached close to that level fairly shortly after we have, say, machine superintelligence. So I don't think it would take millions of years for a human originating civilization to begin to do this. It's more likely to happen on historical timescales. But that's an independent speculation from the simulation argument. I mean, for the purpose of the simulation argument, it doesn't really matter whether it goes indefinitely far up or whether there is a ceiling, as long as we know we can at least get to a certain level. And it also doesn't matter whether that's going to happen in 100 years or 5,000 years or 50 million years. Like the timescales really don't make any difference for this. Can you look on that a little bit? Like there's a big difference between 100 years and 10 million years. So it doesn't really not matter because you just said it doesn't matter if we jump scales to beyond historical scales. So we described that. So for the simulation argument, sort of doesn't it matter that we if it takes 10 million years, it gives us a lot more opportunity to destroy civilization in the meantime? Yeah, well, so it would shift around the probabilities between these three alternatives. That is, if we are very, very far away from being able to create these simulations, if it's like, say, billions of years into the future, then it's more likely that we will fail ever to get there. There's more time for us to kind of go extinct along the way. And so this is similarly for other civilizations. So it is important to think about how hard it is to build a simulation. In terms of figuring out which of the disjuncts. But for the simulation argument itself, which is agnostic as to which of these three alternatives is true. Yeah. Okay. It's like you don't have to like the simulation argument would be true whether or not we thought this could be done in 500 years or it would take 500 million years. No, for sure. The simulation argument stands. I mean, I'm sure there might be some people who oppose it, but it doesn't matter. 
I mean, it's very nice those three cases cover it. But the fun part is, at least not saying what the probabilities are, but kind of thinking about, kind of intuiting, reasoning about what's more likely, what are the kinds of things that would make some of the arguments less or more likely. But let's actually, I don't think we went through them. So number one is we destroy ourselves before we ever create the simulation. Right. So that's kind of sad, but we have to think not just what might destroy us. I mean, so there could be some whatever disaster, some meteor slamming the earth a few years from now that could destroy us. Right. But you'd have to postulate, in order for this first disjunct to be true, that almost all civilizations throughout the cosmos also failed to reach technological maturity. And the underlying assumption there is that there is likely a very large number of other intelligent civilizations. Well, if there are, yeah, then they would virtually all have to succumb in the same way. I mean, then that leads off to another, I guess there are a lot of little digressions that are interesting. Definitely, let's go there. Let's go there. Keep dragging us back. Well, there are these, there is a set of basic questions that always come up in conversations with interesting people, like the Fermi paradox, like, you could almost define whether a person is interesting by whether at some point the question of the Fermi paradox comes up, like, well, so for what it's worth, it looks to me that the universe is very big. I mean, in fact, according to the most popular current cosmological theories, infinitely big. And so then it would follow pretty trivially that it would contain a lot of other civilizations, in fact, infinitely many. If you have some local stochasticity and infinitely many, it's like, you know, infinitely many lumps of matter, one next to another, there's kind of random stuff in each one, then you're going to get all possible outcomes with probability one, infinitely repeated. So then certainly there would be a lot of extraterrestrials out there. Even short of that, if the universe is very big, that might be a finite but large number. If we were literally the only one, yeah, then of course, if we went extinct, then all civilizations at our current stage would have gone extinct before becoming technologically mature. So then it kind of becomes trivially true that a very high fraction of those went extinct. But if we think there are many, I mean, it's interesting, because there are certain things that possibly could kill us, like if you look at existential risks, and it might be a different, like the best answer to what would be most likely to kill us might be a different answer than the best answer to the question, if there is something that kills almost everyone, what would that be? Because that would have to be some risk factor that was kind of uniform over all possible civilizations. So in this, for the sake of this argument, you have to think about not just us, but like every civilization dies out before they create the simulation, or something very close to everybody. Okay. So what's number two? And number two is the convergence hypothesis, that is, that maybe some of these civilizations do make it through to technological maturity, but out of those who do get there, they all lose interest in creating these simulations. So they just have the capability of doing it, but they choose not to.
Not just a few of them decide not to, but out of a million, maybe not even a single one of them would do it. And I think when you say lose interest, that sounds like unlikely because it's like they get bored or whatever, but it could be so many possibilities within that. I mean, losing interest could be, it could be anything from it being exceptionally difficult to do to fundamentally changing the sort of the fabric of reality. If you do it is ethical concerns, all those kinds of things could be exceptionally strong pressures. Well, certainly, I mean, yeah, ethical concerns. I mean, not really too difficult to do. I mean, in a sense, that's the first assumption that you get to technological maturity where you would have the ability using only a tiny fraction of your resources to create many, many simulations. So it wouldn't be the case that they would need to spend half of their GDP forever in order to create one simulation and they had this like difficult debate about whether they should invest half of their GDP for this. It would more be like, well, if any little fraction of the civilization feels like doing this at any point during maybe their millions of years of existence, then that would be millions of simulations. But certainly, there could be many conceivable reasons for why there would be this convert, many possible reasons for not running ancestor simulations or other computer simulations, even if you could do so cheaply. By the way, what's an ancestor simulation? Well, that would be the type of computer simulation that would contain people like those we think have lived on our planet in the past and like ourselves in terms of the types of experiences they have and where those simulated people are conscious. So like not just simulated in the same sense that a non player character would be simulated in the current computer game where it's kind of has like an avatar body and then a very simple mechanism that moves it forward or backwards. But something where the simulated being has a brain, let's say that's simulated at a sufficient level of granularity that it would have the same subjective experiences as we have. So where does consciousness fit into this? Do you think simulation, I guess there are different ways to think about how this can be simulated, just like you're talking about now. Do we have to simulate each brain within the larger simulation? Is it enough to simulate just the brain, just the minds and not the simulation, not the universe itself? Like, is there a different ways to think about this? Yeah, I guess there is a kind of premise in the simulation argument rolled in from philosophy of mind that is that it would be possible to create a conscious mind in a computer. And that what determines whether some system is conscious or not is not like whether it's built from organic biological neurons, but maybe something like what the structure of the computation is that it implements. So we can discuss that if we want, but I think it would be more forward as far as my view that it would be sufficient, say, if you had a computation that was identical to the computation in the human brain down to the level of neurons. So if you had a simulation with 100 billion neurons connected in the same way as the human brain, and you then roll that forward with the same kind of synaptic weights and so forth, so you actually had the same behavior coming out of this as a human with that brain would have done, then I think that would be conscious. 
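As a toy illustration of what "simulate the brain at the level of neurons and roll it forward with the synaptic weights" means computationally, here is a minimal sketch. It is not a claim about how a real brain emulation would work; the network size, dynamics, and update rule are all placeholder assumptions, and a real emulation would need realistic neuron models, plasticity, neuromodulation, sensory input, and about 100 billion neurons rather than a thousand.

```python
import numpy as np

# Minimal sketch of "simulate the neurons and roll the state forward with
# the same synaptic weights," at toy scale. Purely illustrative.

rng = np.random.default_rng(0)
n_neurons = 1000                                        # toy scale, not 100 billion
weights = rng.normal(0, 0.05, (n_neurons, n_neurons))   # the "synaptic weights"
state = rng.random(n_neurons)                           # current activation of each neuron

def step(state, weights):
    """One discrete time step: each neuron integrates its weighted inputs."""
    return np.tanh(weights @ state)

for t in range(100):                                    # roll the state forward in time
    state = step(state, weights)

print("state after 100 steps:", state[:5])
```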
Now it's possible you could also generate consciousness without having that detailed assimilation, there I'm getting more uncertain exactly how much you could simplify or abstract away. Can you look on that? What do you mean? I missed where you're placing consciousness in the second. Well, so if you are a computationalist, do you think that what creates consciousness is the implementation of a computation? Some property, emergent property of the computation itself. Yeah. That's the idea. Yeah, you could say that. But then the question is, what's the class of computations such that when they are run, consciousness emerges? So if you just have something that adds one plus one plus one plus one, like a simple computation, you think maybe that's not going to have any consciousness. If on the other hand, the computation is one like our human brains are performing, where as part of the computation, there is a global workspace, a sophisticated attention mechanism, there is self representations of other cognitive processes and a whole lot of other things that possibly would be conscious. And in fact, if it's exactly like ours, I think definitely it would. But exactly how much less than the full computation that the human brain is performing would be required is a little bit, I think, of an open question. He asked another interesting question as well, which is, would it be sufficient to just have say the brain or would you need the environment in order to generate the same kind of experiences that we have? And there is a bunch of stuff we don't know. I mean, if you look at, say, current virtual reality environments, one thing that's clear is that we don't have to simulate all details of them all the time in order for, say, the human player to have the perception that there is a full reality and that you can have say procedurally generated where you might only render a scene when it's actually within the view of the player character. And so similarly, if this environment that we perceive is simulated, it might be that all of the parts that come into our view are rendered at any given time. And a lot of aspects that never come into view, say the details of this microphone I'm talking into, exactly what each atom is doing at any given point in time, might not be part of the simulation, only a more coarse grained representation. So that to me is actually from an engineering perspective, why the simulation hypothesis is really interesting to think about is how difficult is it to fake sort of in a virtual reality context, I don't know if fake is the right word, but to construct a reality that is sufficiently real to us to be immersive in the way that the physical world is. I think that's actually probably an answerable question of psychology, of computer science, of how, where's the line where it becomes so immersive that you don't want to leave that world? Yeah, or that you don't realize while you're in it that it is a virtual world. Yeah, those are two actually questions, yours is the more sort of the good question about the realism, but mine, from my perspective, what's interesting is it doesn't have to be real, but how can we construct a world that we wouldn't want to leave? Yeah, I mean, I think that might be too low a bar, I mean, if you think, say when people first had pong or something like that, I'm sure there were people who wanted to keep playing it for a long time because it was fun and they wanted to be in this little world. 
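Returning briefly to the point above about only rendering what comes into view: the trick being described is essentially lazy, on-demand generation, the same idea procedurally generated games use. Here is a minimal sketch under stated assumptions; the function names, detail levels, and hashing scheme are all illustrative, not a description of how any real simulation would have to work.

```python
import hashlib

# Sketch of "render only what the observer looks at": regions of the world
# are generated deterministically from their coordinates the first time they
# are observed, and only at the level of detail actually requested.

_cache = {}

def region_detail(x, y, level):
    """Deterministically derive a region's content at a given detail level."""
    key = f"{x},{y},{level}".encode()
    return hashlib.sha256(key).hexdigest()[: 4 * level]  # higher level, more detail

def observe(x, y, level=1):
    """Generate (and cache) a region lazily, only when it is observed."""
    if (x, y, level) not in _cache:
        _cache[(x, y, level)] = region_detail(x, y, level)
    return _cache[(x, y, level)]

print(observe(10, 20))             # coarse view, generated on demand
print(observe(10, 20, level=8))    # same region, refined only when examined closely
print(len(_cache), "regions materialized so far")  # the rest of the world costs nothing
```

The design point is that anything never observed is never computed, and a coarse-grained representation can be refined only at the moment someone looks closely, which is the intuition behind not simulating every atom of the microphone.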
I'm not sure we would say it's immersive, I mean, I guess in some sense it is, but like an absorbing activity doesn't even have to be. But they left that world though, that's the thing. So like, I think that bar is deceivingly high. So they eventually left, so you can play pong or Starcraft or whatever more sophisticated games for hours, for months, you know, while the work has to be in a big addiction, but eventually they escaped that. So you mean when it's absorbing enough that you would spend your entire, you would choose to spend your entire life in there. And then thereby changing the concept of what reality is, because your reality becomes the game. Not because you're fooled, but because you've made that choice. Yeah, and it made, different people might have different preferences regarding that. Some might, even if you had any perfect virtual reality, might still prefer not to spend the rest of their lives there. I mean, in philosophy, there's this experience machine, thought experiment. Have you come across this? So Robert Nozick had this thought experiment where you imagine some crazy super duper neuroscientist of the future have created a machine that could give you any experience you want if you step in there. And for the rest of your life, you can kind of pre programmed it in different ways. So your fun dreams could come true, you could, whatever you dream, you want to be a great artist, a great lover, like have a wonderful life, all of these things. If you step into the experience machine will be your experiences, constantly happy. But you would kind of disconnect from the rest of reality and you would float there in a tank. And so Nozick thought that most people would choose not to enter the experience machine. I mean, many might want to go there for a holiday, but they wouldn't want to have to check out of existence permanently. And so he thought that was an argument against certain views of value according to what we value is a function of what we experience. Because in the experience machine, you could have any experience you want, and yet many people would think that would not be much value. So therefore, what we value depends on other things than what we experience. So okay, can you can you take that argument further? What about the fact that maybe what we value is the up and down of life? So you could have up and downs in the experience machine, right? But what can't you have in the experience machine? Well, I mean, that then becomes an interesting question to explore. But for example, real connection with other people, if the experience machine is a solo machine where it's only you, like that's something you wouldn't have there. You would have this subjective experience that would be like fake people. But when if you gave somebody flowers, there wouldn't be anybody there who actually got happy. It would just be a little simulation of somebody smiling. But the simulation would not be the kind of simulation I'm talking about in the simulation argument where the simulated creature is conscious, it would just be a kind of smiley face that would look perfectly real to you. So we're now drawing a distinction between appear to be perfectly real and actually being real. Yeah. Um, so that could be one thing, I mean, like a big impact on history, maybe is also something you won't have if you check into this experience machine. So some people might actually feel the life I want to have for me is one where I have a big positive impact on history unfolds. 
So you could kind of explore these different possible explanations for why it is you wouldn't want to go into the experience machine, if that's what you feel. And one interesting observation regarding this Nozick thought experiment and the conclusions he wanted to draw from it is how much of it is a kind of status quo effect. So a lot of people might not want to give up their current reality to plug into this dream machine. But if they instead were told, well, what you've experienced up to this point was a dream; now, do you want to disconnect from this and enter the real world, when you have no idea maybe what the real world is? Or maybe you could say, well, you're actually a farmer in Peru, growing, you know, peanuts, and you could live for the rest of your life in this way, or would you want to continue your dream life as Lex Fridman, going around the world making podcasts and doing research? So if the status quo was that they were actually in the experience machine, I think a lot of people might then prefer to live the life that they are familiar with rather than sort of bail out into... So that's interesting, the change itself, the leap. Yeah, so it might not be so much the reality itself that we're after. But it's more that we are maybe involved in certain projects and relationships. And we have, you know, a self-identity, and these things that our values are kind of connected with carrying forward. And then whether it's inside a tank or outside a tank in Peru, or whether inside a computer or outside a computer, that's kind of less important to what we ultimately care about. Yeah, but still, so just to linger on it, it is interesting. I find, maybe people are different, but I find myself quite willing to take the leap to the farmer in Peru, especially as the virtual reality systems become more realistic. I find that possibility appealing, and I think more people would take that leap. But so in this thought experiment, just to make sure we are understanding, so in this case, the farmer in Peru would not be a virtual reality, that would be the real thing, your life, like before this whole experience machine started. Well, I kind of assumed that from the description; you're being very specific, but that kind of idea just, like, washes away the concept of what's real. I'm still a little hesitant about your kind of distinction between real and illusion. Because when you can have an illusion that feels, I mean, that looks real, I don't know how you can definitively say something is real or not, like what's a good way to prove that something is real in that context? Well, so I guess in this case, it's more a stipulation. In one case, you're floating in a tank with these wires by the super duper neuroscientists plugging into your head, giving you, like, Lex Fridman experiences. In the other, you're actually tilling the soil in Peru, growing peanuts, and then those peanuts are being eaten by other people all around the world who buy the exports. Those are two different possible situations within one and the same real world that you could choose to occupy. But just to be clear, when you're in a vat with wires and the neuroscientists, you can still go farming in Peru, right? No, well, if you wanted to, you could have the experience of farming in Peru, but there wouldn't actually be any peanuts grown. But what makes a peanut, so a peanut could be grown and you could feed things with that peanut, and why can't all of that be done in a simulation?
I hope, first of all, that they actually have peanut farms in Peru, I guess we'll get a lot of comments otherwise from Angrit. I was way up to the point when you started talking about Peru peanuts, that's when I realized you're relying out of these. In that climate. No, I mean, I think, I mean, in the simulation, I think there is a sense, the important sense in which it would all be real. Nevertheless, there is a distinction between inside the simulation and outside the simulation. Or in the case of Nozick's thought experiment, whether you're in the vat or outside the vat, and some of those differences may or may not be important. I mean, that comes down to your values and preferences. So if the, if the experience machine only gives you the experience of growing peanuts, but you're the only one in the experience machines. No, but there's other, you can, within the experience machine, others can plug in. Well, there are versions of the experience machine. So in fact, you might want to have, distinguish different thought experiments, different versions of it. I see. So in, like in the original thought experiment, maybe it's only you, right? And you think, I wouldn't want to go in there. Well, that tells you something interesting about what you value and what you care about. Then you could say, well, what if you add the fact that there would be other people in there and you would interact with them? Well, it starts to make it more attractive, right? Then you could add in, well, what if you could also have important longterm effects on human history and the world, and you could actually do something useful, even though you were in there. That makes it maybe even more attractive. Like you could actually have a life that had a purpose and consequences. And so as you sort of add more into it, it becomes more similar to the baseline reality that you were comparing it to. Yeah, but I just think inside the experience machine and without taking those steps you just mentioned, you still have an impact on longterm history of the creatures that live inside that, of the quote unquote fake creatures that live inside that experience machine. And that, like at a certain point, you know, if there's a person waiting for you inside that experience machine, maybe your newly found wife and she dies, she has fear, she has hopes, and she exists in that machine when you plug out, when you unplug yourself and plug back in, she's still there going on about her life. Well, in that case, yeah, she starts to have more of an independent existence. Independent existence. But it depends, I think, on how she's implemented in the experience machine. Take one limit case where all she is is a static picture on the wall, a photograph. So you think, well, I can look at her, right? But that's it. There's no... Then you think, well, it doesn't really matter much what happens to that, any more than a normal photograph if you tear it up, right? It means you can't see it anymore, but you haven't harmed the person whose picture you tore up. But if she's actually implemented, say, at a neural level of detail so that she's a fully realized digital mind with the same behavioral repertoire as you have, then very plausibly she would be a conscious person like you are. And then what you do in this experience machine would have real consequences for how this other mind felt. So you have to specify which of these experience machines you're talking about. 
I think it's not entirely obvious that it would be possible to have an experience machine that gave you a normal set of human experiences, which include experiences of interacting with other people, without that also generating consciousnesses corresponding to those other people. That is, if you create another entity that you perceive and interact with, that to you looks entirely realistic. Not just when you say hello, they say hello back, but you have a rich interaction, many days, deep conversations. It might be that the only possible way of implementing that would be one that also has a side effect of instantiating this other person in enough detail that you would have a second consciousness there. I think that's to some extent an open question. So you don't think it's possible to fake consciousness and fake intelligence? Well, it might be. I mean, I think you can certainly fake, if you have a very limited interaction with somebody, you could certainly fake that. If all you have to go on is somebody said hello to you, that's not enough for you to tell whether that was a real person there, or a prerecorded message, or a very superficial simulation that has no consciousness, because that's something easy to fake. We can already fake it now; you can just play back a voice recording. But if you have a richer set of interactions where you're allowed to ask open ended questions and probe from different angles, you couldn't give canned answers to all of the possible ways that you could probe it, then it starts to become more plausible that the only way to realize this thing in such a way that you would get the right answer from any which angle you probed it, would be a way of instantiating it where you also instantiated a conscious mind. Yeah, I'm with you on the intelligence part, but is there something about me that says consciousness is easier to fake? Like, I've recently gotten my hands on a lot of Roombas, don't ask me why or how. They're just a nice robotic mobile platform for experiments. And I've made them scream and/or moan in pain and so on, just to see how I respond to them. And it's just a sort of psychological experiment on myself. And I think they appear conscious to me pretty quickly. To me, at least, my brain can be tricked quite easily. I'd say if I introspect, it's harder for me to be tricked that something is intelligent. So I just have this feeling that inside this experience machine, just saying that you're conscious and having certain qualities of the interaction, like being able to suffer, like being able to hurt, like being able to wonder about the essence of your own existence, not actually, I mean, creating the illusion that you're wondering about it, is enough to create the illusion of consciousness. And because of that, create a really immersive experience to where you feel like that is the real world. So you think there's a big gap between appearing conscious and being conscious? Or is it that you think it's very easy to be conscious? I'm not actually sure what it means to be conscious. All I'm saying is the illusion of consciousness is enough to create a social interaction that's as good as if the thing was conscious, meaning I'm making it about myself. Right. Yeah. I mean, I guess there are a few different things. One is how good the interaction is, which might, I mean, if you don't really care about like probing hard for whether the thing is conscious, maybe it would be a satisfactory interaction, whether or not you really thought it was conscious.
Now, if you really care about it being conscious in like inside this experience machine, how easy would it be to fake it? And you say, it sounds fairly easy, but then the question is, would that also mean it's very easy to instantiate consciousness? Like it's much more widely spread in the world and we have thought it doesn't require a big human brain with a hundred billion neurons, all you need is some system that exhibits basic intentionality and can respond and you already have consciousness. Like in that case, I guess you still have a close coupling. I guess that case would be where they can come apart, where you could create the appearance of there being a conscious mind with actually not being another conscious mind. I'm somewhat agnostic exactly where these lines go. I think one observation that makes it plausible that you could have very realistic appearances relatively simply, which also is relevant for the simulation argument and in terms of thinking about how realistic would a virtual reality model have to be in order for the simulated creature not to notice that anything was awry. Well, just think of our own humble brains during the wee hours of the night when we are dreaming. Many times, well, dreams are very immersive, but often you also don't realize that you're in a dream. And that's produced by simple primitive three pound lumps of neural matter effortlessly. So if a simple brain like this can create the virtual reality that seems pretty real to us, then how much easier would it be for a super intelligent civilization with planetary sized computers optimized over the eons to create a realistic environment for you to interact with? Yeah. By the way, behind that intuition is that our brain is not that impressive relative to the possibilities of what technology could bring. It's also possible that the brain is the epitome, is the ceiling. How is that possible? Meaning like this is the smartest possible thing that the universe could create. So that seems unlikely to me. Yeah. I mean, for some of these reasons we alluded to earlier in terms of designs we already have for computers that would be faster by many orders of magnitude than the human brain. Yeah. We can see that the constraints, the cognitive constraints in themselves is what enables the intelligence. So the more powerful you make the computer, the less likely it is to become super intelligent. This is where I say dumb things to push back on that statement. Yeah. I'm not sure I thought that we might. No. I mean, so there are different dimensions of intelligence. A simple one is just speed. Like if you can solve the same challenge faster in some sense, you're like smarter. So there I think we have very strong evidence for thinking that you could have a computer in this universe that would be much faster than the human brain and therefore have speed super intelligence, like be completely superior, maybe a million times faster. Then maybe there are other ways in which you could be smarter as well, maybe more qualitative ways, right? And the concepts are a little bit less clear cut. So it's harder to make a very crisp, neat, firmly logical argument for why that could be qualitative super intelligence as opposed to just things that were faster. Although I still think it's very plausible and for various reasons that are less than watertight arguments. 
But when you can sort of, for example, if you look at animals and even within humans, like there seems to be like Einstein versus random person, like it's not just that Einstein was a little bit faster, but like how long would it take a normal person to invent general relativity is like, it's not 20% longer than it took Einstein or something like that. It's like, I don't know whether they would do it at all or it would take millions of years or some totally bizarre. But your intuition is that the compute size will get you go increasing the size of the computer and the speed of the computer might create some much more powerful levels of intelligence that would enable some of the things we've been talking about with like the simulation, being able to simulate an ultra realistic environment, ultra realistic perception of reality. Yeah. I mean, strictly speaking, it would not be necessary to have super intelligence in order to have say the technology to make these simulations, ancestor simulations or other kinds of simulations. As a matter of fact, I think if we are in a simulation, it would most likely be one built by a civilization that had super intelligence. It certainly would help a lot. I mean, you could build more efficient larger scale structures if you had super intelligence. I also think that if you had the technology to build these simulations, that's like a very advanced technology. It seems kind of easier to get the technology to super intelligence. I'd expect by the time they could make these fully realistic simulations of human history with human brains in there, like before that they got to that stage, they would have figured out how to create machine super intelligence or maybe biological enhancements of their own brains if there were biological creatures to start with. So we talked about the three parts of the simulation argument. One, we destroy ourselves before we ever create the simulation. Two, we somehow, everybody somehow loses interest in creating the simulation. Three, we're living in a simulation. So you've kind of, I don't know if your thinking has evolved on this point, but you kind of said that we know so little that these three cases might as well be equally probable. So probabilistically speaking, where do you stand on this? Yeah, I mean, I don't think equal necessarily would be the most supported probability assignment. So how would you, without assigning actual numbers, what's more or less likely in your view? Well, I mean, I've historically tended to punt on the question of like between these three. So maybe you ask me another way is which kind of things would make each of these more or less likely? What kind of intuition? Certainly in general terms, if you think anything that say increases or reduces the probability of one of these, we tend to slosh probability around on the other. So if one becomes less probable, like the other would have to, cause it's got to add up to one. So if we consider the first hypothesis, the first alternative that there's this filter that makes it so that virtually no civilization reaches technological maturity, in particular our own civilization, if that's true, then it's like very unlikely that we would reach technological maturity because if almost no civilization at our stage does it, then it's unlikely that we do it. So hence... Sorry, can you linger on that for a second? 
Well, so if it's the case that almost all civilizations at our current stage of technological development failed to reach maturity, that would give us very strong reason for thinking we will fail to reach technological maturity. Oh, and also sort of the flip side of that is the fact that we've reached it means that many other civilizations have reached this point. Yeah. So that means if we get closer and closer to actually reaching technological maturity, there's less and less distance left where we could go extinct before we are there, and therefore the probability that we will reach increases as we get closer, and that would make it less likely to be true that almost all civilizations at our current stage failed to get there. Like we would have this... The one case we had started ourselves would be very close to getting there, that would be strong evidence that it's not so hard to get to technological maturity. So to the extent that we feel we are moving nearer to technological maturity, that would tend to reduce the probability of the first alternative and increase the probability of the other two. It doesn't need to be a monotonic change. Like if every once in a while some new threat comes into view, some bad new thing you could do with some novel technology, for example, that could change our probabilities in the other direction. But that technology, again, you have to think about as that technology has to be able to equally in an even way affect every civilization out there. Yeah, pretty much. I mean, that's strictly speaking, it's not true. I mean, that could be two different existential risks and every civilization, you know, one or the other, like, but none of them kills more than 50%. But incidentally, so in some of my work, I mean, on machine superintelligence, like pointed to some existential risks related to sort of super intelligent AI and how we must make sure, you know, to handle that wisely and carefully. It's not the right kind of existential catastrophe to make the first alternative true though. Like it might be bad for us if the future lost a lot of value as a result of it being shaped by some process that optimized for some completely nonhuman value. But even if we got killed by machine superintelligence, that machine superintelligence might still attain technological maturity. Oh, I see, so you're not human exclusive. This could be any intelligent species that achieves, like it's all about the technological maturity. But the humans have to attain it. Right. So like superintelligence could replace us and that's just as well for the simulation argument. Yeah, yeah. I mean, it could interact with the second hypothesis by alternative. Like if the thing that replaced us was either more likely or less likely than we would be to have an interest in creating ancestor simulations, you know, that could affect probabilities. But yeah, to a first order, like if we all just die, then yeah, we won't produce any simulations because we are dead. But if we all die and get replaced by some other intelligent thing that then gets to technological maturity, the question remains, of course, if not that thing, then use some of its resources to do this stuff. So can you reason about this stuff, given how little we know about the universe? Is it reasonable to reason about these probabilities? So like how little, well, maybe you can disagree, but to me, it's not trivial to figure out how difficult it is to build a simulation. We kind of talked about it a little bit. 
We also don't know, like as we try to start building it, like start creating virtual worlds and so on, how that changes the fabric of society. Like there's all these things along the way that can fundamentally change just so many aspects of our society, of our existence, that we don't know anything about, like the kind of things we might discover when we understand to a greater degree the fundamental physics, like if we have a breakthrough and get a theory of everything, how that changes stuff, how that changes deep space exploration and so on. Like, is it still possible to reason about probabilities given how little we know? Yes, I think there will be a large residual of uncertainty that we'll just have to acknowledge. And I think that's true for most of these big picture questions that we might wonder about. It's just we are small, short-lived, small-brained, cognitively very limited humans with little evidence. And it's amazing we can figure out as much as we can really about the cosmos. But okay, so there's this cognitive trick that seems to happen when I look at the simulation argument, which for me, it seems like case one and two feel unlikely. I want to say feel unlikely as opposed to, sort of, it's not like I have too much scientific evidence to say that either one or two are not true. It just seems unlikely that every single civilization destroys itself. And it also feels unlikely that all the civilizations lose interest. So naturally, without necessarily explicitly doing it, the simulation argument basically says it's very likely we're living in a simulation. To me, my mind naturally goes there. I think the mind goes there for a lot of people. Is that the incorrect place for it to go? Well, not necessarily. I think the second alternative, which has to do with the motivations and interests of technologically mature civilizations, I think there is much we don't understand about that. Can you talk about that a little bit? What do you think? I mean, this is a question that pops up when you build an AGI system or build a general intelligence. How does that change our motivations? Do you think it'll fundamentally transform our motivations? Well, it doesn't seem that implausible that once you take this leap to technological maturity, I mean, I think it likely involves creating machine super intelligence, possibly, that would be sort of on the path for basically all civilizations, maybe before they are able to create large numbers of ancestor simulations, and that possibly could be one of these things that quite radically changes the orientation of what a civilization is, in fact, optimizing for. There are other things as well. So at the moment, we have imperfect control over our own being; our own mental states, our own experiences, are not under our direct control. So for example, if you want to experience pleasure and happiness, you might have to do a whole host of things in the external world to try to get yourself into the mental state where you experience pleasure. Like, some people get some pleasure from eating great food. Well, they can't just turn that on, they have to kind of actually go to a nice restaurant and then they have to make money. So there's like all this kind of activity that maybe arises from the fact that we are trying to ultimately produce mental states. But the only way to do that is by a whole host of complicated activities in the external world.
Now, at some level of technological development, I think we'll become auto potent in the sense of gaining direct ability to choose our own internal configuration, and enough knowledge and insight to be able to actually do that in a meaningful way. So then it could turn out that there are a lot of instrumental goals that would drop out of the picture and be replaced by other instrumental goals, because we could now serve some of these final goals in more direct ways. And who knows how all of that shakes out after civilizations reflect on that and converge on different attractors and so on and so forth. And that could be new instrumental considerations that come into view as well, that we are just oblivious to, that would maybe have a strong shaping effect on actions, like very strong reasons to do something or not to do something, then we just don't realize they are there because we are so dumb, bumbling through the universe. But if almost inevitably en route to attaining the ability to create many ancestors simulations, you do have this cognitive enhancement, or advice from super intelligences or yourself, then maybe there's like this additional set of considerations coming into view and it's obvious that the thing that makes sense is to do X, whereas right now it seems you could X, Y or Z and different people will do different things and we are kind of random in that sense. Because at this time, with our limited technology, the impact of our decisions is minor. I mean, that's starting to change in some ways. But… Well, I'm not sure how it follows that the impact of our decisions is minor. Well, it's starting to change. I mean, I suppose 100 years ago it was minor. It's starting to… Well, it depends on how you view it. What people did 100 years ago still have effects on the world today. Oh, I see. As a civilization in the togetherness. Yeah. So it might be that the greatest impact of individuals is not at technological maturity or very far down. It might be earlier on when there are different tracks, civilization could go down. Maybe the population is smaller, things still haven't settled out. If you count indirect effects, those could be bigger than the direct effects that people have later on. So part three of the argument says that… So that leads us to a place where eventually somebody creates a simulation. I think you had a conversation with Joe Rogan. I think there's some aspect here where you got stuck a little bit. How does that lead to we're likely living in a simulation? So this kind of probability argument, if somebody eventually creates a simulation, why does that mean that we're now in a simulation? What you get to if you accept alternative three first is there would be more simulated people with our kinds of experiences than non simulated ones. Like if you look at the world as a whole, by the end of time as it were, you just count it up. That would be more simulated ones than non simulated ones. Then there is an extra step to get from that. If you assume that, suppose for the sake of the argument, that that's true. How do you get from that to the statement we are probably in a simulation? So here you're introducing an indexical statement like it's that this person right now is in a simulation. There are all these other people that are in simulations and some that are not in the simulation. But what probability should you have that you yourself is one of the simulated ones in that setup? 
So I call it the bland principle of indifference, which is that in cases like this, when you have two sets of observers, one of which is much larger than the other and you can't from any internal evidence you have, tell which set you belong to, you should assign a probability that's proportional to the size of these sets. So that if there are 10 times more simulated people with your kinds of experiences, you would be 10 times more likely to be one of those. Is that as intuitive as it sounds? I mean, that seems kind of, if you don't have enough information, you should rationally just assign the same probability as the size of the set. It seems pretty plausible to me. Where are the holes in this? Is it at the very beginning, the assumption that everything stretches, you have infinite time essentially? You don't need infinite time. You just need, how long does the time take? However long it takes, I guess, for a universe to produce an intelligent civilization that attains the technology to run some ancestry simulations. When the first simulation is created, that stretch of time, just a little longer than they'll all start creating simulations. Well, I mean, there might be a difference. If you think of there being a lot of different planets and some subset of them have life and then some subset of those get to intelligent life and some of those maybe eventually start creating simulations, they might get started at quite different times. Maybe on some planet, it takes a billion years longer before you get monkeys or before you get even bacteria than on another planet. This might happen at different cosmological epochs. Is there a connection here to the doomsday argument and that sampling there? Yeah, there is a connection in that they both involve an application of anthropic reasoning that is reasoning about these kind of indexical propositions. But the assumption you need in the case of the simulation argument is much weaker than the assumption you need to make the doomsday argument go through. What is the doomsday argument and maybe you can speak to the anthropic reasoning in more general. Yeah, that's a big and interesting topic in its own right, anthropics, but the doomsday argument is this really first discovered by Brandon Carter, who was a theoretical physicist and then developed by philosopher John Leslie. I think it might have been discovered initially in the 70s or 80s and Leslie wrote this book, I think in 96. And there are some other versions as well by Richard Gott, who's a physicist, but let's focus on the Carter Leslie version where it's an argument that we have systematically underestimated the probability that humanity will go extinct soon. Now I should say most people probably think at the end of the day there is something wrong with this doomsday argument that it doesn't really hold. It's like there's something wrong with it, but it's proved hard to say exactly what is wrong with it and different people have different accounts. My own view is it seems inconclusive, but I can say what the argument is. Yeah, that would be good. So maybe it's easiest to explain via an analogy to sampling from urns. So imagine you have two urns in front of you and they have balls in them that have numbers. The two urns look the same, but inside one there are 10 balls. Ball number one, two, three, up to ball number 10. 
And then in the other urn you have a million balls numbered one to a million, and somebody puts one of these urns in front of you and asks you to guess what's the chance it's the 10 ball urn, and you say, well, 50/50, I can't tell which urn it is. But then you're allowed to reach in and pick a ball at random from the urn, and suppose you find that it's ball number seven. So that's strong evidence for the 10 ball hypothesis. It's a lot more likely that you would get such a low numbered ball if there are only 10 balls in the urn, it's in fact 10%, right? Whereas if there are a million balls, it would be very unlikely you would get number seven. So you perform a Bayesian update, and if your prior was 50/50 that it was the 10 ball urn, you become virtually certain after finding the random sample was seven that it only has 10 balls in it. So in the case of the urns, this is uncontroversial, just elementary probability theory. The Doomsday Argument says that you should reason in a similar way with respect to different hypotheses about how many balls there will be in the urn of humanity, as it were, how many humans there will ever have been by the time we go extinct. So to simplify, let's suppose we only consider two hypotheses, either maybe 200 billion humans in total or 200 trillion humans in total. You could fill in more hypotheses, but it doesn't change the principle here. So it's easiest to see if we just consider these two. So you start with some prior based on ordinary empirical ideas about threats to civilization and so forth. And maybe you say it's a 5% chance that we will go extinct by the time there will have been only 200 billion; you're kind of optimistic, let's say, you think probably we'll make it through, colonize the universe. But then, according to this Doomsday Argument, you should think of your own birth rank as a random sample. So your birth rank is your position in the sequence of all humans that have ever existed. It turns out you're about human number 100 billion, you know, give or take. That's, like, roughly how many people have been born before you. That's fascinating, because I probably, we each have a number. We would each have a number in this, I mean, obviously, the exact number would depend on where you started counting, like which ancestors were human enough to count as human. But those are not really important, there are relatively few of them. So yeah, so you're roughly 100 billion. Now, if there are only going to be 200 billion in total, that's a perfectly unremarkable number. You're somewhere in the middle, right? You're a run-of-the-mill human, completely unsurprising. Now, if there are going to be 200 trillion, you would be remarkably early, like what are the chances out of these 200 trillion humans that you should be human number 100 billion? That would seem to have a much lower conditional probability. And so analogously to how in the urn case, you thought after finding this low numbered random sample, you updated in favor of the urn having few balls, similarly, in this case, you should update in favor of the human species having a lower total number of members, that is, doom soon. You said doom soon? Well, that would be the hypothesis in this case, that it will end at 200 billion. I just like that term for that hypothesis. So what it kind of crucially relies on, the Doomsday Argument, is the idea that you should reason as if you were a random sample from the set of all humans that will have existed.
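The two updates just described can be written out explicitly. This is a minimal sketch of exactly the arithmetic stated above (a 50/50 prior and ball number seven for the urns; a 5% prior on "only 200 billion humans ever" and a birth rank of about 100 billion for the doomsday version); it of course inherits the self-sampling assumption discussed next, and is only meant to show the mechanics of the update.

```python
# Worked version of the two Bayesian updates described above.

def posterior_small(prior_small, n_small, n_big, observed_rank):
    """P(small hypothesis | you drew / are number `observed_rank`), assuming
    the rank is a uniform random sample under each hypothesis."""
    if observed_rank > n_small:
        return 0.0  # a rank beyond n_small would rule the small hypothesis out
    like_small = 1 / n_small
    like_big = 1 / n_big
    num = prior_small * like_small
    return num / (num + (1 - prior_small) * like_big)

# Urn case: 50/50 prior, 10 balls vs 1,000,000 balls, ball number 7 drawn.
print(posterior_small(0.5, 10, 1_000_000, 7))        # ~0.99999, near certainty

# Doomsday case: 5% prior on "only 200 billion humans ever" versus
# 200 trillion, and your birth rank is about 100 billion.
print(posterior_small(0.05, 200e9, 200e12, 100e9))   # ~0.98, "doom soon" dominates
```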
If you have that assumption, then I think the rest kind of follows. The question then is, why should you make that assumption? In fact, you know you're number 100 billion, so where do you get this prior? And then there is like a literature on that with different ways of supporting that assumption. That's just one example of anthropic reasoning, right? That seems to be kind of convenient when you think about humanity, when you think about sort of even like existential threats and so on, as it seems quite natural that you should assume that you're just an average case. Yeah, that you're kind of a typical random sample. Now, in the case of the Doomsday Argument, it seems to lead to what intuitively we think is the wrong conclusion, or at least many people have this reaction that there's got to be something fishy about this argument. Because from very, very weak premises, it gets this very striking implication that we have almost no chance of reaching a size of 200 trillion humans in the future. And how could we possibly get there just by reflecting on when we were born? It seems you would need sophisticated arguments about the impossibility of space colonization, blah, blah. So one might be tempted to reject this key assumption, I call it the self sampling assumption, the idea that you should reason as if you're a random sample from all observers, or observers in some reference class. However, it turns out that in other domains, it looks like we need something like this self sampling assumption to make sense of bona fide scientific inferences. In contemporary cosmology, for example, you have these multiverse theories. And according to a lot of those, all possible human observations are made. So if you have a sufficiently large universe, you will have a lot of people observing all kinds of different things. So if you have two competing theories, say about the value of some constant, it could be true according to both of these theories that there will be some observers observing the value that corresponds to the other theory, because there will be some observers that have hallucinations, or there's a local fluctuation or a statistically anomalous measurement, these things will happen. And if enough observers make enough different observations, there will be some that sort of by chance make these different ones. And so what we would want to say is, well, many more observers, a larger proportion of the observers, will observe, as it were, the true value. And a few will observe the wrong value. If we think of ourselves as a random sample, we should expect with high probability to observe the true value, and that will then allow us to conclude that the evidence we actually have is evidence for the theories we think are supported. It kind of then is a way of making sense of these inferences that clearly seem correct, that we can make various observations and infer what the temperature of the cosmic background is, and the fine structure constant, and all of this. But it seems that without rolling in some assumption similar to the self sampling assumption, this inference just doesn't go through. And there are other examples. So there are these scientific contexts where it looks like this kind of anthropic reasoning is needed and makes perfect sense. And yet, in the case of the Doomsday Argument, it has this weird consequence, and people might think there's something wrong with it there.
So there's then this project that would consist in trying to figure out what are the legitimate ways of reasoning about these indexical facts when observer selection effects are in play. In other words, developing a theory of anthropics. And there are different ways of looking at that, and it's a difficult methodological area. But to tie it back to the simulation argument, the key assumption there, this bland principle of indifference, is much weaker than the self sampling assumption. So if you think about it, in the case of the Doomsday Argument, it says you should reason as if you are a random sample from all humans that will have lived, even though in fact you know that you are about the 100 billionth human and you're alive in the year 2020. Whereas in the case of the simulation argument, it says that, well, if you actually have no way of telling which one you are, then you should assign this kind of uniform probability. Yeah, yeah, your role as the observer in the simulation argument is different, it seems like. Like, who's the observer? I mean, I keep assigning it to the individual consciousness. Well, there are lots of observers, but in the context of the simulation argument, the relevant observers would be, A, the people in original histories, and, B, the people in simulations. So this would be the class of observers that we need. I mean, there are also maybe the simulators, but we can set those aside for this. So the question is, given that class of observers, a small set of original history observers and a large class of simulated observers, which one should you think is you? Where are you amongst this set of observers? I'm maybe having a little bit of trouble wrapping my head around the intricacies of what it means to be an observer in this, in the different instantiations of the anthropic reasoning cases that we mentioned. Yeah. I mean, does it have to be... It's not the observer. Yeah, I mean, maybe an easier way of putting it is just, are you simulated or are you not simulated, given this assumption that these two groups of people exist? Yeah. In the simulation case, it seems pretty straightforward. Yeah. So the key point is the methodological assumption you need to make to get the simulation argument to where it wants to go is much weaker and less problematic than the methodological assumption you need to make to get the doomsday argument to its conclusion. Maybe the doomsday argument is sound or unsound, but you need to make a much stronger and more controversial assumption to make it go through. In the case of the simulation argument, I guess maybe one way to pump intuition in support of this bland principle of indifference is to consider a sequence of different cases where the fraction of people who are simulated to non simulated approaches one. So in the limiting case where everybody is simulated, obviously you can deduce with certainty that you are simulated. If everybody with your experiences is simulated and you know you've got to be one of those, you don't need a probability at all, you just kind of logically conclude it, right? So then as we move from a case where say 90% of everybody is simulated, 99%, 99.9%, it should seem plausible that the probability you assign should sort of approach one, certainty, as the fraction approaches the case where everybody is in a simulation.
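A quick numerical restatement of the bland principle of indifference and the limiting behavior just described, with made-up observer counts; this is only the principle rewritten as arithmetic, not an argument for it.

```python
# Sketch of the bland principle of indifference: credence that you are
# simulated, as a function of how many observers with your kind of
# experiences are simulated versus not. Counts here are illustrative.

def p_simulated(n_sim, n_real):
    """Assign probability proportional to the sizes of the two sets."""
    return n_sim / (n_sim + n_real)

for n_sim in [0, 1, 9, 99, 999, 10**6]:
    p = p_simulated(n_sim, n_real=1)
    print(f"{n_sim:>7} simulated per 1 non-simulated -> P(simulated) = {p:.6f}")

# The credence rises smoothly toward 1 as the simulated fraction does;
# in the limiting case where everyone is simulated, it is exactly 1.
```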
You wouldn't expect that to be a discrete, well, if there's one non simulated person, then it's 50, 50, but if we move that, then it's 100%, like it should kind of, there are other arguments as well one can use to support this bland principle of indifference, but that might be enough to. But in general, when you start from time equals zero and go into the future, the fraction of simulated, if it's possible to create simulated worlds, the fraction of simulated worlds will go to one. Well, I mean, it won't go all the way to one. In reality, that would be some ratio, although maybe a technologically mature civilization could run a lot of simulations using a small portion of its resources, it probably wouldn't be able to run infinitely many. I mean, if we take say the observed, the physics in the observed universe, if we assume that that's also the physics at the level of the simulators, that would be limits to the amount of information processing that any one civilization could perform in its future trajectory. First of all, there's limited amount of matter you can get your hands off because with a positive cosmological constant, the universe is accelerating, there's like a finite sphere of stuff, even if you traveled with the speed of light that you could ever reach, you have a finite amount of stuff. And then if you think there is like a lower limit to the amount of loss you get when you perform an erasure of a computation, or if you think, for example, just matter gradually over cosmological timescales, decay, maybe protons decay, other things, and you radiate out gravitational waves, like there's all kinds of seemingly unavoidable losses that occur. Eventually, we'll have something like a heat death of the universe or a cold death or whatever, but yeah. So it's finite, but of course, we don't know which, if there's many ancestral simulations, we don't know which level we are. So there could be, couldn't there be like an arbitrary number of simulation that spawned ours, and those had more resources, in terms of physical universe to work with? Sorry, what do you mean that that could be? Sort of, okay, so if simulations spawn other simulations, it seems like each new spawn has fewer resources to work with. But we don't know at which step along the way we are at. Any one observer doesn't know whether we're in level 42, or 100, or one, or is that not matter for the resources? I mean, it's true that there would be uncertainty as to, you could have stacked simulations, and that could then be uncertainty as to which level we are at. As you remarked also, all the computations performed in a simulation within the simulation also have to be expanded at the level of the simulation. So the computer in basement reality where all these simulations with the simulations with the simulations are taking place, like that computer, ultimately, it's CPU or whatever it is, like that has to power this whole tower, right? So if there is a finite compute power in basement reality, that would impose a limit to how tall this tower can be. And if each level kind of imposes a large extra overhead, you might think maybe the tower would not be very tall, that most people would be low down in the tower. I love the term basement reality. Let me ask one of the popularizers, you said there's many through this, when you look at sort of the last few years of the simulation hypothesis, just like you said, it comes up every once in a while, some new community discovers it and so on. 
But I would say one of the biggest popularizers of this idea is Elon Musk. Do you have any kind of intuition about what Elon thinks about when he thinks about simulation? Why is this of such interest? Is it all the things we've talked about, or is there some special kind of intuition about simulation that he has? I mean, you might have a better sense, but I think why it's of interest seems pretty obvious: to the extent that one thinks the argument is credible, it would, if it's correct, tell us something very important about the world in one way or the other, whichever of the three alternatives is true, and that seems like arguably one of the most fundamental discoveries, right? Now, interestingly, in the case of someone like Elon, so there's like the standard arguments for why you might want to take the simulation hypothesis seriously, the simulation argument, right? In the case that you are actually Elon Musk, let us say, there's kind of an additional reason, in that, what are the chances you would be Elon Musk? It seems like maybe there would be more interest in simulating the lives of very unusual and remarkable people. So if you consider not just simulations where all of human history or the whole of human civilization are simulated, but also other kinds of simulations which only include some subset of people, like in those simulations that only include a subset, it might be more likely that they would include subsets of people with unusually interesting or consequential lives. So if you're Elon Musk, it's more likely that you're in a simulation. Like if you're Donald Trump, or if you're Bill Gates, or some particularly distinctive character, you might think, I mean, if you put yourself into those shoes, right, that it's got to be like an extra reason to think that. So interesting. So on a scale of, like, farmer in Peru to Elon Musk, the more you get towards the Elon Musk, the higher the probability. You'd imagine there would be some extra boost from that. There's an extra boost. So he also asked the question of what he would ask an AGI, the question being, what's outside the simulation? Do you think about the answer to this question? If we are living in a simulation, what is outside the simulation? So the programmer of the simulation? Yeah, I mean, I think it connects to the question of what's inside the simulation, in that if you had views about the creators of the simulation, it might help you make predictions about what kind of simulation it is, what might happen, what happens after the simulation, if there is some after, but also the kind of setup. So these two questions would be quite closely intertwined. But do you think it would be very surprising, like, is it possible for the stuff inside the simulation to be fundamentally different than the stuff outside? Yeah. Like, another way to put it, can the creatures inside the simulation be smart enough to even understand, or have the cognitive capabilities or any kind of information processing capabilities enough to understand, the mechanism that created them? They might understand some aspects of it. I mean, there are levels of explanation, degrees to which you can understand. So does your dog understand what it is to be human? Well, it's got some idea, like humans are these physical objects that move around and do things.
And a normal human would have a deeper understanding of what it is to be a human. And maybe some very experienced psychologist or great novelist might understand a little bit more about what it is to be human. And maybe superintelligence could see right through your soul. So similarly, I do think that we are quite limited in our ability to understand all of the relevant aspects of the larger context that we exist in. But there might be hope for some. I think we understand some aspects of it. But you know, how much good is that? If there's like one key aspect that changes the significance of all the other aspects. So we understand maybe seven out of 10 key insights that you need. But the answer actually, like varies completely depending on what like number eight, nine and 10 insight is. It's like whether you want to suppose that the big task were to guess whether a certain number was odd or even, like a 10 digit number. And if it's even, the best thing for you to do in life is to go north. And if it's odd, the best thing for you is to go south. Now we are in a situation where maybe through our science and philosophy, we figured out what the first seven digits are. So we have a lot of information, right? Most of it we figured out. But we are clueless about what the last three digits are. So we are still completely clueless about whether the number is odd or even and therefore whether we should go north or go south. I feel that's an analogy, but I feel we're somewhat in that predicament. We know a lot about the universe. We've come maybe more than half of the way there to kind of fully understanding it. But the parts we're missing are plausibly ones that could completely change the overall upshot of the thing and including change our overall view about what the scheme of priorities should be or which strategic direction would make sense to pursue. Yeah. I think your analogy of us being the dog trying to understand human beings is an entertaining one, and probably correct. The closer the understanding tends from the dog's viewpoint to us human psychologist viewpoint, the steps along the way there will have completely transformative ideas of what it means to be human. So the dog has a very shallow understanding. It's interesting to think that, to analogize that a dog's understanding of a human being is the same as our current understanding of the fundamental laws of physics in the universe. Oh man. Okay. We spent an hour and 40 minutes talking about the simulation. I like it. Let's talk about super intelligence. At least for a little bit. And let's start at the basics. What to you is intelligence? Yeah. I tend not to get too stuck with the definitional question. I mean, the common sense to understand, like the ability to solve complex problems, to learn from experience, to plan, to reason, some combination of things like that. Is consciousness mixed up into that or no? Is consciousness mixed up into that? Well, I think it could be fairly intelligent at least without being conscious probably. So then what is super intelligence? That would be like something that was much more, had much more general cognitive capacity than we humans have. So if we talk about general super intelligence, it would be much faster learner be able to reason much better, make plans that are more effective at achieving its goals, say in a wide range of complex challenging environments. 
In terms of as we turn our eye to the idea of sort of existential threats from super intelligence, do you think super intelligence has to exist in the physical world or can it be digital only? Sort of we think of our general intelligence as us humans, as an intelligence that's associated with the body, that's able to interact with the world, that's able to affect the world directly with physically. I mean, digital only is perfectly fine, I think. I mean, you could, it's physical in the sense that obviously the computers and the memories are physical. But it's capability to affect the world sort of. Could be very strong, even if it has a limited set of actuators, if it can type text on the screen or something like that, that would be, I think, ample. So in terms of the concerns of existential threat of AI, how can an AI system that's in the digital world have existential risk, sort of, and what are the attack vectors for a digital system? Well, I mean, I guess maybe to take one step back, so I should emphasize that I also think there's this huge positive potential from machine intelligence, including super intelligence. And I want to stress that because some of my writing has focused on what can go wrong. And when I wrote the book Superintelligence, at that point, I felt that there was a kind of neglect of what would happen if AI succeeds, and in particular, a need to get a more granular understanding of where the pitfalls are so we can avoid them. I think that since the book came out in 2014, there has been a much wider recognition of that. And a number of research groups are now actually working on developing, say, AI alignment techniques and so on and so forth. So yeah, I think now it's important to make sure we bring back onto the table the upside as well. And there's a little bit of a neglect now on the upside, which is, I mean, if you look at, I was talking to a friend, if you look at the amount of information that is available, or people talking and people being excited about the positive possibilities of general intelligence, that's not, it's far outnumbered by the negative possibilities in terms of our public discourse. Possibly, yeah. It's hard to measure. But what are, can you linger on that for a little bit, what are some, to you, possible big positive impacts of general intelligence? Super intelligence? Well, I mean, super intelligence, because I tend to also want to distinguish these two different contexts of thinking about AI and AI impacts, the kind of near term and long term, if you want, both of which I think are legitimate things to think about, and people should discuss both of them, but they are different and they often get mixed up. And then, then I get, you get confusion, like, I think you get simultaneously like maybe an overhyping of the near term and then under hyping of the long term. And so I think as long as we keep them apart, we can have like, two good conversations, but or we can mix them together and have one bad conversation. Can you clarify just the two things we were talking about, the near term and the long term? Yeah. And what are the distinctions? Well, it's a, it's a blurry distinction. But say the things I wrote about in this book, super intelligence, long term, things people are worrying about today with, I don't know, algorithmic discrimination, or even things, self driving cars and drones and stuff, more near term. And then of course, you could imagine some medium term where they kind of overlap and they one evolves into the other. 
But at any rate, I think both, yeah, the issues look kind of somewhat different depending on which of these contexts. So I think, I think it'd be nice if we can talk about the long term and think about a positive impact or a better world because of the existence of the long term super intelligence. Do you have views of such a world? Yeah. I mean, I guess it's a little hard to articulate because it seems obvious that the world has a lot of problems as it currently stands. And it's hard to think of any one of those, which it wouldn't be useful to have like a friendly aligned super intelligence working on. So from health to the economic system to be able to sort of improve the investment and trade and foreign policy decisions, all that kind of stuff. All that kind of stuff and a lot more. I mean, what's the killer app? Well, I don't think there is one. I think AI, especially artificial general intelligence is really the ultimate general purpose technology. So it's not that there is this one problem, this one area where it will have a big impact. But if and when it succeeds, it will really apply across the board in all fields where human creativity and intelligence and problem solving is useful, which is pretty much all fields. Right. The thing that it would do is give us a lot more control over nature. It wouldn't automatically solve the problems that arise from conflict between humans, fundamentally political problems. Some subset of those might go away if you just had more resources and cooler tech. But some subset would require coordination that is not automatically achieved just by having more technological capability. But anything that's not of that sort, I think you just get an enormous boost with this kind of cognitive technology once it goes all the way. Now, again, that doesn't mean I'm thinking, oh, people don't recognize what's possible with current technology and like sometimes things get overhyped. But I mean, those are perfectly consistent views to hold. The ultimate potential being enormous. And then it's a very different question of how far are we from that or what can we do with near term technology? Yeah. So what's your intuition about the idea of intelligence explosion? So there's this, you know, when you start to think about that leap from the near term to the long term, the natural inclination, like for me, sort of building machine learning systems today, it seems like it's a lot of work to get the general intelligence, but there's some intuition of exponential growth of exponential improvement of intelligence explosion. Can you maybe try to elucidate, try to talk about what's your intuition about the possibility of an intelligence explosion, that it won't be this gradual slow process, there might be a phase shift? Yeah, I think it's, we don't know how explosive it will be. I think for what it's worth, it seems fairly likely to me that at some point, there will be some intelligence explosion, like some period of time, where progress in AI becomes extremely rapid, roughly, roughly in the area where you might say it's kind of humanish equivalent in core cognitive faculties, that the concept of human equivalent starts to break down when you look too closely at it. And just how explosive does something have to be for it to be called an intelligence explosion? Like, does it have to be like overnight, literally, or a few years? 
But overall, I guess, if you plotted the opinions of different people in the world, I guess that would be somewhat more probability towards the intelligence explosion scenario than probably the average, you know, AI researcher, I guess. So and then the other part of the intelligence explosion, or just forget explosion, just progress is once you achieve that gray area of human level intelligence, is it obvious to you that we should be able to proceed beyond it to get to super intelligence? Yeah, that seems, I mean, as much as any of these things can be obvious, given we've never had one, people have different views, smart people have different views, it's like some degree of uncertainty that always remains for any big, futuristic, philosophical grand question that just we realize humans are fallible, especially about these things. But it does seem, as far as I'm judging things based on my own impressions, that it seems very unlikely that that would be a ceiling at or near human cognitive capacity. And that's such a, I don't know, that's such a special moment, it's both terrifying and exciting to create a system that's beyond our intelligence. So maybe you can step back and say, like, how does that possibility make you feel that we can create something, it feels like there's a line beyond which it steps, it'll be able to outsmart you. And therefore, it feels like a step where we lose control. Well, I don't think the latter follows that is you could imagine. And in fact, this is what a number of people are working towards making sure that we could ultimately project higher levels of problem solving ability while still making sure that they are aligned, like they are in the service of human values. I mean, so losing control, I think, is not a given that that would happen. Now you asked how it makes me feel, I mean, to some extent, I've lived with this for so long, since as long as I can remember, being an adult or even a teenager, it seemed to me obvious that at some point, AI will succeed. And so I actually misspoke, I didn't mean control, I meant, because the control problem is an interesting thing. And I think the hope is, at least we should be able to maintain control over systems that are smarter than us. But we do lose our specialness, it sort of will lose our place as the smartest, coolest thing on earth. And there's an ego involved with that, that humans aren't very good at dealing with. I mean, I value my intelligence as a human being. It seems like a big transformative step to realize there's something out there that's more intelligent. I mean, you don't see that as such a fundamentally... I think yes, a lot, I think it would be small, because I mean, I think there are already a lot of things out there that are, I mean, certainly, if you think the universe is big, there's going to be other civilizations that already have super intelligences, or that just naturally have brains the size of beach balls and are like, completely leaving us in the dust. And we haven't come face to face with them. We haven't come face to face. But I mean, that's an open question, what would happen in a kind of post human world? Like how much day to day would these super intelligences be involved in the lives of ordinary? 
I mean, you could imagine some scenario where it would be more like a background thing that would help protect against some things, but it wouldn't be this intrusive kind of thing, like making you feel bad by making clever jokes at your expense; there are all sorts of things that maybe in the human context you would feel awkward about. You don't want to be the dumbest kid in your class that everybody picks on; a lot of those things maybe you need to abstract away from, if you're thinking about this context where we have infrastructure that is, in some sense, beyond any or all humans. I mean, it's a little bit like, say, the scientific community as a whole, if you think of that as a mind, it's a little bit of a metaphor. But I mean, obviously, it's got to be way more capacious than any individual. So in some sense, there is this mind like thing already out there that's just vastly more intelligent than any individual is. And we think, okay, you just accept that as a fact, that the basic fabric of our existence is that there's something superintelligent. You get used to a lot of that. I mean, there's already Google and Twitter and Facebook, these recommender systems that are the basic fabric of our lives, and I could see them becoming more so. I mean, do you think of the collective intelligence of these systems as already perhaps reaching super intelligence level? Well, I mean, here it comes down to the concept of intelligence and the scale and what human level means. The kind of vagueness and indeterminacy of those concepts starts to dominate how you would answer that question. So, say, the Google search engine has a very high capacity of a certain kind, like remembering and retrieving information, particularly text or images, where you have a kind of string, a word string key; it's obviously superhuman at that, but there's a vast set of other things it can't even do at all. Not just not do well, but not do at all. So you have these current AI systems that are superhuman in some limited domain and then radically subhuman in all other domains. Same with, say, a chess engine, or just a simple computer that can multiply really large numbers, right? So it's going to have this one spike of super intelligence and then a kind of zero level of capability across all other cognitive fields. Yeah, I don't necessarily think the generalness, I mean, I'm not so attached to it, but I think it's sort of, it's a gray area and it's a feeling, but to me sort of AlphaZero is somehow much more intelligent, much, much more intelligent than Deep Blue. And to say in which domain, you could say, well, these are both just board games, they're both just able to play board games, who cares if they're going to do better or not, but there's something about the learning, the self play, that makes it cross over into that land of intelligence that doesn't necessarily need to be general. In the same way, Google is much closer to Deep Blue currently, in terms of its search engine, than it is to sort of the AlphaZero. And the moment it becomes, the moment these recommender systems really become more like AlphaZero, being able to learn a lot without being heavily constrained by human interaction, that seems like a special moment in time.
I mean, certainly learning ability seems to be an important facet of general intelligence, that you can take some new domain that you haven't seen before and you weren't specifically pre programmed for, and then figure out what's going on there and eventually become really good at it. So that's something AlphaZero has much more of than Deep Blue had. And in fact, I mean, systems like AlphaZero can learn not just Go but other games; in fact, it would probably beat Deep Blue in chess and so forth. So you do see this as general and it matches the intuition. We feel it's more intelligent and it also has more of this general purpose learning ability. And if we get systems that have even more general purpose learning ability, it might also trigger an even stronger intuition that they are actually starting to get smart. So if you were to pick a future, what do you think a utopia looks like with AGI systems? Is it the Neuralink brain computer interface world, where we're kind of really closely interlinked with AI systems? Is it possibly where AGI systems replace us completely while maintaining the values and the consciousness? Is it something like a completely invisible fabric, like you mentioned, a society where it just aids in a lot of the stuff that we do, like curing diseases and so on? What is utopia if you get to pick? Yeah, I mean, it is a good question and a deep and difficult one. I'm quite interested in it. I don't have all the answers yet, and might never have. But I think there are some different observations one can make. One is, if this scenario actually did come to pass, it would open up this vast space of possible modes of being. On one hand, material and resource constraints would just be expanded dramatically. So there would be a big pie, let's say. Also, it would enable us to do things, including to ourselves; it would just open up this much larger design space and option space than we have ever had access to in human history. I think two things follow from that. One is that we probably would need to make a fairly fundamental rethink of what ultimately we value, like think things through more from first principles. The context would be so different from the familiar that we couldn't just take what we've always been doing and add, oh, well, now we have this cleaning robot that cleans the dishes in the sink, and a few other small things. I think we would have to go back to first principles. So even from the individual level, go back to the first principles of what is the meaning of life, what is happiness, what is fulfillment. And then, also connected to this large space of resources, is that it would be possible, and I think something we should aim for, is to do well by the lights of more than one value system. That is, we wouldn't have to choose only one value criterion and say, we're going to do something that scores really high on the metric of, say, hedonism, and then is like a zero by other criteria, like kind of wireheaded brains in a vat, and it's a lot of pleasure, that's good, but then no beauty, no achievement, nothing like that. I think to some significant, not unlimited, but significant extent, it would be possible to do very well by many criteria, like maybe you could get like 98% of the best according to several criteria at the same time, given this great expansion of the option space.
So having competing value systems, competing criteria, sort of forever, just like our Democrat versus Republican divide, there always seem to be multiple parties that are useful for our progress as a society, even though it might seem dysfunctional in the moment, but having multiple value systems seems to be beneficial for, I guess, a balance of power. So that's, yeah, not exactly what I have in mind, well, although maybe in an indirect way it is. But the idea is that if you had the chance to do something that scored well on several different metrics, our first instinct should be to do that, rather than immediately leap to the question of which of these value systems we are going to screw over. Let's first try to do very well by all of them. Then it might be that you can't get 100% of all, and you would have to then have the hard conversation about which one will only get 97%. There you go. There's my cynicism that all of existence is always a trade off, but you say, maybe it's not such a bad trade off. Let's first at least try it. Well, this would be a distinctive context in which at least some of the constraints would be removed. I'll leave it at that. So there would probably still be trade offs in the end. It's just that we should first make sure we at least take advantage of this abundance. So in terms of thinking about this, like, yeah, one should think, I think, in this kind of frame of mind of generosity and inclusiveness to different value systems and see how far one can get there at first. And I think one could do something that would be very good according to many different criteria. We kind of talked about AGI fundamentally transforming the value system of our existence, the meaning of life. But today, what do you think is the meaning of life? The silliest or perhaps the biggest question, what's the meaning of life? What's the meaning of existence? What gives your life fulfillment, purpose, happiness, meaning? Yeah, I think these are, I guess, a bunch of different but related questions in there that one can ask. Happiness, meaning. Yeah. I mean, you could imagine somebody getting a lot of happiness from something that they didn't think was meaningful. Like something mindless, like watching reruns of some television series, eating junk food; for some people that gives pleasure, but they wouldn't think it had a lot of meaning. Whereas, conversely, something that might be quite loaded with meaning might not be very fun always, like some difficult achievement that really helps a lot of people, maybe requires self sacrifice and hard work. So these things can, I think, come apart, which is something to bear in mind also if you're thinking about these utopia questions. To actually start to do some constructive thinking about that, you might have to isolate and distinguish these different kinds of things that might be valuable in different ways. Make sure you can sort of clearly perceive each one of them, and then you can think about how you can combine them. And just as you said, hopefully come up with a way to maximize all of them together. Yeah, or at least get, I mean, maximize or get like a very high score on a wide range of them, even if not literally all. You can always come up with values that are exactly opposed to one another, right?
But I think many values are only kind of opposed if you place them within a certain dimensionality of space; there are shapes that you can't untangle in a given dimensionality, but if you start adding dimensions, then it might in many cases just be that they are easy to pull apart. So we'll see how much space there is for that, but I think that there could be a lot in this context of radical abundance, if ever we get to that. I don't think there's a better way to end it, Nick. You've influenced a huge number of people to work on what could very well be the most important problems of our time. So it's a huge honor. Thank you so much for talking. Well, thank you for coming by, Lex. That was fun. Thank you. Thanks for listening to this conversation with Nick Bostrom, and thank you to our presenting sponsor, Cash App. Please consider supporting the podcast by downloading Cash App and using code LEXPODCAST. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast, subscribe on Patreon, or simply connect with me on Twitter at Lex Fridman. And now, let me leave you with some words from Nick Bostrom. Our approach to existential risks cannot be one of trial and error. There's no opportunity to learn from errors. The reactive approach, see what happens, limit damages, and learn from experience, is unworkable. Rather, we must take a proactive approach. This requires foresight to anticipate new types of threats and a willingness to take decisive, preventative action and to bear the costs, moral and economic, of such actions. Thank you for listening, and hope to see you next time.
Nick Bostrom: Simulation and Superintelligence | Lex Fridman Podcast #83
The following is a conversation with William MacAskill. He's a philosopher, ethicist, and one of the originators of the effective altruism movement. His research focuses on the fundamentals of effective altruism, or the use of evidence and reason to help others as much as possible with our time and money, with a particular concentration on how to act given moral uncertainty. He's the author of Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference. He is a cofounder and the president of the Centre for Effective Altruism, CEA, that encourages people to commit to donate at least 10% of their income to the most effective charities. He cofounded 80,000 Hours, which is a nonprofit that provides research and advice on how you can best make a difference through your career. This conversation was recorded before the outbreak of the coronavirus pandemic. For everyone feeling the medical, psychological, and financial burden of this crisis, I'm sending love your way. Stay strong. We're in this together. We'll beat this thing. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do one or two minutes of ads now, and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Since Cash App allows you to send and receive money digitally, peer to peer, and security in all digital transactions is very important, let me mention the PCI data security standard that Cash App is compliant with. I'm a big fan of standards for safety and security. PCI DSS is a good example of that, where a bunch of competitors got together and agreed that there needs to be a global standard around the security of transactions. Now, we just need to do the same for autonomous vehicles and AI systems in general. So again, if you get Cash App from the App Store or Google Play, and use the code LEXPODCAST, you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with William MacAskill. What does utopia for humans and all life on Earth look like for you? That's a great question. What I want to say is that we don't know, and the utopia we want to get to is an indirect one that I call the long reflection. So, a period of post scarcity, where we no longer have the kind of urgent problems we have today, but instead can spend perhaps tens of thousands of years debating and engaging in ethical reflection before we take any kind of drastic lock-in actions, like spreading to the stars, and then we can figure out what is of kind of moral value. The long reflection, that's a really beautiful term. So, if we look at Twitter for just a second, do you think human beings are able to reflect in a productive way? I don't mean to make it sound bad, because there are a lot of fights and politics and division in our discourse. Maybe if you zoom out, it actually is civilized discourse. It might not feel like it, but when you zoom out. So, I don't want to say that Twitter is not civilized discourse.
I actually believe it. It's more civilized than people give it credit for. But do you think the long reflection can actually be stable, where we as human beings, with our descendant-of-ape brains, would be able to sort of rationally discuss things together and arrive at ideas? I think, overall, we're pretty good at discussing things rationally, and at least in the earlier stages of our lives being open to many different ideas, and being able to be convinced and change our views. I think that Twitter is designed almost to bring out all the worst tendencies. So, if the long reflection were conducted on Twitter, maybe it would be better just not even to bother. But I think the challenge really is getting to a stage where we have a society that is as conducive as possible to rational reflection, to deliberation. I think we're actually very lucky to be in a liberal society where people are able to discuss a lot of ideas and so on. I think when we look to the future, it's not at all guaranteed that society would be like that, rather than a society where there's a fixed canon of values that are being imposed on all of society, and where you aren't able to question that. That would be very bad from my perspective, because it means we wouldn't be able to figure out what the truth is. I can already sense we're going to go down a million tangents, but what do you think is the... If Twitter is not optimal, what kind of mechanism in this modern age of technology can we design where the exchange of ideas could be both civilized and productive, and yet not be too constrained, where there are rules of what you can say and can't say, which, as you say, is not desirable, but yet have some limits as to what can be said or not and so on? Do you have any ideas, thoughts on the possible future? Of course, nobody knows how to do it, but do you have thoughts of what a better Twitter might look like? I think that text based media are intrinsically going to be very hard to make conducive to rational discussion, because if you think about it from an informational perspective, if I just send you a text of less than, what is it now, 240 characters, 280 characters, I think, that's a tiny amount of information compared to, say, you and I talking now, where you have access to the words I say, which is the same as in text, but also my tone, also my body language, and we're very poorly designed to be able to assess... I have to read all of this context into anything you say, so maybe your partner sends you a text and has a full stop at the end. Are they mad at you? You don't know. You have to infer everything about this person's mental state from whether they put a full stop at the end of a text or not. Well, the flip side of that is, is it truly text that's the problem here? Because there's a viral aspect to the text, where you could just post text nonstop. It's very immediate. In the times before Twitter, before the internet, the way you would exchange texts is you would write books. And that, while it doesn't get body language, it doesn't get tone, and so on, it does actually boil down, after some time of thinking, some editing, and so on, boil down ideas. So is the immediacy and the viral nature, which produces the outrage mobs and so on, the potential problem? I think that is a big issue. I think there's going to be this strong selection effect where something that provokes outrage, well, that's high arousal, you're more likely to retweet that, whereas kind of sober analysis is not as sexy, not as viral.
I do agree that long form content is much better for productive discussion. In terms of the media that are very popular at the moment, I think that podcasting is great, where your podcasts are two hours long, so they're much more in depth than Twitter is, and you are able to convey so much more nuance, so much more caveat, because it's an actual conversation. It's more like the sort of communication that we've evolved to do, rather than these very small little snippets of ideas that, when also combined with bad incentives, just clearly aren't designed for helping us get to the truth. It's kind of interesting that it's not just the length of the podcast medium, but it's the fact that it was started by people that don't give a damn about quote unquote demand, that there's a relaxed sort of style, like what Joe Rogan does, there's a freedom to express ideas in an unconstrained way that's very real. It's kind of funny that it feels so refreshingly real to us today, and I wonder what the future looks like. It's a little bit sad now that quite a lot of sort of more popular people are getting into podcasting, and they try to sort of create, they try to control it, they try to constrain it in different kinds of ways. People I love, like Conan O'Brien and so on, different comedians, and I'd love to see where the real aspects of this podcasting medium persist, maybe in TV, maybe in YouTube, maybe Netflix is pushing those kinds of ideas, and it's kind of, it's a really exciting world, that kind of sharing of knowledge. Yeah, I mean, I think it's a double edged sword as it becomes more popular and more profitable, where on the one hand you'll get a lot more creativity, people doing more interesting things with the medium, but also perhaps you get this race to the bottom, where suddenly maybe it'll be hard to find good content on podcasts because it'll be so overwhelmed by the latest bit of viral outrage. So speaking of that, jumping to Effective Altruism for a second, so much of that internet content is funded by advertisements. Just in the context of Effective Altruism, we're talking about the richest companies in the world; they're funded by advertisements essentially, Google, that's their primary source of income. Do you see that as, do you have any criticism of that source of income? Do you see that source of money as a potentially powerful source of money that could be used, well, certainly could be used for good, but is there something bad about that source of money? I think there are significant worries with it, where it means that the incentives of the company might be quite misaligned with making people's lives better, where again, perhaps the incentives are towards increasing drama and debate on your social media feed in order that more people are going to be engaged, perhaps compulsively involved with the platform. Whereas there are other business models, like having an opt in subscription service, where perhaps they have other issues, but there's much more of an incentive to provide a product that its users are just really wanting, because now I'm paying for this product. I'm paying for this thing that I want to buy, rather than I'm trying to use this thing and it's got a profit mechanism that is somewhat orthogonal to me actually just wanting to use the product. And so, I mean, in some cases it'll work better than others. I can imagine, I can in theory imagine Facebook having a subscription service, but I think it's unlikely to happen anytime soon.
Well, it's interesting and it's weird now that you bring it up that it's unlikely. For example, I pay I think 10 bucks a month for YouTube Red and I don't think I get it much for that except just for no ads, but in general it's just a slightly better experience. And I would gladly, now I'm not wealthy, in fact I'm operating very close to zero dollars, but I would pay 10 bucks a month to Facebook and 10 bucks a month to Twitter for some kind of more control in terms of advertisements and so on. But the other aspect of that is data, personal data. People are really sensitive about this and I as one who hopes to one day create a company that may use people's data to do good for the world, wonder about this. One, the psychology of why people are so paranoid. Well, I understand why, but they seem to be more paranoid than is justified at times. And the other is how do you do it right? So it seems that Facebook is, it seems that Facebook is doing it wrong. That's certainly the popular narrative. It's unclear to me actually how wrong. Like I tend to give them more benefit of the doubt because it's a really hard thing to do right and people don't necessarily realize it, but how do we respect in your view people's privacy? Yeah, I mean in the case of how worried are people about using their data, I mean there's a lot of public debate and criticism about it. When we look at people's revealed preferences, people's continuing massive use of these sorts of services. It's not clear to me how much people really do care. Perhaps they care a bit, but they're happy to in effect kind of sell their data in order to be able to kind of use a certain service. That's a great term, revealed preferences. So these aren't preferences you self report in the survey. This is like your actions speak. Yeah, exactly. So you might say, oh yeah, I hate the idea of Facebook having my data. But then when it comes to it, you actually are willing to give that data in exchange for being able to use the service. And if that's the case, then I think unless we have some explanation about why there's some negative externality from that or why there's some coordination failure, or if there's something that consumers are just really misled about where they don't realize why giving away data like this is a really bad thing to do, then ultimately I kind of want to, you know, respect people's preferences. They can give away their data if they want. I think there's a big difference between companies use of data and governments having data where, you know, looking at the track record of history, governments knowing a lot about their people can be very bad if the government chooses to do bad things with it. And that's more worrying, I think. So let's jump into it a little bit. Most people know, but actually I, two years ago, had no idea what effective altruism was until I saw there was a cool looking event in an MIT group here. I think it's called the Effective Altruism Club or a group. I was like, what the heck is that? And one of my friends said, I mean, he said that they're just a bunch of eccentric characters. So I was like, hell yes, I'm in. So I went to one of their events and looked up what's it about. It's quite a fascinating philosophical and just a movement of ideas. So can you tell me what is effective altruism? Great, so the core of effective altruism is about trying to answer this question, which is how can I do as much good as possible with my scarce resources, my time and with my money? 
And then once we have our best guess answers to that, trying to take those ideas and put them into practice, and do those things that we believe will do the most good. And we're now a community of people, many thousands of us around the world, who really are trying to answer that question as best we can and then use our time and money to make the world better. So what's the difference between the sort of classical, general idea of altruism and effective altruism? So normally when people try to do good, they often just aren't so reflective about those attempts. So someone might approach you on the street asking you to give to charity. And if you're feeling altruistic, you'll give to the person on the street. Or if you think, oh, I wanna do some good in my life, you might volunteer at a local place. Or perhaps you'll decide to pursue a career where you're working in a field that's kind of more obviously beneficial, like being a doctor or a nurse or a healthcare professional. But it's very rare that people apply the same level of rigor and analytical thinking that they apply to lots of other areas of life. So take the case of someone approaching you on the street. Imagine if that person instead was saying, hey, I've got this amazing company. Do you want to invest in it? It would be insane. No one would ever just think, oh, of course, I'll invest in this company; you'd think it was a scam. But somehow we don't have that same level of rigor when it comes to doing good, even though the stakes are more important when it comes to trying to help others than trying to make money for ourselves. Well, first of all, there is a psychology at the individual level where doing good just feels good. And so in some sense, on that pure psychological part, it doesn't matter. In fact, you don't wanna know if it does good or not, because most of the time it won't. So in a certain sense, it's understandable why altruism without the effective part is so appealing to a certain population. By the way, let's zoom out for a second. Do you think most people, two questions. Do you think most people are good? And question number two is, do you think most people wanna do good? So are most people good? I think it's just super dependent on the circumstances that someone is in. I think that the actions people take and their moral worth are just much more dependent on circumstance than on someone's intrinsic character. So is there evil within all of us? It seems like, with the better angels of our nature, there's a tendency of us as a society to tend towards good, less war. I mean, with all these metrics. Is that us becoming who we want to be, or is that some kind of societal force? What's the nature versus nurture thing here? Yeah, so in that case, I just think, yeah, so violence has massively declined over time. I think that's a slow process of cultural evolution, institutional evolution, such that now the incentives for you and I to be violent are very, very small indeed. In contrast, when we were hunter gatherers, the incentives were quite large. If there was someone who was potentially disturbing the social order in a hunter gatherer setting, there was a very strong incentive to kill that person, and people did; something on the order of 10% of deaths among hunter gatherers were murders. After hunter gatherers, when you have actual societies, is when violence can probably go up, because there's more incentive to do mass violence, right? To take over, conquer other people's lands and murder everybody in that place and so on.
Yeah, I mean, I think the total death rate from human causes does go down, but you're right that if you're in a hunter gatherer situation, the group that you're part of is very small, so you can't have massive wars; those massive communities just don't exist. But anyway, the second question, do you think most people want to do good? Yeah, I think that is true for most people. I think you see that with the fact that most people donate, and a large proportion of people volunteer. If you give people opportunities to easily help other people, they will take it. But at the same time, we're a product of our circumstances, and if it were more socially rewarded to be doing more good, if it were more socially rewarded to do good effectively rather than not effectively, then we would see that behavior a lot more. So why should we do good? Yeah, my answer to this is there's no kind of deeper level of explanation. So my answer to kind of why should you do good is, well, there is someone whose life is on the line, for example, whose life you can save via donating just actually a few thousand dollars to an effective nonprofit like the Against Malaria Foundation. That is a sufficient reason to do good. And then if you ask, well, why ought I to do that? I'm like, I just show you the same facts again. It's that fact that is the reason to do good. There's nothing more fundamental than that. I'd like to sort of make more concrete the thing we're trying to make better. So you just mentioned malaria. So there's a huge amount of suffering in the world. What are we trying to remove? So is the goal, not ultimately, but as a first step, to remove the worst of the suffering? So there's some kind of threshold of suffering that we want to make sure does not exist in the world. Or do we really naturally want to take a much further step and look at things like income inequality? So not just getting everybody above a certain threshold, but making sure that, broadly speaking, there's less injustice in the world, unfairness, by some definition; of course, it's very difficult to define fairness. Yeah, so the metric I use is how many people do we affect and by how much do we affect them? And so that can, often that means eliminating suffering, but it doesn't have to; it could be helping promote a flourishing life instead. And so if I was comparing reducing income inequality or getting people from the very pits of suffering to a higher level, the question I would ask is just a quantitative one of, if I do this first thing or the second thing, how many people am I going to benefit and by how much am I going to benefit them? Am I going to move that one person from, say, 0% well being to 10% well being? Perhaps that's just not as good as moving a hundred people from 10% well being to 50% well being. And the idea of diminishing returns is that when you're in terrible poverty, the dollar that you give goes much further than if you were in the middle class in the United States, for example. Absolutely. And this fact is really striking. So if you take even just quite a conservative estimate of how we are able to turn money into well being, economists put it as a log curve, or steeper. That means that any proportional increase in your income has the same impact on your well being. And so someone moving from $1,000 a year to $2,000 a year has the same impact as someone moving from $100,000 a year to $200,000 a year.
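(A minimal worked illustration of that claim, assuming well-being is exactly logarithmic in income, the conservative model just mentioned: log(2,000) − log(1,000) = log 2 ≈ 0.69, and log(200,000) − log(100,000) = log 2 ≈ 0.69, so a doubling of income buys the same well-being gain at either level. Under the same assumption, the marginal value of a dollar scales as 1/income, so a dollar given to someone living on $1,000 a year does roughly 100 times as much good as the same dollar given to someone living on $100,000 a year.)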
And then when you combine that with the fact that we middle class members of rich countries are 100 times richer, in financial terms, than the global poor, that means we can do a hundred times as much to benefit the poorest people in the world as we can to benefit people of our income level. And that's this astonishing fact. Yeah, it's quite incredible. A lot of these facts and ideas are just difficult to think about, because there's an overwhelming amount of suffering in the world. And even acknowledging it is difficult. Not exactly sure why that is. I mean, it's difficult because you have to bring to mind, you know, it's an unpleasant experience thinking about other people's suffering. It's unpleasant to be empathizing with it, firstly. And then secondly, thinking about it means that maybe we'd have to change our lifestyles. And if you're very attached to the income that you've got, perhaps you don't want to be confronting ideas or arguments that might cause you to use some of that money to help others. So it's quite understandable in psychological terms, even if it's not the right thing that we ought to be doing. So how can we do better? How can we be more effective? How does data help? Yeah, in general, how can we do better? It's definitely hard. And we have spent the last 10 years engaged in some deep research projects to try and answer two questions. One is, of all the many problems the world is facing, what are the problems we ought to be focused on? And then, within those problems that we judge to be the most pressing, where we use this idea of focusing on problems that are the biggest in scale, that are the most tractable, where we can make the most progress on that problem, and that are the most neglected, within them, what are the things that have the best evidence, or where we have the best guess, that they will do the most good? And so we have a bunch of organizations. So GiveWell, for example, is focused on global health and development, and has a list of seven top recommended charities. So the idea in general, and sorry to interrupt, is, so we'll talk about sort of poverty and animal welfare and existential risk. Those are all fascinating topics, but in general, the idea is there should be a group, sorry, there are a lot of groups that seek to convert money into good. And then you also, on top of that, want to have an accounting of how well they actually perform that conversion, how well they did in converting money to good. So a ranking of these different groups, a ranking of these charities. So does that apply across basically all aspects of effective altruism? So there should be a group of people, and they should report on certain metrics of how well they've done, and you should only give your money to groups that do a good job. That's the core idea. I'd make two comments. One is just, it's not just about money. So we're also trying to encourage people to work in areas where they'll have the biggest impact. Absolutely. And some areas, you know, are really people heavy but money poor. Other areas are kind of money rich and people poor. And so whether it's better to focus time or money depends on the cause area. And then the second is that you mentioned metrics, and while that's the ideal, and in some areas we do, we are able to get somewhat quantitative information about how much impact an area is having. That's not always true.
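(As a rough sketch of how that scale, tractability, and neglectedness idea is often made quantitative in the effective altruism community, not necessarily the exact formulation any one organization uses: good done per extra dollar ≈ (good done per percent of the problem solved) × (percent of the problem solved per percent increase in resources) × (percent increase in resources per extra dollar). The three factors correspond to scale, tractability, and neglectedness respectively, so a cause only looks promising overall if none of the three factors is negligible.)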
For some of the issues, like you mentioned existential risks, well, we're not able to measure in any sort of precise way how much progress we're making. And so you have to instead fall back on just rigorous argument and evaluation, even in the absence of data. So let's first sort of linger on your own story for a second. How do you yourself practice effective altruism in your own life? Because I think that's a really interesting place to start. So I've tried to build effective altruism into at least many components of my life. So on the donation side, my plan is to give away most of my income over the course of my life. I've set a bar I feel happy with, and I just donate above that bar. So at the moment, I donate about 20% of my income. Then on the career side, I've also shifted kind of what I do, where I was initially planning to work on very esoteric topics in the philosophy of logic, philosophy of language, things that are intellectually extremely interesting, but the path by which they really make a difference to the world is, let's just say, very unclear at best. And so I switched instead to researching ethics, to actually just working on this question of how we can do as much good as possible. And then I've also spent a very large chunk of my life over the last 10 years creating a number of nonprofits that, again, in different ways are tackling this question of how we can do the most good, and helping them to grow over time too. Yeah, we mentioned a few of them, with the career selection, 80,000. 80,000 Hours. 80,000 Hours is a really interesting group. So maybe also just a quick pause on the origins of effective altruism, because you can paint a picture of who the key figures are, including yourself, in the effective altruism movement today. Yeah, there are two main strands that kind of came together to form the effective altruism movement. So one was two philosophers, myself and Toby Ord at Oxford, and we had been very influenced by the work of Peter Singer, an Australian moral philosopher who had argued for many decades that because one can do so much good at such little cost to oneself, we have an obligation to give away most of our income to benefit those in extreme poverty, just in the same way that we have an obligation to run in and save a child from drowning in a shallow pond if it would just ruin your suit that cost a few thousand dollars. And we set up Giving What We Can in 2009, which is encouraging people to give at least 10% of their income to the most effective charities. And the second main strand was the formation of GiveWell, which was originally based in New York and started in about 2007. And that was set up by Holden Karnofsky and Elie Hassenfeld, who were two hedge fund dudes who were making good money and thinking, well, where should I donate? And in the same way as if they wanted to buy a product for themselves, they would look at Amazon reviews, they were like, well, what are the best charities? They found there just weren't really good answers to that question, certainly not ones that they were satisfied with. And so they formed GiveWell in order to try and work out what are those charities where they can have the biggest impact. And then from there, and some other influences, the community kind of grew and spread. Can we explore the philosophical and political space that effective altruism occupies a little bit?
So, from the little, and distant in my own lifetime, that I've read of Ayn Rand's work, her philosophy of objectivism, and it's interesting to put her philosophy in contrast with effective altruism, espouses selfishness as the best thing you can do. But it's not actually against altruism. It's just that you have that choice, but you should be selfish in it, right? Or not, maybe you can disagree here. But so it can be viewed as the complete opposite of effective altruism, or it can be viewed as similar, because the word effective is really interesting. Because if you want to do good, then you should be damn good at doing good, right? I think that would fit within the morality that's defined by objectivism. So do you see a connection between these two philosophies, and others perhaps, in this complicated space of beliefs that effective altruism is positioned as opposing or aligned with? I would definitely say that objectivism, Ayn Rand's philosophy, is a philosophy that's quite fundamentally opposed to effective altruism. In which way? Insofar as Ayn Rand's philosophy is about championing egoism. And I'm never quite sure whether the philosophy is meant to say that you just ought to do whatever will best benefit yourself, that's ethical egoism, no matter what the consequences are. Or, second, there's this alternative view, which is, well, you ought to try and benefit yourself because that's actually the best way of benefiting society. Certainly, in Atlas Shrugged, she is presenting her philosophy as a way that's actually going to bring about a flourishing society. And if it's the former, then, well, effective altruism is all about promoting the idea of altruism and saying, in fact, we ought to really be trying to help others as much as possible. So it's opposed there. And then on the second side, I would just dispute the empirical premise. It would seem, given the major problems in the world today, it would seem like this remarkable coincidence, quite suspicious, one might say, if benefiting myself was actually the best way to bring about a better world. So on that point, and I think that connects also with career selection that we'll talk about, but let's consider not objectivism, but capitalism. And the idea that you focusing on the thing that you are damn good at, whatever that is, may be the best thing for the world. Part of it is also mindset, right? The thing I love is robots. So maybe I should focus on building robots and never even think about the idea of effective altruism, which is kind of the capitalist notion. Is there any value in that idea, in just finding the thing you're good at and maximizing your productivity in this world, and thereby sort of lifting all boats and benefiting society as a result? Yeah, I think there are two things I'd want to say on that. So one is what your comparative advantage is, what your strengths are, when it comes to a career. That's obviously super important, because there are lots of career paths I would be terrible at. If I thought being an artist was the best thing one could do, well, I'd be doomed, just really quite astonishingly bad. And so I do think, at least within the realm of things that could plausibly be very high impact, choose the thing that you think you're going to be able to really be passionate about and excel at over the long term. Then there's this question of whether one should just do that in an unrestricted way and not even think about what the most important problems are.
I do think that in a kind of perfectly designed society, that might well be the case. That would be a society where we've corrected all market failures, we've internalized all externalities, and then we've managed to set up incentives such that people just pursuing their own strengths is the best way of doing good. But we're very far from that society. So if one did that, then it would be very unlikely that you would focus on improving the lives of nonhuman animals, that aren't participating in markets, or ensuring the long-run future goes well, where future people certainly aren't participating in markets, or benefiting the global poor, who do participate, but have so much less power from a starting perspective that their interests aren't accurately represented by market forces. Got it. So yeah, capitalism in its pure definition may very well ignore the people that are suffering the most, a wide swath of them. So if you could allow me this line of thinking here. I've listened to a lot of your conversations online. I find, if I can compliment you, they're very interesting conversations. Your conversation on Joe Rogan was really interesting, with Sam Harris and so on, whatever. There's a lot of stuff that's really good out there. And yet, when I look at the internet and I look at YouTube, which has certain mobs, certain swaths of right-leaning folks, whom I dearly love, I love all people, especially people with ideas, they seem to not like you very much. And I don't understand why exactly. So my own sort of hypothesis is there's a right-left divide that's absurdly caricatured in politics, at least in the United States. And maybe you're somehow pigeonholed into one of those sides. And maybe that's what it is. Maybe your message is somehow politicized. Yeah, I mean. How do you make sense of that? Because you're extremely interesting. Like, in the comments I see on Joe Rogan, there's a bunch of negative stuff. And yet, if you listen to it, the conversation is fascinating. I'm not saying I'm some kind of lefty extremist, it's just a fascinating conversation. So why are you getting some small amount of hate? So I'm actually pretty glad that effective altruism has managed to stay relatively unpoliticized, because I think the core message, to just use some of your time and money to do as much good as possible, to fight some of the problems in the world, can be appealing across the political spectrum. And we do have a diversity of political viewpoints among people who have engaged in effective altruism. We do, however, get some criticism from the left and the right. Oh, interesting. What's the criticism? Both would be interesting to hear. Yeah, so the criticism from the left is that we're not focused enough on dismantling the capitalist system that they see as the root of most of the problems we're talking about. And there I kind of disagree on partly the premise, where I don't think the relevant alternative systems would serve the animals or the global poor or future generations much better. And then also the tactics, where I think there are particular ways we can change society that would be massively beneficial on those things, that don't go via dismantling the entire system, which is perhaps a million times harder to do. Then criticism from the right, there's definitely, like in response to the Joe Rogan podcast.
There definitely were a number of Ayn Rand fans who weren't keen on the idea of promoting altruism. There was a remarkable set of ideas. Just the idea that effective altruism was unmanly, I think, was driving a lot of the criticism. Okay, so I love fighting. I've been in street fights my whole life. I'm as alpha in everything I do as it gets. And the fact that Joe Rogan said that I thought Scent of a Woman is a better movie than John Wick put me into this beta category amongst people who are basically saying, yeah, it's unmanly or it's not tough. It's not coming from some principled view of strength. So how do you think about this? Because to me, altruism, especially effective altruism, I don't know what the female version of that is, but on the male side, it's manly as fuck, if I may say so. So how do you think about that kind of criticism? I think people who would make that criticism are just occupying a state of mind that is so different from my state of mind that I kind of struggle to even understand it, where if something's manly or unmanly, or feminine or unfeminine, I'm like, I don't care. Is it the right thing to do or the wrong thing to do? So let me put it not in terms of man or woman. I don't think that's useful. But I think there's a notion of acting out of fear as opposed to out of principle and strength. Yeah. So, okay. Yeah. Here's something that I do feel as an intuition, and that I think drives some people who do find Ayn Rand attractive as a philosophy, which is a kind of taking control of your own life, and having power over how you're steering your life, and not kind of kowtowing to others, really thinking things through. I find that set of ideas just very compelling and inspirational. And I actually think effective altruism has really scratched that itch for that side of my personality, where you are just not taking the priorities that society is giving you as granted. Instead, you're choosing to act in accordance with the priorities that you think are most important in the world. And often that involves doing quite unusual things from a societal perspective, like donating a large chunk of your earnings, or working on these weird issues about AI and so on that other people might not understand. Yeah, I think that's a really gutsy thing to do. That is taking control. That's, at least at this stage, I mean, that's you taking ownership, not just of yourself, but of your presence in this world that's full of suffering, and, as opposed to being paralyzed by that notion, taking control and saying, I can do something. Yeah, I mean, that's really powerful. But I mean, sort of the one thing I personally hate about the left currently, that I think those folks do detect, is the social signaling. When you look at yourself, sort of late at night, would you do everything you're doing in terms of effective altruism if your name, because you're quite popular, but if your name was totally unattached to it, so if it was in secret? Yeah, I mean, I think I would. To be honest, I think the kind of popularity is, you know, a mixed bag, and there are serious costs. And I don't particularly love it. Like, it means you get all these people calling you a cuck on Joe Rogan. It's not the most fun thing. But you also get a lot of sort of brownie points for doing good for the world. Yeah, you do.
But I think in my ideal life, I would be in some library solving logic puzzles all day, and I'd really be learning maths and so on, with a good group of friends and so on. So your instinct for effective altruism is something deep. It's not about communicating socially. It's more in your heart, you want to do good for the world. Yeah, I mean, so we can look back to early Giving What We Can. So, you know, when we were setting this up, me and Toby, I really thought that doing this would be a big hit to my academic career, because I was now spending, at that time, more than half my time setting up this nonprofit at the crucial time when you should be producing your best academic work and so on. And it was also the case at the time, it was kind of like the Toby Ord club. You know, he was the most popular. There was this personal interest story about him and his plans to donate. And sorry to interrupt, but Toby was donating a large amount. Can you tell just briefly what he was doing? Yeah, so he made this public commitment to give everything he earned above 20,000 pounds per year to the most effective causes. And even as a graduate student, he was still donating about 15, 20% of his income, which is quite significant given that graduate students are not known for being super wealthy. That's right. And when we launched Giving What We Can, the media just loved this as a personal interest story. So the story about him and his pledge was, yeah, it was actually the most popular news story of the day. And we kind of ran the same story a year later, and it was the most popular news story of the day a year later too. And so it really was several years before, then, I was also giving more talks and starting to do more writing, and then especially when I wrote this book, Doing Good Better, there started to be more attention and so on. But deep inside, your own relationship with effective altruism, I mean, it had nothing to do with the publicity. Did you see it that way? How did the publicity connect with it? Yeah, I mean, that's kind of what I'm saying, is I think the publicity came several years afterwards. I mean, at the early stage, when we set up Giving What We Can, it was really just, every person we get to pledge 10% is, you know, something like $100,000 over their lifetime. That's huge. And so we had started with 23 members, and every single person was just this kind of huge accomplishment. And at the time, I just really thought, you know, maybe over time we'll have a hundred members and that'll be amazing. Whereas now we have over four thousand members and one and a half billion dollars pledged. That's just unimaginable compared to when I was first getting this stuff off the ground. So can we talk about poverty and the biggest problems that you think effective altruism can attack in the near term, each one in turn? So poverty obviously is a huge one. Yeah. How can we help? Great. Yeah. So poverty, absolutely this huge problem. 700 million people are in extreme poverty, living on less than two dollars per day, where what that means is what two dollars would buy in the US. So think about that. It's like some rice, maybe some beans. It's really not much. And at the same time, we can do an enormous amount to improve the lives of people in extreme poverty.
So the things that we tend to focus on are interventions in global health, and that's for a few reasons. One is that global health just has this amazing track record. Life expectancy globally is up 50% relative to 60 or 70 years ago. We've eradicated smallpox, which killed 2 million people every year, and almost eradicated polio. Second is that we just have great data on what works when it comes to global health. So we just know that bed nets protect children from dying of malaria. And then the third is that it's extremely cost-effective. So it costs $5 to buy one bed net, which protects two children for two years against malaria. If you spend about $3,000 on bed nets, then statistically speaking, you're going to save a child's life. And there are other interventions too. And so given that people are in such suffering, and we have this opportunity to do such huge good for such low cost, well, yeah, why not? So for the individual, for me today, if I wanted to help with poverty, how would I help? And I want to say, I think donating 10% of your income is a very interesting idea, or some percentage, or setting a bar and sticking to it. How do we then take the step towards the effective part? So you've conveyed some notions, but who do you give the money to? Yeah. So GiveWell, this organization I mentioned, makes charity recommendations, and some of its top recommendations, so the Against Malaria Foundation is this organization that buys and distributes these insecticide-treated bed nets. And it has a total of seven charities that it recommends very highly. So that recommendation, is it almost like a stamp of approval, or is there some metrics? What are the ways that GiveWell conveys that this is a great charity organization? Yeah. So GiveWell is looking at metrics, and it's trying to compare charities ultimately in the number of lives that you can save, or an equivalent benefit. So one of the charities it recommends is GiveDirectly, which simply just transfers cash to the poorest families, where a poor family will get a cash transfer of $1,000, and they kind of regard that as the baseline intervention, because it's so simple, and people know how to benefit themselves. That's quite powerful, by the way. So before GiveWell, before the effective altruism movement, I imagine there was a huge amount of corruption, funny enough, in charity organizations, or misuse of money. So there was nothing like GiveWell before that? No. I mean, there were some. So, on charity corruption, obviously there's some, but I don't think it's a huge issue. They were also just focusing on the wrong things. Prior to GiveWell, there were some organizations like Charity Navigator, which were more aimed at worrying about corruption and so on. So they weren't saying, these are the charities where you're going to do the most good. Instead, it was like, how good are the charity's financials? How good is its health? Are they transparent? And yeah, so that would be more useful for weeding out some of the worst charities. So GiveWell has just taken it a step further, sort of in this 21st century of data. It's actually looking at the effective part. Yeah. So it's like, you know, The Wirecutter: if you want to buy a pair of headphones, they will just look at all the headphones and be like, these are the best headphones you can buy. That's the idea with GiveWell.
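To make the arithmetic quoted above concrete, here is a minimal back-of-the-envelope sketch of the kind of cost-effectiveness comparison being described. The only figures taken from the conversation are the round numbers MacAskill quotes, roughly $5 per insecticide-treated net and roughly $3,000 in nets per statistical life saved; the function names, the 10% pledge, and the $50,000 income are illustrative assumptions, not GiveWell's actual model.

```python
# Back-of-the-envelope cost-effectiveness sketch (illustrative only).
# The $5-per-net and ~$3,000-per-life figures are the round numbers
# quoted in the conversation; real charity evaluations use far more
# detailed models with many adjustments.

COST_PER_NET_USD = 5.0      # rough cost of one insecticide-treated bed net
COST_PER_LIFE_USD = 3000.0  # rough spend on nets per statistical life saved


def nets_bought(donation_usd: float) -> float:
    """Number of bed nets a donation buys at the quoted price."""
    return donation_usd / COST_PER_NET_USD


def lives_saved(donation_usd: float) -> float:
    """Expected statistical lives saved at the quoted cost per life."""
    return donation_usd / COST_PER_LIFE_USD


if __name__ == "__main__":
    yearly_donation = 0.10 * 50_000  # hypothetical: a 10% pledge on a $50,000 income
    print(f"Donation:       ${yearly_donation:,.0f}")
    print(f"Nets bought:    {nets_bought(yearly_donation):,.0f}")
    print(f"Lives saved/yr: {lives_saved(yearly_donation):.1f}")
```

On these rough numbers, a 10% pledge on a $50,000 income buys about a thousand nets and corresponds to between one and two statistical lives saved per year; this is the kind of comparison that GiveWell-style evaluations formalize with much more care.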
Okay. So do you think there's a bar of what suffering is? And do you think one day we can eradicate suffering in our world? Yeah. Amongst humans? Let's talk humans for now. Talk humans. But in general, yeah, actually. So a colleague of mine coined the term abolitionism for the idea that we should just be trying to abolish suffering. And in the long run, I mean, I don't expect it anytime soon, but I think we can. I think that would require quite drastic changes to the way society is structured, and perhaps even changes to human nature. But I do think that suffering, whenever it occurs, is bad, and we should want it to not occur. So there's a line, a gray area, between struggle and suffering. Now, I'm Russian, so I romanticize some aspects of suffering. So one question is, do we want to eradicate all struggle in the world? There's an idea, you know, that the human condition inherently has suffering in it, and it's a creative force. It's the struggle of our lives, and we somehow grow from that. How do you think about that? I agree that's true. So, you know, often great artists can also be suffering from major health conditions or depression and so on. They come from abusive parents. Most great artists, I think, come from abusive parents. Yeah, that seems to be at least commonly the case. But I want to distinguish between suffering being instrumentally good, you know, it causes people to produce good things, and whether it's intrinsically good, and I think intrinsically it's always bad. And so if we can produce these great achievements via some other means, where, you know, if we look at the scientific enterprise, we've produced incredible things often from people who aren't suffering, who have pretty good lives, they're driven instead of, you know, being pushed by a certain sort of anguish, they're being driven by intellectual curiosity. If we can instead produce a society where it's all carrot and no stick, that's better from my perspective. Yeah, but I'm going to disagree with the notion that that's possible. But I would say most of the suffering in the world is not productive. So I would dream of effective altruism curing that suffering. And then I would say that there is some suffering that is productive that we want to keep, but that's not even the focus, because most of the suffering is just absurd and needs to be eliminated. So let's not even romanticize this notion I have. But nevertheless, struggle has some kind of inherent value, to me at least. You're right, there are some elements of human nature that also have to be modified in order to cure all suffering. Yeah, I mean, there's an interesting question of whether it's possible. So at the moment, you know, most of the time we're kind of neutral, and then we burn ourselves and that's negative, and it's really good that we get that negative signal, because it means we won't burn ourselves again. There's a question, could you design agents, humans, such that you're not hovering around the zero level, you're hovering at bliss, and then you touch the flame and you're like, oh no, you're just at slightly worse bliss, but that's really bad compared to the bliss you were normally in, so that you have a gradient of bliss instead of pain and pleasure.
On that point, I think it's a really important point about the experience of suffering, the relative nature of it. Having grown up in the Soviet Union, we were quite poor by any measure in my childhood, but it didn't feel like being poor, because everybody around you was poor. And then in America, I feel like for the first time I began to feel poor. Yeah. Yeah, because there are cultural aspects to it that really emphasize that it's good to be rich. And then there's just the notion that there is a lot of income inequality, and therefore you experience that inequality. That's where suffering can come from. So what do you think about the inequality of suffering? Do you think we have to think about that as part of effective altruism? Yeah, I think things vary in terms of whether you get benefits or costs from them in relative terms or in absolute terms. So a lot of the time, yeah, there's this hedonic treadmill, where money is useful because it helps you buy things, it's good for you because it helps you buy things, but there's also a status component too, and that status component is kind of zero-sum. As you were saying, in Russia, you know, no one felt poor because everyone around you was poor, whereas now you've got these other people who are super rich, and maybe that makes you feel less good about yourself. There are some other things, however, which are just intrinsically good or bad. So commuting, for example, people just hate it. Knowing that other people are commuting too doesn't make it any less bad. But to push back on that for a second, I mean, yes, but also if some people were, you know, on horseback, your commute on the train might feel a lot better. Yeah. You know, there is a relative nature to it. I mean, everybody's complaining about society today, forgetting, as The Better Angels of Our Nature points out, how much better things are, how technology has fundamentally improved most of the world's lives. Yeah, and actually there's some psychological research on the well-being benefits of volunteering, where people who volunteer tend to just feel happier about their lives, and one of the suggested explanations is that it extends your reference class. So no longer are you comparing yourself to the Joneses, who have their slightly better car, because you realize that there are people in much worse conditions than you, and so now your life doesn't seem so bad. That's actually, on the psychological level, one of the fundamental benefits of effective altruism. I mean, I guess it's the altruism part of effective altruism: exposing yourself to the suffering in the world allows you to be happier, and actually allows you, in a sort of meditative, introspective way, to realize that you don't need most of the wealth you have to be happy. Absolutely. I mean, I think effective altruism has been this huge benefit for me, and I really don't think that if I had more money that I was living on, that would change my level of well-being at all. Whereas engaging in something that I think is meaningful, that I think is steering humanity in a positive direction, that's extremely rewarding. And so, yeah, despite my best attempts at sacrifice, I think I've actually ended up happier as a result of engaging in effective altruism than I would have done otherwise.
That's such an interesting idea. Yeah, so let's talk about animal welfare. Sure, easy question. What is consciousness? Yeah, especially as it has to do with the capacity to suffer. There seems to be a connection between how conscious something is, the amount of consciousness, and its ability to suffer, and that all comes into play when we think about how much suffering there is in the world with regard to animals. So how do you think about animal welfare and consciousness? Okay. Well, consciousness, easy question. Yeah, I mean, I think we don't have a good understanding of consciousness. And by consciousness, I'm meaning what it feels like to be you, the subjective experience, which seems to be different from everything else we know about in the world. I think it's clear it's very poorly understood at the moment. My best guess is it has something to do with information processing, so the fact that the brain is a computer, or something like a computer. So that would mean that very advanced AI could be conscious, that information processors in general could be conscious, with some suitable complexity. But then there's also a question of whether greater complexity creates some kind of greater consciousness, which relates to animals. Yeah, right. If it's an information processing system and it's smaller and smaller, is an ant less conscious than a cow, less conscious than a monkey? Yeah, and again, this is a super hard question, but my best guess is yes. I think, well, consciousness, it's not some magical thing that appears out of nowhere. It's not, you know, Descartes thought it just comes in from this other realm and then enters through the pineal gland in your brain, and that's the soul and it's conscious. So it's got something to do with what's going on in your brain. A chicken has a brain one three-hundredth the size of the brain that you have. Ants, I don't know how small, maybe it's a millionth the size. My best guess, which I may well be wrong about because this is so hard, is that in some relevant sense the chicken is experiencing consciousness to a lesser degree than the human, and the ant significantly less again. I don't think it's as little as one three-hundredth as much. I think, and anyone who's ever seen a chicken, there are evolutionary reasons for thinking that the ability to feel pain comes on the scene relatively early on, and we have lots of our brain dedicated to stuff that doesn't seem to have anything to do with consciousness, language processing and so on. So there are a lot of complicated questions there that we can't ask the animals about, but it seems that there are easy questions in terms of suffering, which is things like factory farming, that could be addressed. Yeah. Is that the lowest hanging fruit, if I may use crude terms here, of animal welfare? Absolutely. I think that's the lowest hanging fruit. So at the moment we raise and kill about 50 billion animals every year. So how many? 50 billion? Yeah. So for every human on the planet, several times that number are being killed. And the vast majority of them are raised in factory farms, where, basically, whatever your view on animals, I think you should agree, even if you think, well, maybe it's not bad to kill an animal, maybe it's fine if the animal was raised in good conditions, that's just not the empirical reality. The empirical reality is that they are kept in incredible cage confinement.
They are debeaked or de-tailed without anesthetic. Chickens would otherwise often peck each other to death because they're under such stress. It's really, you know, I think when a chicken gets killed, that's the best thing that has happened to the chicken in the course of its life. And it's also completely unnecessary. This is in order to save a few pence off the price of meat or the price of eggs. And we have indeed found it's also just inconsistent with consumer preferences. People who buy the products, when you do surveys, are extremely against suffering in factory farms. It's just that they don't appreciate how bad it is, and they tend to go with the easy options. And so the most effective programs I know of at the moment are nonprofits that go to companies and work with them to get them to take a pledge to cut certain sorts of animal products, like eggs from hens in cage confinement, out of their supply chain. And it's now the case that the top 50 food retailers and fast food companies have all made these kinds of cage-free pledges, and when you do the numbers, you get the conclusion that every dollar you give to these nonprofits results in hundreds of chickens being spared from cage confinement. And then they're working on other types of animals, other products too. So is that the most effective way, to have a ripple effect essentially, as opposed to directly having regulation from on top that says you can't do this? So I would be more open to the regulation approach, but at least in the US there's quite intense regulatory capture from the agricultural industry. And so the attempts we've seen to change regulation have been a real uphill struggle. There are some examples of ballot initiatives, where the people have been able to vote to say, we want to ban eggs from caged conditions, and that's been huge, that's been really good, but beyond that it's much more limited. So I've been really interested in the idea of hunting in general, and wild animals, and seeing nature as a form of cruelty that I am ethically more okay with, just from my perspective. And then I read about wild animal suffering. I'm just giving you the notion of how I felt, because animal factory farming is so bad that living in the woods seemed good. Yeah. And yet when you actually start to think about it, I mean, all of the animals in the animal world are living in like terrible poverty, right? Yeah. So you have all the medical conditions, all of that. I mean, they're living horrible lives that could be improved. That's a really interesting notion, one that maybe isn't even useful to talk about, because factory farming is such a big thing to focus on, but it's nevertheless an interesting notion, to think of all the animals in the wild as suffering in the same way that humans in poverty are suffering. Yeah, I mean, and often even worse. So many animals reproduce via r-selection, so you have a very large number of children in the expectation that only a small number survive. And so for those animals, almost all of them just live short lives where they starve to death. So yeah, there's huge amounts of suffering in nature. I don't think we should pretend that it's this kind of wonderful paradise for most animals. Yeah, their lives are filled with hunger and fear and disease.
Yeah, I agree with you entirely that when it comes to animal welfare, we should focus on factory farming, but we should also be aware of the reality of what life for most animals is like. So let's talk about a topic I've talked a lot about, and you've actually quite eloquently talked about, which is the third priority that effective altruism considers really important: existential risks. Yeah. When you think about the existential risks that are facing our civilization, what's before us? What concerns you? What should we be thinking about, especially from an effective altruism perspective? Great. So the reason I started getting concerned about this was thinking about future generations, where the key idea is just, well, future people matter morally. There are vast numbers of future people. If we don't cause our own extinction, there's no reason why civilization might not last a very long time. A million years is how long we'd last as a typical mammalian species, or a billion years is when the Earth is no longer habitable, or if we can take to the stars, then perhaps it's trillions of years beyond that. So the future could be very big indeed, and it seems like we're potentially very early on in civilization. Then the second idea is just, well, maybe there are things that are going to really derail that, things that could prevent us from having this long, wonderful civilization, and instead could cause our own extinction, or otherwise perhaps lock ourselves into a very bad state. And what ways could that happen? Well, on causing our own extinction, the development of nuclear weapons in the 20th century at least put on the table that we now had weapons powerful enough that you could very significantly destroy society. Perhaps an all-out nuclear war would cause a nuclear winter; perhaps that would be enough for the human race to go extinct. Why do you think we haven't done it? Sorry to interrupt. Why do you think we haven't done it yet? Is it surprising to you that, having had for the past few decades several thousand active, ready-to-launch nuclear warheads, we have not used them ever since Hiroshima and Nagasaki? I think it's a mix of luck. So I think it's definitely not inevitable that we haven't used them. So John F. Kennedy, during the Cuban Missile Crisis, put the odds of a nuclear exchange between the US and the USSR at somewhere between one in three and even. So, you know, we really did come close. At the same time, I do think mutually assured destruction is a reason why people don't go to war, why nuclear powers don't go to war. Do you think that holds? If you can linger on that for a second. My dad is a physicist, amongst other things, and he believes that nuclear weapons are actually just really hard to build, which is one of the really big benefits of them currently, so that it's very hard, if you're crazy, to acquire a nuclear weapon. So mutually assured destruction seems to work better when it's nation states, when it's serious people, even if they're a little bit, you know, dictatorial and so on. Do you think this mutually assured destruction idea will carry us, how far will it carry us, in terms of different kinds of weapons?
Oh, yeah, I think your point that nuclear weapons are very hard to build and relatively easy to control, because you can control fissile material, is a really important one, and future technology that's equally destructive might not have those properties. So, for example, if in the future people are able to design viruses, perhaps using a DNA printing kit that one can just buy, and in fact there are companies in the process of creating home DNA printing kits, well, then perhaps that's just totally democratized. Perhaps the power to wreak huge destruction is in the hands of most people in the world, or certainly most people with effort. And then, yeah, I no longer trust mutually assured destruction, because for some people the idea that they would die is just not a disincentive. There was a Japanese cult, for example, Aum Shinrikyo, in the 90s, where what they believed was that Armageddon was coming. If you died before Armageddon, you would get good karma, you wouldn't go to hell. If you died during Armageddon, maybe you would go to hell. And they had a biological weapons program and a chemical weapons program. When they were finally apprehended, they had stocks of sarin gas that were sufficient to kill 4 million people, and they engaged in multiple terrorist acts. If they had had the ability to print a virus at home, that would have been very scary. So it's not impossible to imagine groups of people that hold that kind of belief, of death, of suicide, as a good thing, a passage into the next world and so on, and if you connect them with some weapons, then ideology and weaponry may create serious problems for us. Let me ask you a quick question on that. What do you think is the line between killing most humans and killing all humans? How hard is it to kill everybody? Have you thought about this? I've thought about it a bit. I think it is very hard to kill everybody. So in the case of, let's say, an all-out nuclear exchange, and let's say that leads to nuclear winter, we don't really know, but it might well happen, that would, I think, result in billions of deaths. Would it kill everybody? It's quite hard to see how it would kill everybody, for a few reasons. One is just that there are so many people. Yes, seven and a half billion people. So this bad event has to kill almost all of them. Secondly, we live in such a diversity of locations. So a nuclear exchange, or a virus, has to kill people who live on the coast of New Zealand, which is going to be climatically much more stable than other areas in the world, or people who are on submarines, or who have access to bunkers. So there's just, I'm sure there's like two guys in Siberia, just badass. Human nature somehow just perseveres. Yeah, and then the second thing is just, if there's some catastrophic event, people really don't want to die, so there's going to be huge amounts of effort to ensure that it doesn't affect everyone. Have you thought about what it takes to rebuild a society with smaller numbers, like how big of a setback these kinds of things are? Yeah, so that's something where there's real uncertainty, I think, where at some point you just lose sufficient genetic diversity such that you can't come back. It's unclear how small that population is, but if you've only got, say, a thousand people, or fewer than a thousand, then maybe that's small enough.
What about human knowledge? And then there's human knowledge. I mean, it's striking how quick, on geological timescales or evolutionary timescales, the progress in human knowledge has been. Like agriculture, we only invented it in 10,000 BC; cities were only, you know, 3000 BC; whereas a typical mammal species lasts half a million to a million years. Do you think it's inevitable in some sense, agriculture, everything that came after, the Industrial Revolution, cars, planes, the internet, that level of innovation, do you think it's inevitable? I think so, given how quickly it arose. So in the case of agriculture, I think that was dependent on climate. The glacial period was over, the earth warmed up a bit, and that made it much more likely that humans would develop agriculture. When it comes to the Industrial Revolution, it only took a few thousand years from cities to the Industrial Revolution. So if we think, okay, we've gone back to, let's say, an agricultural era, there's no reason why we would go extinct in the coming tens of thousands of years or hundreds of thousands of years. It just seems it would be very surprising if we didn't rebound, unless there's some special reason that makes things different. Yes. So perhaps we just have a much greater disease burden now, so HIV exists, it didn't exist before, and perhaps that's kind of latent and being suppressed by modern medicine and sanitation and so on, but would be a much bigger problem for some utterly destroyed society that was trying to rebound. Or maybe there's just something we don't know about. So another existential risk comes from the mysterious, the beautiful, artificial intelligence. Yeah. So what's the shape of your concerns about AI? I think there are quite a lot of concerns about AI, and sometimes the different risks don't get distinguished enough. So the kind of classic worry, most closely associated with Nick Bostrom and Eliezer Yudkowsky, is that we at some point move from having narrow AI systems to artificial general intelligence, and you get this very fast feedback effect where AGI is able to build, you know, artificial intelligence helps you to build greater artificial intelligence, and we have this one system that's suddenly very powerful, far more powerful than others, perhaps far more powerful than the rest of the world combined. And then, secondly, it has goals that are misaligned with human goals. And so it pursues its own goals, it realizes, hey, there's this competition, namely from humans, it would be better if we eliminated them, in just the same way as Homo sapiens eradicated the Neanderthals, and in fact killed off most large animals that walked the planet. So that's kind of one set of worries. I think that's not my main concern, though I don't think these worries should be dismissed as science fiction, I think it's something we should be taking very seriously. But it's not the thing you visualize when you're concerned about the biggest near-term risks. Yeah, I think it's one possible scenario that would be astronomically bad, but I think there are other scenarios that would also be extremely bad, comparably bad, that are more likely to occur. So one is just, we are able to control AI, so we're able to get it to do what we want it to do, and perhaps there's not this fast takeoff of AI capabilities within a single system.
It's distributed across many systems that do somewhat different things, but you do get very rapid economic and technological progress as a result, and that concentrates power into the hands of a very small number of individuals, perhaps a single dictator. And secondly, that single individual, or small group of individuals, or single country, is then able to lock in their values indefinitely via transmitting those values to artificial systems that have no reason to die, you know, their code is copyable. Perhaps Donald Trump or Xi Jinping creates their kind of AI progeny in their own image. And once you have a society that's controlled by AI, you no longer have one of the main drivers of change historically, which is the fact that human lifespans are only a hundred years, give or take. So that's really interesting. So as opposed to killing off all humans, it's locking in, creating a hell on earth, basically, a set of principles under which the society operates that's extremely undesirable, so everybody is suffering indefinitely. Or, I mean, it also doesn't need to be hell on earth. It could just be the wrong values. So we talked at the very beginning about how I want to see this kind of diversity of different values and exploration, so that we can just work out what is morally good, what is bad, and then pursue the thing that's good. So actually, on the idea of wrong values, probably the beautiful thing is there's no such thing as the right and wrong values, because we don't know the right answer. We just kind of have a sense of which value is more right, which is more wrong. So any kind of lock-in makes a value wrong, because it prevents exploration of this kind. Yeah, and just, you know, imagine if there were fascist values, imagine if there was Hitler's utopia or Stalin's utopia, or Donald Trump's or Xi Jinping's, forever. You know, how good or bad would that be compared to the best possible future we could create? And my suggestion is, it would really suck compared to the best possible future we could create. And you're just one individual. There are some individuals for whom Donald Trump's is perhaps the best possible future. And so that's the whole point of us individuals exploring the space together. Exactly. Yeah, and trying to figure out which is the path that will make America great again. Yeah, exactly. So how can effective altruism help? I mean, this is a really interesting notion you're describing, of artificial intelligence being used as an extremely powerful technology in the hands of very few, potentially one person, to create some very undesirable effect. So the source of the undesirableness there is the human. AI is just a really powerful tool. So whether it's that, or whether AGI just runs away from us completely, how, as individuals, as people in the effective altruism movement, can we think about something like this? I understand poverty and animal welfare, but this is a far-out, incredibly mysterious and difficult problem. Great. Well, I think there are three paths as an individual, if you're thinking about career paths you can pursue. So one is going down the line of technical AI safety.
So this is most relevant to the kind of AI-takeover scenarios. This is technical work on current machine learning systems, sometimes going more theoretical too, on how we can ensure that an AI is able to learn human values and able to act in the way that you want it to act. And that's a pretty mainstream issue and approach in machine learning today, so we definitely need more people doing that. Second is the policy side of things, which I think is even more important at the moment, which is, how should developments in AI be managed on a political level? How can you ensure that the benefits of AI are widely distributed, that power isn't being concentrated in the hands of a small set of individuals? How do you ensure that there aren't arms races between different AI companies that might result in them cutting corners with respect to safety? And so there, the input we as individuals can have, we're not talking about money, we're talking about effort, we're talking about career choices. We're talking about career choice, yeah. But then it is the case that, supposing you're like, I've already decided my career, I'm doing something quite different, you can contribute with money too, where at the Center for Effective Altruism we set up the Long-Term Future Fund. So if you go on to effectivealtruism.org, you can donate, where a group of individuals will then work out what's the highest value place they can donate to work on existential risk issues, with a particular focus on AI. What's path number three? This was path number three, donations were the third option I was thinking of. Okay. And then, yeah, you can also donate directly to organizations working on this, like the Center for Human-Compatible AI at Berkeley, the Future of Humanity Institute at Oxford, or other organizations too. Does AI keep you up at night, this kind of concern? Yeah, it's kind of a mix, where I think it's very likely things are going to go well. I think we're going to be able to solve these problems. I think that's by far the most likely outcome, at least over the next... By far the most likely? So if you look at all the trajectories running away from our current moment in the next hundred years, you see AI creating destructive consequences as a small subset of those possible trajectories? Or at least, yeah, kind of eternal destructive consequences, I think that being a small subset. At the same time, it still freaks me out. I mean, when we're talking about the entire future of civilization, then small probabilities, you know, a 1% probability, that's terrifying. What do you think about Elon Musk's strong worry that we should be really concerned about existential risks of AI? Yeah, I mean, broadly speaking, I think he's right. I think if we talked, we would probably have very different probabilities on how likely it is that we're doomed. But again, when it comes to talking about the entire future of civilization, it doesn't really matter if it's 1% or if it's 50%, we ought to be taking every possible safeguard we can to ensure that things go well rather than poorly. Last question: if you yourself could eradicate one problem from the world, what would that problem be? That's a great question. I don't know if I'm cheating in saying this, but I think the thing I would most want to change is just the fact that people don't actually care about ensuring the long-run future goes well.
People don't really care about future generations. They don't think about it. It's not part of their aims. In some sense, you're not cheating at all because in speaking the way you do, in writing the things you're writing, you're doing, you're addressing exactly this aspect. Exactly. That is your input into the effective altruism movement. So for that, Will, thank you so much. It's an honor to talk to you. I really enjoyed it. Thanks so much for having me on. If that were the case, we'd probably be pretty generous. Next round's on me, but that's effectively the situation we're in all the time. It's like a 99% off sale or buy one get 99 free. Might be the most amazing deal you'll see in your life. Thank you for listening and hope to see you next time.
William MacAskill: Effective Altruism | Lex Fridman Podcast #84
The following is a conversation with Roger Penrose, physicist, mathematician, and philosopher at University of Oxford. He has made fundamental contributions in many disciplines from the mathematical physics of general relativity and cosmology to the limitations of a computational view of consciousness. In his book, The Emperor's New Mind, Roger writes that, quote, "'Children are not afraid to pose basic questions that may embarrass us as adults to ask.' In many ways, my goal with this podcast is to embrace the inner child that is not constrained by how one should behave, speak, and think in the adult world. Roger is one of the most important minds of our time, so it was truly a pleasure and an honor to talk with him. This conversation was recorded before the outbreak of the pandemic. For everyone feeling the medical, psychological, and financial burden of the crisis, I'm sending love your way. Stay strong, we're in this together, we'll beat this thing. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. Quick summary of the ads. Two sponsors, ExpressVPN and Cash App. Please consider supporting the podcast by getting ExpressVPN at expressvpn.com slash lexpod and downloading Cash App and using code LEX PODCAST. This show is presented by Cash App, the number one finance app in the app store. When you get it, use code LEX PODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of the fractional orders is an algorithmic marvel. So big props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up to the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier. So again, if you get Cash App from the App Store or Google Play and use the code LEX PODCAST, you get $10 and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. This show is sponsored by ExpressVPN. Get it at expressvpn.com slash lexpod to get a discount and to support this podcast. I've been using ExpressVPN for many years. I love it. It's easy to use, press the big power on button and your privacy is protected. And if you like, you can make it look like your location is anywhere else in the world. I might be in Boston now, but I can make it look like I'm in New York, London, Paris or anywhere else. This has a large number of obvious benefits. Certainly, it allows you to access international versions of streaming websites like the Japanese Netflix or the UK Hulu. ExpressVPN works on any device you can imagine. I use it on Linux, shout out to Ubuntu, Windows, Android, but it's available everywhere else too. Once again, get it at expressvpn.com slash lexpod to get a discount and to support this podcast. And now, here's my conversation with Roger Penrose. 
You mentioned in conversation with Eric Weinstein on the Portal podcast that 2001 Space Odyssey is your favorite movie. Which aspect, if you could mention, of its representation of artificial intelligence, science, engineering connected with you? There are all sorts of scenes there which are so amazing. And how science was so well done. I mean, people say, oh no, Interstellar is this amazing movie which is the most scientific movie. I thought it's not a patch on 2001. I mean, 2001, they really went into all sorts of details. And regarding getting the free fall well done and everything, I thought it was extremely well done. So just the details were mesmerizing in terms of this. And also things like the scene where at the beginning they have these sort of human ancestors which are sort of apes becoming humans. The monolith. Yes, and well, it's the one where he throws the bone up into the air and then it becomes this. I mean, that's an amazing sequence there. What do you make of the monolith? Does it have any scientific or philosophical meaning to you, this kind of thing that sparks innovation? Not really. That comes from Arthur C. Clarke. I was always a great fan of Arthur C. Clarke. So it's just a nice plot device. Yeah, that plot is excellent, yes. So Hal 9000 decides to get rid of the astronauts because he, it, she believes that they will interfere with the mission. That's right. Yeah, well, there you are. It's this view. I don't know whether I disagree with it because in a certain sense it was telling you it's wrong. See, the machine seemed to think it was superior to the human and so it was entitled to get rid of the human beings and run the show itself. Well, do you think Hal did the right thing? Do you think Hal's flawed evil? Or if we think about systems like Hal, would we want Hal to do the same thing in the future? What was the flaw there? Well, you're basically touching on questions. You see, it's one supposed to believe that Hal was actually conscious. I mean, it was played rather that way, as though Hal was a conscious being. Because Hal showed some pain, some cognizance, Hal appeared to be cognizant of what it means to die. Yes. And therefore had an inkling of consciousness. Yeah, I mean, I'm not sure that aspect of it was made completely clear, whether Hal was really just a very sophisticated computer, which really didn't actually have these feelings and somehow, but you're right, it didn't like the idea of being turned off. How does it change things if Hal was or wasn't conscious? Well, it might say that it would be wrong to turn it off if it was actually conscious. I mean, these questions arise if you think. I mean, AI, one of the ideas, it's sort of a mixture in a sense. You say, if it's trying to do everything a human can do, and if you take the view that consciousness is something which would come along when the computer is sufficiently complicated, sufficiently whatever criterion you use to characterize its consciousness in terms of some computational criteria, computational criterion. So how does consciousness change our evaluation of the decision that Hal made? I guess I was trying to say that people are a bit confused about this, because if they say these machines will become conscious, but just simply because it's a degree of computation, and when you get beyond that certain degree of computation, it will become conscious, then of course you have all these problems. 
I mean, you might say, well, one of the reasons you're doing AI is because you want to send a device out to some distant planet, and you don't want to send a human out there, because then you'd have to bring it back again, and that costs you far more than just sending it there and leaving it there. But if this device is actually a conscious entity, then you have to face up to the fact that that's immoral. And so the mere fact that you're making some AI device and thinking that that removes your responsibility to it would be incorrect. And so this is a sign of a flaw in that kind of viewpoint. I'm not sure how people take it very seriously. I mean, I had this curious conversation with, I'm going to forget names, I'm afraid, because this is what happens to me at the wrong moment, Hofstadter, Douglas Hofstadter. Douglas Hofstadter, yeah. And he'd written this book, Gödel, Escher, Bach, which I liked. I thought it was a fantastic book. But I didn't agree with his conclusion from Gödel's theorem. I think he got it wrong, you see. Well, I'll just tell you my story, you see, because I'd never met him. And then I knew I was going to meet him. On the occasion I realized he was coming in, he wanted to talk to me, and I said, that's fine. And I thought in my mind, well, I'm going to paint him into a corner, you see, because I'll use his arguments to convince him that certain numbers are conscious. Some integers, large enough integers, are actually conscious. And this was going to be my reductio ad absurdum. So I started having this argument with him. He simply leapt into the corner. He didn't even need to be painted into it. He took the view that certain numbers were conscious. I thought that was a reductio ad absurdum, but he seemed to think it was a perfectly reasonable point of view. Without the absurdum there. Yes. Interesting. But the thing you mentioned is the intuition that a lot of the people, at least in the artificial intelligence world, had and have, I think. They don't make it explicit, but it's that if you increase the power of computation, naturally consciousness will emerge. Yes, I think that's what they think. But basically that's because they can't think of anything else. Well, that's right. And so it's a reasonable thing. I mean, you think, what does the brain do? Well, it does do a lot of computation. I think most of what you would actually call computation is done by the cerebellum. I mean, this is one of the things that people don't much mention. I mean, I come to this subject from the outside, and certain things strike me which you hardly ever hear mentioned. I mean, you hear mentioned the left-right business. You move your right arm, that's the left side of the brain, and so on and all that sort of stuff. And it's more than that. If you have these plots of different parts of the brain, there are two of these things called the homunculi, where you see these pictures of a distorted human figure showing different parts of the brain controlling different parts of the body. And it's not simply things like, okay, the right hand is controlled, both sensory and motor, on the left side, the left hand on the right side. It's more than that. Vision is at the back, basically, your feet at the top. And it's as though it's about the worst organization you could imagine. So it can't just be a mistake in nature. There's something going on there. And this is made more pronounced when you think of the cerebellum.
The cerebellum, when I was first thinking about these things, I was told that it had half as many neurons or something like that, comparable. And now they tell me it's got far more neurons than the cerebrum. The cerebrum is this sort of convoluted thing at the top that people always talk about; the cerebellum is this thing that just looks a bit like a ball of wool, right at the back, underneath. It's got more neurons, it's got more connections. Computationally, it's got much more going on than the cerebrum. But as far as we know, and this is slightly controversial, the cerebellum is entirely unconscious. So the actions, you have a pianist who plays an incredible piece of music, and he moves his little finger onto this little key and hits it at just the right moment. Does he or she consciously will that movement? No. Okay, the consciousness is coming in, it's probably to do with the feeling of the piece of music that's being performed and that sort of thing. But the details of what's going on are controlled, I would think, almost entirely by the cerebellum. That's where you have this precision and the really detailed control. I mean, you think of a tennis player or something, does that tennis player think exactly which muscles should be moved in what direction and so on? No, of course not. But he or she will maybe think, well, if the ball is angled in such a way into that corner, that will be tricky for the opponent. And the details of that are all done largely with the cerebellum. That's where all the precise motions are, but it's unconscious. So why is it interesting to you that so much computation is done in the cerebellum and yet it is unconscious? Because it cuts against the view that somehow it's computation which is producing the consciousness. And here you have an incredible amount of computation going on, and as far as we know, it's completely unconscious. So what's the difference? And I think it's an important question. What's the difference? Why is it the cerebrum, all this very peculiar stuff that's very hard to make sense of from a computational perspective, like having everything have to cross over to the other side and do something which looks completely inefficient? And you've got funny things like the frontal lobe and, what do we call the lobes, the place where they come together, where you have the different parts of the control, one to do with motor and the other to do with sensory, and they're sort of opposite each other rather than being directly connected. It's not as though you've got electrical circuits. There's something else going on there. So the idea that it's just like a complicated computer seems to me to be completely missing the point. There must be a lot of computation going on, but the cerebellum seems to be much better at doing that than the cerebrum is. So for sure. I think what explains it is like half hope and half we don't know what's going on. And therefore, from the computer science perspective, you hope that a Turing machine can achieve general intelligence. Well, you have this wonderful thing about Turing and Gödel and Church and Curry and various people, particularly Turing, and I guess Post was the other one, these people who developed the idea of what a computation is. And there were different ideas of what a computation is, developed differently. I mean, Church's way of doing it was very different from Turing's, but then they were shown to be equivalent.
And so the view emerged that what we mean by computation is a very clear concept. And one of the wonderful things that Turing did was to show that you could have what we call the universal Turing machine. You just have to have a certain finite device. Okay, it has to have an unlimited storage space, which is accessible to it, but the actual computation, if you like, is performed by this one universal device. And so the view comes away, well, you have this universal Turing machine, and maybe the brain is something like that, a universal Turing machine, and it's got maybe not unlimited storage, but a huge storage accessible to it. And this model is one which is what's used in ordinary computation. It's a very powerful model. And the universality of computation is very useful. You could have some problem and you may not see immediately how to put it onto a computer, but if it is something of that nature, then there are all sorts of subprograms and subroutines and all that. I mean, I learned a little bit of computing when I was a student, but not very much. But it was enough to get the general ideas. And there's something really pleasant about a formal system like that. Yeah. Where you can start discussing about what's provable, what's not, these kinds of things. And you've got a notion, which is an absolute notion, this notion of computability, and you can address when mathematical problems are computably solvable and when they're not. So. And it's a very beautiful area of mathematics, and it's a very powerful area of mathematics. And it underlies the whole sort of, well, the principles of computing machines that we have today. Could you say, what is Gödel's Incompleteness Theorem? And how does it, maybe also say, is it heartbreaking to you? And how does it interfere with this notion of computation and consciousness? Sure. Well, these are basically ideas which I formulated in my first year as a graduate student in Cambridge. I did my undergraduate work in mathematics in London, and I had a colleague, Ian Percival. We used to discuss things like computational and logical systems quite a lot. I'd heard about Gödel's theorem. I was a bit worried by the idea that it seemed to say there were things in mathematics that you could never prove. And so when I went to Cambridge as a graduate student, I went to various courses. You see, I was doing pure mathematics. I was doing algebraic geometry of a sort. A little bit different from what my supervisor and his people were doing, but it was algebraic geometry. Yeah. And I was interested, I got particularly interested in three lecture courses that were nothing to do with what I was supposed to be doing. One was a course by Herman Bondi on Einstein's general theory of relativity, which was a beautiful course. He was an amazing lecturer, brought these things alive, absolutely. Another was a course on quantum mechanics given by a great physicist, Paul Dirac. Very beautiful course in a completely different way. It was, he was very kind of organized and never got excited about anything seemingly. But it was extremely well put together. And I found that amazing too. The third course that was nothing to do with what I should be doing was a course on mathematical logic. You got excited, as you say, from your discussions with Ian Percival; the incompleteness theorem was already deeply within mathematical logic space. Were you introduced to it? I was introduced to it in detail by the course, by Steen.
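To make the universal machine idea concrete, here is a minimal sketch of a Turing machine simulator. The particular rule table is an invented toy (it just flips the bits of its input) and nothing in it comes from the conversation itself; the point is that the rule table is ordinary data handed to one fixed program, which is the spirit of Turing's universal device.

```python
# A Turing machine is a table of rules: (state, symbol) -> (new state, new symbol, move).
# One fixed simulator can run any such table, which is the sense in which it is "universal".
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, new_symbol, move = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A toy rule table that inverts a binary string, then halts at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(flip_bits, "10110"))  # prints "01001_"
```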
And there were two things he described which were very fundamental to my understanding. One was Turing machines and the whole idea of computability and all that. So that was all very much part of the course. The other one was the Gödel theorem. And it wasn't what I was afraid it was, to tell you there were things in mathematics you couldn't prove. It was basically, and he phrased it in a way which often people didn't. And if you read Douglas Hofstadter's book, he doesn't, you see. But Steen made it very clear. And also in a sort of public lecture that he gave to a mathematical society, I think it may have been the Adams Society, one of the mathematical undergraduate societies. And he made this point again very clearly. That if you've got a formal system of proof, so suppose what you mean by proof is something which you could check with a computer. So to say whether you've got it right or not, you've got a lot of steps. Have you carried out this computational procedure, following the steps of the proof correctly? That can be checked by an algorithm, by a computer. So that's the key thing. Now what you have to ask, you see, is: is this any good? If you've got an algorithmic system which claims to say, yes, this is right, you've proved it correctly, this is true. If you made a mistake, it doesn't say it's true or false. But if you've done it right, then the conclusion you've come to is correct. Now you say, why do you believe it's correct? Because you've looked at the rules and you said, well, okay, that one's all right. Yeah, that one's all right. What about that? Oh, yeah, I see, I see why it's all right. Okay, you go through all the rules. You say, yes, following those rules, if it says, yes, it's true, it is true. So you've got to make sure that these rules are ones that you trust. If you follow the rules and it says it's a proof, is the result actually true? Right. And your belief that it's true depends upon looking at the rules and understanding them. Now, what Gödel shows is that if you have such a system, then you can construct a statement of the very kind that it's supposed to look at, a mathematical statement, and you can see by the way it's constructed and what it means that it's true, but not provable by the rules that you've been given. And it depends on your trust in the rules. Do you believe that the rules only give you truths? If you believe the rules only give you truths, then you believe this other statement is also true. I found this absolutely mind blowing. When I saw this, it blew my mind. I thought, my God, you can see that this statement is true. It's as good as any proof, because it only depends on your belief in the reliability of the proof procedure, that's all it is, and understanding that the coding is done correctly. And it enables you to transcend that system. So whatever system you have, as long as you can understand what it's doing and why you believe it only gives you truths, then you can see beyond that system. Now, how do you see beyond it? What is it that enables you to transcend that system? Well, it's your understanding of what the system is actually saying and what the statement that you've constructed is actually saying. So it's this quality of understanding, whatever it is, which is not governed by rules. It's not a computational procedure. So this idea of understanding is not going to be within the rules, within the formal system.
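For reference, the theorem being described here can be stated in its standard textbook form; this is the usual modern formulation, not a quote from the conversation.

```latex
% Gödel's first incompleteness theorem, roughly as described above.
\textbf{Theorem (Gödel, 1931).} Let $F$ be a consistent formal system whose proofs can be
checked by an algorithm and which is strong enough to express elementary arithmetic.
Then one can construct an arithmetical sentence $G_F$, which informally asserts
``$G_F$ is not provable in $F$,'' such that
\[
  F \nvdash G_F .
\]
Moreover, if one trusts that $F$ proves only true statements, one can see that $G_F$ is true,
even though $F$ itself cannot prove it.
```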
Yes, you're only using those rules anyway because you have understood them to be rules which only give you truths. There'd be no point in it otherwise. I mean, people say, well, okay, is one set of rules as good as any other? Well, it's not true. You see, you have to understand what the rules mean. And why does that understanding of what they mean give you something beyond the rules themselves? And that's what it was. That's what blew my mind. It's somehow understanding why the rules give you truths that enables you to transcend the rules. So that's where, I mean, even at that time, that's already where the thought entered your mind that the idea of understanding, or we can start calling it things like intelligence or even consciousness, is outside the rules. Yes. See, I've always concentrated on understanding. You know, people come and point out things. Well, you know, what about creativity? That's something a machine can't do, is create. Well, I don't know. What is creativity? And I don't know. You know, somebody can put some funny things on a piece of paper and say that's creative and you could make a machine do that. Is it really creative? I don't know. You see, I worry about that one. I sort of agree with it in a sense, but it's so hard to do anything with that statement. But understanding, yes, you can say something there. You can see that understanding, whatever it is, and it's very hard to put your finger on it, that's absolutely true. Can you try to define or maybe dance around a definition of understanding? To some degree, but I don't, I often wondered about this, but there is something there which is very slippery. It's something like standing back. And it's got to be something, you see, it's also got to be something which was of value to our remote ancestors. Right. There's a cartoon which I drew some time ago showing all this; in the foreground, you see this mathematician just doing some mathematical theorem. There's a little bit of a joke in that theorem, but let's not go into that. He's trying to prove some theorem. And he's about to be eaten by a saber tooth tiger who's hiding in the undergrowth, you see. And in the distance, you see his cousins building, growing crops, building shelters, domesticating animals, and in the slight foreground, you see they've built a mammoth trap and this poor old mammoth is falling into a pit, you see, and all these people around them are about to grab him, you see. And well, you see, those are the ones who have it; the quality of understanding goes with all of this. It's not just the mathematician doing his mathematics; this understanding quality is something else, which has been a tremendous advantage to us, not just to us. See, I don't think consciousness is limited to humans. Yeah, that's the interesting question, at which point, if it is indeed connected to the evolutionary process, at which point did we pick this up? A very hard question. Certainly, I don't think it's just primates, you know, you see these pictures of African hunting dogs and how they can plan amongst themselves how to catch the antelopes. Some of these David Attenborough films, I think this probably was one of them, and you could see the hunting dogs, and they divide themselves into two groups and they go in two routes, two different routes. One of them goes and they sort of hide next to the river.
And the other group goes around and they start yelping at these, they don't bark, I guess whatever noise hunting dogs do, the antelopes, and they sort of round them up and they chase them in the direction of the river. And there are the other ones just waiting for them, just to get, because when they get to the river, it slows them down. And so they pounce on them. So they've obviously planned this all out somehow. I have no idea how. And there is some element of conscious planning, as far as I can see. I don't think it's just some kind of, so much of AI these days is done on what they call bottom up systems, is it? Yeah, where you have neural networks and you give them a zillion different things to look at and then they sort of can choose one thing over another, just because it's seen so many examples and picks up on little signals, which one may not even be conscious of. And that doesn't feel like understanding. There's no understanding in that whatsoever. Well, you're being a little bit human centric, so. Well, I'm talking about, I'm not with the dogs, am I? No, you're not. Sorry, not human centric, but I misspoke. Biology centric. Is it possible that consciousness would just look slightly different? Well, I'm not saying it's biological, because we don't know. I think other examples of elephants is a wonderful example, too. Where they, this was, I think this was an Attenborough one, where the elephants have to go from along, the troop of them have to go long distances. And the leader of a troop is a female. They all are, apparently. And this female, she had to go all the way from one part of the country to another. And at a certain point, she made a detour. And they went off in this big detour. All the troop came with her. And this was where her sister had died. And there were her bones lying around. And they're going to pick up the bones, and they hand it around, and they caress the bones. And then they put them back, and they go back again. What in the hell are they doing? That's so interesting. I mean, there's something going on. There's no clear connection with natural selection. There's just some deep feeling going on there, which has to do with their conscious experience. And I think it's something that, overall, is advantageous, our natural selection, but not directly to do with natural selection. I like that. There's something going on there. Like I told you, I'm Russian, so I tend to romanticize all things of this nature, that it's not merely cold, hard computation. Perhaps I could just slightly answer your question. You were asking me, what is it? There's something about sort of standing back and thinking about your own thought processes. I mean, there is something like that in the Gödel thing, because you're not following the rules. You're standing back and thinking about the rules. And so there is something that you might say, you think about you're doing something, and you think, what the hell am I doing? And you sort of stand back and think about what it is that's making you think in such a way. Just take a step back outside the game you've been playing. Yeah, you back up and you think about, you're just not playing the game anymore. You're thinking about what the hell you're doing in playing this game. And that's somehow, it's not a very precise description, but somehow it feels very true that that's somehow understanding. This kind of reflection. The reflection, yes. 
Yeah, it's a bit hard to put your finger on, but there is something there, which I think maybe could be unearthed at some point and see this is really what's going on, why conscious beings have this advantage, what it is that gives them advantage. And I think it goes way back. I don't think we're just talking about the hunting dogs and the elephants. It's pretty clear that octopuses have the same sort of quality, and we call it consciousness. Yeah, I think so. I've seen enough examples of the way that they behave, and the evolutionary route is completely different. Does it go way back to some common ancestor or did it come separately? My hope is it's something simple, but the hard question is whether there's a hardware prerequisite. We have to develop some kind of hardware mechanisms in our computers. Like basically, as you suggest, we'll get to in a second, we kind of have to throw away the computer as we know it today. Yeah. The deterministic machines we know today, to try to create it. I mean, my hope, of course, is not, but... Well, I should go really back to the story which, in a sense, I haven't finished, because I went to these three courses, you see, when I was a graduate student. And so I started to think, well, I'm really pretty much what you might call a materialist, in the sense of thinking that there's no kind of mystical something or other which comes in from who knows where. Are you still that? Have you, throughout your life, been a materialist? I don't like the word materialist because it suggests we know what material is. And that is a bad word because... But there's no mystical. It's not some mystical something which is not treatable by science. That's so beautifully put, just to pause on that for a second. You're a materialist, but you acknowledge that we don't really know what the material is. That's right. I mean, I like to call myself a scientist, I suppose, but it means that... Yes, well, you see, the question goes on here. So I began thinking, okay, if consciousness or understanding is something which is not a computational process, what can it be? And I knew enough from my undergraduate work. I knew about Newtonian mechanics, and I knew how basically you could put it on a computer. There is a fundamental issue, which is: is it important or not? That computation depends upon discrete things. So you're using discrete elements, whereas the physical laws depend on the continuum. Now, is this something to do with it? Is it the fact that we use the continuum in our physics, and if we model our physical system, we use discrete systems like ordinary computers? I came to the view that that's probably not it. I might have to retract on that someday, but the view was no, you can get close enough. It's not altogether clear, I have to say, but you can get close enough. And I went to this course by Bondi on general relativity, and I thought, well, you can put that on a computer, because that was a long time before people, and I've sort of grown up with this, how people have done better and better calculations, and they could work out about black holes, and they can then work out how black holes can interact with each other, spiral around, and what kind of gravitational waves come out. And it's a very impressive piece of computational work, how you can actually work out the shapes of those signals. And now we have LIGO seeing these signals, and they say, yeah, those black holes spiral into each other. This is just a vindication of the power of computation in describing Einstein's general relativity.
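The discrete-versus-continuum point is easy to illustrate. Below is a small, self-contained sketch; the free-fall example and the step counts are chosen purely for illustration and come from nothing in the conversation. It shows the sense in which a computer's finite time steps can approximate a continuum law as closely as you like, which is the same idea that, carried out with vastly more care, underlies the numerical relativity calculations behind the LIGO waveform predictions.

```python
# Newton's law for free fall, x'' = g, is a continuum equation; a computer can only
# take finite time steps. Shrinking the step makes the discrete answer approach the
# exact continuum result, which is the sense in which one can "get close enough".
def fall_distance(total_time=1.0, steps=10, g=9.81):
    dt = total_time / steps
    x, v = 0.0, 0.0
    for _ in range(steps):
        x += v * dt   # discrete position update using the current velocity
        v += g * dt   # discrete velocity update
    return x

exact = 0.5 * 9.81 * 1.0 ** 2   # continuum solution: x(t) = g t^2 / 2
for steps in (10, 100, 1000, 10000):
    print(steps, round(fall_distance(steps=steps), 5), exact)
# The printed values climb toward 4.905 as the step size shrinks.
```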
So in that case, we can get close, but with computation, we can get close to our understanding of the physics. You can get very, very close. Now, is that close enough, you see? And then I went to this course by Dirac. Now, you see, I think it was the very first lecture that he gave, and he was talking about a superposition principle. And he said, if you have a particle, you usually think of particle can be over here or over there, but in quantum mechanics, it can be over here and over there at the same time. And you have these states which involve a superposition in some sense of different locations for that particle. And then he got out his piece of chalk. Some people say he broke it in two as a kind of illustration of how the piece of chalk might be over here and over there at the same time. And he was talking about this, and my mind wandered. I don't remember what he said. All I can remember, he's just moved on to the next topic, and something about energy he'd mentioned, which I had no idea what it had to do with anything. And so I'd been struck with this and worried about it ever since. It's probably just as well I didn't hear his explanation because it was probably one of these things to calm me down and not worry about it anymore. Whereas in my case, I've worried about it ever since. So I thought maybe that's the catch. There is something in quantum mechanics where the superpositions become one or the other, and that's not part of quantum mechanics. There's something missing in the theory. The theory is incomplete. It's not just incomplete. It's in a certain sense not quite right because if you follow the equation, the basic equation of quantum mechanics, that's the Schrodinger equation, you could put that on a computer too. There are lots of difficulties about how many parameters you have to put in and so on. That can be very tricky, but nevertheless, it is a computational process. Modulo this question about the continuum as before, but it's not clear that makes any difference. So our theories of quantum mechanics may be missing the same element that the universal Turing machine is missing about consciousness. Yes, yes. Yeah, this is the view I held is that you need a theory and that what people call the reduction of the state or the collapse of the wave function, which you have to have, otherwise quantum mechanics doesn't relate to the world we see. To make it relate to the world we see, you've got to break the Schrodinger equation. Schrodinger himself was absolutely appalled by this idea, his own equation. I mean, that's why he introduced this famous Schrodinger's cat as a thought experiment. He's really saying, look, this is where my equation leads you into it. There's something wrong, something we haven't understood, which is basically fundamental. And so I was trying to put all these things together and said, well, it's got to be the noncomputability comes in there. And I also can't quite remember when I thought this, but it's when gravity is involved in quantum mechanics. It's the combination of those two. And that's that point when you have good reasons to believe, this came much later, that I have good reason to believe that the principles of general relativity and those of quantum mechanics, most particularly, it's the basic principle of equivalence, which goes back to Galileo. If you fall freely, you eliminate the gravitational field. 
So you imagine Galileo dropping his big rock and his little rock from the leaning tower, whether he actually ever did that or not, pretty irrelevant. And as the rocks fall to the ground, you have a little insect sitting on one of them, looking at the other one. And it seems to think, oh, there's no gravity here. Of course, it hits the ground and then you realize something different is going on. But when it's in free fall, the gravity has been eliminated. Galileo understood that very beautifully. He gives these wonderful examples of fireworks. And you see the fireworks explode, and you see this sphere of sparkling fireworks. It remains a sphere as it falls down, as though there were no gravity. So he understood that principle, but he couldn't make a theory out of it. Einstein came along, used exactly the same principle. And that's the basis of Einstein's general theory of relativity. Now, there is a conflict. This is something I did much, much later. So this wasn't in those days, much, much later. You can see there is a basic conflict between the principle of superposition, the thing that Dirac was talking about, and the principle of general covariance. Well, the principle of equivalence: a gravitational field is equivalent to an acceleration. Can you pause for a second? What is the principle of equivalence? It's this Galileo principle that we can eliminate the gravitational field, at least locally. You have to be in a small neighborhood, because if you have people dropping rocks all around the world somewhere, you can't get rid of it all at once. But in the local neighborhood, you can eliminate the gravitational field by falling freely with it. And we now see this with astronauts, and they don't, you know, the Earth is right there. You can see the great globe of the Earth right beneath them. But they don't care about it. As far as they're concerned, there's no gravity. They fall freely within the gravitational field, and that gets rid of the gravitational field. And that's the principle of equivalence. So what's the contradiction? What's the tension with superposition and equivalence? Oh, well, that's technical. So just to backtrack for a second, just to see if we can weave a thread through it all. So we started to think about consciousness as potentially needing some of the same, not mystical, but some of the same magic. You see, it is a complicated story. So, you know, people think, oh, I'm drifting away from the point or something. But I think it is a complicated story. So what I'm trying to say, I mean, I try to put it in a nutshell, but it's not so easy. I'm trying to say that whatever consciousness is, it's not a computation. Or it's not a physical process which can be described by computation. But it nevertheless could be, so one of the interesting models that you've proposed is the orchestrated objective reduction. Yes, well, you see, that's going on from there, you see. So I say I have no idea. So I wrote this book. Through my scientific career, I'd thought, you know, when I'm retired, I'll have enough time to write a sort of popularish book in which I will explain my ideas and puzzles, what I like, beautiful things about physics and mathematics, and this puzzle about computability and consciousness and so on. And in the process of writing this book, well, I thought I'd do it when I was retired. I didn't actually wait that long, because there was a radio discussion between Edward Fredkin and Marvin Minsky. And they were talking about what computers could do. And they were entering a big room.
They imagined entering this big room where at the other end of the room, two computers were talking to each other. And as you walk up to the computers, they will have communicated to each other more ideas, concepts, things than the entire human race had ever done. So I thought, well, I know where you're coming from, but I just don't believe you. There's something missing. So I thought, well, I should write my book. And so I did. It was roughly the same time Stephen Hawking was writing his A Brief History of Time. In the 80s at some point. The book you're talking about is The Emperor's New Mind. The Emperor's New Mind, that's right. And both are incredible books, A Brief History of Time and The Emperor's New Mind. Yes, it was quite interesting because he told me he'd got Carl Sagan, I think, to write a foreword for the book, you see. So I thought, gosh, what am I gonna do? I'm not gonna get anywhere unless I get somebody. So I said, oh, I know Martin Gardner, so I wonder if he'd do it. So he did, and he did a very nice foreword. So that's an incredible book, and some of the same people you mentioned, Ed Fredkin, who I guess is of expert systems fame, and Minsky, of course, people know in the AI world, but they represent the artificial intelligence world that does hope and dream that AI will achieve real intelligence. Well, you see, it was my thinking, well, you know, I see where they're coming from. From that perspective, yeah, you're right. But that's not my perspective. So I thought I had to say it. And as I was writing my book, you see, I thought, well, I don't really know anything about neurophysiology. What am I doing writing this book? So I started reading up about neurophysiology, and I read up, and I thought, now, I'm trying to find out how it is that nerve signals could possibly preserve quantum coherence. And all I read is that the electrical signals which go along the nerves create effects through the brain. There's no chance you can isolate it. So I thought, this is hopeless. So I come to the end of the book, and I more or less give up. I just think of something which I didn't believe in. Maybe this is a way around it, but no. And then, you see, I thought, well, maybe this book will at least stimulate young people to do science or something. And I got all these letters from old, retired people instead. These are the only people who had time to read my book. So, I mean, but. Except for Stuart Hameroff. Except for Stuart Hameroff. Stuart Hameroff wrote to me, and he said, I think you're missing something. You don't know about microtubules, do you? He didn't put it quite like that. But that was more or less it. And he said, this is what you really need to consider. So I thought, my God, yes. That's a much more promising structure. So, I mean, fundamentally, you were searching for the noncomputable source of consciousness within the human brain, in the biology. And so, what are, if I may ask, what are microtubules? Well, you see, I was ignorant in what I'd read. I never came across them in the books I looked at. Perhaps I only read rather superficially, which is true. But I didn't know about microtubules. Stuart, I think one of the things that impressed him about them was, when you see pictures of mitosis, that's a cell dividing, and you see all the chromosomes. And the chromosomes, they all get lined up, and then they get pulled apart. And so, as the cell divides, half the chromosomes go, they divide into the two parts, and they go two different ways.
And what is it that's pulling them apart? Well, those are these little things called microtubules. And so, he started to get interested in them. And he formed the view, well, his day job or night job or whatever you call it is to put people to sleep, except he doesn't like calling it sleep because it's different. General anesthetics, in a reversible way. So, you want to make sure that they don't experience the pain that would otherwise be something that they feel. And consciousness is turned off for a while, and it can be turned back on again. So, it's crucial that you can turn it off and turn it on. And what do you do when you're doing that? What do general anesthetic gases do? And see, he formed the view that it's the microtubules that they affect. And the details of why he formed that view, well, they're clear to me, but there's an interesting story he keeps talking about. But I found this very exciting, because I thought these structures, these little tubes which inhabit pretty well all cells, it's not just neurons, apart from red blood cells, they inhabit pretty well all the other cells in the body. But they're not all the same kind. You get different kinds of microtubules. And the ones that excited me the most, this may still not be totally clear, but the ones that excited me most were the only ones that I knew about at the time, because they're very, very symmetrical structures. And I had reason to believe that these very symmetrical structures would be much better at preserving a quantum state, quantum coherence, preserving the thing without, you just need to preserve certain degrees of freedom without them leaking into the environment. Once they leak into the environment, you're lost. So you've got to preserve these quantum states up to the level at which the state reduction process comes in, and that's where I think the noncomputability comes in; it's the measurement process in quantum mechanics, what's going on. So something about the measurement process and what's going on, something about the structure of the microtubules, your intuition says maybe there's something here, maybe this kind of structure allows for the mystery of the quantum mechanics. There was a much better chance, yes. It just struck me that partly it was the symmetry, because there is a feature of symmetry: you can preserve quantum coherence much better with symmetrical structures. There's a good reason for that. And that impressed me a lot. I didn't know the difference between the A lattice and B lattice at that time, which could be important. Now that could even, see, which isn't talked about much. But that's some, in some sense, details. We've got to take a step back just to say in case people are not familiar. So this was called the orchestrated objective reduction idea, or Orch OR, which is a biological philosophy of mind that postulates that consciousness originates at the quantum level inside neurons. So that has to do with your search for where, where is it coming from? So that's counter to the notion that consciousness may arise from the computation performed by the synapses. Yes, I think that's the key point. Sometimes people say it's because it's quantum mechanical. It's not just that. See, it's more outrageous than that. You see, this is one reason I think we're so far off from it, because we don't even have the physics right. You see, it's not just quantum mechanics. People say, oh, you know, quantum systems and biological structures.
No, well, you're starting to see that some basic biological systems do depend on quantum mechanics. I mean, look, in the first place, all of chemistry is quantum mechanics. People got used to that, so they don't count that. So people say, let's not count quantum chemistry. We sort of got the hang of that, I think. But you have quantum effects, which are not just chemical, in photosynthesis. And this is one of the striking things in the last several years, that photosynthesis seems to be a basically quantum process, which is not simply chemical. It's using quantum mechanics in a very basic way. So you could start saying, oh, well, if photosynthesis is based on quantum mechanics, why not the behavior of neurons and things like that? Maybe there's something which is a bit like photosynthesis in that respect. But what I'm saying is even more outrageous than that, because those things are talking about conventional quantum mechanics. Now, my argument says that conventional quantum mechanics, if you're just following the Schrodinger equation, that's still computable. So you've got to go beyond that. So you've got to go to where quantum mechanics goes wrong in a certain sense. You have to be a little bit careful about that, because the way people do quantum mechanics is a sort of mixture of two different processes. One of them is the Schrodinger equation, which is an equation Schrodinger wrote down, and it tells you how the state of a system evolves. And it evolves according to this equation, completely deterministically, but it evolves into ridiculous situations. And this was what Schrodinger was very much pointing out with his cat. He said, you follow my equation, that's Schrodinger's equation, and you could end up with a cat which is dead and alive at the same time. That would be the evolution of the Schrodinger equation; it would lead to a state which is the cat being dead and alive at the same time. And he's more or less saying, this is an absurdity. People nowadays say, oh, well, Schrodinger said you can have a cat which is dead and alive; that's not it. You see, he was saying, this is an absurdity. There's something missing. And that the reduction of the state or the collapse of the wave function or whatever it is, is something which has to be understood. It's not following the Schrodinger equation. It's not the way we conventionally do quantum mechanics. There's something more than that. And it's easy to quote authority here, because at least three of the greatest physicists of the 20th century, who were very fundamental in developing quantum mechanics, thought this: Einstein, one of them, Schrodinger, another, Dirac, another. You have to look carefully at Dirac's writing because he didn't tend to say this out loud too much because he was very cautious about what he said. You find the right place and you see he says quantum mechanics is a provisional theory. We need something which explains the collapse of the wave function. We need to go beyond the theory we have now. I happen to be one of the kinds of people, there are many, there is a whole group of people, they're all considered to be a bit maverick, who believe that quantum mechanics needs to be modified. There's a small minority of those people, which are already a minority, who think that the way in which it's modified has to be with gravity. And there is an even smaller minority of those people who think it's the particular way that I think it is. You see. So those are the quantum gravity folks. But what's...
You see, quantum gravity is already not this. Because when you say quantum gravity, what you really mean is quantum mechanics applied to gravitational theory. So you say, let's take this wonderful formalism of quantum mechanics and make gravity fit into it. So that is what quantum gravity is meant to be. Now I'm saying you've got to be more even handed: gravity affects the structure of quantum mechanics too. It's not just that you quantize gravity; you've got to gravitize quantum mechanics. And it's a two way thing. But then when do you even get started? So you're saying that we have to figure out totally new ideas there. Exactly. No, you're stuck. You don't have a theory. That's the trouble. So this is a big problem. If you say, okay, well, what's the theory? I don't know. So maybe in the very early days, sort of... It is in the very early days. But just making this point. Yes. You see, Stuart Hameroff tends to say, oh, Penrose says that it's got to be a reduction of the state and so on, so let's use it. The trouble is Penrose doesn't say that. Penrose says, well, I think that we have no experiments as yet which show that. There are experiments which are being thought through and which I'm hoping will be performed. There is an experiment which is being developed by Dirk Bouwmeester, who I've known for a long time, who shares his time between Leiden in the Netherlands and Santa Barbara in the US. And he's been working on an experiment which could perhaps demonstrate that quantum mechanics, as we now understand it, if you don't bring in the gravitational effects, has to be modified. And then there's also experiments that are underway that kind of look at the microtubule side of things to see if, in the biology, you could see something like that. Could you briefly mention it? Because that's really sort of one of the only experimental attempts in the very early days of even thinking about consciousness. I think there's a very serious area here, which is what Stuart Hameroff is doing, and I think it's very important. One of the few places that you can really get a bit of a handle on what consciousness is is what turns it off. And when you're thinking about general anesthetics, it's very specific. These things turn consciousness off. What the hell do they do? Well, Stuart and a number of people who work with him and others happen to believe that the general anesthetics directly affect microtubules. And there is some evidence for this. I don't know how strong it is and how watertight the case is, but I think there is some evidence pointing in that kind of direction. It's not just an ordinary chemical process. There's something quite different about it. And one of the main candidates is that these anesthetic gases do affect microtubules directly. And how strong that evidence is, I wouldn't be in a position to say, but I think there is fairly impressive evidence. And the point is the experiments are being undertaken, which is. Yeah. I mean, that is experimental. You see, so it's a very clear direction where you can think of experiments which could indicate whether or not it's really microtubules which the anesthetic gases directly affect. That's really exciting. One of the sad things, from my outside perspective, is that not many people are working on this. So, like with Stuart, it feels like there are very few people carrying the flag forward on this. I think it's not many in the sense that it's a minority, but it's not zero anymore.
You see, when Stuart and I originally talked about this, it was just us and a few of our friends; there weren't many people taking it seriously, but it's grown into one of the main viewpoints. There might be about four or five or six different views which people hold, and it's one of them. So it's considered as one of the possible lines of thinking, yes. You describe physics theories as falling into one of three categories, the superb, the useful, or the tentative. I like those words. It's a beautiful categorization. Do you think we'll ever have a superb theory of intelligence and of consciousness? We might. We're a long way from it. I don't think we're even, I don't know whether we're even on the tentative scale. I mean, it's... You don't think we've even entered the realm of tentative? Probably not. Yeah, that's right. You see, this is so controversial. We don't have a clear view which is accepted by a majority. I mean, you see, yeah, most views are computational in one form or another. They think it's some, but it's not very clear, because even the IIT people, whom I think of as computational, I've heard them say, no, consciousness is supposed to be not computational. I say, well, if it's not computational, what in the hell is it? What's going on? What physical processes are going on which are that? What does it mean for something to be computational then? So, is... Well, there has to be a process which is... You see, it's very curious the way the history has developed in quantum mechanics, because very early on, people thought there was something to do with consciousness, but it was almost the other way around. You see, you have to say the Schrodinger equation says all these different alternatives happen all at once, and then when is it that only one of them happens? Well, one of the views, which was quite commonly held by a few distinguished quantum physicists, was that it's when a conscious being looks at the system or becomes aware of it, and at that point, it becomes one or the other. That's a role where consciousness is somehow actively reducing the state. My view is almost the exact opposite of that. It's that the state reduces itself in some way which... some noncomputational way which we don't understand, we don't have a proper theory of, and that is the building block of what consciousness is. So consciousness is the other way around. It depends on that choice which nature makes all the time when the state becomes one or the other rather than the superposition of one and the other, and when that happens, there is, what we're saying now, an element of proto consciousness that takes place. Proto consciousness is, roughly speaking, the building block out of which actual consciousness is constructed. So you have these proto conscious elements, which are when the state decides to do one thing or the other, and that's the thing which, when organized together... that's the OR part in Orch OR. With the OR part, at least one can see where we're driving at a theory. You can say it's the quantum choice of going this way or that way, but the Orch part, which is the orchestration of this, is much more mysterious. How does the brain somehow orchestrate all these individual OR processes into a genuine conscious experience? And it might be something that's beautifully simple, but we're completely in the dark about it. Yeah, I think at the moment, that's the thing, you know, we happily put the word Orch down there to say orchestrated, but it's even more unclear what that really means.
Just like the word material, orchestrated, who knows? And we've been dancing a little bit between the word intelligence or understanding and consciousness. Do you kind of see those as sitting in the same space of mystery as we discussed? Yes, well, you see, I tend to say you have understanding and intelligence and awareness, and somehow understanding is in the middle of it, you see. I like to say, could you say of an entity that it is actually intelligent if it doesn't have the quality of understanding? Now, you see, I'm using terms I don't even know how to define, but who cares? I'm just relating them. They're somewhat poetic, so I somehow understand them. Yes, that's right, we don't, exactly. But they're not mathematical in nature. Yes, you see, as a mathematician, I don't know how to define any of them, but at least I can point to the connections. So the idea is intelligence is something which I believe needs understanding, otherwise you wouldn't say it's really intelligence. And understanding needs awareness, otherwise you wouldn't really say it's understanding. Would you say of an entity that it understands something unless it's really aware of it? You know, normal usage. So there's the three: awareness, understanding, and intelligence. And I just tend to concentrate on understanding because that's where I can say something. Okay. And that's the Gödel theorem, things like that. But what does it mean to perceive the color blue or something? I mean, I haven't the foggiest. It's a much more difficult question. I mean, is it the same if I see a color blue and you see it? If you're somebody with this condition, what's it called, where you assign a sound to a color? Yeah, yeah, that's right. You get colors and sounds mixed up. And that sort of thing. I mean, an interesting subject. But from the physics perspective, from the fundamentals perspective, we don't. I think we're way off having much understanding of what's going on there. In your 2010 book, Cycles of Time, you suggest that another universe may have existed before the Big Bang. Can you describe this idea? First of all, what is the Big Bang? Sounds like a funny word. And what may have been there before it? Yes. Just as a matter of terminology, I don't like to call it another universe. Because when you have another universe, you think of it as kind of quite separate from us. But these things, they're not separate. Now the Big Bang, conventional theory. You see, I was actually brought up, in the sense that when I started getting interested in cosmology, there was a thing called the Steady State Model, which was sort of philosophically very interesting. And there wasn't a Big Bang in that theory. But somehow, new material was created all the time in the form of hydrogen, and the universe kept on expanding, expanding, expanding, and there was room for more hydrogen. It was a rather philosophically nice picture. It was disproved when the Big Bang, well, when I say the Big Bang, this was theoretically discovered by people trying to solve Einstein's equations and apply them to cosmology. Einstein didn't like the idea. He liked a universe which was there all the time. And he had a model which was there all the time. But then there was this discovery, accidental discovery, very important discovery, of this microwave background. And there's the crackle on your television screen which is already sensing this microwave background, which is coming at us from all directions. And you can trace it back and back and back and back.
And it came from a very early stage of the universe. Well, it's part of the Big Bang theory. The Big Bang theory came about when people tried to solve Einstein's equations. They really found you had to have this initial state where the universe, it used to be called the primordial atom and things like this. There's Friedmann and Lemaître. Friedmann was a Russian, Lemaître was a Belgian. And they independently, well, basically Friedmann first, and Lemaître talked about the initial state, which is a very, very concentrated initial state which seemed to be the origin of the universe. Primordial atom. Primordial atom is what he called it, yes. And then it became, well, Fred Hoyle used the term Big Bang in a kind of derogatory sense. Just like with Schrodinger and the cat, right? Yes, it sort of got picked up on, whereas it wasn't his intention originally. But then the evidence piled up and piled up. And one of my friends, and I learned a lot from him when I was in Cambridge, was Dennis Sciama. He was a great proponent of steady state. And then he got converted. He said, no, I'm sorry. I had a great respect for him. He went around lecturing and said, I was wrong. The steady state model doesn't work. There was this Big Bang. And this microwave background that you see, okay, it's not actually quite the Big Bang. When I say not quite, it's about 380,000 years after the Big Bang, but that's what you see. But then you have to have had this Big Bang before it in order to make the equations work. And it works beautifully except for one little thing, which is this thing called inflation, which people had to put into it to make it work. When I first heard of it, I didn't like it at all. What's inflation? Inflation is that in the first, I'm gonna give you a very tiny number. Think of a second. That's not very long. Now I'm gonna give you a fraction of a second, one over a number. This number has, well, let's say between 36 and 32 digits. In the tiny, tiny time between those two ridiculous fractions of a second, the universe was supposed to have expanded in this exponential way, an enormous amount. For no apparent reason, you had to invent a particular thing called the inflaton field to make it do it. And I thought this is completely crazy. There are reasons why people stuck with this idea. You see, the thing is that I formed my model for reasons which are very fundamental, if you like. It has to do with this very fundamental principle, which is known as the second law of thermodynamics. The second law of thermodynamics says, more or less, things get more and more random as time goes on. Now, another way of saying exactly the same thing is that things get less and less random as you go back in time. They get less and less random as you go back and back and back and back. And the earliest thing you can directly see is this microwave background. One of the most striking features of it is that it's random. It has this spectrum, what's called the Planck spectrum, of frequencies, different intensities for different frequencies. And it's this wonderful curve due to Max Planck. And what's it telling you? It's telling you that the entropy is at a maximum. It started off at a maximum and it's been going up ever since. I call that the mammoth in the room. I mean, it's a paradox. A mammoth, yeah, it is. And so, why don't cosmologists worry about this? So I worried about it.
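For reference, the "wonderful curve due to Max Planck" is the blackbody spectrum, the intensity of thermal radiation at frequency ν for a body at temperature T:

```latex
% Planck's law for blackbody radiation: intensity as a function of frequency and temperature.
\[
  B_\nu(T) \;=\; \frac{2 h \nu^{3}}{c^{2}}\,
  \frac{1}{\exp\!\left(\dfrac{h\nu}{k_{\mathrm B} T}\right) - 1}
\]
```

Radiation in thermal equilibrium with matter settles into exactly this curve, which is why seeing it in the microwave background says that the matter and radiation degrees of freedom were already at maximum entropy.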
And then I thought, well, it's not really a paradox because you're looking at matter and radiation at a maximum entropy state. What you're not seeing directly in that is the gravitation. It's gravitation, which is not thermalized. The gravitation was very, very low entropy. And it's low entropy by the uniformity. And you see that in the microwave too. It's very uniform over the whole sky. I'm compressing a long story into a very short few sentences. And doing a great job, yeah. So what I'm saying is that there's a huge puzzle. Why was gravity in this very low entropy state, very highly organized state, everything else was all random? And that to me was the biggest problem in cosmology. The biggest problem, nobody seems to even worry about it. People say they solved all the problems and they don't even worry about it. They think inflation solves it. It doesn't, it can't. Because it's just that... Just to clarify, that was your problem with the inflation describing some aspect of the moments right after the Big Bang? Inflation is supposed to stretch it out and make it all uniform, you see. It doesn't do it because it can only do it if it's uniform already at the beginning. It's, you just have to look at, I can't go into the details, but it doesn't solve it. And it was completely clear to me it doesn't solve it. But where does the conformal cyclic cosmology of starting to talk about something before that singular and the Big Bang? I was just thinking to myself, how boring this universe is going to be. You've got this exponential expansion. This was discovered early in the, in this century, 21st century. People discovered that these supernova exploding stars showed that the universe is actually undergoing this exponential expansion. So it's a self similar expansion. And it seems to be a feature of this term that Einstein introduced into his cosmology for the wrong reason. He wanted a universe that was static. He put this new term into his cosmology. To make it make sense, it's called the cosmological constant. And then when he got convinced that the universe had a Big Bang, he retracted it complaining this was his greatest blunder. The trouble is it wasn't a blunder. It was actually right, very ironic. And so the universe seems to be behaving with this cosmological constant. Okay, so this universe is expanding and expanding. What's going to happen in the future? Well, it gets more and more boring for a while. What's the most interesting thing in the universe? Well, there's black holes. The black holes more or less gulp down entire clusters of galaxies. The cluster, it'll swallow up most of our galaxy. We will run into our Andromeda galaxy's black hole. That black hole will swallow our one. They'll get bigger and bigger and they'll basically swallow up the whole cluster of galaxies, gulp it all down. Pretty well all, most of it, maybe not all, most of it. Okay, then that'll happen to, there'll be just these black holes around. Pretty boring, but still not as boring as it's gonna get. It's gonna get more boring because these black holes, you wait and you wait and you wait and you wait an unbelievable length of time and Hawking's black hole evaporation starts to come in. And the black holes, you just, it's incredibly tedious. Finally evaporate away. Each one goes away, disappears with a pop at the end. What could be more boring? It was boring then, now this is really boring. There's nothing, not even black holes. Universe gets colder and colder and colder and colder. 
And I thought, this is very, very boring. Now that's not science, is it? But it's emotional. So I thought, who's gonna be bored by this universe? Not us, we won't be around. It'll be mostly photons running around. And the photons, they don't get bored, because it's part of relativity, you see. It's not really that they don't experience anything. That's not the point. Photons get right out to infinity without experiencing any time. It's the way relativity works. And this was part of what I used to do in my old days when I was looking at gravitational radiation and how things behaved out at infinity. Infinity is just like another place. You can squash it down. As long as you don't have any mass in the world, infinity is just another place. The photons get there, the gravitons get there. What do they get? They've run into infinity. They say, well, now I'm here, what do I do? Is there something on the other side? The usual view is it's just a mathematical notion. There's nothing on the other side. That's just the boundary of it. A nice example is this beautiful series of pictures by the Dutch artist MC Escher. You may know them. They're called Circle Limits. There's a very famous one with the angels and the devils. And you can see them crowding and crowding and crowding up to the edge. Now, the kind of geometry that these angels and devils inhabit, that's their infinity. But from our perspective, infinity is just a place. Okay, there is... I'm sorry, can you just take a brief pause? Yes. In just the words you're saying, infinity is just a place. So for the most part, infinity, sort of even just going back, infinity is a mathematical concept. I think this is one of the things... You think there's an actual physical manifest... In which way does infinity ever manifest itself in our physical universe? Well, it does in various places. You see, it's a thing that if you're not a mathematician, you think, oh, infinity, I can't think about that. Mathematicians think about infinity all the time. They get used to the idea and they just play around with different kinds of infinities and it becomes no problem. But you just have to take my word for it. Now, one of the things is, you see, you take Euclidean geometry. Well, it just keeps on going and it goes out to infinity. Now, there's another kind of geometry, and this is what's called hyperbolic geometry. It's a bit like Euclidean geometry, it's a little bit different. It's like what Escher was trying to describe in his angels and devils. And he learned about this from Coxeter, and he thought that was a very nice thing. It's a way to represent the infinity of this kind of geometry. So it's not quite Euclidean geometry, it's a bit like it, that the angels and the devils inhabit. And their infinity, by this nice transformation, you squash their infinity down so you can draw it as this nice circle boundary to their universe. Now, from our outside perspective, we can see their infinity as this boundary. Now, what I'm saying is that it's very like that. The infinity that we might experience, like those angels and devils in their world, can be thought of as a boundary. Now, I found this a very useful way of talking about radiation, gravitational radiation and things like that. It was a trick, a mathematical trick. So now what I'm saying is that that mathematical trick becomes real. That somehow, the photons, they need to go somewhere, because from their perspective, infinity is just another place. Now, this is a difficult idea to get your mind around.
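The Escher pictures mentioned here are drawings of what mathematicians call the Poincaré disk model, and the "squashing" can be written down explicitly. The formula below is standard textbook material, included as a reference rather than something said in the conversation:

```latex
% The Poincare disk: the whole infinite hyperbolic plane mapped into the unit disk
% x^2 + y^2 < 1 by a conformal (shape-preserving) change of scale,
\[
  ds^{2} \;=\; \frac{4\,\bigl(dx^{2} + dy^{2}\bigr)}{\bigl(1 - x^{2} - y^{2}\bigr)^{2}} .
\]
% Angles are unchanged, sizes shrink toward the edge, and the geometry's "infinity"
% becomes the ordinary circle x^2 + y^2 = 1, which an outside observer can see as a boundary.
```

Conformal cyclic cosmology uses the same kind of rescaling on spacetime itself, so that the remote future of one eon can be matched onto the big bang of the next, as described in what follows.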
So that's one of the reasons cosmologists are finding a lot of trouble taking me seriously. But to me, it's not such a wild idea. What's on the other side of that infinity? You have to think, why am I allowed to think of this? Why am I allowed to think of this? Because photons don't have any mass. And we in physics have beautiful ways of measuring time. There are incredibly precise clocks, atomic and nuclear clocks, unbelievably precise. Why are they so precise? Because of the two most famous equations of 20th century physics. One of them is Einstein's E equals MC squared. What's that tell us? Energy and mass are equivalent. The other one is even older than that, still 20th century, only just. Max Planck, E equals h nu. Nu is a frequency, h is a constant, again, like C. E is energy. Energy and frequency are equivalent. Put the two together, energy and mass are equivalent, Einstein. Energy and frequency are equivalent, Max Planck. Put the two together, mass and frequency are equivalent. Absolutely basic physical principle. If you have a massive entity, a massive particle, it is a clock with a very, very precise frequency. It's not, you can't directly use it, you have to scale it down. So your atomic and nuclear clocks, but that's the basic principle. You scale it down to something you can actually perceive. But it's the same principle. If you have mass, you have beautiful clocks. But the other side of that coin is, if you don't have mass, you don't have clocks. If you don't have clocks, you don't have rulers. You don't have scale. So you don't have space and time. You don't have a measure of the scale of space and time. Oh, scale of space and time. You do have the structure, what's called the conformal structure. You see, it's what the angels and devils have. If you look at the eye of the devil, no matter how close to the boundary it is, it has the same shape, but it has a different size. So you can scale up and you can scale down, but you mustn't change the shape. So it's basically the same idea, but applied to space time now. In the very remote future, you have things which don't measure the scale, but the shape, if you like, is still there. Now that's in the remote future. Now I'm gonna do the exact opposite. Now I'm gonna go way back into the Big Bang. Now as you get there, things get hotter and hotter, denser and denser. What's the universe dominated by? Particles moving around almost with the speed of light. When they get almost with the speed of light, okay, they begin to lose the mass too. So for completely opposite reason, they lose the sense of scale as well. So my crazy idea is the Big Bang and the remote future, they seem completely different. One is extremely dense, extremely hot. The other is very, very rarefied and very, very cold. But if you squash one down by this conformal scaling, you get the other. So although they look and feel very different, they're really almost the same. The remote future on the other side, I'm claiming is that where do the photons go? They go into the next Big Bang. You've got to get your mind around that crazy idea. Taking a step on the other side of the place that is infinity. Okay, but. So I'm saying the other side of our Big Bang, now I'm going back into the Big Bang. Back, backwards. There was the remote future of a previous eon. Previous eon. And what I'm saying is that previous eon, there are signals coming through to us, which we can see and which we do see. And these are both signals, the two main signals are to do with black holes. 
One of them is the collisions between black holes. And as they spiral into each other, they release a lot of energy in the form of gravitational waves. Those gravitational waves get through in a certain form into the next eon. That's fascinating that there's some, I mean, maybe you can correct me if I'm wrong, but that means that some information can travel from another eon. Exactly. That is fascinating. I mean, I've seen somewhere described sort of the discussion of the Fermi Paradox, you know, that if there's intelligent life. Yes. Being able to communicate, you know, immediately takes you there. So. We have a paper, with my colleague, Vahe Gurzadyan, who I've worked with on these ideas for a while. We have a crazy paper on that, yes. So. Looking at the Fermi Paradox, yes. Right, so if the universe is just cycling over and over and over, punctuated by the singularity of the Big Bang, and then intelligent or any kind of intelligent systems can communicate through from eon to eon, why haven't we heard anything from our alien friends? Because we don't know how to look. That's fundamentally the reason. I don't know, you see, it's speculation. I mean, the SETI program is a reasonable thing to do, but still speculation. It's trying to say, okay, maybe not too far away was a civilization which got there first, before us, early enough that they could send us signals, but how far away would you need to go before, I mean, I don't know, we have so little knowledge about that, we haven't seen any signals yet, but it's worth looking, it's worth looking. What I'm trying to say is, here's another possible place where you might look. Now you're not looking at civilizations which got there first, you're looking at those civilizations which were so successful, probably a lot more successful than they're likely to be by the looks of things, which knew how to handle their own global warming or whatever it is and to get through it all and to live to a ripe old age in the sense of a civilization, to the extent that they could harness signals that they could propagate through, for some reason of their own desires, whatever, we wouldn't know, to other civilizations which might be able to pick up the signals. But what kind of signals would they be? I haven't the foggiest. Let me ask the question. Yes. What to you is the most beautiful idea in physics or mathematics or the art at the intersection of the two? I'm gonna have to say complex analysis. I might've said infinities. And one of the single most beautiful ideas, I think, was the fact that you can have infinities of different sizes and so on. But that's in a way, I think complex analysis. It's got so much magic in it. It's a very simple idea. You take these, you take numbers, you take the integers and then you fill them up into the fractions and the real numbers. You imagine you're trying to measure a continuous line and then you think of how you can solve equations. Then what about X squared equals minus one? Well, there's no real number which satisfies that. So you have to think of, well, there's a number called i. You think you invent it. Well, in a certain sense, it's there already. But this number, when you add that square root of minus one to it, you have what's called the complex numbers. And they're an incredible system. If you like, you put one little thing in, you put the square root of minus one in, and look how much benefit you get out of it. All sorts of things that you'd never imagined before.
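As a small aside on what "putting the square root of minus one in" buys you, here is the definition in symbols, together with one textbook example of the kind of benefit Penrose alludes to; these are standard facts, not anything specific to his argument:

```latex
% The imaginary unit and the complex numbers
i^2 = -1, \qquad \mathbb{C} = \{\, a + bi : a, b \in \mathbb{R} \,\}
% The equation with no real solution now has solutions:
x^2 = -1 \quad\Longrightarrow\quad x = \pm i
% One example of the benefit: Euler's formula, linking exponentials
% to rotations, which quantum mechanics leans on throughout
e^{i\theta} = \cos\theta + i\sin\theta
```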
And it's that amazing, all hiding there in putting that square root of minus one in. I think that's the most magical thing I've seen in mathematics or physics. And it's in quantum mechanics. And in quantum mechanics. You see, it's there already. You might think, what's it doing there? Okay, just a nice beautiful piece of mathematics. And then suddenly we see, nope. It's the very crucial basis of quantum mechanics. It's there in the way the world works. So on the question of whether math is discovered or invented, it sounds like you may be suggesting that partially it's possible that math is indeed discovered. Oh, absolutely, yes. No, it's more like archeology than you might think. Yes, yes. So let me ask the most ridiculous, maybe the most important question. What is the meaning of life? What gives your life fulfillment, purpose, happiness, and meaning? Why do you think we're here, given all the Big Bang and the infinities of photons that we've talked about? All I would say, I think it's not a stupid question. I mean, there are some people, you know, many of my colleagues who are scientists, and they say, well, that's a stupid question, meaning, yeah, well, we're just here because things came together and produced life and so what. I think there's more to it. But what it is that's more to it, I haven't really much idea. And it might be somehow connected to the mechanisms of consciousness that we've been talking about, the mystery there. It's connected with all sorts of, yeah, I think these things are tied up in ways which are, you see, I tend to think the mystery of consciousness is tied up with the mystery of quantum mechanics and how it fits in with the classical world, and that's all to do with the mystery of complex numbers. And there are mysteries there which look like mathematical mysteries, but they seem to have a bearing on the way the physical world operates. We're scratching the surface. We have a long, huge way to go before we really understand that. And it's a beautiful idea that the depth, the mathematical depth could be discovered, and then there's Gödel's incompleteness along the way that we'll have to somehow figure our ways around. Yeah. So, Roger, it was a huge honor to talk to you. Thank you so much for your time today. It's been my pleasure. Thank you. Thanks for listening to this conversation with Roger Penrose, and thank you to our presenting sponsor, Cash App. Please consider supporting this podcast by getting ExpressVPN at expressvpn.com slash lexpod and downloading Cash App and using code lexpodcast. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcasts, support on Patreon, or simply connect with me on Twitter at lexfridman. And now let me leave you with some words of wisdom that Roger Penrose wrote in his book, The Emperor's New Mind. Beneath all this technicality is the feeling that it is indeed, quote unquote, obvious that the conscious mind cannot work like a computer, even though much of what is involved in mental activity might do so. This is the kind of obviousness that a child can see, though the child may later in life become browbeaten into believing that the obvious problems are quote unquote, non problems, to be argued into nonexistence by careful reasoning and clever choices of definition. Children sometimes see things clearly that are obscured in later life.
We often forget the wonder that we felt as children when the cares of the quote unquote, real world had begun to settle on our shoulders. Children are not afraid to pose basic questions that may embarrass us as adults to ask. What happens to each of our streams of consciousness after we die? Where was it before we were born? Might we become or have been someone else? Why do we perceive at all? Why are we here? Why is there a universe here at all in which we can actually be? These are puzzles that tend to come with the awakenings of awareness in any of us and no doubt with the awakening of self awareness within whichever creature or other entity it first came. Thank you for listening and hope to see you next time.
Roger Penrose: Physics of Consciousness and the Infinite Universe | Lex Fridman Podcast #85
The following is a conversation with David Silver, who leads the Reinforcement Learning Research Group at DeepMind, and was the lead researcher on AlphaGo, AlphaZero, and co-led the AlphaStar and MuZero efforts, and a lot of important work in reinforcement learning in general. I believe AlphaZero is one of the most important accomplishments in the history of artificial intelligence. And David is one of the key humans who brought AlphaZero to life together with a lot of other great researchers at DeepMind. He's humble, kind, and brilliant. We were both jet lagged, but didn't care and made it happen. It was a pleasure and truly an honor to talk with David. This conversation was recorded before the outbreak of the pandemic. For everyone feeling the medical, psychological, and financial burden of this crisis, I'm sending love your way. Stay strong, we're in this together, we'll beat this thing. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. Quick summary of the ads. Two sponsors, Masterclass and Cash App. Please consider supporting the podcast by signing up to Masterclass at masterclass.com slash Lex and downloading Cash App and using code LexPodcast. This show is presented by Cash App, the number one finance app in the app store. When you get it, use code LexPodcast. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Since Cash App allows you to buy Bitcoin, let me mention that cryptocurrency in the context of the history of money is fascinating. I recommend Ascent of Money as a great book on this history. Debits and credits on ledgers started around 30,000 years ago. The US dollar was created over 200 years ago, and Bitcoin, the first decentralized cryptocurrency, was released just over 10 years ago. So given that history, cryptocurrency is still very much in its early days of development, but it's still aiming to and just might redefine the nature of money. So again, if you get Cash App from the app store or Google Play and use the code LexPodcast, you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. This show is sponsored by Masterclass. Sign up at masterclass.com slash Lex to get a discount and to support this podcast. In fact, for a limited time now, if you sign up for an all access pass for a year, you get another all access pass to share with a friend. Buy one, get one free. When I first heard about Masterclass, I thought it was too good to be true. For $180 a year, you get an all access pass to watch courses from, to list some of my favorites, Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, the creator of SimCity and The Sims, on game design, Jane Goodall on conservation, Carlos Santana on guitar. His song Europa could be the most beautiful guitar song ever written. Garry Kasparov on chess, Daniel Negreanu on poker, and many, many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money.
For me, the key is to not be overwhelmed by the abundance of choice. Pick three courses you want to complete, watch each of them all the way through. It's not that long, but it's an experience that will stick with you for a long time, I promise. It's easily worth the money. You can watch it on basically any device. Once again, sign up on masterclass.com slash Lex to get a discount and to support this podcast. And now, here's my conversation with David Silver. What was the first program you've ever written? And what programming language? Do you remember? I remember very clearly, yeah. My parents brought home this BBC Model B microcomputer. It was just this fascinating thing to me. I was about seven years old and couldn't resist just playing around with it. So I think first program ever was writing my name out in different colors and getting it to loop and repeat that. And there was something magical about that, which just led to more and more. How did you think about computers back then? Like the magical aspect of it, that you can write a program and there's this thing that you just gave birth to that's able to create sort of visual elements and live in its own. Or did you not think of it in those romantic notions? Was it more like, oh, that's cool. I can solve some puzzles. It was always more than solving puzzles. It was something where, you know, there was this limitless possibilities. Once you have a computer in front of you, you can do anything with it. I used to play with Lego with the same feeling. You can make anything you want out of Lego, but even more so with a computer, you know, you're not constrained by the amount of kit you've got. And so I was fascinated by it and started pulling out the user guide and the advanced user guide and then learning. So I started in basic and then later 6502. My father also became interested in this machine and gave up his career to go back to school and study for a master's degree in artificial intelligence, funnily enough, at Essex University when I was seven. So I was exposed to those things at an early age. He showed me how to program in prologue and do things like querying your family tree. And those are some of my earliest memories of trying to figure things out on a computer. Those are the early steps in computer science programming, but when did you first fall in love with artificial intelligence or with the ideas, the dreams of AI? I think it was really when I went to study at university. So I was an undergrad at Cambridge and studying computer science. And I really started to question, you know, what really are the goals? What's the goal? Where do we want to go with computer science? And it seemed to me that the only step of major significance to take was to try and recreate something akin to human intelligence. If we could do that, that would be a major leap forward. And that idea, I certainly wasn't the first to have it, but it, you know, nestled within me somewhere and became like a bug. You know, I really wanted to crack that problem. So you thought it was, like you had a notion that this is something that human beings can do, that it is possible to create an intelligent machine. Well, I mean, unless you believe in something metaphysical, then what are our brains doing? 
Well, at some level they're information processing systems, which are able to take whatever information is in there, transform it through some form of program and produce some kind of output, which enables that human being to do all the amazing things that they can do in this incredible world. So then do you remember the first time you've written a program that, because you also had an interest in games. Do you remember the first time you were in a program that beat you in a game? That more beat you at anything? Sort of achieved super David Silver level performance? So I used to work in the games industry. So for five years I programmed games for my first job. So it was an amazing opportunity to get involved in a startup company. And so I was involved in building AI at that time. And so for sure there was a sense of building handcrafted, what people used to call AI in the games industry, which I think is not really what we might think of as AI in its fullest sense, but something which is able to take actions and in a way which makes things interesting and challenging for the human player. And at that time I was able to build these handcrafted agents, which in certain limited cases could do things which were able to do better than me, but mostly in these kind of Twitch like scenarios where they were able to do things faster or because they had some pattern which was able to exploit repeatedly. I think if we're talking about real AI, the first experience for me came after that when I realized that this path I was on wasn't taking me towards, it wasn't dealing with that bug which I still had inside me to really understand intelligence and try and solve it. That everything people were doing in games was short term fixes rather than long term vision. And so I went back to study for my PhD, which was funny enough trying to apply reinforcement learning to the game of Go. And I built my first Go program using reinforcement learning, a system which would by trial and error play against itself and was able to learn which patterns were actually helpful to predict whether it was gonna win or lose the game and then choose the moves that led to the combination of patterns that would mean that you're more likely to win. And that system, that system beat me. And how did that make you feel? Made me feel good. I mean, was there sort of the, yeah, it's a mix of a sort of excitement and was there a tinge of sort of like, almost like a fearful awe? You know, it's like in space, 2001 Space Odyssey kind of realizing that you've created something that, you know, that's achieved human level intelligence in this one particular little task. And in that case, I suppose neural networks weren't involved. There were no neural networks in those days. This was pre deep learning revolution. But it was a principled self learning system based on a lot of the principles which people are still using in deep reinforcement learning. How did I feel? I think I found it immensely satisfying that a system which was able to learn from first principles for itself was able to reach the point that it was understanding this domain better than I could and able to outwit me. I don't think it was a sense of awe. It was a sense that satisfaction, that something I felt should work had worked. So to me, AlphaGo, and I don't know how else to put it, but to me, AlphaGo and AlphaGo Zero, mastering the game of Go is again, to me, the most profound and inspiring moment in the history of artificial intelligence. 
So you're one of the key people behind this achievement and I'm Russian. So I really felt the first sort of seminal achievement when Deep Blue beat Garry Kasparov in 1997. So as far as I know, the AI community at that point largely saw the game of Go as unbeatable by AI using the sort of state of the art brute force methods, search methods. Even if you consider, at least the way I saw it, even if you consider arbitrary exponential scaling of compute, Go would still not be solvable, hence why it was thought to be impossible. So given that the game of Go was impossible to master, what was the dream for you? You just mentioned your PhD thesis of building the system that plays Go. What was the dream for you that you could actually build a computer program that achieves world class, not necessarily beats the world champion, but achieves that kind of level of playing Go? First of all, thank you, those are very kind words. And funnily enough, I just came from a panel where I was actually in a conversation with Garry Kasparov and Murray Campbell, who was one of the authors of Deep Blue. And it was their first meeting together since the match. So that just occurred yesterday. So I'm literally fresh from that experience. So these are amazing moments when they happen, but where did it all start? Well, for me, it started when I became fascinated in the game of Go. So Go for me, I've grown up playing games. I've always had a fascination with board games. I played chess as a kid, I played Scrabble as a kid. When I was at university, I discovered the game of Go. And to me, it just blew all of those other games out of the water. It was just so deep and profound in its complexity with endless levels to it. What I discovered was that I could devote endless hours to this game. And I knew in my heart of hearts that no matter how many hours I would devote to it, I would never become a grandmaster. Or, there was another path. And the other path was to try and understand how you could get some other intelligence to play this game better than I would be able to. And so even in those days, I had this idea that, what if, what if it was possible to build a program that could crack this? And as I started to explore the domain, I discovered that this was really the domain where people felt deeply that if progress could be made in Go, it would really mean a giant leap forward for AI. It was the challenge where all other approaches had failed. This is coming out of the era you mentioned, which was in some sense the golden era for the classical methods of AI, like heuristic search. In the 90s, they all fell one after another, not just chess with Deep Blue, but checkers, backgammon, Othello. There were numerous cases where systems built on top of heuristic search methods, with these high performance systems, had been able to defeat the human world champion in each of those domains. And yet in that same time period, there was a million dollar prize available for the game of Go, for the first system to beat a human professional player. And at the end of that time period, in the year 2000 when the prize expired, the strongest Go program in the world was defeated by a nine year old child when that nine year old child was giving nine free moves to the computer at the start of the game to try and even things up. And a computer Go expert beat that same strongest program with 29 handicap stones, 29 free moves. So that's what the state of affairs was when I became interested in this problem in around 2003 when I started working on computer Go.
There was nothing, there was very, very little in the way of progress towards meaningful performance, again, anything approaching human level. And so people, it wasn't through lack of effort, people had tried many, many things. And so there was a strong sense that something different would be required for Go than had been needed for all of these other domains where AI had been successful. And maybe the single clearest example is that Go, unlike those other domains, had this kind of intuitive property that a Go player would look at a position and say, hey, here's this mess of black and white stones. But from this mess, oh, I can predict that this part of the board has become my territory, this part of the board has become your territory, and I've got this overall sense that I'm gonna win and that this is about the right move to play. And that intuitive sense of judgment, of being able to evaluate what's going on in a position, it was pivotal to humans being able to play this game and something that people had no idea how to put into computers. So this question of how to evaluate a position, how to come up with these intuitive judgments was the key reason why Go was so hard in addition to its enormous search space, and the reason why methods which had succeeded so well elsewhere failed in Go. And so people really felt deep down that in order to crack Go we would need to get something akin to human intuition. And if we got something akin to human intuition, we'd be able to solve many, many more problems in AI. So for me, that was the moment where it's like, okay, this is not just about playing the game of Go, this is about something profound. And it was back to that bug which had been itching me all those years. This is the opportunity to do something meaningful and transformative, and I guess a dream was born. That's a really interesting way to put it. So almost this realization that you need to find, formulate Go as a kind of a prediction problem versus a search problem was the intuition. I mean, maybe that's the wrong crude term, but to give it the ability to kind of intuit things about positional structure of the board. Now, okay, but what about the learning part of it? Did you have a sense that you have to, that learning has to be part of the system? Again, something that hasn't really as far as I think, except with TD Gammon in the 90s with RL a little bit, hasn't been part of those state of the art game playing systems. So I strongly felt that learning would be necessary. And that's why my PhD topic back then was trying to apply reinforcement learning to the game of Go and not just learning of any type, but I felt that the only way to really have a system to progress beyond human levels of performance wouldn't just be to mimic how humans do it, but to understand for themselves. And how else can a machine hope to understand what's going on except through learning? If you're not learning, what else are you doing? Well, you're putting all the knowledge into the system. And that just feels like something which decades of AI have told us is maybe not a dead end, but certainly has a ceiling to the capabilities. It's known as the knowledge acquisition bottleneck, that the more you try to put into something, the more brittle the system becomes. And so you just have to have learning. You have to have learning. 
That's the only way you're going to be able to get a system which has sufficient knowledge in it, millions and millions of pieces of knowledge, billions, trillions of a form that it can actually apply for itself and understand how those billions and trillions of pieces of knowledge can be leveraged in a way which will actually lead it towards its goal without conflict or other issues. Yeah, I mean, if I put myself back in that time, I just wouldn't think like that. Without a good demonstration of RL, I would think more in the symbolic AI, like not learning, but sort of a simulation of knowledge base, like a growing knowledge base, but it would still be sort of pattern based, like basically have little rules that you kind of assemble together into a large knowledge base. Well, in a sense, that was the state of the art back then. So if you look at the Go programs, which had been competing for this prize I mentioned, they were an assembly of different specialized systems, some of which used huge amounts of human knowledge to describe how you should play the opening, how you should, all the different patterns that were required to play well in the game of Go, end game theory, combinatorial game theory, and combined with more principled search based methods, which were trying to solve for particular sub parts of the game, like life and death, connecting groups together, all these amazing sub problems that just emerge in the game of Go, there were different pieces all put together into this like collage, which together would try and play against a human. And although not all of the pieces were handcrafted, the overall effect was nevertheless still brittle, and it was hard to make all these pieces work well together. And so really, what I was pressing for and the main innovation of the approach I took was to go back to first principles and say, well, let's back off that and try and find a principled approach where the system can learn for itself, just from the outcome, like learn for itself. If you try something, did that help or did it not help? And only through that procedure can you arrive at knowledge, which is verified. The system has to verify it for itself, not relying on any other third party to say this is right or this is wrong. And so that principle was already very important in those days, but unfortunately, we were missing some important pieces back then. So before we dive into maybe discussing the beauty of reinforcement learning, let's take a step back, we kind of skipped it a bit, but the rules of the game of Go, what the elements of it perhaps contrasting to chess that sort of you really enjoyed as a human being, and also that make it really difficult as a AI machine learning problem. So the game of Go has remarkably simple rules. In fact, so simple that people have speculated that if we were to meet alien life at some point, that we wouldn't be able to communicate with them, but we would be able to play Go with them. Probably have discovered the same rule set. So the game is played on a 19 by 19 grid, and you play on the intersections of the grid and the players take turns. And the aim of the game is very simple. It's to surround as much territory as you can, as many of these intersections with your stones and to surround more than your opponent does. And the only nuance to the game is that if you fully surround your opponent's piece, then you get to capture it and remove it from the board and it counts as your own territory. 
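As a rough illustration of the capture rule just described, here is a minimal sketch of checking a group's liberties on a 19 by 19 board. The board encoding and function names are my own invention for illustration, not DeepMind's code or any real Go engine.

```python
# Minimal sketch of the Go capture rule described above.
# Board: dict mapping (row, col) -> 'B' or 'W'; empty points are simply absent.
# A group is captured when it has no liberties (no adjacent empty points).

BOARD_SIZE = 19

def neighbors(point):
    r, c = point
    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
        if 0 <= nr < BOARD_SIZE and 0 <= nc < BOARD_SIZE:
            yield (nr, nc)

def group_and_liberties(board, point):
    """Flood-fill the group containing `point`; return (group, liberties)."""
    color = board[point]
    group, liberties, frontier = {point}, set(), [point]
    while frontier:
        p = frontier.pop()
        for n in neighbors(p):
            if n not in board:
                liberties.add(n)          # empty neighbor counts as a liberty
            elif board[n] == color and n not in group:
                group.add(n)
                frontier.append(n)
    return group, liberties

def capture_if_dead(board, point):
    """Remove the group at `point` if it has no liberties; return captured stones."""
    group, liberties = group_and_liberties(board, point)
    if liberties:
        return set()
    for p in group:
        del board[p]
    return group

# Example: a single white stone fully surrounded by black is captured.
board = {(3, 3): 'W', (2, 3): 'B', (4, 3): 'B', (3, 2): 'B', (3, 4): 'B'}
print(capture_if_dead(board, (3, 3)))   # -> {(3, 3)}
```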
Now from those very simple rules, immense complexity arises. There are kind of profound strategies in how to surround territory, how to kind of trade off between making solid territory yourself now compared to building up influence that will help you acquire territory later in the game, how to connect groups together, how to keep your own groups alive, which patterns of stones are most useful compared to others. There's just immense knowledge. And human Go players have played this game since it was discovered thousands of years ago, and they have built up this immense knowledge base over the years. It's studied very deeply and played by something like 50 million players across the world, mostly in China, Japan, and Korea, where it's an important part of the culture, so much so that it's considered one of the four ancient arts that were required of Chinese scholars. So there's a deep history there. But there's interesting qualities. So if I sort of compare it to chess, in the same way Go is part of Chinese culture, chess in Russia is also considered one of the sacred arts. So if we contrast sort of Go with chess, there's interesting qualities about Go. Maybe you can correct me if I'm wrong, but the evaluation of a particular static board is not as reliable. Like you can't, in chess you can kind of assign points to the different units, and it's kind of a pretty good measure of who's winning, who's losing. It's not so clear. Yeah, so in the game of Go, you find yourself in a situation where both players have played the same number of stones. Actually, captures at a strong level of play happen very rarely, which means that at any moment in the game, you've got the same number of white stones and black stones. And the only thing which differentiates how well you're doing is this intuitive sense of where are the territories ultimately going to form on this board? And if you look at the complexity of a real Go position, it's mind boggling, that kind of question of what will happen 300 moves from now when you see just a scattering of 20 white and black stones intermingled. And so that challenge is the reason why position evaluation is so hard in Go compared to other games. In addition to that, it has an enormous search space. So there's around 10 to the 170 positions in the game of Go. That's an astronomical number. And that search space is so great that traditional heuristic search methods that were so successful in things like Deep Blue and chess programs just kind of fall over in Go. So at which point did reinforcement learning enter your life, your research life, your way of thinking? We just talked about learning, but reinforcement learning is a very particular kind of learning. One that's both philosophically sort of profound, but also one that was pretty difficult to get to work if we look back at the early days. So when did that enter your life and how did that work progress? So I had just finished working in the games industry at this startup company. And I took a year out to discover for myself exactly which path I wanted to take. I knew I wanted to study intelligence, but I wasn't sure what that meant at that stage. I really didn't feel I had the tools to decide on exactly which path I wanted to follow. So during that year, I read a lot. And one of the things I read was Sutton and Barto, the sort of seminal textbook, Reinforcement Learning: An Introduction.
And when I read that textbook, I just had this resonating feeling that this is what I understood intelligence to be. And this was the path that I felt would be necessary to go down to make progress in AI. So I got in touch with Rich Sutton and asked him if he would be interested in supervising me on a PhD thesis in computer Go. And he basically said that if he's still alive, he'd be happy to. But unfortunately, he'd been struggling with very serious cancer for some years. And he really wasn't confident at that stage that he'd even be around to see the end of it. But fortunately, that part of the story worked out very happily. And I found myself out there in Alberta. They've got a great games group out there with a history of fantastic work in board games, as well as Rich Sutton, the father of RL. So it was the natural place for me to go in some sense to study this question. And the more I looked into it, the more strongly I felt that this wasn't just the path to progress in computer Go. But really, this was the thing I'd been looking for. This was really an opportunity to frame what intelligence means. Like, what are the goals of AI, in a single clear problem definition, such that if we're able to solve that single clear problem definition, in some sense, we've cracked the problem of AI. So to you, reinforcement learning ideas, at least sort of echoes of it, would be at the core of intelligence. It is at the core of intelligence. And if we ever create a human level intelligence system, it would be at the core of that kind of system. Let me say it this way, that I think it's helpful to separate out the problem from the solution. So I see the problem of intelligence, I would say it can be formalized as the reinforcement learning problem, and that that formalization is enough to capture most, if not all of the things that we mean by intelligence, that they can all be brought within this framework and gives us a way to access them in a meaningful way that allows us as scientists to understand intelligence and us as computer scientists to build them. And so in that sense, I feel that it gives us a path, maybe not the only path, but a path towards AI. And so do I think that any system in the future that's solved AI would have to have RL within it? Well, I think if you ask that, you're asking about the solution methods. I would say that if we have such a thing, it would be a solution to the RL problem. Now, what particular methods have been used to get there? Well, we should keep an open mind about the best approaches to actually solve any problem. And the things we have right now for reinforcement learning, maybe, I believe they've got a lot of legs, but maybe we're missing some things. Maybe there's gonna be better ideas. I think we should, let's remain modest. We're at the early days of this field and there are many amazing discoveries ahead of us. For sure, the specifics, especially of the different kinds of RL approaches currently, there could be other things that fall into the very large umbrella of RL. But if it's okay, can we take a step back and kind of ask the basic question of what is, to you, reinforcement learning? So reinforcement learning is the study and the science and the problem of intelligence in the form of an agent that interacts with an environment. So the problem you're trying to solve is represented by some environment, like the world in which that agent is situated. And the goal of RL is clear: the agent gets to take actions.
Those actions have some effect on the environment and the environment gives back an observation to the agent saying, this is what you see or sense. And one special thing which it gives back is called the reward signal, how well it's doing in the environment. And the reinforcement learning problem is to simply take actions over time so as to maximize that reward signal. So a couple of basic questions. What types of RL approaches are there? So I don't know if there's a nice brief inwards way to paint the picture of sort of value based, model based, policy based reinforcement learning. Yeah, so now if we think about, okay, so there's this ambitious problem definition of RL. It's really, it's truly ambitious. It's trying to capture and encircle all of the things in which an agent interacts with an environment and say, well, how can we formalize and understand what it means to crack that? Now let's think about the solution method. Well, how do you solve a really hard problem like that? Well, one approach you can take is to decompose that very hard problem into pieces that work together to solve that hard problem. And so you can kind of look at the decomposition that's inside the agent's head, if you like, and ask, well, what form does that decomposition take? And some of the most common pieces that people use when they're kind of putting the solution method together, some of the most common pieces that people use are whether or not that solution has a value function. That means, is it trying to predict, explicitly trying to predict how much reward it will get in the future? Does it have a representation of a policy? That means something which is deciding how to pick actions. Is that decision making process explicitly represented? And is there a model in the system? Is there something which is explicitly trying to predict what will happen in the environment? And so those three pieces are, to me, some of the most common building blocks. And I understand the different choices in RL as choices of whether or not to use those building blocks when you're trying to decompose the solution. Should I have a value function represented? Should I have a policy represented? Should I have a model represented? And there are combinations of those pieces and, of course, other things that you could add into the picture as well. But those three fundamental choices give rise to some of the branches of RL with which we're very familiar. And so those, as you mentioned, there is a choice of what's specified or modeled explicitly. And the idea is that all of these are somehow implicitly learned within the system. So it's almost a choice of how you approach a problem. Do you see those as fundamental differences or are these almost like small specifics, like the details of how you solve a problem but they're not fundamentally different from each other? I think the fundamental idea is maybe at the higher level. The fundamental idea is the first step of the decomposition is really to say, well, how are we really gonna solve any kind of problem where you're trying to figure out how to take actions and just from this stream of observations, you've got some agent situated in its sensory motor stream and getting all these observations in, getting to take these actions, and what should it do? How can you even broach that problem? You know, maybe the complexity of the world is so great that you can't even imagine how to build a system that would understand how to deal with that. 
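To make the problem statement concrete, here is a minimal sketch of the agent-environment loop David describes. The class and method names are hypothetical stand-ins of my own, not any particular library's API: the environment returns observations and a reward, and the agent's only objective is to pick actions that maximize the reward it accumulates, using whatever mix of value function, policy, or model it keeps inside.

```python
# Minimal sketch of the reinforcement learning loop described above.
# `Environment` and `Agent` are hypothetical stand-ins, not a real library API.

class Environment:
    def reset(self):
        """Start a new episode; return the first observation."""
        raise NotImplementedError

    def step(self, action):
        """Apply the action; return (observation, reward, done)."""
        raise NotImplementedError

class Agent:
    def act(self, observation):
        """Pick an action. Internally this could use a policy,
        a value function, a model of the environment, or any mix."""
        raise NotImplementedError

    def learn(self, observation, action, reward, next_observation):
        """Update internal parameters from experience."""
        raise NotImplementedError

def run_episode(env, agent):
    """One pass of the loop: act, observe, collect reward, learn."""
    obs, total_reward, done = env.reset(), 0.0, False
    while not done:
        action = agent.act(obs)
        next_obs, reward, done = env.step(action)
        agent.learn(obs, action, reward, next_obs)
        total_reward += reward
        obs = next_obs
    return total_reward   # the quantity the agent is trying to maximize
```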
And so the first step of this decomposition is to say, well, you have to learn. The system has to learn for itself. And so note that the reinforcement learning problem doesn't actually stipulate that you have to learn. Like you could maximize your rewards without learning. It would just, wouldn't do a very good job of it. So learning is required because it's the only way to achieve good performance in any sufficiently large and complex environment. So that's the first step. And so that step gives commonality to all of the other pieces, because now you might ask, well, what should you be learning? What does learning even mean? You know, in this sense, you know, learning might mean, well, you're trying to update the parameters of some system, which is then the thing that actually picks the actions. And those parameters could be representing anything. They could be parameterizing a value function or a model or a policy. And so in that sense, there's a lot of commonality in that whatever is being represented there is the thing which is being learned, and it's being learned with the ultimate goal of maximizing rewards. But the way in which you decompose the problem is really what gives the semantics to the whole system. Like, are you trying to learn something to predict well, like a value function or a model? Are you learning something to perform well, like a policy? And the form of that objective is kind of giving the semantics to the system. And so it really is, at the next level down, a fundamental choice, and we have to make those fundamental choices as system designers or enable our algorithms to be able to learn how to make those choices for themselves. So then the next step you mentioned, the very first thing you have to deal with is, can you even take in this huge stream of observations and do anything with it? So the natural next basic question is, what is deep reinforcement learning? And what is this idea of using neural networks to deal with this huge incoming stream? So amongst all the approaches for reinforcement learning, deep reinforcement learning is one family of solution methods that tries to utilize powerful representations that are offered by neural networks to represent any of these different components of the solution, of the agent, like whether it's the value function or the model or the policy. The idea of deep learning is to say, well, here's a powerful toolkit that's so powerful that it's universal in the sense that it can represent any function and it can learn any function. And so if we can leverage that universality, that means that whatever we need to represent for our policy or for our value function or for a model, deep learning can do it. So that deep learning is one approach that offers us a toolkit that has no ceiling to its performance, that as we start to put more resources into the system, more memory and more computation and more data, more experience, more interactions with the environment, that these are systems that can just get better and better and better at doing whatever the job is they've asked them to do, whatever we've asked that function to represent, it can learn a function that does a better and better job of representing that knowledge, whether that knowledge be estimating how well you're gonna do in the world, the value function, whether it's gonna be choosing what to do in the world, the policy, or whether it's understanding the world itself, what's gonna happen next, the model. 
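As one deliberately tiny illustration of what it means to use a neural network to represent one of those components, here is a sketch of a value function as a small two layer network in numpy. This is a toy example of my own, with made-up sizes, not AlphaGo's architecture or anyone's production code.

```python
import numpy as np

# Toy sketch: a two-layer network as a value function v(obs) ~ expected return.
# The weights are the "parameters being learned"; nothing here is AlphaGo-specific.

rng = np.random.default_rng(0)
OBS_DIM, HIDDEN = 8, 32

W1 = rng.normal(scale=0.1, size=(OBS_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(HIDDEN, 1))
b2 = np.zeros(1)

def value(obs):
    """Predict how much reward we expect from this observation onwards."""
    h = np.tanh(obs @ W1 + b1)       # hidden layer
    return (h @ W2 + b2)[0]          # scalar value estimate

# The same kind of network could instead output a distribution over actions
# (a policy) or a prediction of the next observation (a model); what changes
# is the objective you train it against, not the representational machinery.
obs = rng.normal(size=OBS_DIM)
print(value(obs))
```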
Nevertheless, the fact that neural networks are able to learn incredibly complex representations that allow you to do the policy, the model or the value function is, at least to my mind, exceptionally beautiful and surprising. Like, was it surprising to you? Can you still believe it works as well as it does? Do you have good intuition about why it works at all and works as well as it does? I think, let me take two parts to that question. I think it's not surprising to me that the idea of reinforcement learning works because in some sense, I think it's the, I feel it's the only thing which can ultimately. And so I feel we have to address it and there must be success as possible because we have examples of intelligence. And it must at some level be able to, possible to acquire experience and use that experience to do better in a way which is meaningful to environments of the complexity that humans can deal with. It must be. Am I surprised that our current systems can do as well as they can do? I think one of the big surprises for me and a lot of the community is really the fact that deep learning can continue to perform so well despite the fact that these neural networks that they're representing have these incredibly nonlinear kind of bumpy surfaces which to our kind of low dimensional intuitions make it feel like surely you're just gonna get stuck and learning will get stuck because you won't be able to make any further progress. And yet the big surprise is that learning continues and these what appear to be local optima turn out not to be because in high dimensions when we make really big neural nets, there's always a way out and there's a way to go even lower and then you're still not in a local optima because there's some other pathway that will take you out and take you lower still. And so no matter where you are, learning can proceed and do better and better and better without bound. And so that is a surprising and beautiful property of neural nets which I find elegant and beautiful and somewhat shocking that it turns out to be the case. As you said, which I really like to our low dimensional intuitions, that's surprising. Yeah, we're very tuned to working within a three dimensional environment. And so to start to visualize what a billion dimensional neural network surface that you're trying to optimize over, what that even looks like is very hard for us. And so I think that really, if you try to account for the, essentially the AI winter where people gave up on neural networks, I think it's really down to that lack of ability to generalize from low dimensions to high dimensions because back then we were in the low dimensional case. People could only build neural nets with 50 nodes in them or something. And to imagine that it might be possible to build a billion dimensional neural net and it might have a completely different, qualitatively different property was very hard to anticipate. And I think even now we're starting to build the theory to support that. And it's incomplete at the moment, but all of the theory seems to be pointing in the direction that indeed this is an approach which truly is universal both in its representational capacity, which was known, but also in its learning ability, which is surprising. And it makes one wonder what else we're missing due to our low dimensional intuitions that will seem obvious once it's discovered. 
I often wonder, when we one day do have AIs which are superhuman in their abilities to understand the world, what will they think of the algorithms that we're developing right now? Will they look back at these days and feel that these algorithms were naive first steps, or will they still be the fundamental ideas which are used even in 10,000, 100,000 years? It's hard to know. They'll watch back this conversation with a smile, maybe a little bit of a laugh. I mean, my sense is, I think just like when we used to think that the sun revolved around the earth, they'll see our systems of today, reinforcement learning, as too complicated, that the answer was simple all along. There's something, just like you said in the game of Go, I mean, I love systems like cellular automata, where there are simple rules from which incredible complexity emerges, so it feels like there might be some really simple approaches, just like Rich Sutton says, right? These simple methods with compute over time seem to prove to be the most effective. I 100% agree. I think that if we try to anticipate what will generalize well into the future, I think it's likely to be the case that it's the simple, clear ideas which will have the longest legs and which will carry us furthest into the future. Nevertheless, we're in a situation where we need to make things work today, and sometimes that requires putting together more complex systems where we don't have the full answers yet as to what those minimal ingredients might be. So speaking of which, if we could take a step back to Go, what was MoGo and what was the key idea behind the system? So back during my PhD on computer Go, around about that time, there was a major new development which actually happened in the context of computer Go, and it was really a revolution in the way that heuristic search was done, and the idea was essentially that a position could be evaluated, or a state in general could be evaluated, not by humans saying whether that position is good or not, or even humans providing rules as to how you might evaluate it, but instead by allowing the system to randomly play out the game until the end multiple times and taking the average of those outcomes as the prediction of what will happen. So for example, if you're in the game of Go, the intuition is that you take a position and you get the system to kind of play random moves against itself all the way to the end of the game and you see who wins. And if black ends up winning more of those random games than white, well, you say, hey, this is a position that favors black. And if white ends up winning more of those random games than black, then it favors white. So that idea was known as Monte Carlo search, and a particular form of Monte Carlo search that became very effective and was developed in computer Go first by Rémi Coulom in 2006, and then taken further by others, was something called Monte Carlo tree search, which basically takes that same idea and uses that insight so that every node of a search tree is evaluated by the average of the random playouts from that node onwards. And this idea, when you think about it, was very powerful and suddenly led to huge leaps forward in the strength of computer Go playing programs. And among those, the strongest of the Go playing programs in those days was a program called MoGo, which was the first program to actually reach human master level on small boards, nine by nine boards.
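To pin down the evaluation idea David describes, here is a minimal sketch of Monte Carlo evaluation by random playouts, the ingredient underneath Monte Carlo tree search. The game interface here (legal_moves, play, is_over, winner) is hypothetical, just for illustration, not any real Go library.

```python
import random

# Minimal sketch of Monte Carlo position evaluation by random playouts.
# `state.play(move)` is assumed to return the successor state (hypothetical API).

def random_playout(state):
    """Play uniformly random legal moves until the game ends; return the winner."""
    while not state.is_over():
        state = state.play(random.choice(state.legal_moves()))
    return state.winner()            # e.g. 'B' or 'W'

def monte_carlo_value(state, player, num_playouts=1000):
    """Estimate how good `state` is for `player` as the fraction of
    random playouts from this position that `player` goes on to win."""
    wins = sum(random_playout(state) == player for _ in range(num_playouts))
    return wins / num_playouts

# Monte Carlo tree search builds on this: each node of the search tree is
# scored by the average outcome of the playouts that passed through it,
# and the search is steered toward the moves with the best averages.
```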
And so this was a program by someone called Sylvain Gelly, who's a good colleague of mine, and I worked with him a little bit in those days, as part of my PhD thesis. And MoGo was a first step towards the latest successes we saw in computer Go, but it was still missing a key ingredient. MoGo was evaluating purely by random rollouts against itself. And in a way, it's truly remarkable that random play should give you anything at all. Why in this perfectly deterministic game that's very precise and involves these very exact sequences, why is it that randomization is helpful? And so the intuition is that randomization captures something about the nature of the search tree, that from a position you're understanding the nature of the search tree from that node onwards by using randomization. And this was a very powerful idea. And I've seen this in other spaces, talked to Richard Karp and so on, randomized algorithms somehow magically are able to do exceptionally well, simplifying the problem somehow. Makes you wonder about the fundamental nature of randomness in our universe. It seems to be a useful thing. But so from that moment, can you maybe tell the origin story and the journey of AlphaGo? Yeah, so programs based on Monte Carlo tree search were a first revolution in the sense that they led to suddenly programs that could play the game to any reasonable level, but they plateaued. It seemed that no matter how much effort people put into these techniques, they couldn't exceed the level of amateur dan level Go players. So strong players, but not anywhere near the level of professionals, nevermind the world champion. And so that brings us to the birth of AlphaGo, which happened in the context of a startup company known as DeepMind. I heard of them. Where a project was born. And the project was really a scientific investigation where myself and Aja Huang and an intern, Chris Maddison, were exploring a scientific question. And that scientific question was really, is there another fundamentally different approach to this key question of Go, the key challenge of how can you build that intuition and how can you just have a system that could look at a position and understand what move to play or how well you're doing in that position, who's gonna win? And so the deep learning revolution had just begun. Competitions like ImageNet had suddenly been won by deep learning techniques back in 2012. And following that, it was natural to ask, well, if deep learning is able to scale up so effectively with images to understand them enough to classify them, well, why not Go? Why not take the black and white stones of the Go board and build a system which can understand for itself what that means in terms of what move to pick or who's gonna win the game, black or white? And so that was our scientific question which we were probing and trying to understand. And as we started to look at it, we discovered that we could build a system. So in fact, our very first paper on AlphaGo was actually a pure deep learning system which was trying to answer this question. And we showed that actually a pure deep learning system with no search at all was actually able to reach human dan level, master level, at the full game of Go, 19 by 19 boards. And so without any search at all, suddenly we had systems which were playing at the level of the best Monte Carlo tree search systems, the ones with randomized rollouts. So first of all, sorry to interrupt, but that's kind of a groundbreaking notion.
That's like basically a definitive step away from a couple of decades of essentially search dominating AI. So how did that make you feel? Was it surprising from a scientific perspective? In general, how did it make you feel? I found this to be profoundly surprising. In fact, it was so surprising that we had a bet back then. And like many good projects, bets are quite motivating. And the bet was whether it was possible for a system based purely on deep learning, with no search at all, to beat a dan level human player. And so we had someone who joined our team who was a dan level player. He came in and we had this first match against him and... Which side of the bet were you on, by the way? The losing or the winning side? I tend to be an optimist with the power of deep learning and reinforcement learning. So the system won, and we were able to beat this human dan level player. And for me, that was the moment where it was like, okay, something special is afoot here. We have a system which without search is able to already just look at this position and understand things as well as a strong human player. And from that point onwards, I really felt that reaching the top levels of human play, professional level, world champion level, I felt it was actually an inevitability. And if it was an inevitable outcome, I was rather keen that it would be us that achieved it. So we scaled up. This was something where, so I had lots of conversations back then with Demis Hassabis, the head of DeepMind, who was extremely excited. And we made the decision to scale up the project, brought more people on board. And so AlphaGo became something where we had a clear goal, which was to try and crack this outstanding challenge of AI to see if we could beat the world's best players. And this led within the space of not so many months to playing against the European champion Fan Hui in a match which became memorable in history as the first time a Go program had ever beaten a professional player. And at that time we had to make a judgment as to when and whether we should go and challenge the world champion. And this was a difficult decision to make. Again, we were basing our predictions on our own progress and had to estimate based on the rapidity of our own progress when we thought we would exceed the level of the human world champion. And we tried to make an estimate and set up a match and that became the AlphaGo versus Lee Sedol match in 2016. And we should say, spoiler alert, that AlphaGo was able to defeat Lee Sedol. That's right, yeah. So maybe we could take even a broader view. AlphaGo involves both learning from expert games and, as far as I remember, a self play component where it learns by playing against itself. But in your sense, what was the role of learning from expert games there? And in terms of your self evaluation, whether you can take on the world champion, what was the thing that you were trying to do more of? Sort of train more on expert games, or was there another... I'm asking so many poorly phrased questions, but did you have a hope or dream that self play would be the key component at that moment yet? So in the early days of AlphaGo, we used human data to explore the science of what deep learning can achieve. And so when we had our first paper that showed that it was possible to predict the winner of the game, that it was possible to suggest moves, that was done using human data. Solely human data.
Yeah, and so the reason that we did it that way was at that time we were exploring separately the deep learning aspect from the reinforcement learning aspect. That was the part which was new and unknown to me at that time was how far could that be stretched? Once we had that, it then became natural to try and use that same representation and see if we could learn for ourselves using that same representation. And so right from the beginning, actually our goal had been to build a system using self play. And to us, the human data right from the beginning was an expedient step to help us for pragmatic reasons to go faster towards the goals of the project than we might be able to starting solely from self play. And so in those days, we were very aware that we were choosing to use human data and that might not be the longterm holy grail of AI, but that it was something which was extremely useful to us. It helped us to understand the system. It helped us to build deep learning representations which were clear and simple and easy to use. And so really I would say it served a purpose not just as part of the algorithm, but something which I continue to use in our research today, which is trying to break down a very hard challenge into pieces which are easier to understand for us as researchers and develop. So if you use a component based on human data, it can help you to understand the system such that then you can build the more principled version later that does it for itself. So as I said, the AlphaGo victory, and I don't think I'm being sort of romanticizing this notion. I think it's one of the greatest moments in the history of AI. So were you cognizant of this magnitude of the accomplishment at the time? I mean, are you cognizant of it even now? Because to me, I feel like it's something that would, we mentioned what the AGI systems of the future will look back. I think they'll look back at the AlphaGo victory as like, holy crap, they figured it out. This is where it started. Well, thank you again. I mean, it's funny because I guess I've been working on, I've been working on ComputerGo for a long time. So I'd been working at the time of the AlphaGo match on ComputerGo for more than a decade. And throughout that decade, I'd had this dream of what would it be like to, what would it be like really to actually be able to build a system that could play against the world champion. And I imagined that that would be an interesting moment that maybe some people might care about that and that this might be a nice achievement. But I think when I arrived in Seoul and discovered the legions of journalists that were following us around and the 100 million people that were watching the match online live, I realized that I'd been off in my estimation of how significant this moment was by several orders of magnitude. And so there was definitely an adjustment process to realize that this was something which the world really cared about and which was a watershed moment. And I think there was that moment of realization. But it's also a little bit scary because if you go into something thinking it's gonna be maybe of interest and then discover that 100 million people are watching, it suddenly makes you worry about whether some of the decisions you'd made were really the best ones or the wisest, or were going to lead to the best outcome. And we knew for sure that there were still imperfections in AlphaGo, which were gonna be exposed to the whole world watching. 
And so, yeah, it was I think a great experience and I feel privileged to have been part of it, privileged to have led that amazing team. I feel privileged to have been in a moment of history like you say, but also lucky that in a sense I was insulated from the knowledge of, I think it would have been harder to focus on the research if the full kind of reality of what was gonna come to pass had been known to me and the team. I think it was, we were in our bubble and we were working on research and we were trying to answer the scientific questions and then bam, the public sees it. And I think it was better that way in retrospect. Were you confident that, I guess, what were the chances that you could get the win? So just like you said, I'm a little bit more familiar with another accomplishment that we may not even get a chance to talk about. I talked to Oriol Vinyals about AlphaStar, which is another incredible accomplishment, but there with AlphaStar and beating the best at StarCraft, there was already a track record with AlphaGo. This was really the first time you get to see reinforcement learning face the best human in the world. So what was your confidence like, what were the odds? Well, we actually. Was there a bet? Funnily enough, there was. So just before the match, we weren't betting on anything concrete, but we all held out a hand. Everyone in the team held out a hand at the beginning of the match. And the number of fingers that they had out on their hand was supposed to represent how many games they thought we would win against Lee Sedol. And there was an amazing spread in the team's predictions. But I have to say, I predicted four-one. And the reason was based purely on data. So I'm a scientist first and foremost. And one of the things which we had established was that AlphaGo in around one in five games would develop something which we called a delusion, which was a kind of hole in its knowledge where it wasn't able to fully understand everything about the position. And that hole in its knowledge would persist for tens of moves throughout the game. And we knew two things. We knew that if there were no delusions, that AlphaGo seemed to be playing at a level that was far beyond any human capabilities. But we also knew that if there were delusions, the opposite was true. And in fact, that's what came to pass. We saw all of those outcomes. And Lee Sedol in one of the games played a really beautiful sequence that AlphaGo just hadn't predicted. And after that, it led it into this situation where it was unable to really understand the position fully and found itself in one of these delusions. So indeed, yeah, four-one was the outcome. So yeah, and can you maybe speak to it a little bit more? What were the five games? What happened? Are there interesting things that come to memory in terms of the play of the human or the machine? So I remember all of these games vividly, of course. Moments like these don't come too often in the lifetime of a scientist. And the first game was magical because it was the first time that a computer program had defeated a world champion in this grand challenge of Go. And there was a moment where AlphaGo invaded Lee Sedol's territory towards the end of the game. And that's quite an audacious thing to do. It's like saying, hey, you thought this was going to be your territory in the game, but I'm going to stick a stone right in the middle of it and prove to you that I can break it up. And Lee Sedol's face just dropped.
He wasn't expecting a computer to do something that audacious. The second game became famous for a move known as move 37. This was a move that was played by AlphaGo that broke all of the conventions of Go. The Go players were so shocked by this, they thought that maybe the operator had made a mistake. They thought that there was something crazy going on. And it just broke every rule that Go players are taught from a very young age. They're just taught this kind of move called a shoulder hit. You can only play it on the third line or the fourth line, and AlphaGo played it on the fifth line. And it turned out to be a brilliant move and made this beautiful pattern in the middle of the board that ended up winning the game. And so this really was a clear instance where we could say computers exhibited creativity, that this was really a move that was something humans hadn't known about, hadn't anticipated. And computers discovered this idea. They were the ones to say, actually, here's a new idea, something new, not in the domains of human knowledge of the game. And now the humans think this is a reasonable thing to do. And it's part of Go knowledge now. The third game, something special happens when you play against a human world champion, which, again, I hadn't anticipated before going there, which is these players are amazing. Lee Sedol was a true champion, an 18-time world champion, and had this amazing ability to probe AlphaGo for weaknesses of any kind. And in the third game, he was losing, and we felt we were sailing comfortably to victory. But he managed to, from nothing, stir up this fight and build what's called a double ko, these kind of repetitive positions. And he knew that historically, no computer Go program had ever been able to deal correctly with double ko positions. And he managed to summon one out of nothing. And so for us, this was a real challenge. Would AlphaGo be able to deal with this, or would it just kind of crumble in the face of this situation? And fortunately, it dealt with it perfectly. The fourth game was amazing in that Lee Sedol appeared to be losing this game. AlphaGo thought it was winning. And then Lee Sedol did something, which I think only a true world champion can do, which is he found a brilliant sequence in the middle of the game, a brilliant sequence that led him to really just transform the position. He kind of found just a piece of genius, really. And after that, AlphaGo, its evaluation just tumbled. It thought it was winning this game. And all of a sudden, it tumbled and said, oh, now I've got no chance. And it started to behave rather oddly at that point. In the final game, for some reason, having seen AlphaGo suffer from delusions in the previous game, we as a team were convinced that it was suffering from another delusion. We were convinced that it was misevaluating the position and that something was going terribly wrong. And it was only in the last few moves of the game that we realized that actually, although it had been predicting it was going to win all the way through, it really was. And so somehow, it just taught us yet again that you have to have faith in your systems. When they exceed your own level of ability and your own judgment, you have to trust in them to know better than you, the designer; once you've bestowed on them the ability to judge better than you can, then trust the system to do so.
So just like in the case of Deep Blue beating Garry Kasparov, that was, I think, the first time Garry had ever lost, actually, to anybody. And I mean, there's a similar situation with Lee Sedol. It's a tragic loss for humans, but a beautiful one, I think, that's kind of, from the tragedy, sort of emerges over time, emerges a kind of inspiring story. But Lee Sedol recently announced his retirement. I don't know if we can look too deeply into it, but he did say that even if I become number one, there's an entity that cannot be defeated. So what do you think about these words? What do you think about his retirement from the game of Go? Well, let me take you back, first of all, to the first part of your comment about Garry Kasparov, because actually, at the panel yesterday, he specifically said that when he first lost to Deep Blue, he viewed it as a failure. He viewed that this had been a failure of his. But later on in his career, he said he'd come to realize that actually, it was a success. It was a success for everyone, because this marked a transformational moment for AI. And so even for Garry Kasparov, he came to realize that that moment was pivotal and actually meant something much more than his personal loss in that moment. Lee Sedol, I think, was much more cognizant of that, even at the time. And so in his closing remarks to the match, he really felt very strongly that what had happened in the AlphaGo match was not only meaningful for AI, but for humans as well. And he felt as a Go player that it had opened his horizons and meant that he could start exploring new things. It brought his joy back for the game of Go, because it had broken all of the conventions and barriers and meant that suddenly, anything was possible again. So I was sad to hear that he'd retired, but he's been a great world champion over many, many years. And I think he'll be remembered for that evermore. He'll be remembered as the last person to beat AlphaGo. I mean, after that, we increased the power of the system. And the next version of AlphaGo beat other strong human players 60 games to nil. So what a great moment for him and something to be remembered for. It's interesting that you spent time at AAAI on a panel with Garry Kasparov. What, I mean, it's almost, I'm just curious to learn the conversations you've had with Garry, because he's also now, he's written a book about artificial intelligence. He's thinking about AI. He has kind of a view of it. And he talks about AlphaGo a lot. What's your sense? Arguably, I'm not just being Russian, but I think Garry is the greatest chess player of all time, probably one of the greatest game players of all time. And you're sort of at the center of creating a system that beats one of the greatest players of all time. So what is that conversation like? Is there anything, any interesting digs, any bets, any funny things, any profound things? So Garry Kasparov has an incredible respect for what we did with AlphaGo. And it's an amazing tribute coming from him of all people that he really appreciates and respects what we've done. And I think he feels that the progress which has happened in computer chess, which later after AlphaGo, we built the AlphaZero system, which defeated the world's strongest chess programs. And to Garry Kasparov, that moment in computer chess was more profound than Deep Blue.
And the reason he believes it mattered more was because it was done with learning and a system which was able to discover for itself new principles, new ideas, which were able to play the game in a way which he hadn't always known about or anyone. And in fact, one of the things I discovered at this panel was that the current world champion, Magnus Carlsen, apparently recently commented on his improvement in performance. And he attributed it to AlphaZero, that he's been studying the games of AlphaZero. And he's changed his style to play more like AlphaZero. And it's led to him actually increasing his rating to a new peak. Yeah, I guess to me, just like to Garry, the inspiring thing is that, and just like you said, with reinforcement learning, reinforcement learning and deep learning, machine learning feels like what intelligence is. And you could attribute it to a bitter viewpoint from Garry's perspective, from us humans perspective, saying that pure search that IBM Deep Blue was doing is not really intelligence, but somehow it didn't feel like it. And so that's the magical. I'm not sure what it is about learning that feels like intelligence, but it does. So I think we should not demean the achievements of what was done in previous eras of AI. I think that Deep Blue was an amazing achievement in itself. And that heuristic search of the kind that was used by Deep Blue had some powerful ideas that were in there, but it also missed some things. So the fact that the evaluation function, the way that the chess position was understood, was created by humans and not by the machine is a limitation, which means that there's a ceiling on how well it can do. But maybe more importantly, it means that the same idea cannot be applied in other domains where we don't have access to the human grandmasters and that ability to encode exactly their knowledge into an evaluation function. And the reality is that the story of AI is that most domains turn out to be of the second type where knowledge is messy, it's hard to extract from experts, or it isn't even available. And so we need to solve problems in a different way. And I think AlphaGo is a step towards solving things in a way which puts learning as a first class citizen and says systems need to understand for themselves how to understand the world, how to judge the value of any action that they might take within that world and any state they might find themselves in. And in order to do that, we make progress towards AI. Yeah, so one of the nice things about taking a learning approach to the game of Go or game playing is that the things you learn, the things you figure out, are actually going to be applicable to other problems that are real world problems. That's ultimately, I mean, there's two really interesting things about AlphaGo. One is the science of it, just the science of learning, the science of intelligence. And then the other is while you're actually learning to figuring out how to build systems that would be potentially applicable in other applications, medical, autonomous vehicles, robotics, I mean, it's just open the door to all kinds of applications. So the next incredible step, really the profound step is probably AlphaGo Zero. I mean, it's arguable. I kind of see them all as the same place. But really, and perhaps you were already thinking that AlphaGo Zero is the natural. It was always going to be the next step. But it's removing the reliance on human expert games for pre training, as you mentioned. 
So how big of an intellectual leap was this, that self play could achieve superhuman level performance on its own? And maybe could you also say, what is self play? You've kind of mentioned it a few times. So let me start with self play. So the idea of self play is something which is really about systems learning for themselves, but in the situation where there's more than one agent. And so if you're in a game, and the game is played between two players, then self play is really about understanding that game just by playing games against yourself rather than against any actual real opponent. And so it's a way to kind of discover strategies without having to actually need to go out and play against any particular human player, for example. The main idea of AlphaZero was really to try and step back from any of the knowledge that we put into the system and ask the question, is it possible to come up with a single elegant principle by which a system can learn for itself all of the knowledge which it requires to play a game such as Go? Importantly, by taking knowledge out, you not only make the system less brittle in the sense that perhaps the knowledge you were putting in was just getting in the way and maybe stopping the system learning for itself, but also you make it more general. The more knowledge you put in, the harder it is for a system to actually be placed, taken out of the system in which it's kind of been designed, and placed in some other system that maybe would need a completely different knowledge base to understand and perform well. And so the real goal here is to strip out all of the knowledge that we put in to the point that we can just plug it into something totally different. And that, to me, is really the promise of AI, is that we can have systems such as that which, no matter what the goal is, no matter what goal we set to the system, we can come up with an algorithm which can be placed into that world, into that environment, and can succeed in achieving that goal. And then that, to me, is almost the essence of intelligence if we can achieve that. And so AlphaZero is a step towards that. And it's a step that was taken in the context of two player perfect information games like Go and chess. We also applied it to Japanese chess. So just to clarify, the first step was AlphaGo Zero. The first step was to try and take all of the knowledge out of AlphaGo in such a way that it could play in a fully self discovered way, purely from self play. And to me, the motivation for that was always that we could then plug it into other domains. But we saved that until later. Well, in fact, I mean, just for fun, I could tell you exactly the moment where the idea for AlphaZero occurred to me. Because I think there's maybe a lesson there for researchers who are too deeply embedded in their research and working 24/7 to try and come up with the next idea, which is it actually occurred to me on honeymoon. And I was at my most fully relaxed state, really enjoying myself, and just bing, the algorithm for AlphaZero just appeared in its full form. And this was actually before we played against Lee Sedol. But we just didn't. I think we were so busy trying to make sure we could beat the world champion that it was only later that we had the opportunity to step back and start examining that sort of deeper scientific question of whether this could really work. So nevertheless, so self play is probably one of the most profound ideas that represents, to me at least, artificial intelligence.
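[A note for readers: the self-play idea can be made concrete with a toy that actually runs. The sketch below learns tic-tac-toe, a stand-in for Go, purely by playing against itself, starting from random play and sharing one value table between both sides. It is tabular Monte Carlo learning, not the neural-network-plus-search system being described, and every name in it is illustrative.]

import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if "." not in board else None

def after(board, move, player):
    return board[:move] + player + board[move + 1:]

values = defaultdict(float)   # learned value of a position for the player who just moved
epsilon, alpha = 0.1, 0.5     # exploration rate and learning step size

def choose(board, player):
    moves = [i for i, s in enumerate(board) if s == "."]
    if random.random() < epsilon:
        return random.choice(moves)
    # pick the move whose resulting position has looked best for us in self-play so far
    return max(moves, key=lambda m: values[after(board, m, player)])

def self_play_episode():
    board, player, history = "." * 9, "X", []
    while winner(board) is None:
        board = after(board, choose(board, player), player)
        history.append((board, player))
        player = "O" if player == "X" else "X"
    result = winner(board)
    for state, mover in history:  # back the final outcome up through every position visited
        target = 1.0 if result == mover else (0.0 if result == "draw" else -1.0)
        values[state] += alpha * (target - values[state])

for _ in range(50_000):
    self_play_episode()
print("positions evaluated purely from self-play:", len(values))

Scaled up, the table becomes a neural network and the move choice becomes a search guided by it, but the loop, play against yourself, see who won, correct the evaluations, is the same shape.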
But the fact that you could use that kind of mechanism to, again, beat world class players, that's very surprising. So to me, it feels like you have to train in a large number of expert games. So was it surprising to you? What was the intuition? Can you sort of think, not necessarily at that time, even now, what's your intuition? Why this thing works so well? Why it's able to learn from scratch? Well, let me first say why we tried it. So we tried it both because I feel that it was the deeper scientific question to be asking to make progress towards AI, and also because, in general, in my research, I don't like to do research on questions for which we already know the likely outcome. I don't see much value in running an experiment where you're 95% confident that you will succeed. And so we could have tried maybe to take AlphaGo and do something which we knew for sure it would succeed on. But much more interesting to me was to try it on the things which we weren't sure about. And one of the big questions on our minds back then was, could you really do this with self play alone? How far could that go? Would it be as strong? And honestly, we weren't sure. It was 50, 50, I think. If you'd asked me, I wasn't confident that it could reach the same level as these systems, but it felt like the right question to ask. And even if it had not achieved the same level, I felt that that was an important direction to be studying. And so then, lo and behold, it actually ended up outperforming the previous version of AlphaGo and indeed was able to beat it by 100 games to zero. So what's the intuition as to why? I think the intuition to me is clear, that whenever you have errors in a system, as we did in AlphaGo, AlphaGo suffered from these delusions. Occasionally, it would misunderstand what was going on in a position and miss evaluate it. How can you remove all of these errors? Errors arise from many sources. For us, they were arising both starting from the human data, but also from the nature of the search and the nature of the algorithm itself. But the only way to address them in any complex system is to give the system the ability to correct its own errors. It must be able to correct them. It must be able to learn for itself when it's doing something wrong and correct for it. And so it seemed to me that the way to correct delusions was indeed to have more iterations of reinforcement learning, that no matter where you start, you should be able to correct those errors until it gets to play that out and understand, oh, well, I thought that I was going to win in this situation, but then I ended up losing. That suggests that I was miss evaluating something. There's a hole in my knowledge, and now the system can correct for itself and understand how to do better. Now, if you take that same idea and trace it back all the way to the beginning, it should be able to take you from no knowledge, from completely random starting point, all the way to the highest levels of knowledge that you can achieve in a domain. And the principle is the same, that if you bestow a system with the ability to correct its own errors, then it can take you from random to something slightly better than random because it sees the stupid things that the random is doing, and it can correct them. And then it can take you from that slightly better system and understand, well, what's that doing wrong? And it takes you on to the next level and the next level. And this progress can go on indefinitely. 
And indeed, what would have happened if we'd carried on training AlphaGo Zero for longer? We saw no sign of it slowing down its improvements, or at least it was certainly carrying on to improve. And presumably, if you had the computational resources, this could lead to better and better systems that discover more and more. So your intuition is fundamentally there's not a ceiling to this process. One of the surprising things, just like you said, is the process of patching errors. It intuitively makes sense that this is, that reinforcement learning should be part of that process. But what is surprising is in the process of patching your own lack of knowledge, you don't open up other patches. You keep sort of, like there's a monotonic decrease of your weaknesses. Well, let me back this up. I think science always should make falsifiable hypotheses. So let me back up this claim with a falsifiable hypothesis, which is that if someone was to, in the future, take Alpha Zero as an algorithm and run it on with greater computational resources that we had available today, then I would predict that they would be able to beat the previous system 100 games to zero. And that if they were then to do the same thing a couple of years later, that that would beat that previous system 100 games to zero, and that that process would continue indefinitely throughout at least my human lifetime. Presumably the game of Go would set the ceiling. I mean. The game of Go would set the ceiling, but the game of Go has 10 to the 170 states in it. So the ceiling is unreachable by any computational device that can be built out of the 10 to the 80 atoms in the universe. You asked a really good question, which is, do you not open up other errors when you correct your previous ones? And the answer is yes, you do. And so it's a remarkable fact about this class of two player game and also true of single agent games that essentially progress will always lead you to, if you have sufficient representational resource, like imagine you had, could represent every state in a big table of the game, then we know for sure that a progress of self improvement will lead all the way in the single agent case to the optimal possible behavior, and in the two player case to the minimax optimal behavior. And that is the best way that I can play knowing that you're playing perfectly against me. And so for those cases, we know that even if you do open up some new error, that in some sense you've made progress. You're progressing towards the best that can be done. So AlphaGo was initially trained on expert games with some self play. AlphaGo Zero removed the need to be trained on expert games. And then another incredible step for me, because I just love chess, is to generalize that further to be in AlphaZero to be able to play the game of Go, beating AlphaGo Zero and AlphaGo, and then also being able to play the game of chess and others. So what was that step like? What's the interesting aspects there that required to make that happen? I think the remarkable observation, which we saw with AlphaZero, was that actually without modifying the algorithm at all, it was able to play and crack some of AI's greatest previous challenges. In particular, we dropped it into the game of chess. And unlike the previous systems like Deep Blue, which had been worked on for years and years, and we were able to beat the world's strongest computer chess program convincingly using a system that was fully discovered from scratch with its own principles. 
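[A note for readers, spelling out the arithmetic behind that remark, using the standard estimates of roughly 10^170 legal Go positions and roughly 10^80 atoms in the observable universe:]

\frac{\text{legal Go positions}}{\text{atoms in the observable universe}} \approx \frac{10^{170}}{10^{80}} = 10^{90}

So even a lookup table that stored one position per atom would fall short by ninety orders of magnitude, which is why any practical system has to generalize rather than enumerate.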
And in fact, one of the nice things that we found was that in fact, we also achieved the same result in Japanese chess, a variant of chess where you get to capture pieces and then place them back down on your own side as an extra piece. So a much more complicated variant of chess. And we also beat the world's strongest programs and reached superhuman performance in that game too. And it was the very first time that we'd ever run the system on that particular game, was the version that we published in the paper on AlphaZero. It just worked out of the box, literally, no touching it. We didn't have to do anything. And there it was, superhuman performance, no tweaking, no twiddling. And so I think there's something beautiful about that principle that you can take an algorithm and without twiddling anything, it just works. Now, to go beyond AlphaZero, what's required? AlphaZero is just a step. And there's a long way to go beyond that to really crack the deep problems of AI. But one of the important steps is to acknowledge that the world is a really messy place. It's this rich, complex, beautiful, but messy environment that we live in. And no one gives us the rules. Like no one knows the rules of the world. At least maybe we understand that it operates according to Newtonian or quantum mechanics at the micro level or according to relativity at the macro level. But that's not a model that's useful for us as people to operate in it. Somehow the agent needs to understand the world for itself in a way where no one tells it the rules of the game. And yet it can still figure out what to do in that world, deal with this stream of observations coming in, rich sensory input coming in, actions going out in a way that allows it to reason in the way that AlphaGo or AlphaZero can reason in the way that these go and chess playing programs can reason. But in a way that allows it to take actions in that messy world to achieve its goals. And so this led us to the most recent step in the story of AlphaGo, which was a system called MuZero. And MuZero is a system which learns for itself even when the rules are not given to it. It actually can be dropped into a system with messy perceptual inputs. We actually tried it in some Atari games, the canonical domains of Atari that have been used for reinforcement learning. And this system learned to build a model of these Atari games that was sufficiently rich and useful enough for it to be able to plan successfully. And in fact, that system not only went on to beat the state of the art in Atari, but the same system without modification was able to reach the same level of superhuman performance in go, chess, and shogi that we'd seen in AlphaZero, showing that even without the rules, the system can learn for itself just by trial and error, just by playing this game of go. And no one tells you what the rules are, but you just get to the end and someone says win or loss. You play this game of chess and someone says win or loss, or you play a game of breakout in Atari and someone just tells you your score at the end. And the system for itself figures out essentially the rules of the system, the dynamics of the world, how the world works. And not in any explicit way, but just implicitly, enough understanding for it to be able to plan in that system in order to achieve its goals. 
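[A note for readers: the decomposition being described here, learning a model of the world good enough to plan in without ever being told the rules, is often summarized as three learned functions. The sketch below shows their shape in a few lines of code; the sizes and names are illustrative assumptions, and it is not DeepMind's implementation, just the structure of the idea.]

import torch
import torch.nn as nn

class Representation(nn.Module):   # raw observation -> hidden state
    def __init__(self, obs_dim=64, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
    def forward(self, obs):
        return self.net(obs)

class Dynamics(nn.Module):         # (hidden state, action) -> next hidden state, predicted reward
    def __init__(self, hidden=32, num_actions=4):
        super().__init__()
        self.next = nn.Linear(hidden + num_actions, hidden)
        self.reward = nn.Linear(hidden + num_actions, 1)
    def forward(self, state, action_onehot):
        x = torch.cat([state, action_onehot], dim=-1)
        return torch.relu(self.next(x)), self.reward(x)

class Prediction(nn.Module):       # hidden state -> policy logits, value estimate
    def __init__(self, hidden=32, num_actions=4):
        super().__init__()
        self.policy = nn.Linear(hidden, num_actions)
        self.value = nn.Linear(hidden, 1)
    def forward(self, state):
        return self.policy(state), self.value(state)

# Planning never touches the real environment: it unrolls the learned dynamics.
h, g, f = Representation(), Dynamics(), Prediction()
state = h(torch.randn(1, 64))                 # embed one raw observation
for step in range(3):                         # a 3-step imagined rollout
    logits, value = f(state)
    action = torch.zeros(1, 4)
    action[0, torch.argmax(logits)] = 1.0     # greedy action chosen inside the model
    state, reward = g(state, action)

The point is the last loop: once the three functions are trained from real experience, planning happens entirely inside the learned model, which is why the real rules are never needed.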
And that's the fundamental process that you have to go through when you're facing in any uncertain kind of environment that you would in the real world, is figuring out the sort of the rules, the basic rules of the game. That's right. So that allows it to be applicable to basically any domain that could be digitized in the way that it needs to in order to be consumable, sort of in order for the reinforcement learning framework to be able to sense the environment, to be able to act in the environment and so on. The full reinforcement learning problem needs to deal with worlds that are unknown and complex and the agent needs to learn for itself how to deal with that. And so MuZero is a further step in that direction. One of the things that inspired the general public and just in conversations I have like with my parents or something with my mom that just loves what was done is kind of at least the notion that there was some display of creativity, some new strategies, new behaviors that were created. That again has echoes of intelligence. So is there something that stands out? Do you see it the same way that there's creativity and there's some behaviors, patterns that you saw that AlphaZero was able to display that are truly creative? So let me start by saying that I think we should ask what creativity really means. So to me, creativity means discovering something which wasn't known before, something unexpected, something outside of our norms. And so in that sense, the process of reinforcement learning or the self play approach that was used by AlphaZero is the essence of creativity. It's really saying at every stage, you're playing according to your current norms and you try something and if it works out, you say, hey, here's something great, I'm gonna start using that. And then that process, it's like a micro discovery that happens millions and millions of times over the course of the algorithm's life where it just discovers some new idea, oh, this pattern, this pattern's working really well for me, I'm gonna start using that. And now, oh, here's this other thing I can do, I can start to connect these stones together in this way or I can start to sacrifice stones or give up on pieces or play shoulder hits on the fifth line or whatever it is. The system's discovering things like this for itself continually, repeatedly, all the time. And so it should come as no surprise to us then when if you leave these systems going, that they discover things that are not known to humans, that to the human norms are considered creative. And we've seen this several times. In fact, in AlphaGo Zero, we saw this beautiful timeline of discovery where what we saw was that there are these opening patterns that humans play called joseki, these are like the patterns that humans learn to play in the corners and they've been developed and refined over literally thousands of years in the game of Go. And what we saw was in the course of the training, AlphaGo Zero, over the course of the 40 days that we trained this system, it starts to discover exactly these patterns that human players play. And over time, we found that all of the joseki that humans played were discovered by the system through this process of self play and this sort of essential notion of creativity. But what was really interesting was that over time, it then starts to discard some of these in favor of its own joseki that humans didn't know about. 
And it starts to say, oh, well, you thought that the knight's move pincer joseki was a great idea, but here's something different you can do there which makes some new variation that humans didn't know about. And actually now the human Go players study the joseki that AlphaGo played and they become the new norms that are used in today's top level Go competitions. That never gets old. Even just that first part, to me, maybe just makes me feel good as a human being that a self play mechanism that knows nothing about us humans discovers patterns that we humans do. That's just like an affirmation that we're doing okay as humans. Yeah. We've, in this domain and other domains, figured things out. It's like the Churchill quote about democracy: you know, it sucks, but it's the best one we've tried. So in general, taking a step outside of Go, and you have like a million accomplishments that I have no time to talk about, with AlphaStar and so on and the current work. But in general, this self play mechanism that you've inspired the world with by beating the world champion Go player. Do you see that as, do you see it being applied in other domains? Do you have sort of dreams and hopes that it's applied in both the simulated environments and the constrained environments of games? Constrained, I mean, AlphaStar really demonstrates that you can remove a lot of the constraints, but nevertheless, it's in a digital simulated environment. Do you have a hope, a dream that it starts being applied in the robotics environment? And maybe even in domains that are safety critical and so on and have, you know, have a real impact in the real world, like autonomous vehicles, for example, which seems like a very far out dream at this point. So I absolutely do hope and imagine that we will get to the point where ideas just like these are used in all kinds of different domains. In fact, one of the most satisfying things as a researcher is when you start to see other people use your algorithms in unexpected ways. So in the last couple of years, there have been, you know, a couple of Nature papers where different teams, unbeknownst to us, took AlphaZero and applied exactly those same algorithms and ideas to real world problems of huge meaning to society. So one of them was the problem of chemical synthesis, and they were able to beat the state of the art in finding pathways of how to actually synthesize chemicals, retrosynthesis. And the second paper actually just came out a couple of weeks ago in Nature, and showed that in quantum computation, you know, one of the big questions is how to understand the nature of the function in quantum computation, and a system based on AlphaZero beat the state of the art by quite some distance there again. So these are just examples. And I think, you know, the lesson, which we've seen elsewhere in machine learning time and time again, is that if you make something general, it will be used in all kinds of ways. You know, you provide a really powerful tool to society, and those tools can be used in amazing ways. And so I think we're just at the beginning, and for sure, I hope that we see all kinds of outcomes. So the other side of the question of the reinforcement learning framework is, you know, you usually want to specify a reward function and an objective function.
What do you think about sort of ideas of intrinsic rewards of when we're not really sure about, you know, if we take, you know, human beings as existence proof that we don't seem to be operating according to a single reward, do you think that there's interesting ideas for when you don't know how to truly specify the reward, you know, that there's some flexibility for discovering it intrinsically or so on in the context of reinforcement learning? So I think, you know, when we think about intelligence, it's really important to be clear about the problem of intelligence. And I think it's clearest to understand that problem in terms of some ultimate goal that we want the system to try and solve for. And after all, if we don't understand the ultimate purpose of the system, do we really even have a clearly defined problem that we're solving at all? Now, within that, as with your example for humans, the system may choose to create its own motivations and subgoals that help the system to achieve its ultimate goal. And that may indeed be a hugely important mechanism to achieve those ultimate goals, but there is still some ultimate goal I think the system needs to be measurable and evaluated against. And even for humans, I mean, humans, we're incredibly flexible. We feel that we can, you know, any goal that we're given, we feel we can master to some degree. But if we think of those goals, really, you know, like the goal of being able to pick up an object or the goal of being able to communicate or influence people to do things in a particular way or whatever those goals are, really, they're subgoals, really, that we set ourselves. You know, we choose to pick up the object. We choose to communicate. We choose to influence someone else. And we choose those because we think it will lead us to something later on. We think that's helpful to us to achieve some ultimate goal. Now, I don't want to speculate whether or not humans as a system necessarily have a singular overall goal of survival or whatever it is. But I think the principle for understanding and implementing intelligence is, has to be, that if we're trying to understand intelligence or implement our own, there has to be a well defined problem. Otherwise, if it's not, I think it's like an admission of defeat, that for there to be hope for understanding or implementing intelligence, we have to know what we're doing. We have to know what we're asking the system to do. Otherwise, if you don't have a clearly defined purpose, you're not going to get a clearly defined answer. The ridiculous big question that has to naturally follow, because I have to pin you down on this thing, that nevertheless, one of the big silly or big real questions before humans is the meaning of life, is us trying to figure out our own reward function. And you just kind of mentioned that if you want to build intelligent systems and you know what you're doing, you should be at least cognizant to some degree of what the reward function is. So the natural question is what do you think is the reward function of human life, the meaning of life for us humans, the meaning of our existence? I think I'd be speculating beyond my own expertise, but just for fun, let me do that. Yes, please. And say, I think that there are many levels at which you can understand a system and you can understand something as optimizing for a goal at many levels. And so you can understand the, let's start with the universe. Does the universe have a purpose? 
Well, it feels like it's just at one level just following certain mechanical laws of physics and that that's led to the development of the universe. But at another level, you can view it as actually, there's the second law of thermodynamics that says that this is increasing in entropy over time forever. And now there's a view that's been developed by certain people at MIT that this, you can think of this as almost like a goal of the universe, that the purpose of the universe is to maximize entropy. So there are multiple levels at which you can understand a system. The next level down, you might say, well, if the goal is to maximize entropy, well, how can that be done by a particular system? And maybe evolution is something that the universe discovered in order to kind of dissipate energy as efficiently as possible. And by the way, I'm borrowing from Max Tegmark for some of these metaphors, the physicist. But if you can think of evolution as a mechanism for dispersing energy, then evolution, you might say, then becomes a goal, which is if evolution disperses energy by reproducing as efficiently as possible, what's evolution then? Well, it's now got its own goal within that, which is to actually reproduce as effectively as possible. And now how does reproduction, how is that made as effective as possible? Well, you need entities within that that can survive and reproduce as effectively as possible. And so it's natural that in order to achieve that high level goal, those individual organisms discover brains, intelligences, which enable them to support the goals of evolution. And those brains, what do they do? Well, perhaps the early brains, maybe they were controlling things at some direct level. Maybe they were the equivalent of preprogrammed systems, which were directly controlling what was going on and setting certain things in order to achieve these particular goals. But that led to another level of discovery, which was learning systems. There are parts of the brain which are able to learn for themselves and learn how to program themselves to achieve any goal. And presumably there are parts of the brain where goals are set to parts of that system and provides this very flexible notion of intelligence that we as humans presumably have, which is the ability to kind of, the reason we feel that we can achieve any goal. So it's a very long winded answer to say that, I think there are many perspectives and many levels at which intelligence can be understood. And at each of those levels, you can take multiple perspectives. You can view the system as something which is optimizing for a goal, which is understanding it at a level by which we can maybe implement it and understand it as AI researchers or computer scientists, or you can understand it at the level of the mechanistic thing which is going on that there are these atoms bouncing around in the brain and they lead to the outcome of that system is not in contradiction with the fact that it's also a decision making system that's optimizing for some goal and purpose. I've never heard the description of the meaning of life structured so beautifully in layers, but you did miss one layer, which is the next step, which you're responsible for, which is creating the artificial intelligence layer on top of that. And I can't wait to see, well, I may not be around, but I can't wait to see what the next layer beyond that be. Well, let's just take that argument and pursue it to its natural conclusion. 
So the next level indeed is for how can our learning brain achieve its goals most effectively? Well, maybe it does so by us as learning beings building a system which is able to solve for those goals more effectively than we can. And so when we build a system to play the game of Go, when I said that I wanted to build a system that can play Go better than I can, I've enabled myself to achieve that goal of playing Go better than I could by directly playing it and learning it myself. And so now a new layer has been created, which is systems which are able to achieve goals for themselves. And ultimately there may be layers beyond that where they set sub goals to parts of their own system in order to achieve those and so forth. So the story of intelligence, I think, is a multi layered one and a multi perspective one. We live in an incredible universe. David, thank you so much, first of all, for dreaming of using learning to solve Go and building intelligent systems and for actually making it happen and for inspiring millions of people in the process. It's truly an honor. Thank you so much for talking today. Okay, thank you. Thanks for listening to this conversation with David Silver and thank you to our sponsors, Masterclass and Cash App. Please consider supporting the podcast by signing up to Masterclass at masterclass.com slash Lex and downloading Cash App and using code LexPodcast. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at LexFriedman. And now let me leave you with some words from David Silver. My personal belief is that we've seen something of a turning point where we're starting to understand that many abilities like intuition and creativity that we've previously thought were in the domain only of the human mind are actually accessible to machine intelligence as well. And I think that's a really exciting moment in history. Thank you for listening and hope to see you next time.
David Silver: AlphaGo, AlphaZero, and Deep Reinforcement Learning | Lex Fridman Podcast #86
The following is a conversation with Richard Dawkins, an evolutionary biologist and author of The Selfish Gene, The Blind Watchmaker, The God Delusion, The Magic of Reality, and The Greatest Show on Earth, and his latest, Outgrowing God. He is the originator and popularizer of a lot of fascinating ideas in evolutionary biology and science in general, including, funny enough, the introduction of the word meme in his 1976 book, The Selfish Gene, which, in the context of a gene centered view of evolution, is an exceptionally powerful idea. He's outspoken, bold, and often fearless in the defense of science and reason, and in this way, is one of the most influential thinkers of our time. This conversation was recorded before the outbreak of the pandemic. For everyone feeling the medical, psychological, and financial burden of this crisis, I'm sending love your way. Stay strong. We're in this together. We'll beat this thing. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with 5 stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do a few minutes of ads now, and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEX PODCAST. Cash App lets you send money to friends, buy bitcoin, and invest in the stock market with as little as one dollar. Since Cash App allows you to send and receive money digitally, peer to peer, security in all digital transactions is very important. Let me mention the PCI data security standard that Cash App is compliant with. I'm a big fan of standards for safety and security. PCI DSS is a good example of that, where a bunch of competitors got together and agreed that there needs to be a global standard around the security of transactions. Now we just need to do the same for autonomous vehicles and artificial intelligence systems in general. So again, if you get Cash App from the App Store or Google Play and use the code LEX PODCAST, you get ten dollars and Cash App will also donate ten dollars to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Richard Dawkins. Do you think there's intelligent life out there in the universe? Well, if we accept that there's intelligent life here and we accept that the number of planets in the universe is gigantic, I mean, 10 to the 22 stars has been estimated, it seems to me highly likely that there is not only life in the universe elsewhere, but also intelligent life. If you deny that, then you're committed to the view that the things that happened on this planet are staggeringly improbable, I mean, ludicrously off the charts improbable. And I don't think it's that improbable. Certainly the origin of life itself, there are really two steps, the origin of life, which is probably fairly improbable, and then the subsequent evolution to intelligent life, which is also fairly improbable. So the juxtaposition of those two, you could say, is pretty improbable, but not 10 to the 22 improbable. It's an interesting question, maybe you're coming on to it, how we would recognize intelligence from outer space if we encountered it. The most likely way we would come across them would be by radio.
It's highly unlikely they'd ever visit us. But it's not that unlikely that we would pick up radio signals, and then we would have to have some means of deciding that it was intelligent. People involved in the SETI program discuss how they would do it, and things like prime numbers would be an obvious way for them to broadcast, to say, we are intelligent, we are here. I suspect it probably would be obvious, actually. Well, that's interesting, prime numbers, so the mathematical patterns, it's an open question whether mathematics is the same for us as it would be for aliens. I suppose we could assume that ultimately, if we're governed by the same laws of physics, then we should be governed by the same laws of mathematics. I think so. I suspect that they will have Pythagoras theorem, etc. I don't think their mathematics will be that different. Do you think evolution would also be a force on the alien planets as well? I stuck my neck out and said that if ever that we do discover life elsewhere, it will be Darwinian life, in the sense that it will work by some kind of natural selection, the nonrandom survival of randomly generated codes. It doesn't mean that the genetic, it would have to have some kind of genetics, but it doesn't have to be DNA genetics, probably wouldn't be actually. But I think it would have to be Darwinian, yes. So some kind of selection process. Yes, in the general sense, it would be Darwinian. So let me ask kind of an artificial intelligence engineering question. So you've been an outspoken critic of, I guess, what could be called intelligent design, which is an attempt to describe the creation of a human mind and body by some religious folks, religious folks used to describe. So broadly speaking, evolution is, as far as I know, again, you can correct me, is the only scientific theory we have for the development of intelligent life. Like there's no alternative theory, as far as I understand. None has ever been suggested, and I suspect it never will be. Well, of course, whenever somebody says that, a hundred years later. I know. It's a risk. It's a risk. It's a risk. But what a bet. I mean, I'm pretty confident. But it would look, sorry, yes, it would probably look very similar, but it's almost like Einstein general relativity versus Newtonian physics. It'll be maybe an alteration of the theory or something like that, but it won't be fundamentally different. But okay. So now for the past 70 years, even before the AI community has been trying to engineer intelligence, in a sense, to do what intelligent design says, you know, was done here on earth. What's your intuition? Do you think it's possible to build intelligence, to build computers that are intelligent, or do we need to do something like the evolutionary process? Like there's no shortcuts here. That's an interesting question. I'm committed to the belief that is ultimately possible because I think there's nothing nonphysical in our brains. I think our brains work by the laws of physics. And so it must, in principle, it'd be possible to replicate that. In practice, though, it might be very difficult. And as you suggest, it may be the only way to do it is by something like an evolutionary process. I'd be surprised. I suspect that it will come, but it's certainly been slower in coming than some of the early pioneers thought it would be. Yeah. But in your sense, is the evolutionary process efficient? So you can see it as exceptionally wasteful in one perspective, but at the same time, maybe that is the only path. 
It's a paradox, isn't it? I mean, on the one side, it is deplorably wasteful. It's fundamentally based on waste. On the other hand, it does produce magnificent results. I mean, the design of a soaring bird, an albatross, a vulture, an eagle, is superb. An engineer would be proud to have done it. On the other hand, an engineer would not be proud to have done some of the other things that evolution has served up. Some of the sort of botched jobs that you can easily understand because of their historical origins, but they don't look well designed. Do you have examples of bad design? My favorite example is the recurrent laryngeal nerve. I've used this many times. This is a nerve. It's one of the cranial nerves, which goes from the brain, and the end organ that it supplies is the voice box, the larynx. But it doesn't go straight to the larynx. It goes right down into the chest and then loops around an artery in the chest and then comes straight back up again to the larynx. And I've assisted in the dissection of the neck of a giraffe which happened to have died in a zoo. And we saw the recurrent laryngeal nerve whizzing straight past the larynx, within an inch of the larynx, down into the chest, and then back up again, which is a detour of many feet. Very, very inefficient. The reason is historical. Our ancestors, the ancestors of all mammals, were fish. The most direct pathway of that, of the equivalent of that nerve, there wasn't a larynx in those days, but it innervated part of the gills. The most direct pathway was behind that artery. And then when the mammals, when the tetrapods, when the land vertebrates started evolving, and then the neck started to stretch, the marginal cost of changing the embryological design to jump that nerve over the artery was too great. Or rather, each step of the way was a very small cost, but the cost of actually jumping it over would have been very large. As the neck lengthened, it was a negligible change to just increase the length of the detour a tiny bit, a tiny bit, a tiny bit, each millimeter at a time, didn't make any difference. But finally, when you get to a giraffe, it's a huge detour and no doubt is very inefficient. Now that's bad design. Any engineer would reject that piece of design. It's ridiculous. And there are quite a number of examples, as you'd expect. It's not surprising that we find examples of that sort. In a way, what's surprising is there aren't more of them. In a way, what's surprising is that the design of living things is so good. So natural selection manages to achieve excellent results, partly by tinkering, partly by coming along and cleaning up initial mistakes and, as it were, making the best of a bad job. That's really interesting. I mean, it is surprising and beautiful and it's a mystery from an engineering perspective that so many things are well designed. I suppose the thing we're forgetting is how many generations have to die for that. That's the inefficiency of it. Yes, that's the horrible wastefulness of it. So yeah, we marvel at the final product, but yeah, the process is painful. Elon Musk describes human beings as potentially what he calls the biological bootloader for artificial intelligence, or artificial general intelligence as the term is used. It's kind of like superintelligence. Do you see superhuman level intelligence as potentially the next step in the evolutionary process? Yes, I think that if superhuman intelligence is to be found, it will be artificial.
I don't have any hope that we ourselves, our brains, will go on getting larger in ordinary biological evolution. I think that's probably come to an end. It is the dominant trend or one of the dominant trends in our fossil history for the last two or three million years. Brain size? Brain size, yes. So it's been swelling rather dramatically over the last two or three million years. That is unlikely to continue. The only way that happens is because natural selection favors those individuals with the biggest brains and that's not happening anymore. Right. So in general, in humans, the selection pressures are not, I mean, are they active in any form? Well, in order for them to be active, it would be necessary that the most, let's call it intelligence. Not that intelligence is simply correlated with brain size, but let's talk about intelligence. In order for that to evolve, it's necessary that the most intelligent beings, the most intelligent individuals, have the most children. And so intelligence may buy you money, it may buy you worldly success, it may buy you a nice house and a nice car and things like that if you have a successful career. It may buy you the admiration of your fellow people, but it doesn't increase the number of offspring that you have. It doesn't increase your genetic legacy to the next generation. On the other hand, artificial intelligence, I mean, computers and technology generally, is evolving by non genetic means, by leaps and bounds, of course. And so what do you think, I don't know if you're familiar, there's a company called Neuralink, but there's a general effort of brain computer interfaces, which is to try to build a connection between the computer and the brain to send signals in both directions. And the long term dream there is to do exactly that, which is expand, I guess, expand the size of the brain, expand the capabilities of the brain. Do you see this as interesting? Do you see this as a promising possible technology? Or is the interface between the computer and the brain, like the brain is this wet, messy thing that's just impossible to interface with? Well, of course, it's interesting, whether it's promising, I'm really not qualified to say. What I do find puzzling is that the brain being as small as it is compared to a computer and the individual components being as slow as they are compared to our electronic components, it is astonishing what it can do. I mean, imagine building a computer that fits into the size of a human skull. And with the equivalent of transistors or integrated circuits, which work as slowly as neurons do. There's something mysterious about that, something must be going on that we don't understand. So I have just talked to Roger Penrose, I'm not sure you're familiar with his work. And he also describes this kind of mystery in the mind, in the brain, and he sees it as a materialist, so there's no sort of mystical thing going on. But there's so much about the material of the brain that we don't understand. That might be quantum mechanical in nature and so on. So there the idea is about consciousness. Do you have any, have you ever thought about, do you ever think about ideas of consciousness or a little bit more about the mystery of intelligence and consciousness that seems to pop up just like you're saying from our brain? I agree with Roger Penrose that there is a mystery there. I mean, he's one of the world's greatest physicists. I can't possibly argue with his... But nobody knows anything about consciousness.
And in fact, if we talk about religion and so on, the mystery of consciousness is so awe inspiring and we know so little about it that the leap to sort of religious or mystical explanations is too easy to make. I think that it's just an act of cowardice to leap to religious explanations and Roger doesn't do that, of course. But I accept that there may be something that we don't understand about it. So correct me if I'm wrong, but in your book, Selfish Gene, the gene centered view of evolution allows us to think of the physical organisms as just the medium through which the software of our genetics and the ideas sort of propagate. So maybe can we start just with the basics? What in this context does the word meme mean? It would mean the cultural equivalent of a gene, cultural equivalent in the sense of that which plays the same role as the gene in the transmission of culture and the transmission of ideas in the broadest sense. And it's a useful word if there's something Darwinian going on. Obviously, culture is transmitted, but is there anything Darwinian going on? And if there is, that means there has to be something like a gene, which becomes more numerous or less numerous in the population. So it can replicate? It can replicate. Well, it clearly does replicate. There's no question about that. The question is, does it replicate in a sort of differential way in a Darwinian fashion? Could you say that certain ideas propagate because they're successful in the meme pool? In a sort of trivial sense, you can. Would you wish to say, though, that in the same way as an animal body is modified, adapted to serve as a machine for propagating genes, is it also a machine for propagating memes? Could you actually say that something about the way a human is, is modified, adapted, is modified, adapted for the function of meme propagation? That's such a fascinating possibility, if that's true. That it's not just about the genes which seem somehow more comprehensible as these things of biology. The idea that culture or maybe ideas, you can really broadly define it, operates under these mechanisms. Even morphology, even anatomy does evolve by memetic means. I mean, things like hairstyles, styles of makeup, circumcision, these things are actual changes in the body form which are nongenetic and which get passed on from generation to generation or sideways like a virus in a quasi genetic way. But the moment you start drifting away from the physical, it becomes interesting because the space of ideas, ideologies, political systems. Of course, yes. So what's your sense? Are memes a metaphor more or are they really, is there something fundamental, almost physical presence of memes? Well, I think they're a bit more than a metaphor. And I mentioned the physical bodily characteristics which are a bit trivial in a way, but when things like the propagation of religious ideas, both longitudinally down generations and transversely as in a sort of epidemiology of ideas, when a charismatic preacher converts people, that resembles viral transmission. Whereas the longitudinal transmission from grandparent to parent to child, et cetera, is more like conventional genetic transmission. That's such a beautiful, especially in the modern day idea. Do you think about this implication in social networks where the propagation of ideas, the viral propagation of ideas, and has the new use of the word meme to describe? Well, the internet, of course, provides extremely rapid method of transmission. 
Before, when I first coined the word, the internet didn't exist. And so I was thinking that in terms of books, newspapers, broader radio, television, that kind of thing. Now an idea can just leap around the world in all directions instantly. And so the internet provides a step change in the facility of propagation of memes. How does that make you feel? Isn't it fascinating that sort of ideas, it's like you have Galapagos Islands or something, it's the 70s, and the internet allowed all these species to just like globalize. And in a matter of seconds, you can spread the message to millions of people. And these ideas, these memes can breed, can evolve, can mutate. And there's a selection, and there's like different, I guess, groups that have all like, there's a dynamics that's fascinating here. Do you think, yes, basically, do you think your work in this direction, while fundamentally was focused on life on Earth, do you think it should continue, like to be taken further? Well, I do think it would probably be a good idea to think in a Darwinian way about this sort of thing. We conventionally think of the transmission of ideas from an evolutionary context as being limited to, in our ancestors, people living in villages, living in small bands where everybody knew each other, and ideas could propagate within the village, and they might hop to a neighboring village, occasionally, and maybe even to a neighboring continent eventually. And that was a slow process. Nowadays, villages are international. I mean, you have people, it's been called echo chambers, where people are in a sort of internet village, where the other members of the village may be geographically distributed all over the world, but they just happen to be interested in the same things, use the same terminology, the same jargon, have the same enthusiasm. So, people like the Flat Earth Society, they don't all live in one place, they find each other, and they talk the same language to each other, they talk the same nonsense to each other. And they, so this is a kind of distributed version of the primitive idea of people living in villages and propagating their ideas in a local way. Is there Darwinist parallel here? So, is there evolutionary purpose of villages, or is that just a... I wouldn't use a word like evolutionary purpose in that case, but villages will be something that just emerged, that's the way people happen to live. And in just the same kind of way, the Flat Earth Society, societies of ideas emerge in the same kind of way in this digital space. Yes, yes. Is there something interesting to say about the, I guess, from a perspective of Darwin, could we fully interpret the dynamics of social interaction in these social networks? Or is there some much more complicated thing need to be developed? Like, what's your sense? Well, a Darwinian selection idea would involve investigating which ideas spread and which don't. So, some ideas don't have the ability to spread. I mean, the Flat Earth, Flat Earthism is, there are a few people believe in it, but it's not going to spread because it's obvious nonsense. But other ideas, even if they are wrong, can spread because they are attractive in some sense. So the spreading and the selection in the Darwinian context is, it just has to be attractive in some sense. Like we don't have to define, like it doesn't have to be attractive in the way that animals attract each other. It could be attractive in some other way. Yes. All that matters is, all that is needed is that it should spread. 
And it doesn't have to be true to spread. Truth is one criterion which might help an idea to spread. But there are other criteria which might help it to spread. As you say, attraction in animals is not necessarily valuable for survival. The famous peacock's tail doesn't help the peacock to survive. It helps it to pass on its genes. Similarly, an idea which is actually rubbish, but which people don't know is rubbish and think is very attractive will spread in the same way as a peacock's genes spread. It's a small sidestep. I remember reading somewhere, I think recently, that in some species of birds, sort of the idea that beauty may have its own purpose and the idea that some birds, I'm being ineloquent here, but there's some aspects of their feathers and so on that serve no evolutionary purpose whatsoever. There's somebody making an argument that there are some things about beauty that animals do that may be its own purpose. Does that ring a bell for you? Does that sound ridiculous? I think it's a rather distorted bell. Darwin, when he coined the phrase sexual selection, didn't feel the need to suggest that what was attractive to females, usually it's males attracting females, that what females found attractive had to be useful. He said it didn't have to be useful. It was enough that females found it attractive. And so it could be completely useless, probably was completely useless in the conventional sense, but was not at all useless in the sense of passing on, Darwin didn't call them genes, but in the sense of reproducing. Others, starting with Wallace, the co-discoverer of natural selection, didn't like that idea and they wanted sexually selected characteristics like peacock's tails to be in some sense useful. It's a bit of a stretch to think of a peacock's tail as being useful in the sense of survival, but others have run with that idea and have brought it up to date. And so there are two schools of thought on sexual selection, which are still active and about equally supported now. Those who follow Darwin in thinking that it's just enough to say it's attractive and those who follow Wallace and say that it has to be in some sense useful. Do you fall into one category or the other? No, I'm open minded. I think they both could be correct in different cases. I mean, they've both been made sophisticated in a mathematical sense, more so than when Darwin and Wallace first started talking about it. I'm Russian, I romanticize things, so I prefer the former, where the beauty in itself is a powerful attraction, is a powerful force in evolution. On religion, do you think there will ever be a time in our future where almost nobody believes in God, or God is not a part of the moral fabric of our society? Yes, I do. I think it may happen after a very long time. It may take a long time for that to happen. So do you think ultimately, for everybody on Earth, religion... other forms of doctrines, ideas could do a better job than what religion does? Yes. I mean, following instead truth, reason. Well, truth is a funny, funny word. And reason too. There's, yeah, it's a difficult idea now with truth on the internet, right, and fake news and so on. I suppose when you say reason, you mean the very basic sort of inarguable conclusions of science versus which political system is better. Yes, yes. I mean, truth about the real world, which is ascertainable by, not just by the more rigorous methods of science, but by just ordinary sensory observation.
So do you think there will ever be a time when we move past it? Like, I guess another way to ask it, are we hopelessly, fundamentally tied to religion in the way our society functions? Well, clearly all individuals are not hopelessly tied to it because many individuals don't believe. You could mean something like society needs religion in order to function properly, something like that. And some people have suggested that. What's your intuition on that? Well, I've read books on it and they're persuasive. I don't think they're that persuasive though. I mean, some people suggested that society needs a sort of figurehead, which can be a non existent figurehead in order to function properly. I think there's something rather patronising about the idea that, well, you and I are intelligent enough not to believe in God, but the plebs need it sort of thing. And I think that's patronising. And I'd like to think that that was not the right way to proceed. But at the individual level, do you think there's some value of spirituality? Sort of, if I think sort of as a scientist, the amount of things we actually know about our universe is a tiny, tiny, tiny percentage of what we could possibly know. So just from everything, even the certainty we have about the laws of physics, it seems to be that there's yet a huge amount to discover. And therefore we're sitting where 99.99% of things are just still shrouded in mystery. Do you think there's a role in a kind of spiritual view of that, sort of a humbled spiritual view? I think it's right to be humble. I think it's right to admit that there's a lot we don't know, a lot we don't understand, a lot that we still need to work on. We're working on it. What I don't think is that it helps to invoke supernatural explanations. If our current scientific explanations aren't adequate to do the job, then we need better ones. We need to work more. And of course, the history of science shows just that, that as science goes on, problems get solved one after another, and the science advances as science gets better. But to invoke a non scientific, non physical explanation is simply to lie down in a cowardly way and say, we can't solve it, so we're going to invoke magic. Don't let's do that. Let's say we need better science. We need more science. It may be that the science will never do it. It may be that we will never actually understand everything. And that's okay, but let's keep working on it. A challenging question there is, do you think science can lead us astray in terms of the humbleness? So there's some aspect of science, maybe it's the aspect of scientists and not science, but of sort of a mix of ego and confidence that can lead us astray in terms of discovering the, you know, some of the big open questions about the universe. I think that's right. I mean, there are, there are arrogant people in any walk of life and scientists are no exception to that. And so there are arrogant scientists who think we've solved everything. Of course we haven't. So humility is a proper stance for a scientist. I mean, it's a proper working stance because it encourages further work. But in a way to resort to a supernatural explanation is a kind of arrogance because it's saying, well, we don't understand it scientifically. Therefore the non scientific religious supernatural explanation must be the right one. That's arrogant. What is, what is humble is to say we don't know and we need to work further on it. 
So maybe if I could psychoanalyze you for a second, you have at times been just slightly frustrated with people who, you know, have a supernatural explanation. Has that changed over the years? Have you become like... how do you see people that kind of seek supernatural explanations, how do you see those people as human beings? Is it like, do you see them as dishonest? Do you see them as, um, sort of, uh, ignorant? Do you see them as, I don't know, is it like, how do you think of them? Certainly not, not dishonest. And, and I mean, obviously many of them are perfectly nice people. So I don't, I don't sort of despise them in that sense. Um, I think it's often a misunderstanding that, that, um, people will jump from the admission that we don't understand something. They will jump straight to what they think of as an alternative explanation, which is the supernatural one, which is not an alternative. It's a non explanation. Um, instead of jumping to the conclusion that science needs more work, that we need to actually do some better science. So, um, I don't have, I mean, any personal antipathy towards such people. I just think they're, they're misguided. So what about this really interesting space that I have trouble with? So religion I have a better grasp on, but, um, there are large communities, like you said, the Flat Earth community, uh, that I've recently, because I've made a few jokes about it, I've noticed that there's people that take it quite seriously. So there's this bigger world of conspiracy theorists, which is a kind of, I mean, there's elements of it that are religious as well, but I think they're also scientific. So the, the basic credo of a conspiracy theorist is to question everything, which is also the credo of a good scientist, I would say. So what do you make of this? I mean, I think it's probably too easy to say that by labeling something a conspiracy, you therefore dismiss it. I mean, occasionally conspiracies are right. And so we shouldn't dismiss conspiracy theories out of hand. We should examine them on their own merits. Flat Earthism is obvious nonsense. We don't have to examine that much further. Um, but, um, I mean, there may be other conspiracy theories which are actually right. So I, you know, grew up in the Soviet Union. So I, I just, you know, uh, the space race was very influential for me on both sides of the coin. Uh, you know, there's a conspiracy theory that we never went to the moon. Right. And it's, uh, it's like, I cannot understand it and it's very difficult to rigorously scientifically show one way or the other. It's just, you have to use some of the human intuition about who would have to lie, who would have to work together. And it's clear that it's very unlikely. Behind that is my general intuition that most people in this world are good. You know, in order to really put together some conspiracy theories, there has to be a large number of people working together and essentially being dishonest. Yes, which is improbable. The sheer number who would have to be in on this conspiracy and the sheer detail, the attention to detail they'd have had to have had and so on. I'd also worry about the motive and why would anyone want to suggest that it didn't happen? What's the, what's the, why is it so hard to believe? I mean, the, the physics of it, the mathematics of it, the, the idea of computing orbits and, and, and trajectories and things, it, it all works mathematically. Why wouldn't you believe it?
It's a psychology question because there's something really pleasant about, um, you know, pointing out that the emperor has no clothes when everybody like, uh, you know, thinking outside the box and coming up with the true answer where everybody else is deluded. There's something, I mean, I have that for science, right? You want to prove the entire scientific community wrong. That's the whole... That's, that's, that's right. And, and of course, historically, lone geniuses have come out right sometimes, but people who think they're a lone genius much more often turn out not to be. Um, so you have to judge each case on its merits. The mere fact that you're a maverick, the mere fact that you, you're going against the current tide doesn't make you right. You've got to show you're right by looking at the evidence. So because you focus so much on, on religion and disassembled a lot of ideas there and I just, I was wondering if, if you have ideas about conspiracy theory groups, because it's such a prevalent, even reaching into, uh, presidential politics and so on. It seems like there are very large communities that believe different kinds of conspiracy theories. Is there some connection there to your thinking on religion? And it is curious. It's a matter. It's an obviously difficult thing. Uh, I don't understand why people believe things that are clearly nonsense, like, well, flat Earth and also the conspiracy about not landing on the moon or, um, that the United States engineered 9/11, that kind of thing. Um, so it's not clearly nonsense. It's extremely unlikely. Okay. It's extremely unlikely. Religion is a bit different because it's passed down from generation to generation. So many of the people who are religious, uh, got it from their parents who got it from their parents who got it from their parents and childhood indoctrination is a very powerful force. But these things like the 9/11 conspiracy theory, the, um, Kennedy assassination conspiracy theory, the man on the moon conspiracy theory, these are not childhood indoctrination. These are, um, presumably dreamed up by somebody who then tells somebody else who then wants to believe it. And I don't know why people are so eager to fall in line with some, just some person that they happen to read or meet who spins some yarn. I can kind of understand why they believe what their parents and teachers told them when they were very tiny and not capable of critical thinking for themselves. So I sort of get why the great religions of the world like Catholicism and Islam go on persisting. It's because of childhood indoctrination, but that's not true of flat earthism, and sure enough flat earthism is a very minority cult. Way larger than I ever realized. Well, yes, I know, but so that's a really clean idea and you've articulated that in your new book, Outgrowing God, and in The God Delusion: the early indoctrination. That's really interesting that you can get away with a lot of out there ideas in terms of religious texts if, um, the age at which you convey those ideas at first is a young age. So indoctrination is sort of an essential element of propagation of religion. So let me ask on the morality side. In the books that I mentioned, The God Delusion and Outgrowing God, you described that human beings don't need religion to be moral. So from an engineering perspective, we want to engineer morality into AI systems. So in general, where do you think morals come from in humans?
A very complicated and interesting question. It's clear to me that the moral standards, the moral values of our civilization change as the decades go by, certainly as the centuries go by, even as the decades go by. And we in the 21st century are quite clearly labeled 21st century people in terms of our moral values. There's a spread. I mean, some of us are a little bit more ruthless, some of us more conservative, some of us more liberal and so on. But we all subscribe to pretty much the same views when you compare us with say 18th century, 17th century people, even 19th century, 20th century people. So we're much less racist, we're much less sexist and so on than we used to be. Some people are still racist and some are still sexist, but the spread has shifted. The Gaussian distribution has moved and moves steadily as the centuries go by. And that is the most powerful influence I can see on our moral values. And that doesn't have anything to do with religion. I mean, the religion, sorry, the morals of the Old Testament are Bronze Age morals. They're deplorable and they are to be understood in terms of the people in the desert who made them up at the time. And so human sacrifice, an eye for an eye, a tooth for a tooth, petty revenge, killing people for breaking the Sabbath, all that kind of thing, inconceivable now. So at some point religious texts may have in part reflected that Gaussian distribution at that time. I'm sure they did. I'm sure they always reflect that, yes. And then now, but the sort of almost like the meme, as you describe it, of ideas moves much faster than religious texts do, than new religions. Yes. So basing your morals on religious texts, which were written millennia ago, is not a great way to proceed. I think that's pretty clear. So not only should we not get our morals from such texts, but we don't. We quite clearly don't. If we did, then we'd be discriminating against women and we'd be racist, we'd be killing homosexuals and so on. So we don't and we shouldn't. Now, of course, it's possible to use your 21st century standards of morality and you can look at the Bible and you can cherry pick particular verses which conform to our modern morality, and you'll find that Jesus says some pretty nice things, which is great. But you're using your 21st century morality to decide which verses to pick, which verses to reject. And so why not cut out the middleman of the Bible and go straight to the 21st century morality, which is where that comes from. It's a much more complicated question. Why is it that morality, moral values change as the centuries go by? They undoubtedly do. And it's a very interesting question to ask why. It's another example of cultural evolution, just as technology progresses, so moral values progress for probably very different reasons. But it's interesting if the direction in which that progress is happening has some evolutionary value or if it's merely a drift that can go into any direction. I'm not sure it's any direction and I'm not sure it's evolutionarily valuable. What it is is progressive in the sense that each step is a step in the same direction as the previous step. So it becomes more gentle, more decent by modern standards, more liberal, less violent. But more decent, I think you're using terms and interpreting everything in the context of the 21st century because Genghis Khan would probably say that this is not more decent because we're now, you know, there's a lot of weak members of society that we're not murdering. Yes.
I was careful to say by the standards of the 21st century, by our standards, if we with hindsight look back at history, what we see is a trend in the direction towards us, towards our present, our present value system. For us, we see progress, but it's an open question whether that won't, you know, I don't see necessarily why we can never return to Genghis Khan times. We could. I suspect we won't. But if you look at the history of moral values over the centuries, it is progressive. I use the word progressive not in a value judgment sense, but in a transitive sense. Each step is in the same direction as the previous step. So things like we don't derive entertainment from torturing cats. We don't derive entertainment, like the Romans did in the Colosseum, from that. Or rather we suppress the desire to get, I mean, to have play. It's probably in us somewhere. So there's a bunch of parts of our brain, one that probably, you know, the limbic system, that wants certain pleasures. And that's... I don't, I mean, I wouldn't have said that, but you're at liberty to think that if you like. Well, no, there's a, there's a Dan Carlin of Hardcore History. There's a really nice explanation of how we've enjoyed watching the torture of people, the fighting of people, just the torture, the suffering of people throughout history as entertainment until quite recently. And now everything we do with sports, we're kind of channeling that feeling into something else. I mean, there, there is some dark aspects of human nature that are underneath everything. And I do hope this like higher level software we've built will keep that at bay. I'm also Jewish and have history with the Soviet Union and the Holocaust. And I clearly remember that some of the darker aspects of human nature crept up there. They do. There have been, there have been steps backwards admittedly, and the Holocaust is an obvious one. But if you take a broad view of history, it's the same direction. So Pamela McCorduck in Machines Who Think has written that AI began with an ancient wish to forge the gods. Do you see, it's a poetic description I suppose, but do you see a connection between our civilization's historic desire to create gods, to create religions and our modern desire to create technology and intelligent technology? I suppose there's a link between an ancient desire to explain away mystery and science, but intelligence, artificial intelligence, creating gods, creating new gods. And I forget, I read somewhere a somewhat facetious paper which said that we have a new god, it's called Google, and we pray to it and we worship it and we ask its advice like an oracle and so on. That's fun. You don't see that, you see that as a fun statement, a facetious statement. You don't see that as a kind of truth of us creating things that are more powerful than ourselves and natural. It has a kind of poetic resonance to it, which I get, but I wouldn't, I wouldn't have bothered to make the point myself, put it that way. All right. So you don't think AI will become our new god, a new religion, a new god like Google? Well, yes. I mean, I can see that the future of intelligent machines or indeed intelligent aliens from outer space might yield beings that we would regard as gods in the sense that they are so superior to us that we might as well worship them. That's highly plausible, I think.
But I see a very fundamental distinction between a god who is simply defined as something very, very powerful and intelligent on the one hand and a god who doesn't need explaining by a progressive step by step process like evolution or like engineering design. So suppose we did meet an alien from outer space who was marvelously, magnificently more intelligent than us and we would sort of worship it for that reason. Nevertheless, it would not be a god in the very important sense that it did not just happen to be there like god is supposed to. It must have come about by a gradual step by step incremental progressive process, presumably like Darwinian evolution. There's all the difference in the world between those two. Intelligence, design comes into the universe late as a product of a progressive evolutionary process or progressive engineering design process. So most of the work is done through this slow moving progress. Exactly. Yeah. Yeah. But there's still this desire to get answers to the why question that if the world is a simulation, if we're living in a simulation, that there's a programmer like creature that we can ask questions of. Well, let's pursue the idea that we're living in a simulation, which is not totally ridiculous, by the way. There we go. Then you still need to explain the programmer. The programmer had to come into existence by some... Even if we're in a simulation, the programmer must have evolved. Or if he's in a sort of... Or she. If she's in a meta simulation, then the meta programmer must have evolved by a gradual process. You can't escape that. Fundamentally, you've got to come back to a gradual incremental process of explanation to start with. There's no shortcuts in this world. No, exactly. But maybe to linger on that point about the simulation, do you think it's an interesting thing? Basically, you talk to... Bored the heck out of everybody asking this question, but whether you live in a simulation, do you think... First, do you think we live in a simulation? Second, do you think it's an interesting thought experiment? It's certainly an interesting thought experiment. I first met it in a science fiction novel by Daniel Galouye called Counterfeit World, in which it's all about... I mean, our heroes are running a gigantic computer which simulates the world, and something goes wrong, and so one of them has to go down into the simulated world in order to fix it. And then the denouement of the thing, the climax to the novel, is that they discover that they themselves are in another simulation at a higher level. So I was intrigued by this, and I love others of Daniel Galouye's science fiction novels. Then it was revived seriously by Nick Bostrom. Bostrom, I'm talking to him in an hour. And he goes further, not just treating it as a science fiction speculation, he actually thinks it's positively likely. I mean, he thinks it's very likely, actually. He makes a probabilistic argument, which you can use to come up with very interesting conclusions about the nature of this universe. I mean, he thinks that we're in a simulation done by, so to speak, our descendants of the future. But it's still a product of evolution. It's still ultimately going to be a product of evolution, even though the super intelligent people of the future have created our world, and you and I are just a simulation, and this table is a simulation and so on. I don't actually in my heart of hearts believe it, but I like his argument.
Well, so the interesting thing is that I agree with you, but the interesting thing to me, if I were to say, if we're living in a simulation, that in that simulation, to make it work, you still have to do everything gradually, just like you said. That even though it's programmed, I don't think there could be miracles. Well, no, I mean, the programmer, the higher, the upper ones have to have evolved gradually. However, the simulation they create could be instantaneous. I mean, they could be switched on and we come into the world with fabricated memories. True, but what I'm trying to convey, so you're saying the broader statement, but I'm saying from an engineering perspective, both the programmer has to be slowly evolved and the simulation because it's like, from an engineering perspective. Oh yeah, it takes a long time to write a program. No, like just, I don't think you can create the universe in a snap. I think you have to grow it. Okay. Well, that's a good point. That's an arguable point. By the way, I have thought about using the Nick Bostrom idea to solve the riddle of how you were talking. We were talking earlier about why the human brain can achieve so much. I thought of this when my then 100 year old mother was marveling at what I could do with a smartphone and I could call, look up anything in the encyclopedia, I could play her music that she liked and so on. She said, but it's all in that tiny little phone. No, it's out there. It's in the cloud. And maybe most of what we do is in a cloud. So maybe if we are a simulation, even all the power that we think is in our skull, it actually may be like the power that we think is in the iPhone. But is that actually out there in an interface to something else? I mean, that's what, including Roger Penrose with panpsychism, that consciousness is somehow a fundamental part of physics, that it doesn't have to actually all reside inside. But Roger thinks it does reside in the skull, whereas I'm suggesting that it doesn't, that there's a cloud. That'd be a fascinating notion. On a small tangent, are you familiar with the work of Donald Hoffman, I guess? Maybe not saying his name correctly, but just forget the name, the idea that there's a difference between reality and perception. So like we biological organisms perceive the world in order for the natural selection process to be able to survive and so on. But that doesn't mean that our perception actually reflects the fundamental reality, the physical reality underneath. Well, I do think that although it reflects the fundamental reality, I do believe there is a fundamental reality, I do think that our perception is constructive in the sense that we construct in our minds a model of what we're seeing. And so this is really the view of people who work on visual illusions, like Richard Gregory, who point out that things like a Necker cube, which flip from a two dimensional picture of a cube on a sheet of paper, we see it as a three dimensional cube, and it flips from one orientation to another at regular intervals. What's going on is that the brain is constructing a cube, but the sense data are compatible with two alternative cubes. And so rather than stick with one of them, it alternates between them. I think that's just a model for what we do all the time when we see a table, when we see a person, when we see anything, we're using the sense data to construct or make use of a perhaps previously constructed model. 
I noticed this when I meet somebody who actually is, say, a friend of mine, but until I kind of realized that it is him, he looks different. And then when I finally clock that it's him, his features switch like a Necker cube into the familiar form. As it were, I've taken his face out of the filing cabinet inside and grafted it onto or used the sense data to invoke it. Yeah, we do some kind of miraculous compression on this whole thing to be able to filter out most of the sense data and make sense of it. That's just a magical thing that we do. So you've written several, many amazing books, but let me ask, what books, technical or fiction or philosophical, had a big impact on your own life? What books would you recommend people consider reading in their own intellectual journey? Darwin, of course. The original. I'm actually ashamed to say I've never read Darwin. He's astonishingly prescient because considering he was writing in the middle of the 19th century, Michael Ghiselin said he was working 100 years ahead of his time. Everything except genetics is amazingly right and amazingly far ahead of his time. And of course, you need to read the updatings that have happened since his time as well. I mean, he would be astonished by, well, let alone Watson and Crick, of course, but he'd be astonished by Mendelian genetics as well. Yeah, it'd be fascinating to see what he thought about DNA, what he would think about DNA. I mean, yes, it would. Because in many ways, it clears up what appeared in his time to be a riddle. The digital nature of genetics clears up what was a problem, what was a big problem. Gosh, there's so much that I could think of. I can't really... Is there something outside sort of more fiction? When you were young, were there books just kind of outside of the realm of science or religion that just kind of sparked your journey? Yes. Well, actually, I suppose I could say that I've learned some science from science fiction. I mentioned Daniel Galouye, and that's one example, but another of his novels called Dark Universe, which is not terribly well known, but it's a very, very nice science fiction story. It's about a world of perpetual darkness. And we're not told at the beginning of the book why these people are in darkness. They stumble around in some kind of underground world of caverns and passages, using echolocation like bats and whales to get around. And they've adapted, presumably by Darwinian means, to survive in perpetual total darkness. But what's interesting is that their mythology, their religion has echoes of Christianity, but it's based on light. And so there's been a fall from a paradise world that once existed where light reigned supreme. And because of the sin of mankind, light banished them. So they no longer are in light's presence, but light survives in the form of mythology and in the form of sayings like, there's a great light almighty. Oh, for light's sake, don't do that. And I hear what you mean rather than I see what you mean. So some of the same religious elements are present in this other totally kind of absurd different form. Yes. And so it's a wonderful, I wouldn't call it satire, because it's too good natured for that. I mean, a wonderful parable about Christianity and the doctrine, the theological doctrine of the fall. So I find that kind of science fiction immensely stimulating. Fred Hoyle's The Black Cloud. Oh, by the way, anything by Arthur C. Clarke I find very wonderful too.
Fred Hoyle's The Black Cloud, his first science fiction novel, where he, well, I learned a lot of science from that. It suffers from an obnoxious hero, unfortunately, but apart from that, you learn a lot of science from it. Another of his novels, A for Andromeda, which by the way, the theme of that is taken up by Carl Sagan's science fiction novel, another wonderful writer, Carl Sagan, Contact, where the idea is, again, we will not be visited from outer space by physical bodies. We will be visited possibly, we might be visited by radio, but the radio signals could manipulate us and actually have a concrete influence on the world if they make us or persuade us to build a computer, which runs their software. So that they can then transmit their software by radio, and then the computer takes over the world. And this is the same theme in both Hoyle's book and Sagan's book, I presume. I don't know whether Sagan knew about Hoyle's book, probably did. But it's a clever idea that we will never be invaded by physical bodies. The War of the Worlds of H.G. Wells will never happen. But we could be invaded by radio signals, code, coded information, which is sort of like DNA. And we are, I call them, we are survival machines of our DNA. So it has great resonance for me, because I think of us, I think of bodies, physical bodies, biological bodies, as being manipulated by coded information, DNA, which has come down through generations. And in the space of memes, it doesn't have to be physical, it can be transmitted through the space of information. That's a fascinating possibility, that from outer space we can be infiltrated by other memes, by other ideas, and thereby controlled in that way. Let me ask the last, the silliest, or maybe the most important question. What is the meaning of life? What gives your life fulfillment, purpose, happiness, meaning? From a scientific point of view, the meaning of life is the propagation of DNA, but that's not what I feel. That's not the meaning of my life. So the meaning of my life is something which is probably different from yours and different from other people's, but we each make our own meaning. So we set up goals, we want to achieve, we want to write a book, we want to do whatever it is we do, write a quartet, we want to win a football match. And these are short term goals, well, maybe even quite long term goals, which are set up by our brains, which have goal seeking machinery built into them. But what we feel, we don't feel motivated by the desire to pass on our DNA, mostly. We have other goals which can be very moving, very important. They could even be called as called spiritual in some cases. We want to understand the riddle of the universe, we want to understand consciousness, we want to understand how the brain works. These are all noble goals. Some of them can be noble goals anyway. And they are a far cry from the fundamental biological goal, which is the propagation of DNA. But the machinery that enables us to set up these higher level goals is originally programmed into us by natural selection of DNA. The propagation of DNA. But what do you make of this unfortunate fact that we are mortal? Do you ponder your mortality? Does it make you sad? I ponder it. It would, it makes me sad that I shall have to leave and not see what's going to happen next. If there's something frightening about mortality, apart from sort of missing, as I said, something more deeply, darkly frightening, it's the idea of eternity. But eternity is only frightening if you're there. 
Eternity before we were born, billions of years before we were born, and we were effectively dead before we were born. As I think it was Mark Twain said, I was dead for billions of years before I was born and never suffered the smallest inconvenience. That's how it's going to be after we leave. So I think of it as really, eternity is a frightening prospect. And so the best way to spend it is under a general anesthetic, which is what it'll be. Beautifully put. Richard, it is a huge honor to meet you, to talk to you. Thank you so much for your time. Thank you very much. Thanks for listening to this conversation with Richard Dawkins. And thank you to our presenting sponsor, Cash App. Please consider supporting the podcast by downloading Cash App and using code LEXPodcast. If you enjoy this podcast, subscribe on YouTube, review with 5 stars on Apple Podcasts, support on Patreon, or simply connect with me on Twitter at Lex Friedman. And now let me leave you with some words of wisdom from Richard Dawkins. We are going to die. And that makes us the lucky ones. Most people are never going to die because they are never going to be born. The potential people who could have been here in my place but who will in fact never see the light of day outnumber the sand grains of Arabia. Certainly, those unborn ghosts include greater poets than Keats, scientists greater than Newton. We know this because the set of possible people allowed by our DNA so massively exceeds the set of actual people. In the teeth of these stupefying odds, it is you and I, in our ordinariness, that are here. We privileged few who won the lottery of birth against all odds. How dare we whine at our inevitable return to that prior state from which the vast majority have never stirred. Thank you for listening and hope to see you next time.
Richard Dawkins: Evolution, Intelligence, Simulation, and Memes | Lex Fridman Podcast #87
The following is a conversation with Eric Weinstein, the second time we've spoken on this podcast. He's a mathematician with a bold and piercing intelligence, unafraid to explore the biggest questions in the universe and shine a light on the darkest corners of our society. He is the host of The Portal podcast, as part of which he recently released his 2013 Oxford lecture on his theory of geometric unity that is at the center of his lifelong efforts to arrive at a theory of everything that unifies the fundamental laws of physics. This conversation was recorded recently in the time of the coronavirus pandemic. For everyone feeling the medical, psychological and financial burden of this crisis, I'm sending love your way. Stay strong. We're in this together. We'll beat this thing. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon or simply connect with me on Twitter at Lex Friedman spelled F R I D M A N. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPodcast. Cash App lets you send money to friends, buy Bitcoin and invest in the stock market with as little as $1. Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of the fractional orders is an algorithmic marvel. So big props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up to the next layer of abstraction of the stock market, making trading more accessible to new investors and diversification much easier. So again, if you get Cash App from the App Store, Google Play and use code LEXPodcast, you get $10 and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. And now here's my conversation with Eric Weinstein. Do you see a connection between World War II and the crisis we're living through right now? Sure. The need for collective action, reminding ourselves of the fact that all of these abstractions, like everyone should just do exactly what he or she wants to do for himself and leave everyone else alone. None of these abstractions work in a global crisis. And this is just a reminder that we didn't somehow put all that behind us. When I hear stories about my grandfather, who was in the army, in the Soviet Union, where most people died, when you're in the army, there's a brotherhood that happens. There's a love that happens. Do you think that's something we're going to see here? Uh, since we're not there, I mean, what the Soviet Union went through, I mean, the enormity of the war on, uh, the Russian doorstep, this is different. What we're going through now is not, we can't talk about Stalingrad and COVID in the same breath yet. We're not ready. And the, the sort of, uh, you know, just the sense of like the Great Patriotic War and the way in which I was very moved by the Soviet custom of, of newlyweds going and visiting war memorials on their wedding day. It's like the happiest day of your life. You have to say thank you to the people who made it possible. We're not there. We're, we're just restarting history. We, you know, I've called this on the Rogan program. I called it the great nap, the 75 years with, um, very little by historical standards in, in terms of really profound disruption.
And so when you call it the great nap, meaning lack of deep global tragedy? Well, lack of realized global tragedy. So I think that the development, for example, of the hydrogen bomb, you know, was something that happened during the great nap. And that doesn't mean that people who lived during that time didn't feel fear and know anxiety, but it was to say that most of the violent potential of the human species was not realized. It was in the form of potential energy. And this is the thing that I've sort of taken issue with, with the description of Steven Pinker's optimism, is that if you look at the realized kinetic variables, things have been getting much better for a long time, which is the great nap, but it's not as if, uh, our fragility has not grown, our dependence on electronic systems, our vulnerability to disruption. And so all sorts of things have gotten much better. Other things have gotten much worse and the destructive potential has skyrocketed. Is tragedy the only way we wake up from the big nap? Well, no, you could also have a, you know, jubilation about positive things, but it's harder to get people's attention. Can you give an example of a big global positive thing that could happen? I think that when, for example, just historically speaking, uh, HIV went from being a death sentence to something that people could live with for a very long period of time. It would be great if that had happened on a Wednesday, right? Like all at once, like you knew that things had changed. And so the bleed in somewhat kills the, the sort of the Wednesday effect where it all happens on a particular day at a particular moment. I think if you look at the stock market here, you know, there's a very clear moment where you can see that the market absorbs the idea of the coronavirus. I think that with respect to, um, positives, the moon landing was the best example of a positive that happened at a particular time or, uh, recapitulating the Soviet American, uh, link up in terms of, um, Skylab and Soyuz, right? Like that was a huge moment when you actually had these two nations connecting. Uh, in orbit. And so, yeah, there are great moments where something beautiful and wonderful and amazing happens, you know, but it's just, there are fewer of them. That's why, that's why as much as I can't imagine proposing to somebody at a sporting event, when you have like 30,000 people waiting and you know, like she says, yes, it's pretty exciting. So I think that we shouldn't, we shouldn't discount that. So how bad do you think it's going to get in terms of, um, of the global suffering that we're going to experience with this, with this crisis? I can't figure this one out. I'm just not smart enough. Something is going weirdly wrong. They're almost like two separate storylines. In one storyline, we aren't taking things nearly seriously enough. We see people using food packaging lids as masks who are doctors or nurses. Um, we hear horrible stories about people dying needlessly due to triage. And that's a very terrifying story. On the other hand, there's this other story, which says there are tons of ventilators someplace. We've got lots of masks, but they haven't been released. We've got hospital ships where none of the beds are being used.
And it's very confusing to me that somehow these two stories give me the feeling that they both must be true simultaneously, and they can't both be true in any kind of standard way. I don't know whether it's just that I'm dumb, but I can't get one or the other story to quiet down. So I think weirdly, this is much more serious than we had understood it. And it's not nearly as serious as some people are making it out to be at the same time, and that we're not being given the tools to actually understand, oh, here's how to interpret the data, or, here's the issue with the personal protective equipment: it's actually a jurisdictional battle or a question of who pays for it rather than a question of whether it's present or absent. I don't understand the details of it, but something is wildly off in our ability to understand where we are. So that's, that's policy, that's institutions. What about, do you think about the quiet suffering of millions of people that have lost their job? Is this a temporary thing? I mean, my ear is not to the suffering of those people who have lost their job, or the 50%, possibly, of small businesses that are going to go bankrupt. Do you think about that? Sure. It's suffering. Well, and how that might arise could be not quiet, too. I mean, right, this could be a depression. This could go from recession to depression, and depression could go to armed conflict and then to war. So it's not a very, um, abstract causal chain that gets us to the point where we can begin with quiet suffering and anxiety and all of these sorts of things and people losing their jobs and people dying from stress and all sorts of things. But, um, look, anything powerful enough to put us all indoors in a, I mean, I think about this as an incredible experiment. Imagine that you proposed, hey, I want to do a bunch of research. Let's figure out what changes in our emissions profiles or our carbon footprints when we're all indoors, or what happens to traffic patterns, or what happens to the vulnerability of retail sales, uh, as Amazon gets stronger, you know, et cetera, et cetera. I believe that in many of those situations, um, we're running an incredible experiment and I, am I worried for us all? Yes, there are some bright spots. One of which is that when you're ordered to stay indoors, people are going to feel entitled and the usual thing that people are going to hit when they hear that they've lost their job, you know, there's this kind of tough, um, tough love attitude that you see, particularly in the United States, like, oh, you lost your job, poor baby. Well, go retrain, get another one. I think there's going to be a lot less appetite for that. Um, because we've been asked to sacrifice, to risk, to act collectively. And that's the interesting thing. What does that reawaken in us? Maybe the idea that we actually are nations and that, you know, your fellow countrymen may, may start to mean something to more people. It certainly means something to people in the military, but I wonder how many people who aren't in the military start to think about this as like, oh yeah, we are kind of running separate experiments and we are not China. So you think this is kind of a period that might be studied for years to come? From my perspective, we are a part of the experiment, but I don't feel like we have access to the full range of knowledge.
But I don't feel like we have access to the full data, the full data of the experiment, we're just like little mice in a large... Does this one make sense to you? I'm, I'm romanticizing it and I keep connecting it to World War II. So I keep connecting to historical events and making sense of them through that way, or reading The Plague by Camus, like almost kind of telling narratives and stories, but it might, I'm not hearing the suffering that people are going through because I think that's quiet there. Everybody's numb currently. They're not realizing what it means to have lost your job and to have lost your business. There's kind of a, I don't, I, um, I'm afraid how that fear will materialize itself once the numbness wears off. And especially if this lasts for many months, then if it's connected to the incompetence of the CDC and the WHO and our government and perhaps the election process, you know, my biggest fear is that the elections get delayed or something like that. So the, the, the basic mechanisms of our democracy get slowed or damaged in some way that then mixes with the fear that people have that turns to panic, that turns to anger, that anger... Can I just play with that for a little bit? Sure. What if in fact, all of that structure that you grew up thinking about, and again, you grew up in two places, right? So, uh, when you were inside the U.S., we tend to look at all of these things as museum pieces, like how often do we amend the constitution anymore? And in some sense, if you think about the Jewish tradition of Simchat Torah, you've got this beautiful scroll that has been lovingly hand drawn in calligraphy, um, that's very valuable. And it's very important that you not treat it as a relic to be revered. And so we, one day a year, we dance with the Torah and we hold this incredibly vulnerable document up and we treat it as if, uh, you know, it was Ginger Rogers being, uh, led by Fred Astaire. Well, that is how you become part of your country. In fact, maybe the, maybe the election will be delayed. Maybe extraordinary powers will be used. Maybe any one of a number of things will indicate that you're actually living through history. This isn't a museum piece that you were handed by your great, great grandparents. But you're kind of suggesting that there might be a, like a community thing that pops up like, like, um, as opposed to, uh, an angry revolution. It might have a positive effect of, well, for example, are you telling me that if the right person stood up and called for us to sacrifice PPE, uh, for our nurses and our, our MDs who are on the front lines, that like people wouldn't reach down deep in their own supply that they've been like stocking and carefully storing, and just say, like, say here, take it. Like right now, an actual leader would use this time to bring out the heroic character and I'm going to just go wildly patriotic cause I frigging love this country, we've got this dormant population in the U.S. that loves leadership and country and pride in our freedom and not being told what to do. And we still have this thing that binds us together and all of them, the merchants of division, just be gone. I totally agree with you. There's a, I think there is a deep hunger for that leadership. Why hasn't that, why, why hasn't one of us... We don't have the right sort of surgeon general, we have a guy saying, you know, come on guys, don't buy masks. They don't really work for you. Save them for our healthcare professionals. No, you can't do that.
You have to say, you know what, these masks actually do work, and they work more to protect other people from you, but they would work for you too. They'll keep you somewhat safer if you wear them. Here's the deal. You've got somebody who's taking huge amounts of viral load all the time because the patients are shedding. Do you want to protect that person who's volunteered to be on the front line, who's up sleepless nights? You just change the message. You stop lying to people. You level with them. It's like, it's bad. Absolutely. But that's a little bit specific. So you have to be just honest about the facts of the situation. Yes. But I think you were referring to something bigger than just that: inspiring, like, you know, rewriting the constitution, sort of rethinking how we work as a nation. Yeah, I think you should probably amend the constitution once or twice in a lifetime so that you don't get this distance from the foundational documents. And part of the problem is that we've got two generations on top that feel very connected to the U.S., they feel bought in, and we've got three generations below. It's a little bit like watching your parents riding the tricycle that they were supposed to pass on to you. And it's like, you're now too old to ride a tricycle and they're still whooping it up, ringing the bell with the streamers coming off the handlebars. And you're just thinking, do you guys never get bored? Do you never pass a torch? Do you really want it? We had five septuagenarians, all born in the forties, running for president of the United States when Klobuchar dropped out. The youngest was Warren. We had Warren, Biden, Sanders, Bloomberg, and Trump, from, like, 1949 to 1941, any of whom would have been the oldest president at inauguration, and nobody says, grandma and grandpa, you're embarrassing us, except Joe Rogan. Let me put it on you. You have a big platform. You're somewhat of an intelligent, eloquent guy. What role do you play? Why aren't you that leader? Well, I mean, I would argue that you're in ways becoming a leader, in ways becoming that leader. So I haven't taken enough risk, is that your idea? What should I do or say at the moment? No, you have taken quite big risks and we'll talk about it. All right. But you're also on the outside shooting in, meaning you're dismantling the institution from the outside as opposed to becoming the institution. Do you remember that thing you brought up when you were on The View? The View? I'm sorry, when you were on Oprah. I didn't get the invite. Sorry, when you were on Bill Maher's program, what was that thing you were saying? They don't know we're here. They may watch us, yeah, they may quietly slip us a direct message, but they pretend that this internet thing is some dangerous place where only lunatics play. Well, who has the bigger platform, The Portal or Bill Maher's program or The View? Bill Maher and The View in terms of viewership, or in terms of, well, what's the metric of size? Well, first of all, the key thing is: take a newspaper and imagine that it's completely fake. Okay. And that there's very little in the way of circulation. Yet imagine that it's a hundred-year-old paper and that it's still part of this game, this internal game of media.
The key point is, is that those sources that have that kind of, um, mark of respectability to the institutional structures matter in a way that even if I say something on a very large platform that makes a lot of sense, if it's outside of what I've called the gated institutional narrative or gin, I'm sorry, institutional narrative or gin, it sort of doesn't matter to the institutions. So the game is if it happens outside of the club, we can pretend that it never happened. How can you get the credibility and the authority from outside the, the gated institutional narrative? Well, first of all, you and I both share, um, institutional credibility coming from organizations. So you, we were both at MIT, were you at Harvard at any point? Nope. Okay. Well, I lived in Harvard square. So did I, but you know, at some level, the issue isn't whether you have credentials in that sense. The key question is, can you be trusted to file a flight plan and not deviate from that flight plan when you are in an interview situation, will you stick to the talking points? Not, and that's why you're not going to be allowed in the general conversation, which amplifies these sentiments, but I'm still trying to, um, so your, your point, it would be, is that we're, let's say both. So you've done how many Joe Rogan for I've done for two, right? So both of us are somewhat frequent guests. The show is huge. You know, the power as well as I do, and people are going to watch this conversation. A huge number watched our last one, by the way, I want to thank you for that one. That was a terrific, terrific conversation. Really did change my life. Like you're brilliant interviewer. So thank you. Thank you. That was that you changed my life too. That you gave me a chance. So I was so glad I did that one. What I would say is, is that we keep mistaking how big the audience is for whether or not you have the kiss and the kiss is a different thing. Yes. Yeah. Well, it doesn't, it's not an acronym yet. Okay. Um, it's uh, but thank you for asking. It's a question of, are you part of the inter interoperable institution friendly discussion? And that's the discussion which we ultimately have to break into. But that's what I'm trying to get at is how do we, how do you, how does Eric Weinstein become the president of the United States? I shouldn't become the president of the United States. Not interested. Thank you very much for asking. Okay. Get into a leadership position where I guess I don't know what that means, but where you can inspire millions of people to, uh, the inspire the sense of community, inspire the, the kind of actions required to overcome hardship, the kind of hardship that we may be experiencing to inspire people, to work hard and face the difficult, hard facts of the realities we're living through all those kinds of things that you're talking about. That leader, you know, can that leader emerge from the current institutions or alternatively, can it also emerge from the outside? I guess that's what I was asking. So my belief is, is that this is the last hurrah for the elderly centrist kleptocrats. Can you define each of those terms? Okay. Elderly. I mean people who were born at least a year before I was, that's a joke. You can laugh. Uh, no, because I'm born at the cusp of the gen X boomer divide. Um, centrist they're pretending, you know, there are two parties, Democrat and Republican party in the United States. 
I think it's easier to think of the mainstream of both of them as part of a, an aggregate party that I sometimes call the looting party, which gets us to kleptocracy, which is ruled by thieves. And the great temptation has been to treat the U S like a trough. And you just have to get yours because it's not like we're doing anything productive. So everybody's sort of looting the family mansion and somebody stole the silver and somebody is cutting the pictures out of the frames and you know, roughly speaking, we're watching our elders, uh, we'll live it up in a way that doesn't make sense to the rest of us. Okay. So if it's the last hurrah, this is the time for leaders to step up. We're not ready yet. We're not ready. I just disagree with that. I call, I call out, you know, the head of the CDC should resign, should resign. The surgeon general should resign. Trump should resign. Pelosi should resign. De Blasio should resign. I understand that. So that's why. So we'll wait. No, but that's not how revolutions work. You don't wait for people to resign. You, uh, step up and inspire the alternative. Do you remember the Russian revolution of 1907? It's before my time, but there wasn't a Russian revolution of 1907. So you're thinking we're in 1907. I'm saying we're too early. But we got this, you know, Spanish flu came in 17, 18. So I would argue that there's a lot of parallels there or there were one. I think it's not time yet. Like John Prine, the, uh, uh, the songwriter just died of COVID. That was a pretty big, really? Yeah. By the way, you, yes, of course. I, um, every time we do this, uh, we discover our mutual appreciation of obscure brilliant witty songwriter. He's really, he's really quite good, right? He's, he's really good. Yeah. He died. My understanding is that he passed recently due to complications of Corona. Yeah. So we haven't had large enough, enough large, large enough shocking deaths yet, picturesque deaths, deaths of a family that couldn't get treatment. There are stories that will come and break our hearts and we have not had enough of those. The visuals haven't come in, but I think they're coming. Well, we'll find out. But that you gotta, you have to be there. He has to be there when they come. I mean, but we didn't get the visual for example of falling man from nine 11. Right. So the outside world did, but Americans were not, it was thought that we would be too delicate. So just the way you remember Pulitzer prize winning photographs from the Vietnam era, you don't easily remember the photographs from all sorts of things that have happened since because something changed in our media. We are in sense that we cannot feel or experience our own lives and the tragedy that would animate us to action. Yeah. But I think there, again, I think there's going to be that suffering that's going to build and build and build in terms of businesses, mom and pop shops that close. And I, like, I think for myself, I think often that, that I'm being weak and, and like I feel like I should be doing something. I should be becoming a leader on a small scale. You can't, this is not world war II, and this is not Soviet Russia. Why not? Why not? Because our internal programming, the malware that sits between our ears is much different than the propaganda is malware of the Soviet era. I mean, people were both very indoctrinated and also knew that some level it was BS. They had a double mind. I don't know. There must be a great word in Russian for being able to think both of those things simultaneously. 
You don't think people are actually sick of the partisanship, sick of incompetence. Yeah, but I called for revolt the other day on Joe Rogan. People found it quixotic. Well, because I think you're not, I think revolt is different. I think that's like, okay, I'm really angry. I'm, I'm furious. I cannot stand that this is my country at the moment. I'm embarrassed. So let's build a better one. Yeah. Right. That's the, I'm in. Okay. So, well, okay, so let's take over a few universities. Let's start running a different experiment at some of our better universities. Like when I did this experiment and I said, what, at this, if this were 40 years ago, the median age, I believe of a university president was 51 that would have the person in gen X and we'd have a bunch of millennial presidents, a bunch of, you know, more than half gen X it's almost 100% baby boom at this point. Um, and how did that happen? We can get into how they changed retirement, but this generation of people are not going to be able to do that. But this generation above us does not feel for even even the older generous silent generous. I had Roger Penrose on my program. Excellent. And I thank you. I really appreciate that. And I asked him a question that was very important to me. I said, look, you're in your late eighties. Is there anyone you could point to as a successor that we should be watching? We can get excited. You know, I said, here's an opportunity to pass the baton and he said, well, let me, let me hold off on that. It was like, Oh, is it ever the right moment to point to somebody younger than you to keep your flame alive after you're gone? And also like, I don't know whether, I'm just going to admit to this. People treat me like I'm crazy for caring about the world after I'm dead or wanting to be remembered after you're gone. Like, well, what does it matter to you? You're gone. It's this deeply sort of secular somatic perspective on everything where we don't, you know, that phrase in a, as time goes by, it says it's still the same old story, a fight for love and glory, a case of do or die. I don't think people imagined then that there wouldn't be a story about fighting for love and glory. And like we are so out of practice about fighting, you know, rivals for love and and and and fighting for glory and something bigger than yourself. But the hunger is there. Well, that was the point then, right? The whole idea is that Rick was, you know, it was like Han Solo of his time. He's just like, I stick my neck out for nobody. You know, it's like, Oh, come on, Rick, you're just pretending you actually have a big soul. Right. And so at some level, that's the question. Do we have a big soul or is it just all bullshit? So yeah, I think, I think there's huge Manhattan project style projects, whether you talk about physical infrastructure or going to Mars, you know, the SpaceX NASA efforts or huge, huge scientific efforts. Well, we need to get back into the institutions and we need to remove the weak leadership that we have weak leaders and the weak leaders need to be removed and they need to seat people more dangerous than the people who are currently sitting in a lot of those chairs. Yeah. Or build new institutions. Good luck. Well, so one of the nice things of, uh, from the internet is for example, somebody like you can have a bigger voice than almost anybody at the particular institutions we're talking about. That's true. But the thing is I might say something. 
You can count on the fact that the, you know, provost at Princeton isn't going to say anything. Yeah. What do you mean to, to afraid? Well, if that person were to give an interview, how are things going in research at Princeton? Well, I'm hesitant to say it, but they're perhaps as good as they've ever been and I think they're going to get better. Oh, is that right? All fields? Yep. I don't see a weak one. It's just like, okay, great. Who are you and what are you even saying? We're just used to total nonsense. 24 seven. Yeah. What do you think might be a beautiful thing that comes out of this? Like what is there a hope that like a little inkling, a little fire of hope you have about our time right now? Yeah. I think one thing is coming to understand that the freaks, weirdos, mutants, and other, uh, near do wells, uh, sometimes referred to as grifters. I like that one. Grifters, uh, and gadflies were very often the earliest people on the coronavirus. That's a really interesting question. Why was that? And it seems to be that they had already paid such a social price that they weren't going to be beaten up by being, um, told that, Oh my God, you're xenophobic. You just hate China, you know, or wow, you sound like a conspiracy theorist. Um, so if you'd already paid those prices, you were free to think about this. And everyone in an institutional framework was terrified that they didn't want to be seen as the alarmist, the, um, chicken little. And so that's why you have this confidence where, you know, the Blasio says, you know, get on with your lives, get back in there and celebrate Chinese new year in Chinatown. Uh, despite coronavirus, it's like, okay, really? So you just always thought everything would automatically be okay if you, if you adapted, sorry, if you adopted that posture. So you think, uh, this time reveals the weakness of our institutions and reveals the strength of our gadflies and the weirdos and the. No, not necessarily the strength, but the, the, the value of freedom, like a different way of saying it would be, wow, even your gadflies and your grifters were able to beat your institutional folks because your institutional folks were playing with a giant mental handicap. So just imagine like we were in the story of Harrison Bergeron by Vonnegut and our smartest people were all subjected to, uh, distracting noises every seven seconds. Well, they would be functionally much dumber because they couldn't continue a thought through all the disturbance. So in some sense, that's a little bit like what belonging to an institution is, is that if you have to make a public statement, of course the surgeon general is going to be the worst because they're, they're just playing with too much of a handicap. There are too many institutional players are like, don't screw us up. And so the person has to say something wrong. We're going to back propagate a falsehood. And this is very interesting. Some of my socially oriented friends say, Eric, I don't understand what you're on about. Of course masks work, but you know what they're trying to do. They're trying to get us not to buy up the masks for the doctors. And I think, okay, so you imagine that we can just create scientific fiction at will so that you can run whatever social program you want. This is what I, you know, my point is get out of my lab, get out of the lab. You don't belong in the lab. You're not meant for the lab. You're constitutionally incapable of being around the lab. You need to leave the lab. 
You think the CDC and WHO knew that masks work and were trying to, were trying to sort of imagine that people are kind of stupid and they would buy masks in excess if they were told that masks work? Is that... because this does seem to be a particularly clear example of mistakes made. You're asking me this question? No, you're not. What do you think, Lex? Well, I actually probably disagree with you a little bit. Great. Let's do it. I think it's not so easy to be honest with the populace when the danger of panic is always around the corner. So I think the kind of honesty you exhibit appeals to a certain class of brave intellectual minds. It appeals to me, but I don't know, from the perspective of the WHO, I don't know if it's so obvious that they should be honest 100% of the time with people. I'm not saying you should be perfectly transparent and 100% honest. I'm saying that the quality of your lies has to be very high and it has to be public-spirited. There's a big difference. So I'm not a child about this. I'm not saying that when you're at war, for example, you turn over all of your plans to the enemy because it's important that you're transparent with 360-degree visibility. Far from it. What I'm saying is something has been forgotten, and I forgot who it was who told it to me, but it was a fellow graduate student in the Harvard math department, and he said, you know, I learned one thing being out in the workforce, because he was one of the few people in the department who had had a work life as a grad student. And he said, you can be friends with your boss, but if you're going to be friends with your boss, you have to be doing a good job at work. And there's an analog here, which is: if you're going to be reasonably honest with the population, you have to be doing a good job at work as the surgeon general or as the head of the CDC. So if you're doing a terrible job, you're supposed to resign. And then the next person is supposed to say, look, I'm not going to lie to you. I inherited the situation. It was in a bit of disarray. But I had several requirements before I agreed to step in and take the job, because I needed to know I could turn it around. I needed to know that I had clear lines of authority. I needed to know that I had the resources available in order to rectify the problem. And I needed to know that I had the ability and the freedom to level with the American people directly as I saw fit. All of my wishes were granted, and that's why I'm happy here on Monday morning. I've got my sleeves rolled up. Boy, do we got a lot to do. So please come back in two weeks and then ask me how I'm doing then, and I hope to have something to show you. That's how you do it. So why is that excellence and basic competence missing? The big nap. You see, you come from multiple traditions where it was very important to remember things. The Soviet tradition made sure that you remembered the sacrifices that came in that war, and the Jewish tradition, we're doing this on Passover, right? Okay. Well, every year we tell one simple story. Well, why can't it be different every year? Maybe we could have a rotating series of seven stories. Because it's the one story that you need. It's like, you know, you work with the Men in Black group, right? And it's the last suit that you'll ever need. This is the last story that you ever need. Don't think I fell for your neuralyzer last time. In any event, we tell one story because it's the get-out-of-Dodge story.
There's a time when you need to not wait for the bread to rise. And that's the thing, which is: even if you live through a great nap, you deserve to know what it feels like to have to leave everything that has become comfortable and unworkable. It's sad that you need that tragedy, I imagine, to have the tradition of remembering. It's sad to think that because things have been nice and comfortable, we can't have great competent leaders, which is kind of the implied statement. Like, can we have great leaders who take big risks, who inspire hard work, who deal with difficult truth, even though things have been comfortable? Well, we know what those people sound like. I mean, you know, if, for example, Jocko Willink suddenly threw his hat into the ring, everyone would say, okay, right, party's over. It's time to get up at 4:30 and really work hard, and we've got to get back into fighting shape. Yeah, but Jocko is a very special... I think that whole group of people, by profession, put themselves in the way of, and into, hardship on a daily basis. And he's not, I don't, well, I don't know, but he's probably not going to be... well, could Jocko be president? Okay. But it doesn't have to be Jocko, right? Like, in other words, if it was Kai Lenny, or if it was Alex Honnold from rock climbing, right. But they're just serious people. They're serious people who can't afford your BS. Yeah. But why do we have serious people that do rock climbing and don't have serious people who lead the nation? That seems to... Because those skills needed in rock climbing are not good during the big nap. And at the tail end of the big nap, they would get you fired. But don't you think there's a fundamental part of human nature that desires to excel, to be exceptionally good at your job? Yeah, but what is your job? I mean, in other words, my point to you is: if you're a general in a peacetime army and your major activity is playing war games, what if the skills needed to win war games are very different than the skills needed to win wars? Because you know how the war games are scored, and you've done Moneyball, for example, with war games: you figured out how to win games on paper. So then the advancement skill becomes divergent from the ultimate skill that it was proxying for. Yeah. But we're good as human beings to... I mean, at least me, I can't do a big nap. So at any one moment when I finish something, a new dream pops up. So, going to Mars. What do you like to do? You like to do Brazilian jujitsu? Well, first of all, I like to do every... You like to play guitar. Guitar. You do this podcast, you do theory. You're constantly taking risks and exposing yourself. Right? Why? Because you've got one of those crazy, I'm sorry to say it, you've got an Eastern European Jewish personality, which I'm still tied to, and I'm a couple of generations more distant than you are. And I've held on to that thing because it's valuable to me. You don't think there's a huge percent of the populace, even in the United States, that has that? It might be a little bit dormant, but... Do you know Anna Khachiyan from the Red Scare podcast? Did you interview her? Yeah. Yeah, yeah, yeah. I listened. Yeah, she was great. She was great, right? Yeah. She's fun. She's terrific. But she also has the same thing going on.
And I made a joke in the liner notes for that episode, which is: somewhere on the road from Stalingrad to Forever 21, something was lost. Like, how can Stalingrad and Forever 21 be in the same sentence? And, you know, in part it's that weird thing. It's like trying to remember even the words. Like, in Russian and Hebrew there are things like pamyat and zakhor, you know, these words have much more potency about memory, and I don't know. I do, I think there's still a dormant populace that craves leaders, on a small scale and a large scale. And I hope to be that leader on a small scale, and I think you, sir, have a role to be a leader. You kids go ahead without me. I'm just going to do a little bit of a weird podcast. I see, now you're putting on your Joe Rogan hat. He says, I'm just a comedian. Oh no, I'm not saying I'm just a... It's that if I say I want to lead too much, because of the big nap, there's like a group, a chorus of automated idiots, and their first thought is like, ah, I knew it, so it's a power grab all along, why should you lead? You know, it's just like... and so the idea is you're just trying to skirt around, not stepping on all of the idiot landmines. It's like, okay, so now I'm going to hear that in my inbox for the next three days. Okay. So lead by example, just live. No, I mean, the issue, platform... look, we should take over the institutions. They're our institutions. We've got bad leadership. We should mutiny, and we should inject, I don't know, 15, 20 percent disagreeable, dissident, very aggressive loner individual mutant freaks, all the people that you go to see Avengers movies about, or the X-Men or whatever it is, and stop pretending that everything good comes out of some great giant inclusive communal 12-hour meeting. It's like, stop it. That's not how shit happens. You recently published the video of a lecture you gave at Oxford presenting some aspects of a theory of everything called geometric unity. So this was a work of 30-plus years. This is life's work. Let me ask the silly old question: how do you feel as a human? Excited, scared? The experience of posting it? You know, it's funny. One of the things that you learn to feel as an academic is that among the great sins you can commit in academics is to show yourself to be a non-serious person, to show yourself to have delusions, to avoid the standard practices which everyone has signed up for. And you know, it's weird, because you know that those people are going to be angry. He did what? You know, why would he do that? And what we're referring to, for example: there are traditions of sort of publishing incrementally, certainly not trying to have a theory of everything, perhaps working within the academic departments, all those things. So that's true. And so you're going outside of all of that. Well, I mean, I was going inside of all of that, and we did not come to terms when I was inside, and what they did was so outside to me, so weird, so freakish. Like, the most senior, respectable people at the most senior, respectable places were functionally insane as far as I could tell. And again, it's like being functionally stupid if you're the head of the CDC or something, where, you know, you're giving recommendations out that aren't based on what you actually believe. They're based on what you think you have to be doing.
Well, in some sense, I think that that's a lot of how I saw the math and physics world: the physics world was really crazy, and the math world was considerably less crazy, just very strict and kind of dogmatic. Well, we'll psychoanalyze those folks, but I really want to maybe linger on it a little bit longer, on how you feel, because this is such a special moment in your life. I really appreciate it. It's a great question. So, if we can peel off some of those other issues: it's new being able to say what the observerse is, which was my attempt to replace spacetime with something that is both closely related to spacetime and not spacetime. So I used to carry the number 14 as a closely guarded secret in my life, where 14 is really four dimensions of space and time plus 10 extra dimensions of rulers and protractors, or, for the cool kids out there, symmetric 2-tensors. So you had a complicated, beautiful geometric view of the world that you carried with you for a long time. Yeah. Did you have friends, colleagues essentially, that you talked to? No. In fact, part of some of these stories are me coming out to the world, to my friends, and I use the phrase coming out because I think that gays have monopolized the concept of the closet. Many of us are in closets having nothing to do with our sexual orientation. Yeah, I didn't really feel comfortable talking to almost anyone, so this was a closely guarded secret. And I think that I let on in some ways that I was up to something, and probably I was, but it was a very weird life. I had to have a series of things that I pretended to care about so that I could use that as the stalking horse for what I really cared about. And to your point, I never understood this whole thing about theories of everything. Like, if you were going to go into something like theoretical physics, isn't that what you would normally pursue? Like, wouldn't it be crazy to do something that difficult and that poorly paid if you were going to try to do something other than figure out what this is all about? Now I have to reveal my cards, my sort of weaknesses and lack of understanding of the music of physics and math departments. But there's an analogy here to artificial intelligence. And often folks come in and say, okay, so there's a giant department working on quote-unquote artificial intelligence, right? But why is nobody actually working on intelligence? Like, you're all just building little toys, right? You're not actually trying to understand. And that breaks a lot of people. It confuses them, because, like, okay, so I'm at MIT, I'm at Stanford, I'm at Harvard, I'm here, I dreamed of working on artificial intelligence. Why is everybody not actually working on intelligence? And I have the same kind of sense that that's what working on the theory of everything is: strangely, you somehow become an outcast for even attempting it. But we know why this is, right? Why? Well, it's because, let's take the artificial... let's play with AGI, for example. I think that the idea starts off with: nobody really knows how to work on that. And so if we don't know how to work on it, we choose instead to work on a program that is tangentially related to it.
So we do a component of a program that is related to that big question, because it's felt like at least I can make progress there. And that wasn't where I was. Where I was... it's funny, there was this book called Freed and Uhlenbeck, and it had this weird mysterious line in the beginning of it. And I tried to get clarification of this weird mysterious line, and everyone said wrong things. And then I said, okay, well, so I can tell that nobody's thinking properly, because I just asked the entire department and nobody has a correct interpretation of this. And so, you know, it's a little bit like you see a crime scene photo and you have a different idea. Like, there's a smoking gun and you figure that's actually a cigarette lighter; I don't really believe that. And then there's like a pack of cards and you think, oh, that looks like the blunt instrument that the person was beaten with. You know, so you have a very different idea about how things go. And very quickly you realize that there's no one thinking about that. There are a few human sides to this and technical sides, both of which I'd love to try to get down to. So, the human side: I can tell, from my perspective, I think it was before April 1st, April Fools, maybe the day before, I forget, but I was lying in bed in the middle of the night and somehow it popped up, you know, on my feed somewhere, that your beautiful face is speaking live, and I clicked. And you know, it's kind of weird how the universe just brings things together in this kind of way. And all of a sudden I realized that there's something big happening at this particular moment. It's strange. On a day like any day, and all of a sudden, you had this somber tone, like you were serious, like you were going through some difficult decision, and it seemed strange. I almost thought you were maybe joking, but there was a serious decision being made, and it was a wonderful experience to go through with you. I really appreciate it. I mean, it was April 1st. Yeah, it was. It's kind of fascinating. I mean, it's just the whole experience. And so I want to ask, I mean, thank you for letting me be part of that kind of journey of decision making that took 30 years, but why now? Why did you struggle so long not to release it, and decide to release it now, while the whole world is on lockdown, on April Fools? Is it just because you like the comedy of absurd ways that the universe comes together? I don't think so. I think that the COVID epidemic is the end of the big nap. And I think that I actually tried this seven years earlier in Oxford, and it was too early. Which part was too early? Is it the platform? Because your platform is quite different now, actually, the internet. I remember, I read several of your brilliant answers, which people should read, for the Edge questions. One of them was related to the internet. And it was the first one. Was it the first one? Yeah. An essay called Go Virtual, Young Man. Yeah. Yeah. That seemed... that's like forever ago now. Well, that was 10 years ago, and that's exactly what I did: I decamped to the internet, which is where The Portal lives. The Portal, The Portal, The Portal. Well, so the whole... the theme, that's the ominous theme music, which you just listened to forever. I actually started recording tiny guitar licks for the audio portion, not for the video portion. You've kind of inspired me with bringing your guitar into the story. But keep going.
So you thought... so the Oxford talk was like step one, and you kind of put your foot in the water to sample it, but it was too cold at the time, so you didn't want to step in. I was just really disappointed. What was disappointing about that experience? It's a hard thing to talk about. It has to do with the fact that, and I can see this, you know, this mirrors a disappointment within myself. There are two separate issues. One is the issue of making sure that the idea is actually heard and explored. And the other is the question about whether I will become disconnected from my work, because it will be ridiculed, it will be immediately improved, it will be found to be derivative of something that occurred in some paper in 1957. When the community does not want you to gain a voice, it's a little bit like a policeman deciding to weirdly enforce all of these little-known regulations against you and, you know, sometimes nobody else. And I think that's kind of, you know, this weird thing where I just don't believe that we can reach the final theory necessarily within the political economy of academics. So if you think about how academics are tortured by each other, and how they're paid, and where they have freedom and where they don't, I actually weirdly think that that system of selective pressures is going to eliminate anybody who's going to make real progress. So that's interesting. So if you look at the story of Andrew Wiles, for example, with Fermat's Last Theorem, I mean, as far as I understand, he pretty much isolated himself from the world of academics in terms of the bulk of the work he did. And from my perspective it's dramatic and fun to read about, but it seemed exceptionally stressful. The first steps he took when actually making the work public, that seemed to me it would be hell. But it's like so artificially dramatic, you know: he leads up to it over a series of lectures, he doesn't want to say it, and then he finally says it at the end, because obviously this comes out of a body of work where... I mean, the funny part about Fermat's Last Theorem is that it wasn't originally thought to be a deep and meaningful problem. It was just an easy-to-state one that had gone unsolved. But if you think about it, it became attached to the body of modularity theory. So he builds up this body of modularity theory, gets all the way up to the end, announces, and then there's this whole drama about, okay, somebody's checking the proof: I don't understand what's going on in line 37, you know, and like, oh, is this serious? It seems a little bit more serious than we knew. I mean, do you see parallels? Do you share the concern that your experience might be something similar? Well, in his case, I think that, if I recall correctly, his original proof was unsalvageable. He actually came up with a second proof with a colleague, Richard Taylor, and it was that second proof which carried the day. So it was a little bit that he got put under incredible pressure and then had to succeed in a new way, having failed the first time, which is like an even weirder and stranger story. That's an incredible story in some sense. But I mean, I'm trying to get a sense of the kind of stress. I think that this is okay. But what I don't think people understand with me is the scale of the critique. It's like... people say, well, you must implicitly agree with this, and implicitly agree with that.
It's like, no, try me. Ask before you decide that I am mostly in agreement with the community about how these things should be handled or what these things mean. Can you elaborate? And also, just, why does criticism matter so much here? You seem to dislike the burden of criticism, that it will choke things off. Well, there are all different kinds of criticism. There's constructive criticism and there's destructive criticism. And what I don't like is a community that can't, first of all... like, if you take the physics community: just the way we screwed up on masks and PPE, just the way we screwed up in the financial crisis and mortgage-backed securities, we screwed up on string theory. Can we just forget that string theory happened? Or, sure, but somebody should say that, right? Somebody should say, you know, it didn't work out. Yeah. But okay. But you're asking this like, why do you guys get to keep the prestige after failing for 35 years? Yeah. That's an interesting question. You guys. Because to me, look, these things, if there is a theory of everything to be had, right, it's going to be a relatively small group of people where this will be sorted out. Absolutely. It's not tens of thousands. It's probably hundreds at the top. But within that community there are the assholes, and you always, in this world, have people who are kind and open-minded. It's a question about, okay, let's imagine, for example, that you have a story where you believe that ulcers are definitely caused by stress and you've never questioned it. Or maybe you felt like the Japanese came out of the blue and attacked us at Pearl Harbor, right? And now somebody introduces a new idea to you, which is like, what if it isn't stress at all? Or, what if we actually tried to resource-starve Japan so it would attack us somewhere in the Pacific, so we could have a casus belli to enter the Asian theater? The person's initial reaction is like, what are you even saying? You know, it's like, too crazy. Well, when Dirac in 1963 talked about the importance of beauty as a guiding principle in physics, and he wasn't talking about the scientific method, that was crazy talk, but he was actually making a great point, and he was using Schrödinger, I think it was Schrödinger, who was standing in for him. And he said that if your equations don't agree with experiment, that's kind of a minor detail. If they have true beauty in them, you should explore them, because very often the disagreement with experiment is an issue of fine-tuning of your model, of the instantiation, and so it doesn't really tell you that your model is wrong. And of course Heisenberg told Dirac that his model was wrong, because the proton and the electron should be the same mass if they are each other's antiparticles, and that was an irrelevant kind of silliness rather than a real threat to the Dirac theory. But okay. So amidst all this silliness, I'm hoping that we could talk about the journey that geometric unity has taken and will take as an idea, an idea that will see the light. Yeah. So first of all, I'm thinking of writing a book called Geometric Unity for Idiots. Okay. And I need you as a consultant. So can we... first of all, I hope I have the trademark on geometric unity. You do. Good. Can you give a basic introduction to the goals of geometric unity, the basic tools of mathematics it uses, the viewpoints, in general, for idiots? Sure. Like me. Okay. Great. Fun.
So what's the goal of geometric unity? The goal of geometric unity is to start with something so completely bland that you can simply say, well, that's a something. What begins the game is as close to a mathematical nothing as possible. In other words, I can't answer the question, why is there something rather than nothing? But if there has to be a something that we begin from, let it begin from something that's like a blank canvas. Let's get even more basic. So what is something? What are we trying to describe here? Right now we have a model of our world, and it's got two sectors. Two sectors. One of the sectors is called general relativity. The other is called the standard model. So we'll call it GR for general relativity and SM for standard model. What's the difference between the two? What do the two describe? So general relativity gives pride of place to gravity, and everything else is acting as a sort of backup singer. Gravity is the star of the show. Gravity is the star of general relativity. And in the standard model, the other three, the non-gravitational forces, so of the four forces that we know about, the three that are non-gravitational, that's where they get to shine. Great. So, tiny little particles and how they interact with each other. So photons, gluons, and so-called intermediate vector bosons: those are the things that the standard model showcases, and general relativity showcases gravity. And then you have matter, which is accommodated in both theories, but much more beautifully inside of the standard model. So what does a theory of everything do? So first of all, I think that that's the first place where we haven't talked enough. We assume that we know what it means, but we don't actually have any idea what it means. And what I claim it is, is a theory where the questions beyond that theory are no longer of a mathematical nature. In other words, if I say, let us take X to be a four-dimensional manifold, to a mathematician or a physicist I've said very little. I've simply said there's some place for calculus and linear algebra to dance together and to play. And that's what manifolds are. They're the most natural place where our two greatest math theories can really intertwine. Which are the two? Oh, you mean calculus and linear algebra. Right. Okay. Now the question is, beyond that. So it's sort of like saying, I'm an artist and I want to order a canvas. Okay. Now the question is, does the canvas paint itself? Does the canvas come up with an artist and paints and ink, which then paint the canvas? Like, that's the hard part about theories of everything, which I don't think people talk enough about. Can we just... you bring up Escher and the hand that draws itself. Is it the fire that lights itself, or drawing hands, the drawing hands. Yeah. And every time I start to think about that, my mind kind of shuts down. Well, don't do that. There's a spark, and this is the most beautiful part. We should do this together. No, it's beautiful, but in this robot's brain, sparks fly. So can we try to say the same thing over and over in different ways about what you mean by that having to be a thing we have to contend with? Sure. Like, why do you think that creating a theory of everything, as you call it the source code, or understanding our source code, requires a view like the hand that draws itself? Okay. Well, here's what goes on in the regular physics picture.
We've got these two main theories, general relativity and the standard model. Right? Okay. Think of general relativity as more or less the theory of the canvas. Okay. Maybe you have the canvas in a particularly rigid shape. Maybe you've measured it, so it's got length and it's got angle. But more or less, it's just canvas and length and angle. And that's all that general relativity really is, but it allows the canvas to warp a bit. Okay. Then we have the second thing, which is this import of foreign libraries which aren't tied to space and time. So we've got this crazy set of symmetries called SU(3) cross SU(2) cross U(1). We've got this collection of 16 particles in a generation, which are these sort of twisted spinors, and we've got three copies of them. Then we've got this weird Higgs field that comes in and, like deus ex machina, solves all the problems that have been created in the play that can't be resolved otherwise. So that's the standard model of quantum field theory, just plopped on top. It's a problem of the double origin story. One origin story is about space and time. The other origin story is about what we would call internal quantum numbers and internal symmetries. And then there was an attempt to get one to follow from the other, called Kaluza-Klein theory, which didn't work out, and this is sort of in that vein. So, you said origin story. So in the hand that draws itself, what is it? So it's as if you had the canvas and then you ordered up: also give me paintbrushes, paints, pigments, pencils, and artists. But you're saying that's like... if you want to create a universe from scratch, the canvas should be generating the paintbrushes, and the paintbrushes and the artists, right? Like, you should... who's the artist in this analogy? Well, this is... sorry, then we're going to get into a religious thing and I don't want to do that. Okay. Well, you know my shtick, which is that we are the AI. We have two great stories, about the simulation and artificial general intelligence. In one story, man fears that some program we've given birth to will become self-aware, smarter than us, and will take over. In another story, there are genius simulators, and we live in their simulation, and we haven't realized that those two stories are the same story. In one case, we are the simulator; in another case, we are the simulated. And if you buy those and you put them together, we are the AGI, and whether or not we have simulators, we may be trying to wake up by learning our own source code. So this could be our Skynet moment, which is one of the reasons I have some issues around it. I think we'll talk about that, because... well, that's the issue of the emergent artist within the story. Just to get back to the point: okay, so now the key point is, the standard way we tell the story is that Einstein sets the canvas, and then we order all the stuff that we want, and then that paints the picture that is our universe. So you order the paint, you order the artist, you order the brushes, and that then, when you collide the two, gives you two separate origin stories. The canvas came from one place and everything else came from somewhere else. So what are the mathematical tools required to construct a consistent geometric theory? You know, make this concrete. Well, somehow you need to get three copies, for example, of generations with 16 particles each, right? And so the question would be, well, there's a lot of special personality in those symmetries.
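As a reference point for that count, and this is standard textbook bookkeeping rather than anything from the conversation, the 16 particles per generation are the Weyl spinors of the standard model, provided one includes a right-handed neutrino, which is also what lets a single generation fit into the 16-dimensional spinor representation of Spin(10) that comes up just below:

```latex
% One generation as 16 Weyl spinors under SU(3) x SU(2) x U(1)
% (textbook counting; includes a right-handed neutrino nu^c)
\begin{aligned}
Q     &= (3,2)_{+1/6}        &&\to 6 \quad \text{left-handed quarks}\\
u^c   &= (\bar{3},1)_{-2/3}  &&\to 3 \quad \text{(conjugate) right-handed up quarks}\\
d^c   &= (\bar{3},1)_{+1/3}  &&\to 3 \quad \text{(conjugate) right-handed down quarks}\\
L     &= (1,2)_{-1/2}        &&\to 2 \quad \text{left-handed leptons}\\
e^c   &= (1,1)_{+1}          &&\to 1 \quad \text{(conjugate) right-handed electron}\\
\nu^c &= (1,1)_{0}           &&\to 1 \quad \text{right-handed neutrino}
\end{aligned}
```

Adding the counts gives 6 + 3 + 3 + 2 + 1 + 1 = 16 per generation, repeated three times over.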
Where would they come from? So, for example, you've got what would be called grand unified theories, that sound like SU(5), the Georgi-Glashow theory. There's something that should be called Spin(10), but physicists insist on calling it SO(10). There's something called the Pati-Salam theory that tends to be called SU(4) cross SU(2) cross SU(2), which should be called Spin(6) cross Spin(4). I can get into all of these. What are they all accomplishing? They're all taking the known forces that we see and packaging them up to say: we can't get rid of the second origin story, but we can at least make that origin story more unified. So they're trying... grand unification is the attempt to do that. And that's a mistake, in your view? It's not a mistake. The problem is, it was born lifeless. When Georgi and Glashow first came out with the SU(5) theory, it was very exciting, because it could be tested in a South Dakota mine filled up with, like, I don't know, cleaning fluid or something like that. And they looked for proton decay and didn't see it, and then they gave up, because in that day, when your experiment didn't work, you gave up on the theory. It didn't come to us born of a fusion between Einstein and Bohr, you know, and that was kind of the problem: it had this weird parenting where it was just on the Bohr side. There was no Einsteinian contribution. Lex, how can I help you most? I'm trying to figure out what questions you want to ask so that you get the most satisfying answers. There's a bunch of questions I want to ask. I mean, one, I'm trying to sneak up on you somehow to reveal, in an accessible way, the nature of our universe. So I can just give you a guess, right? We have to be very careful that we're not claiming that this has been accepted. This is a speculation. But I will make the speculation that what I think you would want to ask me is: how can the canvas generate all the stuff that usually has to be ordered separately? All right, should we do that? Let's go there. Okay. So the first thing is that you have a concept in computers called technical debt. You're coding, and you cut corners, and you know you're going to have to do it right before the thing is safe for the world, but you're piling up some series of IOUs to yourself and your project as you're going along. So the first thing is, we can't figure out, if you have only four degrees of freedom, and that's what your canvas is, how do you get at least Einstein's world? Einstein says, look, it's not just four degrees of freedom; there need to be rulers and protractors to measure length and angle in the world. You can't just have a flabby four degrees of freedom. So the first thing you do is you create 10 extra variables, which is like: if we can't choose any particular set of rulers and protractors to measure length and angle, let's take the set of all possible rulers and protractors. And that would be called symmetric nondegenerate 2-tensors on the tangent space of the four-manifold X4. Now, because there are four degrees of freedom, you start off with four dimensions. Then you need four rulers for each of those different directions, so that's four; that gets us up to eight variables. And then between the four original variables there are six possible angles. So four plus four plus six is equal to 14.
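As a quick check on that arithmetic, in ordinary notation and independent of anything specific to the theory: a symmetric 2-tensor on an n-dimensional tangent space has n(n+1)/2 independent components, so for the four-manifold X4 the rulers-and-protractors count works out as

```latex
% dimension count for the space of rulers and protractors over X^4
\dim \;=\; \underbrace{4}_{\text{points of } X^4}
\;+\; \underbrace{\tfrac{4(4+1)}{2}}_{\text{symmetric 2-tensors}}
\;=\; 4 + \big(4\ \text{"rulers"} + 6\ \text{"angles"}\big) \;=\; 14 .
```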
So now you've replaced X4 with another space, which in the lecture I think I called U14, but I'm now calling Y14. This is one of the big problems of working on something in private: every time you pull it out, you sort of can't remember it, and you name something something new. Okay. So you've got a 14-dimensional world, which is the original four-dimensional world plus a lot of extra gadgetry for measurement. Yeah. And because you're not in the four-dimensional world, you don't have the technical debt. No, now you've got a lot of technical debt, because now you have to explain away a 14-dimensional world, which is a big... you're taking a huge advance on your paycheck, right? But don't more dimensions allow you more freedom? Maybe, but you have to get rid of them somehow, because we don't perceive them. So eventually you have to collapse it down to the thing that we perceive, or you have to sample a four-dimensional filament within that 14-dimensional world, known as a section of a bundle. Okay. So how do we get from the 14-dimensional world, where I imagine a lot of... oh, wait, wait, wait. Yep. You're cheating. The first question was, how do we get something from almost nothing? Like, how do we get the... If I've said that the who and the what, in the newspaper story that is a theory of everything, are bosons and fermions, then let's make the who the fermions and the what the bosons. Think of it as the players and the equipment for a game. Are we supposed to be thinking of actual physical things with mass or energy? Okay, so think about everything you see in this room. From chemistry, you know it's all protons, neutrons, and electrons. But from a little bit of late-1960s physics, we know that the protons and neutrons are all made of up quarks and down quarks. So everything in this room is basically up quarks, down quarks, and electrons, stuck together with the what, the equipment. Okay. Now, the way we see it currently is: we see that there are spacetime indices, which we would call spinors, that correspond to the who, that is, the fermions, the matter, the stuff, the up quarks, the down quarks, the electrons. And there are also 16 degrees of freedom that come from this space of internal quantum numbers. So in my theory, in 14 dimensions, there's no internal quantum number space that figures in. It's all just spinorial. So, spinors in 14 dimensions, without any festooning with extra linear-algebraic information. There's a concept of spinors which is natural if you have a manifold with length and angle, and Y14 is almost a manifold with length and angle. It's so close. In other words, because you're looking at the space of all rulers and protractors, maybe it's not that surprising that a space of rulers and protractors might come very close to having rulers and protractors on it itself. Like, can you measure the space of measurements? And you almost can. And a space that has length and angle, if it doesn't have a topological obstruction, comes with these objects called spinors. Now, spinors are the stuff of our world. We are made of spinors. They are the most important, really deep objects that I can tell you about. They were very surprising. What is a spinor? So famously, there are these weird things that require 720 degrees of rotation in order to come back to normal. And that doesn't make sense. And the reason for this is that there's a knottedness in our three-dimensional world that people don't observe.
And you know, you can famously see it by this Dirac string trick. So if you take a glass of water, imagine that this was a tumbler and I didn't want to spill any of it, the question is: if I rotate the cup 360 degrees without losing my grip on the base, and I can't go backwards, is there any way I can take a sip? And the answer is this weird motion, which is go over first and under second, and that's 720 degrees of rotation to come back to normal so that I can take a sip. Well, that weird principle, which is sometimes known as the Philippine wine glass dance, because waitresses in the Philippines apparently learn how to do this, that move defines, if you will, this hidden space that nobody knew was there, of spinors, which Dirac figured out when he took the square root of something called the Klein-Gordon equation, which I think had earlier work incorporated from Cartan and Killing and company in mathematics. So spinors are one of the most profound aspects of human existence. I mean, forgive me for the perhaps dumb questions, but would a spinor be the mathematical object that's the basic unit of our universe? When you start with a manifold, which is something like a donut or a sphere, a circle or a Möbius band, a spinor is usually the first wildly surprising thing that you find was hidden in your original purchase. So you order a manifold, and you didn't even realize: it's like buying a house and finding a panic room inside that you hadn't counted on. It's very surprising when you understand that spinors are running around on your spaces. Again, perhaps a dumb question, but we're talking about 14 dimensions and four dimensions. What is the manifold we're operating under? So in my case, it's proto-spacetime. It's before Einstein can slap rulers and protractors on spacetime. What do you mean by that? Sorry to interrupt: is spacetime the 4D manifold? Spacetime is a four-dimensional manifold with extra structure. What's the extra structure? It's called a semi-Riemannian or pseudo-Riemannian metric. In essence, there is something akin to a four-by-four symmetric matrix, which is equivalent to length and angle. So when I talk about rulers and protractors, or I talk about length and angle, or I talk about Riemannian or pseudo-Riemannian or semi-Riemannian manifolds, I'm usually talking about the same thing: can you measure how long something is and what the angle is between two different rays or vectors? So that's what Einstein gave us as his arena, his place to play, his canvas. So there's a bunch of questions I can ask here. But like I said, I'm working on this book, Geometric Unity for Idiots, and I think what would be really nice, as your editor, is to have beautiful visualizations, maybe even ones that people could try to play with, to reveal small little beauties about the way you're thinking about the score. Well, I usually use the Joe Rogan program for that. Sometimes I have him doing the Philippine wine glass dance. I had the Hopf fibration. Part of the problem is that most people don't know this language about spinors, bundles, metrics, gauge fields, and they're very curious about the theory of everything, but they have no understanding of even what we know about our own world. Is it a hopeless pursuit? So, like, even gauge theory, right? Just this... I mean, it seems to be very inaccessible. Is there some aspect of it that could be made accessible?
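As an aside on the 720-degree property described above, here is a minimal numerical sketch, my own illustration rather than anything from the conversation, of the standard fact behind it: spinors transform under SU(2), the double cover of the rotation group SO(3), so a 360-degree rotation multiplies a spinor by minus one and only a 720-degree rotation brings it back to itself.

```python
import numpy as np

# A spinor rotated by angle theta about the z-axis is acted on by the SU(2) element
# exp(-i * theta * sigma_z / 2) = cos(theta/2) * I - i * sin(theta/2) * sigma_z.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def spinor_rotation(theta):
    """SU(2) element implementing a rotation of a spinor by theta about z."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sigma_z

psi = np.array([1.0, 0.0], dtype=complex)  # a reference spinor

for degrees in (360, 720):
    rotated = spinor_rotation(np.radians(degrees)) @ psi
    print(degrees, np.round(rotated, 6))
# 360 degrees returns approximately -psi (the hidden sign flip); 720 degrees returns psi,
# which is the algebraic counterpart of the wine glass / belt trick.
```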
I mean, I could go to the board right there and give you a five minute lecture on gauge theory that would be better than the official lecture on gauge theory. You would know what gauge theory was. So it's possible to make it accessible, but nobody does. Like, in other words, you're going to watch, over the next year, lots of different discussions about quantum entanglement or, you know, the multiverse. Where are we now? Or, you know, many worlds, are they all equally real? Yeah. Right. I mean, yeah, that's okay. But you're not going to hear anything about the Hopf fibration, except if it's from me, and I hate that. Why can't you be the one? Well, because I'm going a different path. I think that we've made a huge mistake, which is we have things we can show people about the actual models. We can push out visualizations where they're not listening by analogy; they're watching the same thing that we're seeing. And as I've said to you before, this is like choosing to perform sheet music that hasn't been performed in a long time. Or, you know, the experts can't afford orchestras, so they just trade Beethoven symphonies as sheet music, and they go, oh wow, that was beautiful. But it's like, nobody heard anything. They just looked at the score. Well, that's how mathematicians and physicists trade papers and ideas: they write down the things that represent stuff. I want to at least close out the thought line that you started, which is how does the canvas order all of this other stuff into being, so I at least want to say some incomprehensible things about that, and then we'll have that much done. All right. And on that point, does it have to be incomprehensible? Do you know what the Schrodinger equation is? Yes. Do you know what the Dirac equation is? What does no mean? Well, my point is, you're going to have some feeling that you know what the Schrodinger equation is; as soon as we get to the Dirac equation, your eyes are going to get a little bit glazed, right? So now why is that? Well, the answer to me is that you want to ask me about the theory of everything, but you haven't even digested the theory of everything as we've had it since 1928, when Dirac came out with his equation. So for whatever reason, and this isn't a hit on you, you haven't been motivated enough in all the time that you've been on Earth to at least get as far as the Dirac equation. And this was very interesting to me: after I gave the talk in Oxford, New Scientist, who had done kind of a hatchet job on me to begin with, sent a reporter to come to the third version of the talk that I gave. And that person had never heard of the Dirac equation. So you have a person who's completely, professionally not qualified to ask these questions wanting to know, well, how does your theory solve new problems? And it's like, well, in the case of the Dirac equation... well, tell me about that, I don't know what that is. So then the point is, okay, I got it. You're not even caught up minimally to where we are now. And that's not a knock on you; almost nobody even knows where we are, almost nobody is. Yeah. But then how does it become my job to digest what has been available for, like, over 90 years? Well, to me, the open question is whether what's been available for over 90 years can be... whether there could be a blueprint of a journey that one takes to understand it, not to do that with you.
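For reference, since these two equations keep coming up in this stretch of the conversation, here they are in their standard textbook forms (the Dirac equation in natural units); nothing here is specific to geometric unity.

```latex
i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi
\qquad \text{(Schr\"odinger equation, 1926)}

\left(i\gamma^{\mu}\partial_{\mu} - m\right)\psi = 0
\qquad \text{(Dirac equation, 1928)}
```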
And one of the things I think I've been relatively successful at, for example: you know, when you ask other people what gauge theory is, you get these very confusing responses, and my response is much simpler. It's, oh, it's a theory of differentiation where, when you calculate the instantaneous rise over run, you measure the rise not from a flat horizontal but from a custom endogenous reference level. What do you mean by that? It's like, okay, and then I do this thing with Mount Everest, which is: Mount Everest is how high? Then they give the height. I say, above what? Then they say, sea level. And I say, which sea is that in Nepal? Like, oh, I guess there isn't a sea, because it's landlocked. It's like, okay, well, what do you mean by sea level? Oh, there's this thing called the geoid I'd never heard of. Oh, that's the reference level. That's a custom reference level that we imported. So all sorts of people have remembered the exact height of Mount Everest without ever knowing what it's a height from. Well, in this case, in gauge theory, there's a hidden reference level where you measure the rise in rise over run to give the slope of the line. What if you have different concepts of where that rise should be measured from, that vary within the theory, that are endogenous to the theory? That's what gauge theory is. Okay. We have a video here, right? Yeah. Okay. Okay. I'm going to use my phone. If I want to measure my hand and its slope, this is my attempt to measure it using standard calculus. In other words, the reference level is apparently flat, and I measure the rise above that phone using my hand. Okay. If I want to use gauge theory, it means I can do this, or I can do that, or I can do this, or I can do this, or I could do what I did from the beginning. Okay. At some level, that's what gauge theory is. Now that is... no, I've never heard anyone describe it that way. So while the community may say, well, who is this guy and why does he have the right to talk in public, I'm waiting for somebody to jump out of the woodwork and say, you know, Eric's whole shtick about rulers and protractors leading to a derivative, derivatives are measured as rise over run above a reference level, the reference levels don't fit together... like, I go through this whole shtick in order to make it accessible. I've never heard anyone say it. I'm trying to. Prometheus would like to discuss fire with everybody else. All right. I'm going to just say one thing to close out the earlier line, which is what I think we should have continued with. When you take the naturally occurring spinors, the unadorned spinors, the naked spinors, not on this 14 dimensional manifold, but on something very closely tied to it, which I've called the chimeric tangent bundle, that is the object which stands in for the thing that should have had length and angle on it, but just missed. Okay. When you take that object and you form spinors on that and you don't adorn them, so you're still in the single origin story, you get very large spinorial objects upstairs on this 14 dimensional world, Y 14, which is part of the observerse. When you pull that information back from Y 14 down to X four, it miraculously looks like the adorned spinors, the festooned spinors, the spinors that we play with in ordinary reality. In other words, the 14 dimensional world looks like a four dimensional world plus a 10 dimensional complement.
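Before the spinor story continues below, here is a tiny numerical sketch of the rise-over-run-above-a-custom-reference-level picture of gauge theory that Eric just demonstrated with the phone. The reference function here is made up purely for illustration; the only point is that the measured slope depends on which reference level you choose.

```python
import numpy as np

# Ordinary calculus: slope is rise over run, with the rise measured from a flat horizontal.
# The "gauge" version described above: measure the rise from a custom reference level that is
# allowed to vary from point to point (an arbitrary sine curve here, chosen only for illustration).

x = np.linspace(0.0, 1.0, 101)
f = 2.0 * x + 1.0                      # the thing being measured: a line of slope 2
reference = 0.3 * np.sin(6.0 * x)      # a custom, position-dependent reference level

flat_slope = np.gradient(f, x)                # rise above the flat horizontal: about 2 everywhere
gauged_slope = np.gradient(f - reference, x)  # rise above the chosen reference: varies point to point

print(flat_slope[:3])
print(gauged_slope[:3])
```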
So 10 plus four equals 14. That 10 dimensional complement, which is called a normal bundle, generates spin properties, internal quantum numbers, that look like the things that give our particles personality, that make, let's say, up quarks and down quarks charged, you know, negative one third or plus two thirds, that kind of stuff, or whether or not, you know, some quarks feel the weak force and other quarks do not. So X four generates Y 14, Y 14 generates something called the chimeric tangent bundle, the chimeric tangent bundle generates unadorned spinors, and the unadorned spinors get pulled back from 14 down to four, where they look like adorned spinors. And we have the right number of them. You thought you needed three; you only got two, but then something else that you'd never seen before broke apart on this journey, and it broke into another copy of the thing that you already have two copies of. One piece of that thing broke off. So now you have two generations plus an imposter third generation, which is... I don't know why we never talk about this possibility in regular physics. And then you've got a bunch of stuff that we haven't seen, which has descriptions. So people always say, does it make any falsifiable predictions? Yes, it does. It says that the matter that you should be seeing next has particular properties that can be read off, like weak isospin, weak hypercharge, like the responsiveness to the strong force. The one I can't tell you is what energy scale it would happen at. So you can't say whether those characteristics can be detected with current experiments? It may be that somebody else can. I'm not a physicist, I'm not a quantum field theorist; I don't know how you would do that. The hope for me is that there's some simple explanation for all of it. Like, should we have a drink? You're having fun. No, I'm trying to have fun with you. There's a bunch of fun things to talk about here. Anyway, that was how I got what I thought you wanted, which is, if you think about the fermions as the artists and the bosons as the brushes and the paint, what I told you is that's how we get the artists. What are the open questions for you in this? What were the challenges? So you're not done. Well, there are things that I would like to have in better order. So a lot of people will say, see, if you're going to do this, you have to... See, the reason I hesitate on this is I just have a totally different view than the community. So for example, I believe that general relativity began in 1913 with Einstein and Grossmann. Now that was the first of like four major papers in this line of thinking. To most physicists, general relativity happened when Einstein produced a divergence free gradient, which turned out to be the gradient of the so called Hilbert or Einstein-Hilbert action. And from my perspective, that wasn't it. It began when Einstein said, look, this is about differential geometry, and the final answer is going to look like a curvature tensor on one side and matter and energy on the other side. And that was enough. And then he published a wrong version of it where it was the Ricci tensor, not the Einstein tensor. Then he corrected the Ricci tensor to make it into the Einstein tensor, then he corrected that to add a cosmological constant. I can't stand that the community thinks in those terms.
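To make that sequence concrete, the three stages Eric is describing look roughly like this in standard general relativity (none of this is specific to his proposal):

```latex
R_{\mu\nu} = \kappa\,T_{\mu\nu}
\quad\longrightarrow\quad
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \kappa\,T_{\mu\nu}
\quad\longrightarrow\quad
G_{\mu\nu} + \Lambda\,g_{\mu\nu} = \kappa\,T_{\mu\nu},
\qquad \kappa = \frac{8\pi G}{c^{4}}
```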
There are some things about which, like, there's a question about which contraction do I use. There's an Einstein contraction, there's a Ricci contraction; they both go between the same spaces. I'm not sure what I should do. I'm not sure which contraction I should choose. This is called a shiab operator, for ship in a bottle, in my stuff. You have this big platform in many ways that inspires people's curiosity about physics and mathematics, and I'm one of those people. But then you start using a lot of words that I don't understand, or, like, I might know them, but I don't understand them. And what's unclear to me is whether I'm supposed to be listening to those words, or if this is one of those technical things that's intended for a very small community, or if I'm supposed to actually take those words and start a multi year study... not a serious study, but the kind of study you do when you're interested in learning about machine learning, for example, or any kind of discipline. That's where I'm a little bit confused. You speak beautifully about ideas, you often reveal the beauty in math, in geometry, and I'm unclear on what are the steps I should be taking. I'm curious: how can I explore? How can I play with something? How can I play with these ideas and enjoy the beauty of, not necessarily understanding the depth of the theory that you're presenting, but start to share in the beauty, as opposed to sharing and enjoying the beauty of just the way, the passion, with which you speak, which is in itself fun to listen to, but also starting to be able to understand some aspects of this theory that I can enjoy too, and start to build an intuition of what the heck we're even talking about? Because you're basically saying we need to throw a lot of our ideas, our views of the universe, out. And I'm trying to find accessible ways in, not in this conversation. No, I appreciate that. So one of the things that I've done is I've picked on one paragraph from Edward Witten, and I said, this is the paragraph. If I could only take one paragraph with me, this is the one I'd take. And it's almost all in prose, not equations. And he says, look, this is our knowledge of the universe at its deepest level. And he was writing this during the 1980s. And he has three separate points that constitute our deepest knowledge, and those three points refer to equations: one to the Einstein field equation, one to the Dirac equation, and one to the Yang-Mills-Maxwell equation. Now, one thing I would do is take a look at that paragraph and say, okay, what do these three lines mean? Like, it's a finite amount of verbiage. You can write down every word that you don't know and ask what each of them means. Yes. There's a beautiful wall in Stony Brook, New York, built by someone who I know you will interview, named Jim Simons. He's not the artist, but he's the guy who funded it, world's greatest hedge fund manager. And on that wall are contained the three equations that Witten refers to in that paragraph. And so that is the transmission from the paragraph, or the graph, to the wall. Now that wall needs an owner's manual, which Roger Penrose has written, called The Road to Reality. And let's call that the tome.
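The Einstein and Dirac equations appear above; the third item in that paragraph, the Yang-Mills equation (of which Maxwell's equations are the abelian special case), can be written schematically as:

```latex
d_{A}\!\star F_{A} = J ,
\qquad \text{or in components} \qquad
D_{\mu}F^{\mu\nu\,a} = J^{\nu\,a} .
```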
So this is the subject of the so called Graph, Wall, Tome project that is going on in our Discord server and our general group around The Portal community, which is: how do you take something that purports, in one paragraph, to say what the deepest understanding man has of the universe in which he lives; it's memorialized on a wall, which nobody knows about, which is an incredibly gorgeous piece of art; and that was written up in a book which has been written for no man. Right. Maybe it's for a woman, I don't know, but no one should be able to read this book, because either you're a professional and you know a lot of this book, in which case it's kind of a refresher to see how Roger thinks about these things, or you don't even know that this book is a self contained invitation to understanding our deepest nature. So I would say, find yourself in the graph, wall, tome transmission sequence and join the Graph, Wall, Tome project if that's of interest. Okay, beautiful. Now, just to linger on it a little longer, what kind of journey do you see geometric unity taking? I don't know. I mean, that's the thing. First of all, the professional community has to get very angry and outraged, and they have to work through their feeling that this is nonsense, this is bullshit, or, like, no, wait a minute, this is really cool. Actually, I need some clarification over here. So there's going to be some sort of weird coming back together process. Are you already hearing murmurings of that? It was very funny. Officially I've seen very little. So it's perhaps happening quietly. Yeah. You often talk about how we need to get off this planet. Yep. Can I try to sneak up on that by asking, what in your view is the difference, the gap, between the science of it, the theory, and the actual engineering of building something that leverages the theory to do something? Like, how big is that gap? We don't know. I mean, if you have 10 extra dimensions to play with, that are the rulers and protractors of the world themselves, can you gain access to those dimensions? Do you have a hunch? So I don't know. I don't want to get ahead of myself, because you have to appreciate, I can have hunches and I can jaw off, but one of the ways that I'm succeeding in this world is to not bow down to my professional communities, nor to ignore them. Like, I'm actually in the middle of a world where I'm not going to ignore them; I'm actually interested in the criticism. I just want to denature it so that it's not mostly interpersonal and irrelevant. I believe that they don't want me to speculate, and I don't need to speculate about this. I can simply say I'm open to the idea that it may have engineering prospects, and it may be a death sentence. We may find out that there's not enough new here, that even if it were right, there would be nothing new to do. What do you mean by death sentence, that there would not be exciting breakthroughs? Wouldn't it be terrible if you couldn't... like, you can do new things in an Einsteinian world that you couldn't do in a Newtonian world, right? You know, like you have twin paradoxes or Lorentz contraction of length, or any one of a number of new cool things that happen in relativity theory that didn't happen for Newton. What if there wasn't new stuff to do at the next and final level? Yeah, that would be quite sad. Let me ask a silly question, but I'll say it with a straight face. Impossible. So let me mention Elon Musk.
What are your thoughts about... you're more on the physics theory side of things, he's more on the physics engineering side of things. In terms of SpaceX efforts, what do you think of his efforts to get off this planet? Well, I think he's the other guy who's semi serious about getting off this planet. I think there are two of us who are semi serious about getting off the planet. What do you think about his methodology and yours, when you look at them? I don't want to be against you, because, like, I was so excited that your top video was Ray Kurzweil, and then I did your podcast and we had some chemistry, so it zoomed up, and I thought, okay, I'm going to beat Ray Kurzweil. So just as I'm coming up on Ray Kurzweil, you're like, and now, Lex Fridman special: Elon Musk. And he blew me out of the water. So I don't want to be petty about it. I want to say that I don't, but I am. Okay. But here's the funny part. He's not taking enough risk. Like, he's trying to get us to Mars. Imagine that he got us to Mars, the moon, and we'll throw in Titan. Nowhere near good enough; the diversification level is too low. Now, there's a compatibility... First of all, I don't think Elon is serious about Mars. I think Elon is using Mars as a narrative, as a story, as a dream to make the moon jealous. I think he's using it as a story to organize us, to reacquaint ourselves with our need for space, our need to get off this planet. It's a concrete thing. He's shown that... many people think that he's shown that he's the most brilliant and capable person on the planet. I don't think that's what he showed. I think he showed that the rest of us have forgotten our capabilities. And so he's like the only guy who has still kept the faith and is like, what's wrong with you people? So you think the lesson we should draw from Elon Musk is that there's a capable person within a lot of us? Elon makes sense to me. In what way? He's doing what any sensible person should do. He's trying incredible things, and he's partially succeeding, partially failing, to try to solve the obvious problems. You know, when he comes up with things like, I got it: we'll come up with a battery company, but batteries aren't sexy, so we'll make a car around it. It's like, great, you know. Or any one of a number of things. Elon is behaving like a sane person, and I view everyone else as insane. And my feeling is that we really have to get off this planet. We have to get out of this. We have to get out of the neighborhood. To linger on that a little bit, do you think that's a physics problem or an engineering problem? I think it's a cowardice problem. I think that we're afraid, that we had .400 hitters of the mind, like Einstein and Dirac, and that era is done, and now we're just sort of copy editors. So is some of it money? Like, if we become brave enough to go outside the solar system, can we afford to, financially? Well, I think that that's not really the issue. The issue is, look what Elon did well: he amassed a lot of money, and then he plowed it back in, and he spun the wheel, and he made more money, and now he's got F you money. Now the problem is that a lot of the people who have F you money are not people whose middle finger you ever want to see. I want to see Elon's middle finger. I want to see what he does with that. Or like, when you say, fuck it, I'm going to do the biggest possible... Do whatever the fuck you want, right? Fuck you.
Fuck anything that gets in his way that he can afford to push out of his way. And you're saying he's not actually even doing that enough. No... look, Elon's doing fine with his money. I just want him to enjoy himself, have the most, you know, Dionysian... But you're saying Mars is playing it safe. He doesn't know how to do anything else. He knows rockets, and he might know some physics at a fundamental level. Yeah. I guess, okay, let me just go right back to: how much physics do you really... how many brilliant breakthrough ideas on the physics side do you need to get off this planet? I don't know. And I don't know whether, in my most optimistic dream, my stuff gets us off the planet, but it's hope. It's hope that there's a more fundamental theory that we can access, that we don't need... you know, whose elegance and beauty will suggest that this is probably the way the universe goes. Like, you have to say this weird thing, which is this I believe, and this I believe is a very dangerous statement, but this I believe: I believe that my theory points the way. Now, Elon might or might not be able to access my theory. I don't know. I don't know what he knows. But keep in mind, why are we all so focused on Elon? It's really weird. It's kind of creepy, too. He's just the person who's asking the obvious questions and doing whatever he can, but he makes sense to me. You see, Craig Venter makes sense to me. Jim Watson makes sense to me. But we're focusing on Elon. Because he somehow is rare. Well, that's the weird thing. Like, we've come up with a system that eliminates all the Elons from our pipeline, and Elon somehow snuck through when they weren't quality adjusting everything, you know? And this idea of DISC, right, the distributed idea suppression complex. Is that what's bringing the Elons of the world down? You know, it's so funny. It's like, he's asking Joe Rogan, like, is that a joint, you know? It's like, well, what will happen if I smoke it? What will happen to the stock price? What will happen if I scratch myself in public? What will happen if I say what I think about Thailand or COVID or who knows what? And everybody's like, don't say that, say this, go do this, go do that. Well, it's crazy making, it's absolutely crazy making. And if you think about what we put people through, we need to get people who can use F you money, the F you money they need to insulate themselves from all of the people who know better. Because my nightmare is, why did we only get one Elon? What if we were supposed to have thousands and thousands of Elons? And the weird thing is, like, this is all that remains. You're looking at, like, Obi-Wan and Yoda, and it's like, this is all that's left after Order 66 has been executed. And that's the thing that's really upsetting to me: we used to have Elons five deep, and then we could talk about Elon in the context of his cohort. But this is like, if you were to see a giraffe in the Arctic with no trees around, you'd think, why the long neck? What a strange sight, you know? How do we get more Elons? How do we change this? So, we know MIT and Harvard, so maybe returning to our previous conversation, my sense is that the Elons of the world are supposed to come from MIT and Harvard, right?
And how do you change... Let's think of one that MIT sort of killed. Have any names in mind? Aaron Swartz leaps to my mind. Yeah. Okay. Are we, MIT, supposed to shield the Aaron Swartzes from, I don't know, journal publishers? Or are we supposed to help the journal publishers, so that we can throw 35 year sentences in his face, or whatever it is that we did that depressed him? Okay. So here's my point. Yeah. I want MIT to go back to being the home of Aaron Swartz. And if you want to send Aaron Swartz to a state where he's looking at 35 years in prison or something like that, you are my sworn enemy. You are not MIT. Yeah. You are the traitorous, irresponsible, middlebrow, pencil pushing, green eyeshade fool that needs to not be in the seat at the presidency of MIT, period, the end. Get the fuck out of there and let one of our people sit in that chair. And the thing that you've articulated is that the people in those chairs are not the way they are because they're evil or somehow morally compromised; it's the distributed nature of it, it's some kind of aspect of the system, that people who wed themselves to the system adapt every instinct. And the fact is that they're not going to be on Joe Rogan smoking a blunt. Let me ask a silly question. Do you think institutions generally just tend to become that? No. We get some of the institutions. We get Caltech. Here's what we're supposed to have: we're supposed to have Caltech. We're supposed to have Reed. We're supposed to have Deep Springs. We're supposed to have MIT. We're supposed to have a part of Harvard. And when the sharp elbowed crowd comes after the sharp minded crowd, we're supposed to break those sharp elbows and say, don't come around here again. So what are the weapons that the sharp minds are supposed to use in our modern day to reclaim MIT? What's the future? Are you kidding me? First of all, assume that this is being seen at MIT. Okay. Hey everybody, try to remember who you are. You're the guys who put the police car on top of the great dome. You guys came up with the great breast of knowledge. You created a Tetris game in the Green Building. Now, what is your problem? Is your problem that they killed one of your own? You should make their life a living hell. You should be the ones who keep the memory of Aaron Swartz alive, and all of those hackers and all of those mutants. You know, it's like, it's either our place or it isn't. And if we have to throw 12 more pianos off of the roof, right? If Harold Edgerton was taking those photographs, you know, with slow mo back in the forties, if Noam Chomsky is on your faculty, what the hell is wrong with you kids? You are the most creative and insightful people, and you can't figure out how to defend Aaron Swartz? That's on you guys. So some of that is giving more power to the young, like you said. No, it's taking power from the feeble and the middlebrow. Yeah. But how do you... what is the mechanism? To me, I don't know, you have some nine volt batteries, copper wire... do you have a capacitor? I tend to believe you have to create an alternative, and make the alternative so much better that it makes MIT obsolete unless they change, and that's what forces change. So as opposed to somehow... okay, so use projection mapping. What's projection mapping? Where you take some complicated edifice and you map all of its planes.
And then you actually project some unbelievable graphics, reskinning a building, let's say at night. That's right. Yeah. Okay. So you want to do some graffiti art with... you basically want to hack the system. No, I'm saying, look, listen to me. Yeah. We're smarter than they are. And you know what they say? They say things like, okay, I think we need some geeks, get me two PhDs. If you treat PhDs like that, that's a bad move, because PhDs are capable, and we act like our job is to peel grapes for our betters. Yeah, that's a strange thing. And you speak about it very eloquently: how we treat basically the greatest minds in the world at their prime, which is PhD students. We pay them nothing. I'm done with it. Yeah. Right. We've got to take what's ours. So take back MIT. Become ungovernable. Become ungovernable. And by the way, when you become ungovernable, don't do it by throwing food, don't do it by pouring salt on the lawn like a jerk. Do it through brilliance. Because what you, Caltech and MIT, can do, and maybe Rensselaer Polytechnic or Worcester Polytech, I don't know, Lehigh... God damn it, what's wrong with you technical people? You act like you're a servant class. It's unclear to me how you reclaim it except with brilliance, like you said. But to me, the way you reclaim it with brilliance is to go outside the system. Aaron Swartz came from the Elon Musk class. What are you guys going to do about it? Right. The super capable people need to flex, need to be individual. They need to stop giving away all their power to, you know, a zeitgeist or a community or this or that. You're not indoor cats. You're outdoor cats. Go be outdoor cats. Do you think we're going to see this kind of... You were asking me, you know, before, like, what about the World War II generation? Right. What I'm trying to say is that there's a technical revolt coming. Here's... you want to talk about it, but I'm trying to lead it. I'm trying to... No, you're not trying to lead it. I'm trying to get a blueprint here. All right, Lex. Yeah. How angry are you about our country pretending that you and I can't actually do technical subjects, so that they need an army of kids coming in from four countries in Asia? It's not about the four countries in Asia. It's not about those kids. It's about lying about us, that we don't care enough about science and technology, that we're incapable of it. As if we don't have Chinese and Russians and Koreans and Croatians. Like, we've got everybody here. The only reason you're looking outside is that you want to hire cheap people for the family business, because you don't want to pass the family business on. And you know what? You didn't really build the family business. It's not yours to decide. You the boomers and you the silent generation, you did your bit, but you also fouled a lot of stuff up, and you're custodians. You are caretakers. You were supposed to hand something on. What you did instead was to gorge yourselves on cheap foreign labor, which you then held up as being much more brilliant than your own children, which was never true. But I'm trying to understand how we create a better system without anger, without revolution, not by kisses and hugs, but... I don't understand, within MIT, what the mechanism of building a better MIT is. We're not going to pay Elsevier. Aaron Swartz was right. JSTOR is an abomination. But who within MIT, who within institutions, is going to do that?
When, just like you said, the people who are running the show are more senior... I don't know, get Frank Wilczek to speak out. So it's basically individuals that step up. I mean, one of the surprising things about Elon is that one person can inspire so much. He's got academic freedom. It just comes from money. I don't agree with that, that you think it's money. Okay. So, yes, certainly. Sorry, and testicles. Yes. I think that testicles are more important than money. Or guts. I think I do agree with you. You speak about this a lot, that because the money in academic institutions has been so constrained, people are misbehaving in horrible ways. Yes. But I don't think that if we reverse that and give a huge amount of money, people will all of a sudden behave well. I think it also takes guts. No, you need to give people security. Security. Yes. Like, you need to know that you have a job on Monday when, on Friday, you say, I'm not so sure I really love diversity and inclusion. And somebody is like, wait, what? You don't love diversity? We had a statement on diversity and you wouldn't sign it. Are you against the inclusion part, or are you against diversity? Do you just not like people who aren't like you? Like, actually, that has nothing to do with anything. You're making this into something that it isn't. I don't want to sign your goddamn stupid statement, and get out of my lab, right? Get out of my lab. It all begins from the middle finger. Get out of my lab. The administrators need to find other work. Yeah. Listen, I agree with you, and I hope to seek your advice and wisdom as we change this, because I'd love to see... I will visit you in prison, if that's what you're asking. I have no... I think prison is great. You get a lot of reading done and good working out. Well, let me ask about something I brought up before: the Nietzsche quote, beware that when fighting monsters, you yourself do not become a monster, for when you gaze long into the abyss, the abyss gazes into you. Are you worried that your focus on the flaws in the system that we've just been talking about has damaged your mind, or the part of your mind that's able to see the beauty in the world, in the system? That because you have so sharply been able to see the flaws in the system, you can no longer step back and appreciate its beauty? Look, I'm the one who's trying to get the institutions to save themselves by getting rid of their inhabitants but leaving the institution, like a neutron bomb that removes the unworkable leadership class but leaves the structures. So the leadership class is really the problem. The leadership class is the problem. But the individuals, like the professors, the individual scholars... No, the professors are going to have to go back into training to remember how to be professors. Like, people are cowards at the moment, because if they're not cowards, they're unemployed. Yeah, that's one of the disappointing things I've encountered. To me, tenure... But nobody has tenure now. Whether they do or not, they certainly don't have the kind of character and fortitude that I was hoping to see. But they'd be gone. See, you're dreaming about the people who used to live at MIT. You're dreaming about the previous inhabitants of your university. And if you look at somebody like, you know, Isadore Singer... he's very old, I don't know what state he's in, but that guy was absolutely the real deal. And if you look at Noam Chomsky, tell me that Noam Chomsky has been muzzled. Right? Yeah.
Now, what I'm trying to get at is, you're talking about younger, energetic people, but those people... Like, when I say something like, I'm for inclusion and I'm for diversity, but I'm against Diversity and Inclusion, TM, like the movement. Well, I couldn't say that if I was a professor. Oh my God, he's against our sacred document. Okay. Well, in that kind of a world, do you want to know how many things I don't agree with you on? Like, we could go on for days and days and days, all the nonsense that you've parroted inside of the institution. Any sane person has no need for it. They have no want or desire. Do you think you have to have some patience for nonsense when many people work together in a system? How long has string theory gone on for, and how long have I been patient? Okay. So you're talking about... There's a limit to patience. You're talking about, like, 36 years of modern nonsense in string theory. So you can do, like, eight to ten years, but not more. I can do 40 minutes. This is 36 years. Well, you've done that over two hours already. No, but I appreciate it. But it's been 36 years of nonsense since the anomaly cancellation in string theory. It's like, what are you talking about, patience? I mean, Lex, you're not even acting like yourself. You're trying to stay in the system. I'm not trying... I'm trying to see if perhaps... So my hope is that the system just has a few assholes in it, which you highlight, and the fundamentals of the system are not broken. Because if the fundamentals of the system are broken, then I just don't see a way for MIT to succeed. Like, I don't see how young people take over MIT. I don't see how... By inspiring us. You know, the great part about being at MIT, like, when you saw the genius in these pranks, the heart, the irreverence... We were talking about Tom Lehrer the last time. Tom Lehrer was as naughty as the day is long. Agreed? Agreed. Was he also a genius? Was he well spoken? Was he highly cultured? He was so talented, so intellectual, that he could just make fart jokes morning, noon and night. Okay. Well, in part, the right to make fart jokes, the right to, for example, put a functioning phone booth that was ringing on top of the great dome at MIT, has to do with: we are such badasses that we can actually do this stuff. Well, don't tell me about it anymore. Go break the law. Go break the law in a way that inspires us and makes us not want to prosecute you. Break the law in a way that lets us know that you're calling us out on our bullshit, that you're filled with love, and that our technical talent has not gone to sleep, it's not incapable. And if the idea is that you're going to dig a moat around the university and fill it with tiger sharks, that's awesome, because I don't know how you're going to do it. But if you actually manage to do that, I'm not going to prosecute you under reckless endangerment. That's beautifully put. I hope they'll listen. I hope young people at MIT will take over in this kind of way. In the introduction to your podcast episode on Jeffrey Epstein, you give, to me, a really moving story, but unfortunately for me, too brief, about your experience with a therapist and a lasting terror that permeated your mind. Can you go there? Can you tell...? I don't think so. I mean, I appreciate what you're saying. I said it obliquely. I said enough. There are bad people who cross our paths, and the current vogue is to say, oh, I'm a survivor, I'm a victim, I can do anything I want.
This is a broken person, and I don't know why I was sent to a broken person as a kid. And to be honest with you, I also felt like, in that story, I say that I was able to say no, and this was like the entire weight of authority, and he was misusing his position, and I was also able to say no. What I couldn't say no to was having him re-inflicted in my life. Right, so you were sent back a second time. I tried to complain about what had happened, and I tried to do it in a way that did not immediately cause horrific consequences to both this person and myself, because we don't have the tools to deal with sexual misbehavior. We have nuclear weapons; we don't have any way of saying, this is probably not a good place or a role for you at this moment as an authority figure, and something needs to be worked on. So in general, when we see somebody who is misbehaving in that way, our immediate instinct is to treat the person as Satan, and we understand why: we don't want our children to be at risk. Now, I personally believe that I fell down on the job and did not call out the Jeffrey Epstein thing early enough, because I was terrified of what Jeffrey Epstein represents, and this recapitulated the old terror of trying to tell the world this therapist is out of control. And when I said that, the world responded by saying, well, you have two appointments booked and you have to go for the second one. So I got re-inflicted into the office of this person, who was now convinced that I was about to tear down his career and his reputation, and might have been on the verge of suicide for all I know. I don't know. But he was very, very angry, and he was furious with me that I had breached a sacred confidence of his office. What kind of ripple effects has that had on the rest of your life? The absurdity and the cruelty of that... I mean, there's no sense to it. Well, see, this is the thing people don't really grasp, I think. There's an academic who I got to know many years ago named Jennifer Freyd, who has a theory of betrayal, which she calls institutional betrayal. And her gambit is that when you are betrayed by an institution that has sort of a fiduciary or parental obligation to take care of you, you find yourself in a far different situation with respect to trauma than if you were betrayed by somebody who's a peer. And so I think that in my situation, I kind of repeat a particular dynamic with authority. I come in not following all the rules, trying to do some things, not trying to do others, blah, blah, blah, and then I get into a weird relationship with authority. And so I have more experience with what I would call institutional betrayal. Now, the funny part about it is that when you don't have masks or PPE in an influenza-like pandemic, and you're missing ICU beds and ventilators, that is ubiquitous institutional betrayal. So I believe that, in a weird way, I was very early to the idea of, and this is the really hard concept, pervasive or otherwise universal institutional betrayal, where it's all of the institutions. You can count on any hospital to not charge you properly for what their services are. You can count on no pharmaceutical company to produce the drug that will be maximally beneficial to the people who take it. You know that your financial professionals are not simply working in your best interest. And that issue had to do with the way in which growth left our system.
So I think that the weird thing is that this first institutional betrayal by a therapist left me very open to the idea of, okay, well, maybe the schools are bad. Maybe the hospitals are bad. Maybe the drug companies are bad. Maybe our food is off. Maybe our journalists are not serving journalistic ends. And that was what allowed me to go all the distance and say, huh, I wonder if our problem is that something is causing all of our sensemaking institutions to be off. That was the big insight, and then tying that to a single cause: what if it's just about growth? They were all built on growth, and now we've promoted people who are capable of keeping quiet that their institutions aren't working. So we've promoted the privileged silent aristocracy, the people who can be counted upon not to mention a fire when a raging fire is tearing through a building. But nevertheless, how big of a psychological burden is that? It's huge. It's terrible. It's crushing. It's very comforting to be within the parental... I mean, I don't know. I treasure... I mean, we were just talking about MIT. I can intellectualize and agree with everything you're saying, but there's a comfort, a warm blanket, of being within the institution. Up until Aaron Swartz, let's say. In other words, now, if I look at the provost and the president as mommy and daddy: you did what to my big brother? You did what to our family? You sold us out in which way? What secrets left for China? You hired which workforce? You did what to my wages? You took this portion of my grant for what purpose? You just stole my retirement through a fringe rate. What did you do? But can you still... I mean, the thing about this view you have is it often turns out to be sadly correct. Well, this is the thing. But let me just, in this silly, hopeful thing: do you still have hope in institutions? Can you, within your... psychologically, I'm referring, not intellectually, because you have to carry this burden. Can you still have a hope within you? When you sit at home alone, as opposed to seeing the darkness within these institutions, seeing a hope? Well, but this is the thing. I want to confront, not for the purpose of a dust up... I believe, for example, if you've heard episode 19, that the best outcome is for Carol Greider to come forward, as we discussed in episode 19, and say, you know what, I screwed up. He did call. He did suggest the experiment. I didn't understand that it was his theory that was producing it. I was slow to grasp it. But my bad. And I don't want to pay for this bad choice on my part, let's say, for the rest of my career. I want to own up, and I want to help make sure that we do what's right with what's left. And that's one little case within the institution that you would like to see made right. I would like to see MIT very clearly come out and say, Margot O'Toole was right when she said David Baltimore's lab here produced some stuff that was not reproducible in Thereza Imanishi-Kari's research. I want to see the courageous people. I would like to see the Aaron Swartz wing of the computer science department. Yeah. No, let's think about it. Wouldn't that be great, if we said, you know, an injustice was done and we're going to right that wrong, just as if this was Alan Turing? Which I don't think they've righted that wrong. Well, then let's have the Turing-Swartz wing. They're starting a new college of computing. Wouldn't it be wonderful to call it the Turing-Swartz wing?
I would like to have the Madame Wu wing of the physics department. And I'd love to have the Emmy Noether statue in front of the math department. I mean, like, you want to get excited about actual diversity and inclusion? Yeah. Well, let's go with our absolute best people who never got theirs, because there is structural bigotry, you know? But if we don't actually start celebrating the beautiful stuff that we're capable of, when we're handed heroes and we fumble them into the trash, what the hell? I mean, Lex, this is such nonsense. We just need to pull our heads out. You know, on everyone's cecum should be tattooed: if you can read this, you're too close. Beautifully put. And I'm a dreamer, just like you. So I don't see as much of the darkness, genetically or due to my life experience, but I do share the hope for MIT, the institution that we care a lot about. We both do. Yeah. And Harvard, an institution I don't give a damn about, but you do. So... I love Harvard. I'm just kidding. I love Harvard, but Harvard and I have a very difficult relationship. And part of... you know, when you love a family that isn't working, I don't want to trash it. I didn't bring up the name of the president of MIT during the Aaron Swartz period. It's not vengeance. I want the rot cleared out. I don't need to go after human beings. Yeah. Just like you said with the DISC formulation, the individual human beings don't necessarily carry it. It's those chairs that are so powerful, the chairs in which they sit. It's the chairs, not the humans. Not the humans. Without naming names, can you tell the story of your struggle during your time at Harvard, maybe in a way that tells the bigger story of the struggle of young, bright minds that are trying to come up with big, bold ideas within the institutions that we're talking about? You can start. I mean, in part, it starts with coffee with a couple of Croatians in the math department at MIT. And we used to talk about music and dance and math and physics and love and all this kind of stuff, as Eastern Europeans love to, and I ate it up. And my friend Gordon, who was an instructor in the MIT math department when I was a graduate student at Harvard, said to me, and I'm probably gonna do a bad version of her accent, but here we go: will I see you tomorrow at the secret seminar? And I said, what secret seminar? Eric, don't joke. I said, I'm not used to this style of humor. Then she says, Eric, the secret seminar that your advisor is running. I said, what are you talking about? Ha ha ha, you know, your advisor is running a secret seminar on this aspect... I think it was, like, the Chern-Simons invariant. I'm not sure what the topic was again, but she gave me the room number and the time, and she was not cracking a smile. I've never known her to make this kind of a joke. And I thought this was crazy. And I wasn't trying to have an advisor. I didn't want an advisor, but people said you have to have one, so I took one. And I went to this room, like, 15 minutes early, and there was not a soul inside it. It was outside of the math department, and it was still in the same building, the Science Center at Harvard. And I sat there and I let five minutes go by, I let seven minutes go by, 10 minutes go by. There's nobody. I thought, okay, so this was all an elaborate joke. And then, like three minutes to the hour, this graduate student walks in and sees me and does a double take.
And then I start to see the professors in geometry and topology start to file in, and everybody's very disconcerted that I'm in this room. And finally, the person who was supposed to be my advisor walks into the seminar and sees me and goes white as a ghost. And I realized that the secret seminar is true, that the department is conducting a secret seminar on the exact topic that I'm interested in, not telling me about it, and that these are the reindeer games that the Rudolphs of the department are not invited to. And so then I realized, okay, I did not understand it. There's a parallel department. And that became the beginning of an incredible odyssey, in which I came to understand that the game that I had been sold about publication, about blind refereeing, about openness and scientific transmission of information, was all a lie. I came to understand that at the very top there's a second system that's about closed meetings and private communications and agreements about citation and publication that the rest of us don't understand. And that, in large measure, is the thing that I won't submit to. And so when you ask me questions like, well, why wouldn't you feel good about, you know, talking to your critics, or why wouldn't you feel... the answer is, oh, you don't know. Like, if you stay in a nice hotel, you don't realize that there is an entire second structure inside of that hotel, where, like, there's usually a workers' cafe in a resort complex that isn't available to the people who are staying in the hotel, and then there are private hallways inside the same hotel that are parallel structures. So that's what I found, which was, in essence: just the way you can stay in hotels your whole life and not realize that inside of every hotel is a second structure that you're not supposed to see as the guest, there is a second structure inside of academics that behaves totally differently with respect to how people get dinged, how people get their grants taken away, how this person comes to have that thing named after them. And by pretending that we're not running a parallel structure... I have no patience for that anymore. So I got a chance to see how the game, how hardball, is really played at Harvard. And I'm now eager to play hardball back with the same people who played hardball with me. Let me ask two questions on this. So one, do you think it's possible... so I call those people assholes. That's the technical term. Do you think it's possible that that's just not the entire system, but a part of the system? Sort of, that you can navigate, you can swim in the waters and find the groups of people who do aspire to the openness? The guy who rescued my PhD was one of the people who filed into the secret seminar. Right. But are there people outside of this, right? Is he an asshole? Well, yes, I was... it was a bad... No, but I'm trying to make this point, which is: this isn't my failure to correctly map these people. It's yours. You have a simplification that isn't going to work. I think, okay, if asshole was the wrong term, I would say lacking in character. And what would you have had these people do? Why did they do this? Why have a secret seminar? I don't understand the exact dynamics of a secret seminar, but I think the right thing to do is... I mean, to see individuals like you. There might be a reason to have a secret seminar, but they should detect that an individual like you, a brilliant mind who's thinking about certain ideas, could be damaged by this.
I don't think that they see it that way. The idea is, we're going to sneak food to the children we want to survive. Yeah. So that's highly problematic, and there should be people within that room... I'm trying to say, this is the thing: the ball that is thrown but won't be caught. The problem is, they know that most of their children won't survive, and they can't say that. I see. Sorry to interrupt. You mean the fact that the whole system is underfunded means they naturally have to pick favorites? They live in a world which reached steady state at some level, let's say, you know, in the early seventies. And in that world, before that time, you'd have a professor like Norman Steenrod, and you'd have 20 children, that is, graduate students, and all of them would go on to be professors, and all of them would want to have 20 children, right? So you start taking higher and higher powers of 20, and you see that the system could not... it's not just about money; the system couldn't survive. So the way it's supposed to work now is that we should shut down the vast majority of PhD programs, and we should let the small number of truly top places populate mostly teaching and research departments that aren't PhD producing. We don't want to do that, because we use PhD students as a labor force. So the whole thing has to do with growth, resources, dishonesty, and in that world you see all of these adaptations to a ruthless world, where the key question is: where are we going to bury this huge number of bodies of people who don't work out? So my problem was, I wasn't interested in dying. So you clearly highlight that there are aspects of the system that are broken, but as an individual, is your role to exit the system, or just acknowledge that it's a game and win it? My role is to survive and thrive in the public eye. In other words, when you have an escapee of the system, like yourself, and that person says, you know, I wasn't exactly finished, let me show you a bunch of stuff. Let me show you that the theory of telomeres never got reported properly. Let me show you that all of marginal economics is supposed to be redone with a different version of the differential calculus. Let me show you that you didn't understand the self-dual Yang-Mills equations correctly in topology and physics, because they're in fact much more broadly found, and it's only the mutations that happen in special dimensions. There are lots of things to say, but this particular group of people... like, if you just take: where are all the Gen X and millennial university presidents? Right. Okay. They're all in a holding pattern. Now why, in this story of telomeres, you know, was it an older professor and a younger graduate student? It's this issue of what would be called interference competition. So, for example, orcas try to drown minke whales by covering their blowholes so that they suffocate, because the needed resource is air. Okay. Well, what do the universities do? They try to make sure that you can't be viable, that you need them, that you need their grants, you need to be zinged with overhead charges or fringe rates or all of the games that the locals love to play. Well, my point is, okay, what's the cost of this? How many people died as a result of these interference competition games? You know, when you take somebody like Douglas Prasher, who did green fluorescent protein, and he drives a shuttle bus, right?
Because his grant runs out, and he has to give away all of his research, and all of that research gets a Nobel Prize, and he gets to drive a shuttle bus for $35,000 a year. What do you mean by died? You mean their career, their dreams, their passions? Yeah. As an academic, Doug Prasher was dead for a long period of time. Okay. So as a person who's escaped the system... because you also have in your mind a powerful theory that may turn out to be useful, maybe not. So can't you also play the game enough? Like with the children, so, like, publish... But also, if you told me that this would work... really, what I want to do, you see, is I would love to revolutionize a field with an H index of zero. Like, we have these proxies that count how many papers you've written, how cited are the papers you've written. All of this is nonsense. That's interesting. Sorry, what do you mean by a field with an H index of zero? So, a totally new field. The H index counts, somehow, how many papers you've gotten that get so many citations. Let's say H index undefined. Like, for example, I don't have an advisor for my PhD, but I have to have an advisor as far as something called the math genealogy project is concerned, which tracks who advised whom down the line. So I am my own advisor, which sets up a loop, right? How many students do I have? An infinite number. Your descendants. They don't want to have that story. So I have to have a formal advisor, Raoul Bott, and my Wikipedia entry, for example, says that I was advised by Raoul Bott, which is not true. So you get fit into a system that says, well, we have to know what your H index is. We have to know, you know, where are you a professor? If you want to apply for a grant, it makes all of these assumptions. What I'm trying to do, in part, is to show all of this is nonsense. This is proxy BS that came up in the institutional setting. And right now it's important for those of us who are still vital, like Elon... it would be great to have Elon as a professor of physics and engineering. Yeah. Right. It seems ridiculous to say, but just as a shot in the arm. Yeah. You know, like, it'd be great to have Elon at Caltech, even one day a week, one day a month. Okay. Well, why can't we be in there? It's the same reason. Well, why can't you be on The View? Why can't you be on Bill Maher? We need to know what you're going to do before we take you on the show. Well, I don't want to tell you what I'm going to do. Do you think you need to be able to dance the dance a little bit? I can dance the dance. It's fun to be on The View. Oh, come on. So you can... yeah, you do... you're not... I can do that. Fine. Here's the place where it goes south: there's, like, a set of questions that get you into this more adversarial stuff, and you've in fact asked some of those more adversarial questions this sitting, and they're not things that are necessarily aggressive, but they're things that are making assumptions. Right. Right. So when you ask a question like, you know, Lex, are you avoiding your critics? You know, it's just like, okay, well, why did you frame it that way? Or the next question would be like, do you think that you should have a special exemption, and that you should have the right to break rules and everyone else should have to follow them? Like, that question I find enervating. Yeah. It doesn't really come out of anything meaningful.
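As an aside on the metric being mocked a few lines up: an author's H index is the largest number h such that h of their papers each have at least h citations. A minimal sketch, with made-up citation counts:

```python
def h_index(citations):
    # Largest h such that at least h papers have >= h citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([]))                # 0: a brand-new field has no papers to count yet
```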
It's just like we feel we're supposed to ask that of the other person to show that we're not captured by their madness. That's not the real question you want to ask me. If you want to get really excited about this, you want to ask, do you think this thing is right? Yeah. Weirdly I do. Do you think that it's going to be immediately seen to be right? I don't. I think it's going to, it's going to have an interesting fight and it's going to have an interesting evolution and, well, what do you hope to do with it in nonphysical terms? Gosh, I hope it revolutionizes our relationship with people outside of the institutional framework, and it reinjects us into the institutional framework where we can do the most good to bring the institutions back to health. You know, it's like these are positive, uplifting questions, and if you had Frank Wilczek, you wouldn't say, Frank, let's be honest, you have done very little with your life after the original huge show that you used to break onto the physics scene. Like we weirdly ask people different questions based upon how they sit down. Yeah. That's very strange, right? But you have to understand that. So here's the thing. I get, these days, a large number of emails from people with the equivalent of a theory of everything for AGI, and I use my own radar, BS radar, to detect, unfairly perhaps, whether they're full of shit or not, because I love where you're going with this, by the way. My concern, I often think about, is there are elements of brilliance in what people write to me, and I'm trying right now, as you made clear the kind of judgments and assumptions we make, to figure out how I am supposed to deal with you, who are now an outsider of the system, and think about what you're doing, because my radar is saying you're not full of shit. But I'm also not completely outside of the system. That's right. You've danced beautifully. You've actually got all the credibility that you're supposed to get, all the nice little stamps of approval, not all, but a large enough amount. I mean, it's hard to put into words exactly why you sound, whether your theory turns out to be good or not, you sound like a special human being. I appreciate that and thank you in a good way. So but what am I supposed to do with that flood of emails from AGI? Why do I sound different? I don't know. And I would like to systemize that. I don't know. Look, you know, when you're talking to people, you very quickly can surmise, like, am I claiming to be a physicist? No, I say it at every turn. I'm not a physicist, right? When I say to you, when you say something about bundles, you say, well, can you explain it differently? You know, I'm pushing around on this area, that lever over there. I'm trying to find something that we can play with and engage. And you know, another thing is that I'll say something at scale. So if I was saying completely wrong things about bundles on the Joe Rogan program, you don't think that we wouldn't hear a crushing chorus? Yes. Absolutely. And you know, same thing with geometric unity. So I put up this video from this Oxford lecture. I understand that it's not a standard lecture, but you haven't heard, you know, the most brilliant people in the field say, well, this is obviously nonsense. They don't know what to make of it. And they're going to hide behind, well, he hasn't said enough details. Where's the paper? Where's the paper? I've seen the criticism. I've gotten the same kind of criticism.
I've published a few things and like, especially stuff related to Tesla that we did studies on Tesla vehicles and the kind of criticism I've gotten, which showed that they're completely. Oh, right. Like the guy who had Elon Musk on his program twice is going to give us an accurate assessment. Next. Exactly. Exactly. Exactly. It's just very low level. Like without actually ever addressing the content. You know, Lex, I think that in part you're trying to solve a puzzle that isn't really your puzzle. I think, you know, that I'm sincere. You don't know whether the theory is going to work or not. And you know that it's not coming out of somebody who's coming out of left field, like the story makes sense. There's enough that's new and creative and different in other aspects where you can check me that your real concern is, are you really telling me that when you start breaking the rules, you see the system for what it is and it's become really vicious and aggressive. And the answer is yes, and I had to break the rules in part because of learning issues because I came into this field, you know, with a totally different set of attributes. My profile just doesn't look like anybody else's remotely, but as a result, what that did is it showed me what is the system true to its own ideals or does it just follow these weird procedures and then when it, when you take it off the rails, it behaves terribly. And that's really what my story I think does is it just says, well, he completely takes the system into new territory where it's not expecting to have to deal with somebody with these confusing sets of attributes. And I think what he's telling us is he believes it behaves terribly. Now, if you take somebody with perfect standardized tests and you know, a winner of math competitions and you put them in a PhD program, they're probably going to be okay. I'm not saying that the system, um, you know, breaks down for any everybody under all circumstances. I'm saying when you present the system with a novel situation at the moment, it will almost certainly break down with probability approaching 100%. But to me, the painful and the tragic thing is it, uh, sorry to bring out my motherly instinct, but it feels like it's too much. It could be too much of a burden to exist outside the system, maybe, but psychologically, first of all, I've got a podcast that I kind of like and I've got amazing friends. I have a life which has more interesting people passing through it than I know what to do with. Yeah. And they haven't managed to kill me off yet. So, so far, so good. Speaking of which you host an amazing podcast that we've mentioned several times, but should mention over and over the portal, uh, where you somehow manage every single conversation is a surprise. You go, I mean, not just the guests, but just the places you take them, uh, the, the kind of ways they become challenging and how you recover from that. I mean, it's, uh, there's just, it's full of genuine human moments. So I really appreciate what you're, it's a fun, fun podcast to listen to. Uh, let me ask some silly questions about it. What have you learned about conversation about human to human conversation? Well, I have a problem that I haven't solved on the portal, which is that in general, when I ask people questions, they usually find their deeply grooved answers and I'm not so interested in all of the deeply grooved answers. 
And so there's a complaint, which I'm very sympathetic to actually, that I talk over people, that I won't sit still for the answer. And I think that that's weirdly sort of correct. It's not that I'm not interested in hearing other voices. It's that I'm not interested in hearing the same voice on my program that I could have gotten on somebody else's. And I haven't solved that well. So I've learned that I need a new conversational technique where I can keep somebody from finding their comfortable place and yet not be the voice talking over that person. Yeah. It's funny. I can sense, like your conversation with Bret, I can sense you detect that the line he's going down, you know how it's going to end and you think it's a useless line, so you'll just stop it right there and you take them into the direction that you think it should go. But that requires interruption. Well, and it does so far. I haven't found a better way. I'm looking for a better way. It's not like I don't hear the problem. I do hear the problem. I haven't solved the problem. And you know, on the, on the Bret episode, um, I was insufferable. It was very difficult to listen to. It was so overbearing. But on the other hand, I was right. You know, it's like funny. You keep saying that, but I didn't find it that way, maybe because I heard brothers, like I heard a big brother. Yeah. It was pretty bad. Really? I think so. I didn't think it was bad. Well, a lot of people found it interesting. And I think it also has to do with the fact that this has become a frequent experience. I have several shows where somebody who I very much admire and think of as courageous, um, you know, I'm talking with them, maybe we're friends, and they sit down on the show and they immediately become this fake person. Like two seconds in, they're sort of saying, well, I don't want to be too critical or too harsh. I don't want to name any names. I want to tell this story. And it's like, okay, I'm going to put my listeners through three hours of you being sweetness and light. Like at least give me some reality and then we can decide to shelve the show and never let it hear, uh, you know, the, the, the call of freedom in the, in the bigger world. But I've seen you break out of that a few times. I've seen you be successful at that. Uh, I forget the guest, but she was, um, um, you were, at the end of the episode, you had an argument about Bret. I forgot. Agnes Callard. Yeah. She's one of the philosophers at the University of Chicago. Yeah. You continuously broke out of it. Uh, you guys went at it, you know, and it even seemed pretty genuine. I like her. I'm completely ethically opposed to what she's ethically for. Well, she was great. And she wasn't like that. You were both going hard. She's a grownup. Yeah. And she knows that I care about her. So that was awesome. Yeah. But you're saying that some people are difficult to break out. Well, it's just that, you know, she was bringing the courage of her convictions. She was sort of defending the system, and I thought, wow, that's a pretty indefensible system that you're defending. That's great, though, that she's doing that. Isn't it? I mean, it made for an awesome, it's very informative for the world. Yes. You just hated it. I just can't stand the idea that somebody says, well, we don't care who gets paid or who gets the credit as long as we get the goodies. Cause that seems like insane. Have you ever been afraid leading into a conversation? Garry Kasparov.
By the way, I mean, I know I'm just a fan taking requests, but I started, I started the beginning in Russian, and in fact I used one word incorrectly. Is that terrible? You know, it was, it was pretty good. It's pretty good Russian. What was terrible is I think he complimented you. Right? No. Did he compliment you or was that me? Did he compliment you on your Russian? Well, he said almost perfect Russian. Yeah. Like he was full of shit. That was not great Russian. But, that was not great Russian, that was great. That was hard. That was, you tried hard, which is what matters. That is so insulting. I hope so. But I do hope you continue. It felt like, I don't know how long it went. It might've been like a two hour conversation, but it felt, I hope it continues. Like I feel like you could have many conversations with Garry. Yeah. I would love to hear. There are certain conversations I would just love to hear go on much longer. He's coming from a very, it's this issue about needing to overpower people in a very dangerous world. And so Garry has that need. Yeah. He wasn't, he was interrupting you. Sure. It was an interesting dynamic. It was a, it was an interesting dynamic. Two Weinsteins going at you. I mean, two powerhouse egos, brilliant. No, don't say egos, minds, spirits. You don't have an ego. You're the most humble person I know. Is that true? No, that's a complete lie. Do you think about your own mortality, death? Sure. Are you afraid? Well, I released a theory during something that can kill older people. Sure. Oh, is there a little bit of a parallel there? Of course. Of course. I don't want it to die with me. What do you hope your legacy is? Oh, I hope my legacy is accurate. I'd like it to be written on my accomplishments rather than how my community decided to ding me while I was alive. That would be great. What about if it was significantly exaggerated? I don't want it. You want it to be accurate. I've got some pretty terrific stuff, and whether it works out or doesn't, I would like it to reflect what I actually was. I'll settle for accurate. What would you say, what is the greatest element of an Eric Weinstein accomplishment in life, in terms of being accurate? What are you most proud of? The idea that we were stalled out in the hardest field at the most difficult juncture and that I didn't listen to that voice ever that said, stop, you're hurting yourself. You're hurting your family. You're hurting everybody. You're embarrassing yourself. You're screwing up. You can't do this. You're a failure. You're a fraud. Turn back, save yourself. That voice, I didn't ultimately listen to it, and it was going for 35, 37 years. Very hard. And I hope you never listen to that voice. That's why you're an inspiration. Thank you. I appreciate that. I'm just infinitely honored that you would spend time with me. You've been a mentor to me, almost a friend. I can't imagine a better person to talk to in this world. So thank you so much for talking to me. I can't wait till we do it again. Lex, thanks for sticking with me and thanks for being the most singular guy in the podcasting space. In terms of all of my interviews, I would say that the last one I did with you, many people feel was my best, and it was a nonconventional one. So whatever it is that you're bringing to the game, I think everyone's noticing, and keep at it. Thank you. Thanks for listening to this conversation with Eric Weinstein. And thank you to our presenting sponsor, Cash App.
Please consider supporting the podcast by downloading Cash App and using code LexPodcast. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast, subscribe on Patreon, or simply connect with me on Twitter at Lex Friedman. And now let me leave you with some words of wisdom from Eric Weinstein's first appearance on this podcast. Everything is great about war, except all the destruction. Thank you for listening and hope to see you next time.
Eric Weinstein: Geometric Unity and the Call for New Ideas & Institutions | Lex Fridman Podcast #88
The following is a conversation with Stephen Wolfram, a computer scientist, mathematician, and theoretical physicist who is the founder and CEO of Wolfram Research, a company behind Mathematica, Wolfram Alpha, Wolfram Language, and the new Wolfram Physics Project. He's the author of several books including A New Kind of Science, which on a personal note was one of the most influential books in my journey in computer science and artificial intelligence. It made me fall in love with the mathematical beauty and power of cellular automata. It is true that perhaps one of the criticisms of Stephen is on a human level, that he has a big ego, which prevents some researchers from fully enjoying the content of his ideas. We talk about this point in this conversation. To me, ego can lead you astray but can also be a superpower, one that fuels bold, innovative thinking that refuses to surrender to the cautious ways of academic institutions. And here, especially, I ask you to join me in looking past the peculiarities of human nature and opening your mind to the beauty of ideas in Stephen's work and in this conversation. I believe Stephen Wolfram is one of the most original minds of our time and, at the core, is a kind, curious, and brilliant human being. This conversation was recorded in November 2019 when the Wolfram Physics Project was underway but not yet ready for public exploration as it is now. We now agreed to talk again, probably multiple times in the near future, so this is round one, and stay tuned for round two soon. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with 5 Stars and Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Friedman spelled F R I D M A N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. Quick summary of the ads. Two sponsors, ExpressVPN and Cash App. Please consider supporting the podcast by getting ExpressVPN at expressvpn.com slash lexpod and downloading Cash App and using code lexpodcast. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code lexpodcast. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar. Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of fractional orders is an algorithmic marvel. So big props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up to the next layer of abstraction over the stock market. This makes trading more accessible for new investors and diversification much easier. So again, if you get Cash App from the App Store, Google Play, and use the code lexpodcast, you get ten dollars and Cash App will also donate ten dollars to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. This show is presented by ExpressVPN. Get it at expressvpn.com slash lexpod to get a discount and to support this podcast. I've been using ExpressVPN for many years. I love it. It's really easy to use. Press the big power on button and your privacy is protected. And if you like, you can make it look like your location is anywhere else in the world. This has a large number of obvious benefits. 
Certainly, it allows you to access international versions of streaming websites like the Japanese Netflix or the UK Hulu. ExpressVPN works on any device you can imagine. I use it on Linux. Shout out to Ubuntu. New version coming out soon actually. Windows, Android, but it's available anywhere else too. Once again, get it at expressvpn.com slash lexpod to get a discount and to support this podcast. And now here's my conversation with Stephen Wolfram. You and your son Christopher helped create the alien language in the movie Arrival. So let me ask maybe a bit of a crazy question, but if aliens were to visit us on earth, do you think we would be able to find a common language? Well, by the time we're saying aliens are visiting us, we've already prejudiced the whole story because the concept of an alien actually visiting, so to speak, we already know they're kind of things that make sense to talk about visiting. So we already know they exist in the same kind of physical setup that we do. It's not just radio signals. It's an actual thing that shows up and so on. So I think in terms of can one find ways to communicate? Well, the best example we have of this right now is AI. I mean, that's our first sort of example of alien intelligence. And the question is, how well do we communicate with AI? If you were in the middle of a neural network, a neural net, and you open it up and it's like, what are you thinking? Can you discuss things with it? It's not easy, but it's not absolutely impossible. So I think by the time, given the setup of your question, aliens visiting, I think the answer is yes, one will be able to find some form of communication, whatever communication means. Communication requires notions of purpose and things like this. It's a kind of philosophical quagmire. So if AI is a kind of alien life form, what do you think visiting looks like? So if we look at aliens visiting, and we'll get to discuss computation and the world of computation, but if you were to imagine, you said you already prejudiced something by saying you visit, but how would aliens visit? By visit, there's kind of an implication. And here we're using the imprecision of human language, you know, in a world of the future. And if that's represented in computational language, we might be able to take the concept visit and go look in the documentation, basically, and find out exactly what does that mean, what properties does it have, and so on. But by visit, in ordinary human language, I'm kind of taking it to be there's something, a physical embodiment that shows up in a spacecraft, since we kind of know that that's necessary. We're not imagining it's just, you know, photons showing up in a radio signal that, you know, photons in some very elaborate pattern, we're imagining it's physical things made of atoms and so on, that show up. Can it be photons in a pattern? Well, that's a good question. I mean, whether there is the possibility, you know, what counts as intelligence? Good question. I mean, it's, you know, and I used to think there was sort of a, oh, there'll be, you know, it'll be clear what it means to find extraterrestrial intelligence, et cetera, et cetera, et cetera. I've increasingly realized, as a result of science that I've done, that there really isn't a bright line between the intelligent and the merely computational, so to speak. So, you know, in our kind of everyday sort of discussion, we'll say things like, you know, the weather has a mind of its own. Well, let's unpack that question. 
You know, we realize that there are computational processes that go on that determine the fluid dynamics of this and that and the atmosphere, et cetera, et cetera, et cetera. How do we distinguish that from the processes that go on in our brains of, you know, the physical processes that go on in our brains? How do we separate those? How do we say the physical processes going on that represent sophisticated computations in the weather, oh, that's not the same as the physical processes that go on that represent sophisticated computations in our brains? The answer is, I don't think there is a fundamental distinction. I think the distinction for us is that there's kind of a thread of history and so on that connects kind of what happens in different brains to each other, so to speak. And it's a, you know, what happens in the weather is something which is not connected by sort of a thread of civilizational history, so to speak, to what we're used to. In the stories that the human brains told us, but maybe the weather has its own stories. Absolutely. Absolutely. And that's where we run into trouble thinking about extraterrestrial intelligence because, you know, it's like that pulsar magnetosphere that's generating these very elaborate radio signals. You know, is that something that we should think of as being this whole civilization that's developed over the last however long, you know, millions of years of processes going on in the neutron star or whatever versus what, you know, what we're used to in human intelligence? I mean, I think in the end, you know, when people talk about extraterrestrial intelligence and where is it and the whole, you know, Fermi paradox of how come there's no other signs of intelligence in the universe, my guess is that we've got sort of two alien forms of intelligence that we're dealing with, artificial intelligence and sort of physical or extraterrestrial intelligence. And my guess is people will sort of get comfortable with the fact that both of these have been achieved around the same time. And in other words, people will say, well, yes, we're used to computers, things we've created, digital things we've created, being sort of intelligent like we are. And they'll say, oh, we're kind of also used to the idea that there are things around the universe that are kind of intelligent like we are, except they don't share the sort of civilizational history that we have. And so they're a different branch. I mean, it's similar to when you talk about life, for instance. I mean, you kind of said life form, I think almost synonymously with intelligence, which I don't think is, you know, the AIs would be upset to hear you equate those two things. Because I really probably implied biological life. But you're saying, I mean, we'll explore this more, but you're saying it's really a spectrum and it's all just a kind of computation. And so it's a full spectrum and we just make ourselves special by weaving a narrative around our particular kinds of computation. Yes. I mean, the thing that I think I've kind of come to realize is, you know, at some level, it's a little depressing to realize that there's so little... Or liberating. Well, yeah, but I mean, it's, you know, it's the story of science, right? And, you know, from Copernicus on, it's like, you know, first we were, like, convinced our planet's at the center of the universe. No, that's not true. Well, then we were convinced there's something very special about the chemistry that we have as biological organisms.
That's not really true. And then we're still holding out that hope. Oh, this intelligence thing we have, that's really special. I don't think it is. However, in a sense, as you say, it's kind of liberating for the following reason, that you realize that what's special is the details of us, not some abstract attribute that, you know, we could wonder, oh, is something else going to come along and, you know, also have that abstract attribute? Well, yes, every abstract attribute we have, something else has it. But the full details of our kind of history of our civilization and so on, nothing else has that. That's what, you know, that's our story, so to speak. And that's sort of almost by definition, special. So I view it as not being such a, I mean, initially I was like, this is bad. This is kind of, you know, how can we have self respect about the things that we do? Then I realized the details of the things we do, they are the story. Everything else is kind of a blank canvas. So maybe on a small tangent, you just made me think of it, but what do you make of the monoliths in 2001 Space Odyssey in terms of aliens communicating with us and sparking the kind of particular intelligent computation that we humans have? Is there anything interesting to get from that sci fi? Yeah, I mean, I think what's fun about that is, you know, the monoliths are these, you know, one to four to nine perfect cuboid things. And in the Earth a million years ago, whatever they were portraying with a bunch of apes and so on, a thing that has that level of perfection seems out of place. It seems very kind of constructed, very engineered. So that's an interesting question. What is the, you know, what's the techno signature, so to speak? What is it that you see it somewhere and you say, my gosh, that had to be engineered. Now, the fact is we see crystals, which are also very perfect. And, you know, the perfect ones are very perfect. They're nice polyhedral or whatever. And so in that sense, if you say, well, it's a sign of sort of it's a techno signature that it's a perfect polygonal shape, polyhedral shape. That's not true. And so then it's an interesting question. What is the right signature? I mean, like, you know, Gauss, famous mathematician, you know, he had this idea, you should cut down the Siberian forest in the shape of sort of a typical image of the proof of the Pythagorean theorem on the grounds that it was a kind of cool idea, didn't get done. But, you know, it's on the grounds that the Martians would see that and realize, gosh, there are mathematicians out there. It's kind of, you know, in his theory of the world, that was probably the best advertisement for the cultural achievements of our species. But, you know, it's a reasonable question. What do you, what can you send or create that is a sign of intelligence in its creation or even intention in its creation? You talk about if we were to send a beacon. Can you what should we send? Is math our greatest creation? Is what is our greatest creation? I think I think it's a it's a philosophically doomed issue. I mean, in other words, you send something, you think it's fantastic, but it's kind of like we are part of the universe. We make things that are, you know, things that happen in the universe. Computation, which is sort of the thing that we are in some abstract sense using to create all these elaborate things we create, is surprisingly ubiquitous. 
In other words, we might have thought that, you know, we've built this whole giant engineering stack that's led us to microprocessors, that's led us to be able to do elaborate computations. But this idea that computations are happening all over the place. The only question is whether there's a thread that connects our human intentions to what those computations are. And so I think this question of what do you send to kind of show off our civilization in the best possible way? I think any kind of almost random slab of stuff we've produced is about equivalent to everything else. I think it's one of these things where it's a non-romantic way of phrasing it. Sorry, I started to interrupt, but I just talked to Ann Druyan, who is the wife of Carl Sagan. And so I don't know if you're familiar with the Voyager. I mean, she was part of sending, I think, brainwaves of, you know, wasn't it hers? Her brainwaves when she was first falling in love with Carl Sagan. It's this beautiful story that perhaps you would shut down the power of that by saying we might as well send anything else. And that's interesting. All of it is kind of an interesting, peculiar thing. Yeah, yeah, right. Well, I mean, I think it's kind of interesting to see on the Voyager, you know, golden record thing. One of the things that's kind of cute about that is, you know, it was made, when was it, in the late 70s, early 80s. And, you know, one of the things, it's a phonograph record. Okay. And it has a diagram of how to play a phonograph record. And, you know, it's kind of like it's shocking that in just 30 years, if you show that to a random kid of today, and you show them that diagram, I've tried this experiment, they're like, I don't know what the heck this is. And the best anybody can think of is, you know, take the whole record, forget the fact that it has some kind of helical track in it, just image the whole thing and see what's there. That's what we would do today. In only 30 years, our technology has kind of advanced to the point where the playing of a helical, you know, mechanical track on a phonograph record is now something bizarre. So, you know, it's a cautionary tale, I would say, in terms of the ability to make something that in detail sort of leads by the nose, some, you know, the aliens or whatever, to do something. It's like, no, you know, best you can do, as I say, if we were doing this today, we would not build a helical scan thing with a needle. We would just take some high resolution imaging system and get all the bits off it and say, oh, it's a big nuisance that they put it in a helix, you know, in a spiral. Let's just unravel the spiral and start from there. Do you think, and this will get into trying to figure out interpretability of AI, interpretability of computation, being able to communicate with various kinds of computations, do you think we'd be able to, if you put your alien hat on, figure out this record, how to play this record? Well, it's a question of what one wants to do. I mean... Understand what the other party was trying to communicate, or understand anything about the other party. What does understanding mean? I mean, that's the issue. The issue is, it's like when people were trying to do natural language understanding for computers, right? So people tried to do that for years. It wasn't clear what it meant. In other words, you take your piece of English or whatever, and you say, gosh, my computer has understood this. Okay, that's nice. What can you do with that?
Well, so for example, when we built Wolfram Alpha, one of the things was it's doing question answering and so on, and it needs to do natural language understanding. The reason, as I realized after the fact, that we were able to do natural language understanding quite well, and people hadn't before, the number one thing was we had an actual objective for the natural language understanding. We were trying to turn the natural language into this computational language that we could then do things with. Now, similarly, when you imagine your alien, you say, okay, we're playing them the record. Did they understand it? Well, it depends what you mean. If there's a representation that they have, if it converts to some representation where we can say, oh yes, that's a representation that we can recognize as representing understanding, then all well and good. But actually, the only ones that I think we can say would represent understanding are ones that will then do things that we humans kind of recognize as being useful to us. Maybe you're trying to understand, quantify how technologically advanced this particular civilization is. So are they a threat to us from a military perspective? That's probably the first kind of understanding they'll be interested in. Gosh, that's so hard. That's like in the Arrival movie, that was one of the key questions is, why are you here, so to speak? Are you going to hurt us? But even that, it's very unclear. It's like, are you going to hurt us? That comes back to a lot of interesting AI ethics questions, because we might make an AI that says, well, take autonomous cars, for instance. Are you going to hurt us? Well, let's make sure you only drive at precisely the speed limit, because we want to make sure we don't hurt you, so to speak. But you say, but actually, that means I'm going to be really late for this thing, and that sort of hurts me in some way. So it's hard to know. Even the definition of what it means to hurt someone is unclear. And as we start thinking about things like AI ethics and so on, that's something one has to address. There's always tradeoffs, and that's the annoying thing about ethics. Yeah, well, right. And I think ethics, like these other things we're talking about, is a deeply human thing. There's no abstract, let's write down the theorem that proves that this is ethically correct. That's a meaningless idea. You have to have a ground truth, so to speak, that's ultimately what humans want, and they don't all want the same thing. So that gives one all kinds of additional complexity in thinking about that. One convenient thing, in terms of turning ethics into computation, you can ask the question of what maximizes the likelihood of the survival of the species. That's a good existential issue. But then when you say survival of the species, you might say, you might, for example, let's say, forget about technology, just hang out and be happy, live our lives, go on to the next generation, go through many, many generations where, in a sense, nothing is happening. Is that okay? Is that not okay? Hard to know. In terms of the attempt to do elaborate things, the attempt might be counterproductive for the survival of the species. It's also a little bit hard to know, so okay, let's take that as a sort of thought experiment. You can say, well, what are the threats that we might have to survive? The super volcano, the asteroid impact, all these kinds of things.
Okay, so now we inventory these possible threats and we say, let's make our species as robust as possible relative to all these threats. I think in the end, it's sort of an unknowable thing what it takes. So given that you've got this AI and you've told it, maximize the long term. What does long term mean? Does long term mean until the sun burns out? That's not going to work. Does long term mean the next thousand years? Okay, there are probably optimizations for the next thousand years. It's like if you're running a company, you can make a company be very stable for a certain period of time. Like if your company gets bought by some private investment group, then you can run a company just fine for five years by just taking what it does and removing all R&D, and the company will burn out after a while, but it'll run just fine for a little while. So if you tell the AI, keep the humans okay for a thousand years, there's probably a certain set of things that one would do to optimize that, many of which one might say, well, that would be a pretty big shame for the future of history, so to speak, for that to be what happens. But I think in the end, as you start thinking about that question, what you realize is there's a whole sort of raft of undecidability, computational irreducibility. In other words, one of the good things about what our civilization has gone through and what we humans go through is that there's a certain computational irreducibility to it, in the sense that it isn't the case that you can look from the outside and just say, the answer is going to be this. At the end of the day, this is what's going to happen. You actually have to go through the process to find out. And I think that feels better, in the sense that something is achieved by going through all of this process. But it also means that telling the AI, go figure out what will be the best outcome. Well, unfortunately, it's going to come back and say, it's kind of undecidable what to do. We'd have to run all of those scenarios to see what happens. And if we want it for the infinite future, we're thrown immediately into sort of standard issues of kind of infinite computation and so on. So yeah, even if you get that the answer to the universe and everything is 42, you still have to actually run the universe. Yes, to figure out the question, I guess, or the journey is the point. Right. Well, I think being able to say, to summarize, this is the result of the universe, if that is possible, it tells us, I mean, the whole sort of structure of thinking about computation and so on and thinking about how stuff works. If it's possible to say, and the answer is such and such, you're basically saying there's a way of going outside the universe. And you're getting yourself into something of a sort of paradox, because you're saying, if it's knowable what the answer is, then there's a way to know it that is beyond what the universe provides. But if we can know it, then something that we're dealing with is beyond the universe. So then the universe isn't the universe, so to speak. And in general, as we'll talk about, at least for our small human brains, it's hard to show the result of a sufficiently complex computation. I mean, it's probably impossible, right, because of undecidability. And the universe appears, at least to the poets, to be sufficiently complex. We won't be able to predict what the heck it's all going to do. Well, we better not be able to, because if we can, it kind of denies... I mean, it's, you know, we're part of the universe. Yeah.
So what does it mean for us to predict? It means that we, that our little part of the universe, is able to jump ahead of the whole universe. And this quickly winds up. I mean, it is conceivable. The only way we'd be able to predict is if we are so special in the universe, we are the one place where there is computation more special, more sophisticated than anything else that exists in the universe. That's the only way we would have the ability, the sort of almost theological ability, so to speak, to predict what happens in the universe, is to say somehow we're better than everything else in the universe, which I don't think is the case. Yeah, perhaps we can detect a large number of looping patterns that reoccur throughout the universe and fully describe them. And therefore, but then it still becomes exceptionally difficult to see how those patterns interact and what kind of... Well, look, the most remarkable thing about the universe is that it has regularity at all. Might not be the case. It's not just regularity, though, is it? Absolutely. It's full of, I mean, physics is successful. You know, it's full of laws that tell us a lot of detail about how the universe works. I mean, it could be the case that, you know, the 10 to the 90th particles in the universe, they would all do their own thing, but they don't. They all follow, we already know, basically the same physical laws. And that's something that's a very profound fact about the universe. What conclusion you draw from that is unclear. I mean, you know, to the early theologians, that was, you know, exhibit number one for the existence of God. Now, you know, people have different conclusions about it. But the fact is, you know, right now, I mean, I happen to be interested, actually, I've just restarted a long running kind of interest of mine about fundamental physics. I'm kind of like, come on, I'm on a bit of a quest, which I'm about to make more public, to see if I can actually find the fundamental theory of physics. Excellent. We'll come to that. And I just had a lot of conversations with quantum mechanics folks, so I'm really excited to hear your take, because I think you have a fascinating take on the fundamental nature of our reality from a physics perspective, and what might be underlying the kind of physics as we think of it today. Okay, let's take a step back. What is computation? It's a good question. Operationally, computation is following rules. That's kind of it. I mean, computation is the process of systematically following rules, and it is the thing that happens when you do that. So, taking initial conditions, or taking inputs, and following rules. I mean, what are you following rules on? So there has to be some data? Not necessarily. It can be something where there's a, you know, very simple input. And then you're following these rules. And you'd say there's not really much data going into this. You could actually pack the initial conditions into the rule, if you want to. So I think the question is, is there a robust notion of computation? That is, what does robust mean? What I mean by that is something like this. So one of the things, in a different area, in physics, something like energy, okay, there are different forms of energy, but somehow energy is a robust concept that doesn't, isn't particular to kinetic energy, or, you know, nuclear energy, or whatever else, there's a robust idea of energy.
So one of the things you might ask is, is there a robust idea of computation? Or does it matter that this computation is running in a Turing machine, this computation is running in a, you know, CMOS, silicon, CPU, this computation is running in a fluid system in the weather, those kinds of things? Or is there a robust idea of computation that transcends the sort of detailed framework that it's running in? Okay. And is there? Yes. I mean, it wasn't obvious that there was. So it's worth understanding the history and how we got to where we are right now. Because, you know, to say that there is, is a statement in part about our universe. It's not a statement about what is mathematically conceivable. It's about what actually can exist for us. Maybe you can also comment, because energy, as a concept, is robust. But there's also its intricate, complicated relationship with matter, with mass, which is very interesting, of particles that carry force and particles that have mass. These kinds of ideas, they seem to map to each other, at least in the mathematical sense. Is there a connection between energy and mass and computation? Or are these completely disjoint ideas? We don't know yet. The things that I'm trying to do about fundamental physics may well lead to such a connection, but there is no known connection at this time. So can you elaborate a little bit more on what, how do you think about computation? What is computation? What is computation? Yeah. So I mean, let's, let's tell a little bit of a historical story. Okay. So, you know, back, go back 150 years, people were making mechanical calculators of various kinds. And, you know, the typical thing was, you want an adding machine, you go to the adding machine store, basically; you want a multiplying machine, you go to the multiplying machine store. They're different pieces of hardware. And so that means that, at least at the level of that kind of computation, and those kinds of pieces of hardware, there isn't a robust notion of computation. There's the adding machine kind of computation, there's the multiplying machine notion of computation, and they're disjoint. So what happened in around 1900, people started imagining, particularly in the context of mathematical logic, could you have something which would represent any reasonable function, right? And they came up with things, this idea of primitive recursion was one of the early ideas. And it didn't work. There were reasonable functions that people could come up with that were not represented using the primitives of primitive recursion. Okay, so then, then along comes 1931, and Gödel's theorem, and so on. And, in looking back, one can see that as part of the process of establishing Gödel's theorem, Gödel basically showed how you could compile arithmetic, how you could basically compile logical statements, like "this statement is unprovable," into arithmetic. So what he essentially did was to show that arithmetic can be a computer, in a sense, that's capable of representing all kinds of other things. And then Turing came along in 1936, came up with Turing machines. Meanwhile, Alonzo Church had come up with lambda calculus. And the surprising thing that was established very quickly is the Turing machine idea of what computation might be is exactly the same as the lambda calculus idea of what computation might be.
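To make that equivalence concrete, here is a minimal sketch in Python, an illustration of my own rather than anything from the conversation: numbers and arithmetic built purely out of function application, in the spirit of Church's lambda calculus, give the same answers as ordinary machine arithmetic.

```python
# Church numerals: a number n is encoded as a function that applies f to x n times.
# Ordinary integer arithmetic and this purely functional encoding agree, which is a
# toy version of the Turing machine / lambda calculus equivalence.

def church(n):
    """Encode the integer n as a Church numeral."""
    return lambda f: (lambda x: x) if n == 0 else (lambda x: f(church(n - 1)(f)(x)))

def unchurch(c):
    """Decode a Church numeral by counting applications of a successor function."""
    return c(lambda k: k + 1)(0)

# Addition and multiplication defined only in terms of function application.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul = lambda m: lambda n: lambda f: m(n(f))

three, four = church(3), church(4)
print(unchurch(add(three)(four)))  # 7, same as 3 + 4
print(unchurch(mul(three)(four)))  # 12, same as 3 * 4
```

Two formalisms that look nothing alike, one mechanical and one purely symbolic, end up describing the same class of computations, which is the surprise being described here.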
And so, and then there started to be other ideas, you know, register machines, other kinds of other kinds of representations of computation. And the big surprise was, they all turned out to be equivalent. So in other words, it might have been the case, like those old adding machines and multiplying machines, that, you know, Turing had his idea of computation, Church had his idea of computation, and they were just different. But it isn't true. They're actually all equivalent. So then by, I would say the 1970s or so in sort of the computation, computer science, computation theory area, people had sort of said, oh, Turing machines are kind of what computation is. Physicists were still holding out saying, no, no, no, that's just not how the universe works. We've got all these differential equations. We've got all these real numbers that have infinite numbers of digits. The universe is not a Turing machine. Right. The, you know, the Turing machines are a small subset of the things that we make in microprocessors and engineering structures and so on. So probably actually through my work in the 1980s about sort of the relationship between computation and models of physics, it became a little less clear that there would be, that there was this big sort of dichotomy between what can happen in physics and what happens in things like Turing machines. And I think probably by now people would mostly think, and by the way, brains were another kind of element of this. I mean, you know, Gödel didn't think that his notion of computation or what amounted to his notion of computation would cover brains. And Turing wasn't sure either. But although he was a little bit, he got to be a little bit more convinced that it should cover brains. But I would say by probably sometime in the 1980s, there was beginning to be sort of a general belief that yes, this notion of computation that could be captured by things like Turing machines was reasonably robust. Now, the next question is, okay, you can have a universal Turing machine that's capable of being programmed to do anything that any Turing machine can do. And, you know, this idea of universal computation, it's an important idea, this idea that you can have one piece of hardware and program it with different pieces of software. You know, that's kind of the idea that launched most modern technology. I mean, that's kind of, that's the idea that launched computer revolution software, etc. So important idea. But the thing that's still kind of holding out from that idea is, okay, there is this universal computation thing, but seems hard to get to. It seems like you want to make a universal computer, you have to kind of have a microprocessor with, you know, a million gates in it, and you have to go to a lot of trouble to make something that achieves that level of computational sophistication. Okay, so the surprise for me was that stuff that I discovered in the early 80s, looking at these things called cellular automata, which are really simple computational systems, the thing that was a big surprise to me was that even when their rules were very, very simple, they were doing things that were as sophisticated as they did when their rules were much more complicated. So it didn't look like, you know, this idea, oh, to get sophisticated computation, you have to build something with very sophisticated rules. That idea didn't seem to pan out. 
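A small sketch in Python makes the point concrete; this is my own illustration, using the standard numbering for elementary cellular automata, not code from the conversation. Rule 30's entire update rule fits in one byte, yet the pattern it grows from a single cell is famously intricate.

```python
# Elementary cellular automaton, rule 30: each cell's next value depends only on
# itself and its two neighbors, via an 8-entry lookup table, yet the pattern grown
# from a single black cell looks anything but simple.

RULE = 30
TABLE = [(RULE >> i) & 1 for i in range(8)]  # (left, center, right) -> new cell

def step(cells):
    n = len(cells)
    return [TABLE[(cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

row = [0] * 63
row[31] = 1                      # start from a single black cell
for _ in range(30):
    print("".join("#" if c else " " for c in row))
    row = step(row)
```

Swapping RULE for something like 250 gives the kind of obviously simple, repetitive behavior that simple rules were expected to produce, which is what makes the rule 30 case striking.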
And instead, it seemed to be the case that sophisticated computation was completely ubiquitous, even in systems with incredibly simple rules. And so that led to this thing that I call the principle of computational equivalence, which basically says, when you have a system that follows rules of any kind, then whenever the system isn't doing things that are, in some sense, obviously simple, then the computation that the behavior of the system corresponds to is of equivalent sophistication. So that means that when you kind of go from the very, very, very simplest things you can imagine, then quite quickly, you hit this kind of threshold above which everything is equivalent in its computational sophistication. Not obvious that would be the case. I mean, that's a science fact. Well, no, hold on a second. So this, you've opened with A New Kind of Science. I mean, I remember it was a huge eye opener that such simple things can create such complexity. And yes, there's an equivalence, but it's not a fact. It just appears to, I mean, it's as much of a fact as sort of, these theories are so elegant that it seems to be the way things are. But let me ask sort of, you just brought up previously, kind of like the communities of computer scientists with their Turing machines, the physicists with their universe, and whoever the heck, maybe neuroscientists, looking at the brain. What's your sense about the equivalence? You've shown through your work that simple rules can create equivalently complex Turing machine systems, right? Is the universe equivalent to the kinds of Turing machines? Is the human brain a kind of Turing machine? Do you see those things basically blending together? Or is there still a mystery about how disjoint they are? Well, my guess is that they all blend together, but we don't know that for sure yet. I mean, this, you know, I should say, I said rather glibly that the principle of computational equivalence is sort of a science fact. And I was using air quotes for the science fact, because when you, it is a, I mean, just to talk about that for a second. The thing is that it has a complicated epistemological character, similar to things like the second law of thermodynamics, the law of entropy increase. What is the second law of thermodynamics? Is it a law of nature? Is it a thing that is true of the physical world? Is it something which is mathematically provable? Is it something which happens to be true of the systems that we see in the world? Is it, in some sense, a definition of heat, perhaps? Well, it's a combination of those things. And it's the same thing with the principle of computational equivalence. And in some sense, the principle of computational equivalence is at the heart of the definition of computation, because it's telling you there is a thing, there is a robust notion that is equivalent across all these systems and doesn't depend on the details of each individual system. And that's why we can meaningfully talk about a thing called computation. And we're not stuck talking about, oh, there's computation in Turing machine number 3785, and et cetera, et cetera, et cetera. That's why there is a robust notion like that. Now, on the other hand, can we prove the principle of computational equivalence? Can we prove it as a mathematical result? Well, the answer is, actually, we've got some nice results along those lines that say, throw me a random system with very simple rules.
Well, in a couple of cases, we now know that even the very simplest rules we can imagine of a certain type are universal and do follow what you would expect from the principle of computational equivalence. So that's a nice piece of sort of mathematical evidence for the principle of computational equivalence. Just to linger on that point, the simple rules creating sort of these complex behaviors. But is there a way to mathematically say that this behavior is complex? You've mentioned that you cross a threshold. Right. So there are various indicators. So, for example, one thing would be, is it capable of universal computation? That is, given the system, do there exist initial conditions for the system that can be set up to essentially represent programs to do anything you want, to compute primes, to compute pi, to do whatever you want? Right. So that's an indicator. So we know in a couple of examples that, yes, the simplest candidates that could conceivably have that property do have that property. And that's what the principle of computational equivalence might suggest. But this principle of computational equivalence, one question about it is, is it true for the physical world? It might be true for all these things we come up with, the Turing machines, the cellular automata, whatever else. Is it true for our actual physical world? Is it true for the brains, which are an element of the physical world? We don't know for sure. And that's not the type of question that we will have a definitive answer to, because there's a sort of scientific induction issue. You can say, well, it's true for all these brains, but this person over here is really special, and it's not true for them. And the only way that that cannot be what happens is if we finally nail it and actually get a fundamental theory for physics, and it turns out to correspond to, let's say, a simple program. If that is the case, then we will basically have reduced physics to a branch of mathematics, in the sense that we will not be, you know, right now with physics, we're like, well, this is the theory, these are the rules that apply here. But in the middle of that, you know, right by that black hole, maybe these rules don't apply and something else applies. And there may be another piece of the onion that we have to peel back. But if we can get to the point where we actually have, this is the fundamental theory of physics, here it is, it's this program, run this program, and you will get our universe, then we've kind of reduced the problem of figuring out things in physics to a problem of doing some, what turn out to be very difficult, irreducibly difficult, mathematical problems. But it no longer is the case that somebody can come in and say, whoops, you know, you were right about all these things about Turing machines, but you're wrong about the physical universe; we know there's sort of ground truth about what's happening in the physical universe. Now, I happen to think, I mean, you asked me at an interesting time, because I'm just in the middle of starting to re-energize my project to kind of study the fundamental theory of physics. As of today, I'm very optimistic that we're actually going to find something and that it's going to be possible to see that the universe really is computational in that sense. But I don't know, because we're betting against, you know, we're betting against the universe, so to speak.
And I didn't, you know, it's not like, you know, when I spend a lot of my life building technology, then I know what's in there, right? There may be, it may have unexpected behavior, may have bugs, things like that. But fundamentally, I know what's in there. For the universe, I'm not in that position, so to speak. What kind of computation do you think the fundamental laws of physics might emerge from? Just to clarify, so you've done a lot of fascinating work with kind of discrete kinds of computation that, you know, cellular automata, and we'll talk about it, have these very clean structures, it's such a nice way to demonstrate that simple rules can create immense complexity. But what kind, you know, is that actually, are cellular automata sufficiently general to describe the kinds of computation that might create the laws of physics? Can you give a sense of what kind of computation you think would create them? Well, so this is a slightly complicated issue, because as soon as you have universal computation, you can, in principle, simulate anything with anything. Right. But it is not a natural thing to do. And if you're asking, were you to try to find our physical universe by looking at possible programs in the computational universe of all possible programs, would the ones that correspond to our universe be small and simple enough that we might find them by searching that computational universe? We've got to have the right basis, so to speak. We have to have the right language, in effect, for describing computation for that to be feasible. So the thing that I've been interested in for a long time is, what are the most structureless structures that we can create with computation? So in other words, if you say a cellular automaton, it has a bunch of cells that are arrayed on a grid, and it's very, you know, every cell is updated in synchrony at a particular, you know, when there's a tick of a clock, so to speak, and every cell gets updated at the same time. That's a very specific, very rigid kind of thing. But my guess is that when we look at physics, and we look at things like space and time, that what's underneath space and time is something as structureless as possible, that what we see, what emerges for us as physical space, for example, comes from something that is sort of arbitrarily unstructured underneath. And so I've been for a long time interested in kind of what are the most structureless structures that we can set up. And actually, what I had thought about for ages is using graphs, networks, where essentially, so let's talk about space, for example. So what is space? It's a kind of a question one might ask. Back in the early days of quantum mechanics, for example, people said, oh, for sure, space is going to be discrete, because all these other things we're finding are discrete. But that never worked out in physics. And so space in physics today is always treated as this continuous thing, just like Euclid imagined it. I mean, the very first thing Euclid says in his sort of common notions is, you know, a point is something which has no part. In other words, there are points that are arbitrarily small, and there's a continuum of possible positions of points. And the question is, is that true? And so for example, if we look at, I don't know, a fluid like air or water, we might say, oh, it's a continuous fluid. We can pour it, we can do all kinds of things continuously.
But actually, we know, because we know the physics of it, that it consists of a bunch of discrete molecules bouncing around, and only in the aggregate is it behaving like a continuum. And so the possibility exists that that's true of space too. People haven't managed to make that work with existing frameworks in physics. But I've been interested in whether one can imagine that underneath space, and also underneath time, is something more structureless. And the question is, is it computational? So there are a couple of possibilities. It could be computational, somehow fundamentally equivalent to a Turing machine, or it could be fundamentally not. So how could it not be? It could not be, so a Turing machine essentially deals with integers, whole numbers, at some level. And you know, it can do things like it can add one to a number, it can do things like this. And it can also store whatever the heck it did. Yes, it has an infinite storage. But when one thinks about doing physics, or sort of idealized physics, or idealized mathematics, one can deal with real numbers, numbers with an infinite number of digits, numbers which are absolutely precise. And one can say, we can take this number and we can multiply it by itself. Are you comfortable with infinity? In this context? Are you comfortable in the context of computation? Do you think infinity plays a part? I think that the role of infinity is complicated. Infinity is useful in conceptualizing things. It's not actualizable. Almost by definition, it's not actualizable. But do you think infinity is part of the thing that might underlie the laws of physics? I think that no. I think there are many questions that you ask about, you might ask about physics, which inevitably involve infinity. Like when you say, you know, is faster than light travel possible? You could say, given the laws of physics, can you make something even arbitrarily large, even quote, infinitely large, that will make faster than light travel possible? Then you're thrown into dealing with infinity as a kind of theoretical question. But I mean, talking about sort of what's underneath space and time and how one can make a computational infrastructure, one possibility is that you can't make a computational infrastructure in a Turing machine sense, that you really have to be dealing with precise real numbers. You're dealing with partial differential equations, which have precise real numbers at arbitrarily closely separated points. You have a continuum for everything. Could be that that's what happens, that there's sort of a continuum for everything and precise real numbers for everything. And then the things I'm thinking about are wrong. And that's the risk you take if you're trying to sort of do things about nature, is you might just be wrong. For me personally, it's kind of a strange thing. I've spent a lot of my life building technology where you can do something that nobody cares about, but you can't be sort of wrong in that sense, in the sense you build your technology and it does what it does. But I think this question of what the sort of underlying computational infrastructure for the universe might be, it's sort of inevitable it's going to be fairly abstract, because if you're going to get all these things like there are three dimensions of space, there are electrons, there are muons, there are quarks, there are this, you don't get to, if the model for the universe is simple, you don't get to have sort of a line of code for each of those things. 
You don't get to have sort of the muon case, the tau lepton case and so on. Because they all have to be emergent somehow from something deeper. Right. So that means it's sort of inevitable, it's a little hard to talk about what the sort of underlying structureless structure actually is. Do you think human beings have the cognitive capacity to understand, if we're to discover it, to understand the kinds of simple structure from which these laws can emerge? Like, do you think that's a good question? Well, here's what I think. I think that, I mean, I'm right in the middle of this right now. Right. I'm telling you that I think this, yeah, I mean, this human has a hard time understanding, you know, a bunch of the things that are going on. But what happens in understanding is one builds waypoints. I mean, if you said understand modern 21st century mathematics, starting from, you know, counting back in, you know, whenever counting was invented 50,000 years ago, whatever it was, right, that would be really difficult. But what happens is we build waypoints that allow us to get to higher levels of understanding. And we see the same thing happening in language. You know, when we invent a word for something, it provides kind of a cognitive anchor, a kind of a waypoint that lets us, you know, like a podcast or something. You could be explaining, well, it's a thing which works this way, that way, the other way. But as soon as you have the word podcast and people kind of societally understand it, you start to be able to build on top of that. And so I think that's kind of the story of science actually, too. I mean, science is about building these kind of waypoints where we find this sort of cognitive mechanism for understanding something, then we can build on top of it. You know, we have the idea of, I don't know, differential equations; we can build on top of that. We have this idea, that idea. So my hope is that, if it is the case that we have to go all the way sort of from the sand to the computer and there's no waypoints in between, then we're toast. We won't be able to do that. Well, eventually we might. So if we as clever apes are good enough at building those abstractions, eventually from sand we'll get to the computer, right? And it just might be a longer journey. The question is, as you asked, whether our human brains will, quote, understand what's going on. And that's a different question, because for that, it requires steps from which we can construct a human understandable narrative. And that's something that I think I am somewhat hopeful will be possible. Although, you know, as of literally today, if you ask me, I'm confronted with things that I don't understand very well. So this is a small pattern in a computation trying to understand the rules under which the computation functions. And it's an interesting question under which kinds of computations such a creature can understand itself. My guess is that, so, we didn't talk much about computational irreducibility, but it's a consequence of this principle of computational equivalence, and it's sort of a core idea that one has to understand, I think. The question is, when you're doing a computation, you can figure out what happens in the computation just by running every step in the computation and seeing what happens. Or you can say, let me jump ahead and figure out, you know, have something smarter that figures out what's going to happen before it actually happens.
And a lot of traditional science has been about that act of computational reducibility. It's like, we've got these equations, and we can just solve them, and we can figure out what's going to happen. We don't have to trace all of those steps, we just jump ahead because we solve these equations. Okay, so one of the things that is a consequence of the principle of computational equivalence is you don't always get to do that. Many, many systems will be computationally irreducible, in the sense that the only way to find out what they do is just follow each step and see what happens. Why is that? Well, if you're saying, well, we, with our brains, we're a lot smarter, we don't have to mess around like the little cellular automaton going through and updating all those cells. We can just use the power of our brains to jump ahead. But if the principle of computational equivalence is right, that's not going to be correct, because it means that there's us doing our computation in our brains, there's a little cellular automaton doing its computation, and the principle of computational equivalence says these two computations are fundamentally equivalent. So that means we don't get to say we're a lot smarter than the cellular automaton and jump ahead, because we're just doing computation that's of the same sophistication as the cellular automaton itself. That's computational irreducibility. It's fascinating. And that's a really powerful idea. I think that's both depressing and humbling and so on, that we and the cellular automaton are the same. But the question we're talking about, the fundamental laws of physics, is kind of the reverse question. You're not predicting what's going to happen. You have to run the universe for that. But saying, can I understand what rules likely generated me? I understand. But the problem is, to know whether you're right, you have to have some computational reducibility, because we are embedded in the universe. If the only way to know whether we get the universe is just to run the universe, we don't get to do that, because it just ran for 13.8 billion years or whatever. And we can't rerun it, so to speak. So we have to hope that there are pockets of computational reducibility sufficient to be able to say, yes, I can recognize those are electrons there. And I think that it's a feature of computational irreducibility, it's sort of a mathematical feature, that there is always an infinite collection of pockets of reducibility. The question of whether they land in the right place and whether we can sort of build a theory based on them is unclear. But to this point about whether we as observers in the universe, built out of the same stuff as the universe, can figure out the universe, so to speak, that relies on these pockets of reducibility. Without the pockets of reducibility, it won't work, can't work. But I think this question about how observers operate, it's one of the features of science over the last 100 years particularly, has been that every time we get more realistic about observers, we learn a bit more about science. So for example, relativity was all about observers don't get to say what's simultaneous with what. They have to just wait for the light signal to arrive to decide what's simultaneous. Or for example, in thermodynamics, observers don't get to say the position of every single molecule in a gas. They can only see the kind of large scale features, and that's why the second law of thermodynamics, the law of entropy increase, and so on works.
If you could see every individual molecule, you wouldn't conclude something about thermodynamics. You would conclude, oh, these molecules are just all doing these particular things. You wouldn't be able to see this aggregate fact. So I strongly expect that, and in fact, in the theories that I have, that one has to be more realistic about the computation and other aspects of observers in order to actually make a correspondence with what we experience. In fact, my little team and I have a little theory right now about how quantum mechanics may work, which is a very wonderfully bizarre idea about how the sort of thread of human consciousness relates to what we observe in the universe. But there's several steps to explain what that's about. What do you make of the mess of the observer at the lower level of quantum mechanics? Sort of the textbook definition of quantum mechanics kind of says that there's two worlds. One is the world that actually is, and the other is the one that's observed. How do you make sense of that? Well, I think actually the ideas we've recently had might actually give a way into this. I don't know yet. I think it's a mess. The fact is, one of the things that's interesting, when people look at these models that I started talking about 30 years ago now, they say, oh no, that can't possibly be right. What about quantum mechanics? You say, okay, tell me what is the essence of quantum mechanics? What do you want me to be able to reproduce to know that I've got quantum mechanics, so to speak? Well, and that question comes up. It comes up very operationally actually, because we've been doing a bunch of stuff with quantum computing. And there are all these companies that say, we have a quantum computer. And we say, let's connect to your API and let's actually run it. And they're like, well, maybe you shouldn't do that yet. We're not quite ready yet. And one of the questions that I've been curious about is, if I have five minutes with a quantum computer, how can I tell if it's really a quantum computer or whether it's a simulator at the other end? And it turns out it's really hard. It's like a lot of these questions about what is intelligence, what's life. It's like, are you really a quantum computer? Yes, exactly. Is it just a simulation or is it really a quantum computer? Same issue all over again. So this whole issue about the sort of mathematical structure of quantum mechanics and the completely separate thing that is our experience, in which we think definite things happen, whereas quantum mechanics doesn't say definite things ever happen. Quantum mechanics is all about the amplitudes for different things to happen, but yet our thread of consciousness operates as if definite things are happening. To linger on the point, you've kind of mentioned the structure that could underlie everything, and this idea that it could perhaps have something like the structure of a graph. Can you elaborate why your intuition is that there's a graph structure of nodes and edges and what it might represent? Right. Okay. So the question is, what is, in a sense, the most structureless structure you can imagine, right? And in fact, what I've recently realized in the last year or so, I have a new most structureless structure. By the way, the question itself is a beautiful one and a powerful one in itself. So even without an answer, just the question is a really strong question. Right. But what's your new idea? Well, it has to do with hypergraphs.
Essentially, what is interesting about the sort of model I have now is it's a little bit like what happened with computation. Everything that I think of as, oh, well, maybe the model is this, I discover is equivalent. And that's quite encouraging, because it's like I could say, well, I'm going to look at trivalent graphs with three edges for each node and so on, or I could look at this special kind of graph, or I could look at this kind of algebraic structure. And it turns out that the things I'm now looking at, everything that I've imagined that is a plausible type of structureless structure, is equivalent to this. So what is it? Well, a typical way to think about it is, well, so you might have some collection of tuples, collections of, let's say, numbers. So you might have one, three, five, two, three, four, just collections of numbers, triples of numbers, let's say, quadruples of numbers, pairs of numbers, whatever. And you have all these sort of floating little tuples. They're not in any particular order. And that sort of floating collection of tuples, and I told you this was abstract, represents the whole universe. The only thing that relates them is when a symbol is the same, it's the same, so to speak. So if you have two tuples and they contain the same symbol, let's say at the same position of the tuple, at the first element of the tuple, then that represents a relation. So let me try and peel this back. Wow. Okay. I told you it's abstract, but this is the... So the relationship is formed by some aspect of sameness. Right. But so think about it in terms of a graph. So a graph, a bunch of nodes, let's say you number each node, then what is a graph? A graph is a set of pairs that say this node has an edge connecting it to this other node. And a graph is just a collection of those pairs that say this node connects to this other node. So this is a generalization of that, in which instead of having pairs, you have arbitrary n tuples. That's it. That's the whole story. And now the question is, okay, so that might represent the state of the universe. How does the universe evolve? What does the universe do? And so the answer is that what I'm looking at is transformation rules on these hypergraphs. In other words, you say, whenever you see a piece of this hypergraph that looks like this, turn it into a piece of hypergraph that looks like that. So on a graph, it might be, when you see the subgraph, when you see this thing with a bunch of edges hanging out in this particular way, then rewrite it as this other graph. Okay. And so that's the whole story. So now, as I say, this is quite abstract, and one of the questions is, where do you do those updates? So you've got this giant graph. What triggers the updating? Like, what's the ripple effect of it? And I suspect everything's discrete even in time. So, okay. So the question is, where do you do the updates? And the answer is, the rule is you do them wherever they apply. And the order in which the updates are done is not defined. That is, you can do them in any order; there may be many possible orderings for these updates. Now, the point is, imagine you're an observer in this universe, and you say, did something get updated? Well, you don't in any sense know until you yourself have been updated. Right. So in fact, all that you can be sensitive to is essentially the causal network of how an event over there affects an event that's in you.
That doesn't even feel like observation. That's like, that's something else. You're just part of the whole thing. Yes, you're part of it. But even so, the end result of that is that all you're sensitive to is this causal network of what event affects what other event. I'm not making a big statement about sort of the structure of the observer. I'm simply making the argument that the microscopic order of these rewrites is not something that any observer, any conceivable observer in this universe, can be affected by. Because the only thing the observer can be affected by is this causal network of how the events in the observer are affected by other events that happen in the universe. So the only thing you have to look at is the causal network. You don't really have to look at this microscopic rewriting that's happening. So these rewrites are happening wherever they... they happen wherever they feel like. The causal network... is there... you said that there's not really... so the idea would be, it's undefined what gets updated? The sequence of things is undefined. Yes. And that's what you mean by the causal network? No, the causal network is, given that an update has happened, that's an event. Then the question is, is that event causally related to... if that event didn't happen, then some future event couldn't happen. Gotcha. And so you build up this network of what affects what. Okay. And so when you build up that network, that's kind of the observable aspect of the universe in some sense. And so then you can ask questions about, you know, how robust is that observable network of what's happening in the universe. Okay. So here's where it starts getting kind of interesting. So for certain kinds of microscopic rewriting rules, the order of rewrites does not matter to the causal network. And so this is, okay, mathematical logic moment, this is equivalent to the Church Rosser property or the confluence property of rewrite rules. And it's the same reason that if you're simplifying an algebraic expression, for example, you can say, oh, let me expand those terms out, let me factor those pieces; it doesn't matter what order you do that in, you'll always get the same answer. And it's that same fundamental phenomenon that, for certain kinds of microscopic rewrite rules, causes the causal network to be independent of the microscopic order of rewritings. Why is that property important? Because it implies special relativity. I mean, the reason it's important is that special relativity says you can look at different reference frames: your notion of what's space and what's time can be different depending on whether you're traveling at a certain speed, depending on whether you're doing this, that, and the other. But nevertheless, the laws of physics are the same. That's what the principle of special relativity says: the laws of physics are the same independent of your reference frame. Well, it turns out this sort of change of the microscopic rewriting order is essentially equivalent to a change of reference frame, or at least there's a sub part of how that works that's equivalent to a change of reference frame. So, somewhat surprisingly, and sort of for the first time in forever, it's possible for an underlying microscopic theory to imply special relativity, to be able to derive it.
It's not something you put in as an assumption; it's something where this other property, causal invariance, which is also the property that implies that there's a single thread of time in the universe, is what leads to the possibility of an observer thinking that definite stuff happens. Otherwise, you've got all these possible rewriting orders, and who's to say which one occurred. But with this causal invariance property, there's a notion of a definite thread of time. It sounds like that kind of idea of time, even space, would be emergent from the system. Oh, yeah. No, I mean, it's not a fundamental part of the system. No, no, at a fundamental level, all you've got is a bunch of nodes connected by hyperedges or whatever. So there's no time, there's no space. That's right. But the thing is, it's just like imagining, imagine you're just dealing with a graph, and imagine you have something like a honeycomb graph, a bunch of hexagons. You know, that graph at a microscopic level is just a bunch of nodes connected to other nodes. But at a macroscopic level, you say that looks like a honeycomb lattice, it looks like a two dimensional manifold of some kind, it looks like a two dimensional thing. If you connect it differently, if you just connect all the nodes one to another in kind of a sort of linked list type structure, then you'd say, well, that looks like a one dimensional space. But at the microscopic level, all these are just networks with nodes; at the macroscopic level, they look like something that's like one of our sort of familiar kinds of space. And it's the same thing with these hypergraphs. Now, if you ask me, have I found one that gives me three dimensional space? The answer is not yet. So we don't know. This is one of these things where we're kind of betting against nature, so to speak, and I have no way to know. And there are many other properties of this kind of system that are very beautiful, actually, and very suggestive. And it will be very elegant if this turns out to be right, because it's very clean. I mean, you start with nothing, and everything gets built up: everything about space, everything about time, everything about matter. It's all just emergent from the properties of this extremely low level system. And that will be pretty cool if that's the way our universe works. Now, on the other hand, the thing that I find very confusing is, let's say we succeed. Let's say we can say this particular sort of hypergraph rewriting rule gives the universe; just run that hypergraph rewriting rule for enough times, and you'll get everything, you'll get this conversation we're having, you'll get everything. If we get to that point, and we look at what is this thing, what is this rule that we just have, that is giving us our whole universe, how do we think about that thing? Let's say it turns out the minimal version of this, and this is a kind of cool thing for a language designer like me, the minimal version of this model is actually a single line of Wolfram Language code. Which I wasn't sure was going to happen that way. But that's just the framework; we don't know the actual particular hypergraph rule, and the specification of the rules might be slightly longer.
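To make the tuple and rewrite picture described above slightly more concrete, here is a minimal sketch in Python rather than Wolfram Language. It is only an illustration of the general shape of such models, not Wolfram's actual rule or code: the state is a collection of tuples over node labels (pairs here, for brevity, though the description above allows arbitrary n tuples), and the rule used below, replacing an edge with two edges through a freshly created node, is an assumed toy rule chosen for simplicity.

```python
from itertools import count

# A minimal, illustrative hypergraph-rewriting toy (a sketch, not Wolfram's actual rule):
# the "universe" is a list of tuples ("hyperedges") over integer node labels.
# Assumed toy rule: wherever an edge (x, y) appears, replace it with (x, z) and (z, y),
# where z is a brand new node.  Repeated application grows the network.

fresh = count(1000)  # source of new node labels (assumes initial labels stay below 1000)

def apply_rule_once(state):
    """Apply the rewrite to the first matching edge and return the new state."""
    for i, (x, y) in enumerate(state):
        z = next(fresh)
        # drop the matched edge, add the two edges that replace it
        return state[:i] + state[i + 1:] + [(x, z), (z, y)]
    return state  # no match: nothing happens

state = [(0, 1), (1, 2), (2, 0)]  # a tiny initial "universe": three edges in a cycle
for step in range(5):
    state = apply_rule_once(state)
    print(step, state)
```

Which matching piece gets rewritten at each step is deliberately left open in the description above (this sketch just takes the first one); that freedom in the update order is exactly what the causal network and causal invariance discussion is about.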
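And, as a rough illustration of the point about a network "looking like" one or two dimensional space at large scales, one can count how many nodes lie within graph distance r of a starting node: for a chain the count grows roughly like r, for a square grid roughly like r squared. The particular graphs and the growth-rate reading below are my illustrative sketch, not a claim about the actual candidate models.

```python
from collections import deque

def ball_size(neighbors, start, r):
    """Count the nodes within graph distance r of start, by breadth-first search."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == r:
            continue
        for nb in neighbors(node):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return len(seen)

def chain_neighbors(n):
    # a linked-list style graph: node n connects to n - 1 and n + 1 (looks one dimensional)
    return [n - 1, n + 1]

def grid_neighbors(p):
    # a square-grid graph: node (x, y) connects to its four neighbours (looks two dimensional)
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

for r in (5, 10, 20, 40):
    print(r, ball_size(chain_neighbors, 0, r), ball_size(grid_neighbors, (0, 0), r))
# chain ball sizes grow roughly like 2r; grid ball sizes grow roughly like 2r^2,
# which is one way an effective dimension could be read off a large enough network.
```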
How does that help you, except marveling in the beauty and the elegance of the simplicity that creates the universe? Does that help us predict anything in the universe? Not really, because of the irreducibility. That's correct. That's correct. But so the thing that is really strange to me, and I haven't wrapped my brain around this yet, is, you know, one keeps on realizing that we're not special in the sense that, you know, we don't live at the center of the universe, we don't blah, blah, blah. And yet if we produce a rule for the universe and it's quite simple, and we can write it down in a couple of lines or something, that feels very special. How did we come to get a simple universe when many of the available universes, so to speak, are incredibly complicated? It might be, you know, a quintillion characters long. Why did we get one of the ones that's simple? And so I haven't wrapped my brain around that issue yet, if indeed we are in such a simple universe, if the universe is such a simple rule. Is it possible that there is something outside of this, that we are in a kind of, what people call, a simulation, right? That we're just part of a computation that's being explored by a graduate student in an alternate universe. Well, you know, the problem is we don't get to say much about what's outside our universe, because by definition, our universe is what we exist within. Now, can we make a sort of almost theological conclusion from being able to know how our particular universe works? Interesting question. I don't think... if you ask the question, could we, and it relates again to this question about extraterrestrial intelligence, you know, we've got the rule for the universe, was it built on purpose? Hard to say. That's the same thing as saying we see a signal that we're receiving from some random star somewhere, and it's a series of pulses, you know, a periodic series of pulses, let's say. Was that done on purpose? Can we conclude something about the origin of that series of pulses? Just because it's elegant does not necessarily mean that somebody created it or that we can even comprehend it. Yeah. I think it's the ultimate version of the sort of techno signature identification question. The ultimate version of that is: was our universe a piece of technology, so to speak, and how on earth would we know? But I mean, in the kind of crazy science fiction thing you could imagine, you could say, oh, there's going to be a signature there, it's going to be made by so and so. But there's no way we could understand that, so to speak, and it's not clear what that would mean. Because, you know, if we find a rule for the universe, we're simply saying that rule represents what our universe does. We're not saying that that rule is something running on a big computer and making our universe. It's just saying that represents what our universe does, in the same sense that, you know, the laws of classical mechanics, differential equations, whatever they are, represent what mechanical systems do. It's not that the mechanical systems are somehow running solutions to those differential equations; those differential equations are just representing the behavior of those systems. So, to linger on the fascinating, perhaps slightly sci fi question: what's the gap, in your sense, between understanding the fundamental rules that create a universe and engineering a system, actually creating a simulation ourselves?
So you've talked about sort of, you've talked about, you know, nano engineering kind of ideas that are kind of exciting, actually creating some ideas of computation in the physical space. How hard is it as an engineering problem to create the universe once you know the rules that create it? Well, that's an interesting question. I think the substrate on which the universe is operating is not a substrate that we have access to. I mean, the only substrate we have is that same substrate that the universe is operating in. So if the universe is a bunch of hypergraphs being rewritten, then we get to attach ourselves to those same hypergraphs being rewritten. We don't get to, and if you ask the question, you know, is the code clean? You know, can we write nice, elegant code with efficient algorithms and so on? Well, that's an interesting question. That's this question of how much computational reducibility there is in the system. But I've seen some beautiful cellular automata that basically create copies of itself within itself, right? So that's the question whether it's possible to create, like whether you need to understand the substrate or whether you can. Yeah, well, right. I mean, so one of the things that is sort of one of my slightly sci fi thoughts about the future, so to speak, is, you know, right now, if you poll typical people, you say, do you think it's important to find the fundamental theory of physics? You get, because I've done this poll informally, at least, it's curious, actually, you get a decent fraction of people saying, oh, yeah, that would be pretty interesting. I think that's becoming, surprisingly enough, more, I mean, a lot of people are interested in physics in a way that like, without understanding it, just kind of watching scientists, a very small number of them struggle to understand the nature of our reality. Right. I mean, I think that's somewhat true. And in fact, in this project that I'm launching into to try and find fundamental theory of physics, I'm going to do it as a very public project. I mean, it's going to be live streamed and all this kind of stuff. And I don't know what will happen. It'll be kind of fun. I mean, I think that it's the interface to the world of this project. I mean, I figure one feature of this project is, you know, unlike technology projects that basically are what they are, this is a project that might simply fail, because it might be the case that it generates all kinds of elegant mathematics that has absolutely nothing to do with the physical universe that we happen to live in. Okay, so we're talking about kind of the quest to find the fundamental theory of physics. First point is, you know, it's turned out it's kind of hard to find the fundamental theory of physics. People weren't sure that that would be the case. Back in the early days of applying mathematics to science, 1600s and so on, people were like, oh, in 100 years we'll know everything there is to know about how the universe works. Turned out to be harder than that. And people got kind of humble at some level, because every time we got to sort of a greater level of smallness and studying the universe, it seemed like the math got more complicated and everything got harder. When I was a kid, basically, I started doing particle physics. And when I was doing particle physics, I always thought finding the fundamental, fundamental theory of physics, that's a kooky business, we'll never be able to do that. 
But we can operate within these frameworks that we built for doing quantum field theory and general relativity and things like this. And it's all good. And we can figure out a lot of stuff. Did you even at that time have a sense that there's something behind that? Sure, I just didn't expect that. I thought in some rather un, it's actually kind of crazy and thinking back on it, because it's kind of like there was this long period in civilization where people thought the ancients had it all figured out, and we'll never figure out anything new. And to some extent, that's the way I felt about physics when I was in the middle of doing it, so to speak, was, you know, we've got quantum field theory, it's the foundation of what we're doing. And there's, you know, yes, there's probably something underneath this, but we'll sort of never figure it out. But then I started studying simple programs in the computational universe, things like cellular automata and so on. And I discovered that they do all kinds of things that were completely at odds with the intuition that I had had. And so after that, after you see this tiny little program that does all this amazingly complicated stuff, then you start feeling a bit more ambitious about physics and saying, maybe we could do this for physics too. And so that got me started years ago now in this kind of idea of could we actually find what's underneath all of these frameworks, like quantum field theory and general relativity and so on. And people perhaps don't realize as clearly as they might that, you know, the frameworks we're using for physics, which is basically these two things, quantum field theory, sort of the theory of small stuff and general relativity, theory of gravitation and large stuff. Those are the two basic theories. And they're 100 years old. I mean, general relativity was 1915, quantum field theory, well, 1920s. So basically 100 years old. And it's been a good run. There's a lot of stuff been figured out. But what's interesting is the foundations haven't changed in all that period of time, even though the foundations had changed several times before that in the 200 years earlier than that. And I think the kinds of things that I'm thinking about, which are sort of really informed by thinking about computation and the computational universe, it's a different foundation. It's a different set of foundations. And might be wrong. But it is at least, you know, we have a shot. And I think it's, you know, to me, it's, you know, my personal calculation for myself is, is, you know, if it turns out that the finding the fundamental theory of physics, it's kind of low hanging fruit, so to speak, it'd be a shame if we just didn't think to do it. You know, if people just said, Oh, you'll never figure that stuff out. Let's, you know, and it takes another 200 years before anybody gets around to doing it. You know, I think it's, I don't know how low hanging this fruit actually is. It may be, you know, it may be that it's kind of the wrong century to do this project. I mean, I think the cautionary tale for me, you know, I think about things that I've tried to do in technology, where people thought about doing them a lot earlier. And my favorite example is probably Leibniz, who, who thought about making essentially encapsulating the world's knowledge in a computational form in the late 1600s, and did a lot of things towards that. And basically, you know, we finally managed to do this. But he was 300 years too early. 
And that's the that's kind of the in terms of life planning. It's kind of like, avoid things that can't be done in your in your century, so to speak. Yeah, timing. Timing is everything. So you think if we kind of figure out the underlying rules that can create from which quantum field theory and general relativity can emerge, do you think they'll help us unify it at that level of abstraction? Oh, we'll know it completely. We'll know how that all fits together. Yes, without a question. And I mean, it's already even the things I've already done. There are very, you know, it's very, very elegant, actually, how things seem to be fitting together. Now, you know, is it right? I don't know yet. It's awfully suggestive. If it isn't right, it's then the designer of the universe should feel embarrassed, so to speak, because it's a really good way to do it. And your intuition in terms of design universe, does God play dice? Is there is there randomness in this thing? Or is it deterministic? So the kind of That's a little bit of a complicated question. Because when you're dealing with these things that involve these rewrites that have, okay, even randomness is an emergent phenomenon, perhaps. Yes, yes. I mean, it's a yeah, well, randomness, in many of these systems, pseudo randomness and randomness are hard to distinguish. In this particular case, the current idea that we have about some measurement in quantum mechanics is something very bizarre and very abstract. And I don't think I can yet explain it without kind of yakking about very technical things. Eventually, I will be able to. But if that's right, it's kind of a it's a weird thing, because it slices between determinism and randomness in a weird way that hasn't been sliced before, so to speak. So like many of these questions that come up in science, where it's like, is it this or is it that? Turns out the real answer is it's neither of those things. It's something kind of different and sort of orthogonal to those categories. And so that's the current, you know, this week's idea about how that might work. But, you know, we'll see how that unfolds. I mean, there's this question about a field like physics and sort of the quest for fundamental theory and so on. And there's both the science of what happens and there's the sort of the social aspect of what happens. Because, you know, in a field that is basically as old as physics, we're at, I don't know what it is, fourth generation, I don't know, fifth generation, I don't know what generation it is of physicists. And like, I was one of these, so to speak. And for me, the foundations were like the pyramid, so to speak, you know, it was that way. And it was always that way. It is difficult in an old field to go back to the foundations and think about rewriting them. It's a lot easier in young fields where you're still dealing with the first generation of people who invented the field. And it tends to be the case, you know, that the nature of what happens in science tends to be, you know, you'll get, typically the pattern is some methodological advance occurs. And then there's a period of five years, 10 years, maybe a little bit longer than that, where there's lots of things that are now made possible by that methodological advance, whether it's, you know, I don't know, telescopes, or whether that's some mathematical method or something. Something happens, a tool gets built, and then you can do a bunch of stuff. And there's a bunch of low hanging fruit to be picked. 
And that takes a certain amount of time. After all that low hanging fruit is picked, then it's a hard slog for the next however many decades or century or more to get to the next sort of level at which one can do something. And it tends to be the case that fields in that kind of mode, I wouldn't say cruise mode, because it's really hard work, but it's very hard work for very incremental progress. And in your career and some of the things you've taken on, it feels like you haven't been afraid of the hard slog. Yeah, that's true. So it's quite interesting, especially on the engineering side. On a small tangent, when you were at Caltech, did you get to interact with Richard Feynman at all? Do you have any memories of Richard? We worked together quite a bit, actually. In fact, both when I was at Caltech and after I left Caltech, we were both consultants at this company called Thinking Machines Corporation, which was just down the street from here, actually. It was ultimately an ill fated company. But I used to say this company is not going to work with the strategy they have. And Dick Feynman always used to say, what do we know about running companies? Just let them run their company. But anyway, he was not into that kind of thing. And he always thought that my interest in doing things like running companies was a distraction, so to speak. And for me, it's a mechanism to have a more effective machine for actually figuring things out and getting things to happen. Did he think of it that way? Because essentially what you did with the company, I don't know if you were thinking of it that way, but you're creating tools to empower the exploration of the universe. Did he understand that point, the point of tools? I think not as well as he might have done. I mean, I think that... it was actually my first company, which was also involved with more mathematical computation kinds of things. He was quite... he had lots of advice about the technical side of what we should do and so on. Do you have examples, memories, or thoughts of that? Oh, yeah, yeah. He had all kinds of... Look, in the business of doing sort of... one of the hard things in math is doing integrals and so on. And so he had his own elaborate ways to do integrals and so on, his own ways of thinking about sort of getting intuition about how math works. And so his sort of meta idea was, take those intuitional methods and make a computer follow those intuitional methods. Now, it turns out, for the most part, like when we do integrals and things, what we do is we build this kind of bizarre industrial machine that turns every integral into products of Meijer G functions and generates this very elaborate thing. And actually the big problem is turning the results into something a human will understand. It's not, quote, doing the integral. And actually, Feynman did understand that to some extent. And I'm embarrassed to say he once gave me this big pile of, you know, calculational methods for particle physics that he worked out in the 50s. And he said, yeah, it's more use to you than to me, type thing. And I always intended to look at it and give it back, and it's still in my files now. But that's what happens with the finiteness of human lives. Maybe if he'd lived another 20 years, I would have remembered to give it back.
But I think that was his attempt to systematize the ways that one does integrals that show up in particle physics and so on. Turns out the way we've actually done it is very different from that way. What do you make of that difference? So Feynman was actually quite remarkable at creating sort of intuitive frameworks for understanding difficult concepts. I'm smiling because, you know, the funny thing about him was that the thing he was really, really, really good at is calculating stuff. But he thought that was easy, because he was really good at it. And so he would do these things where he would do some complicated calculation in quantum field theory, for example, come out with a result, and wouldn't tell anybody about the complicated calculation, because he thought that was easy. He thought the really impressive thing was to have this simple intuition about how everything works. So he invented that at the end. And, you know, because he'd done this calculation and knew how it worked, it was a lot easier. It's a lot easier to have good intuition when you know what the answer is. And then he would just not tell anybody about these calculations. He wasn't meaning that maliciously, so to speak; it's just he thought that was easy. And that, you know, led to areas where people were just completely mystified, and they kind of followed his intuition, but nobody could tell why it worked. Because actually, the reason it worked was because he'd done all these calculations, and he knew that it would work. And, you know, he and I worked a bit on quantum computers, actually, back in 1980, '81, before anybody had heard of those things. And, you know, the typical mode... I mean, he used to say, and I now think about this, because I'm about the age that he was when I worked with him, and, you know, I see the people who are one third my age, so to speak, and he was always complaining that I was one third his age, and therefore various things. But, you know, he would do some calculation by hand, you know, blackboard and things, and come up with some answer. I'd say, I don't understand this; you know, I'd do something with a computer. And he'd say, you know, I don't understand this. So there'd be some big argument about what was going on. And I think, actually, many of the things that we sort of realized about quantum computing, issues that have to do particularly with the measurement process, are kind of still issues today. And I kind of find it interesting. It's a funny thing in science, and it happens in technology too, that there's a remarkable sort of repetition of history that ends up occurring. Eventually, things really get nailed down, but it often takes a while, and often things come back decades later. Well, for example, I could tell a story that actually happened right down the street from here. When we were both at Thinking Machines, I had been working on this particular cellular automaton, called rule 30, that has this feature that from very simple initial conditions it makes really complicated behavior. Okay. So, and actually, of all silly physical things, using this big parallel computer called the Connection Machine that that company was making, I generated this giant printout of rule 30 on actually the same kind of printer that people use to make layouts for microprocessors.
So one of these big, you know, large format printers with high resolution and so on. So, okay, so we print this out, lots of very tiny cells. And there was sort of a question of how to measure some features of that pattern. And so it was very much a physical thing, you know, on the floor with meter rules, trying to measure different things. So Feynman kind of takes me aside; we'd been doing that for a little while, and he takes me aside, and he says, I just want to know this one thing. He says, I want to know, how did you know that this rule 30 thing would produce all this really complicated behavior, so complicated that we're, you know, going around with this big printout, and so on. And I said, well, I didn't know, I just enumerated all the possible rules and then observed that that's what happened. He said, oh, I feel a lot better. You know, I thought you had some intuition that he didn't have, that would let one do that. I said, no, no, no intuition, just experimental science. Oh, that's such a beautiful sort of dichotomy there. That's exactly what you showed, that you really can't have an intuition about something irreducible. I mean, you have to run it. Yes, that's right. That's so hard for us humans, and especially brilliant physicists like Feynman, to say that you can't have a compressed, clean intuition about how the whole thing works. Yes, yes. No, he was, I mean, I think he was sort of on the edge of understanding that point about computation. And I think he found that, I think he always found computation interesting, and I think that was sort of what he was a little bit poking at. I mean, that intuition, you know, the difficulty of discovering things... like, even you say, oh, you know, you just enumerate all the cases and just find one that does something interesting, right? Sounds very easy. Turns out, like, I missed it when I first saw it, because I had kind of an intuition that said it shouldn't be there. So I had kind of arguments: oh, I'm going to ignore that case, because whatever. And how did you have an open enough mind? Because you're essentially coming from the same kind of physics type of thinking. How did you find yourself having a sufficiently open mind to be open to watching rules and them revealing complexity? Yeah, I think that's an interesting question. I've wondered about that myself, because it's kind of like, you know, you live through these things, and then you say, what was the historical story? And sometimes the historical story that you realize after the fact was not what you lived through, so to speak. And so, you know, what I realized is, I think what happened is, you know, I did physics, kind of like reductionistic physics, where you're thrown in the universe, and you're told, go figure out what's going on inside it. And then I started building computer tools. And I started building my first computer language, for example. And a computer language is sort of like physics in the sense that you have to take all those computations people want to do, and kind of drill down and find the primitives that they can all be made of. But then you do something that's really different, because you're just saying, okay, these are the primitives; now, you know, hopefully they'll be useful to people, let's build up from there. So you're essentially building an artificial universe, in a sense, where you make this language, you've got these primitives, you're just building whatever you feel like building.
And so it was sort of interesting for me, going from doing science, where you're just thrown in the universe as the universe is, to then being told, you know, you can make up any universe you want. And so I think that experience of making a computer language, which is essentially building your own universe, so to speak, that's what gave me a somewhat different attitude towards what might be possible. It's like, let's just explore what can be done in these artificial universes, rather than thinking the natural science way of, let's be constrained by how the universe actually is. Yeah, by being able to program, essentially, as opposed to being limited to just your mind and a pen, you've basically built another brain that you can use to explore the universe. A computer program, you know, is kind of a brain, right? Well, it's a brain, or a telescope, or, you know, it's a tool; it lets you see stuff. But there's something fundamentally different between a computer and a telescope. I mean, I don't mean to romanticize the notion, but it's more general; the computer is more general. And, I think, I mean, this point about, you know, people say, oh, such and such a thing was almost discovered at such and such a time. The distance between that and building the paradigm that allows you to actually understand stuff, or allows one to be open to seeing what's going on, that's really hard. And, you know, I think I've been fortunate in my life that I've spent a lot of my time building computational language. And that's an activity that, in a sense, works by sort of having to kind of create another level of abstraction and kind of be open to different kinds of structures. But, you know, I'm fully aware of, I suppose, the fact, and I have seen it a bunch of times, of how easy it is to miss the obvious, so to speak. That at least is factored into my attempt to not miss the obvious, although it may not succeed. What do you think is the role of ego in the history of math and science? And more specifically, you know, a book titled something like A New Kind of Science. You've accomplished a huge amount. In fact, somebody said that Newton didn't have an ego, and I looked into it and he had a huge ego. Yeah, but from an outsider's perspective, some have said that you have a bit of an ego as well. Do you see it that way? Does ego get in the way? Is it empowering? Is it both? It's complicated, and necessary. I mean, you know, look, I've spent more than half my life as the CEO of a tech company. Right. Okay. And, you know, that means that one's ego is not a distant thing. It's a thing that one encounters every day, so to speak, because it's all tied up with leadership and with how one, you know, develops an organization and all these kinds of things. So, you know, it may be that if I'd been an academic, for example, I could have sort of, you know, checked the ego, put it on a shelf somewhere and ignored its characteristics, but you're reminded of it quite often in the context of running a company. Sure. I mean, that's what it's about. It's about leadership, and, you know, leadership is intimately tied to ego. Now, what does it mean? I mean, for me, I've been fortunate that I think I have reasonable intellectual confidence, so to speak.
That is, you know, I, I'm one of these people who at this point, if somebody tells me something and I just don't understand it, my conclusion isn't that means I'm dumb. That my conclusion is there's something wrong with what I'm being told. And that was actually Dick Feynman used to have that, that that feature too, he never really believed in. He actually believed in experts much less than I believe in experts. So. Wow. So that's a fun, that's a, that's a fundamentally powerful property of ego and saying, like, not that I am wrong, but that the, the world is wrong. And, and tell me, like, when confronted with the fact that doesn't fit the thing that you've really thought through sort of both the negative and the positive of ego, do you see the negative of that get in the way sort of being sure of the mistakes I've made that are the results of, I'm pretty sure I'm right. And turns out I'm not. I mean, that's, that's the, you know, but, but the thing is that the, the, the idea that one tries to do things that, so for example, you know, one question is if people have tried hard to do something and then one thinks, maybe I should try doing this myself. Uh, if one does not have a certain degree of intellectual confidence, one just says, well, people have been trying to do this for a hundred years. How am I going to be able to do this? Yeah. And, you know, I was fortunate in the sense that I happened to start having some degree of success in science and things when I was really young. And so that developed a certain amount of sort of intellectual confidence. I don't think I otherwise would have had. Um, and you know, in a sense, I mean, I was fortunate that I was working in a field, particle physics during its sort of golden age of rapid progress. And that, that's kind of gives one a false sense of, uh, achievement because it's kind of, kind of easy to discover stuff that's going to survive. If you happen to be, you know, picking the low hanging fruit of a rapidly expanding field. I mean, the reason I totally, I totally immediately understood the ego behind a new kind of science to me, let me sort of just try to express my feelings on the whole thing, is that if you don't allow that kind of ego, then you would never write that book. That you would say, well, people must have done this. There's not, you would not dig. You would not keep digging. And I think that was, I think you have to take that ego and, and ride it and see where it takes you. And that's how you create exceptional work. But I think the other point about that book was it was a non trivial question, how to take a bunch of ideas that are, I think, reasonably big ideas. They might, they might, you know, their importance is determined by what happens historically. One can't tell how important they are. One can tell sort of the scope of them. And the scope is fairly big and they're very different from things that have come before. And the question is, how do you explain that stuff to people? And so I had had the experience of sort of saying, well, there are these things, there's a cellular automaton. It does this, it does that. And people are like, oh, it must be just like this. It must be just like that. So no, it isn't. It's something different. Right. And so I could have done sort of, I'm really glad you did what you did, but you could have done sort of academically, just published, keep publishing small papers here and there. And then you would just keep getting this kind of resistance, right? 
You would get, like, yeah... as opposed to just dropping a thing that says, here it is, here's the full thing. No, I mean, that was my calculation, is that basically, you know, you could introduce little pieces. It's like, you know, one possibility is, it's the secret weapon, so to speak. It's this, you know, I keep on discovering these things in all these different areas. Where'd they come from? Nobody knows. But I decided that, you know, in the interests of... one only has one life to lead, and, you know, writing that book took me a decade anyway; there's not a lot of wiggle room, so to speak, one can't be wrong by a factor of three, so to speak, in how long it's going to take. I thought the best thing to do, the thing that most respects the intellectual content, so to speak, is you just put it out with as much force as you can. And, you know, it's an interesting thing. You talk about ego, and, you know, for example, I run a company which has my name on it, right? I thought about starting a club for people whose companies have their names on them. And it's a funny group, because we're not a bunch of egomaniacs. That's not what it's about, so to speak. It's about basically sort of taking responsibility for what one's doing. And, you know, in a sense, any of these things where you're sort of putting yourself on the line, it's kind of a funny dynamic, because, in a sense, my company is sort of something that happens to have my name on it, but it's kind of bigger than me, and I'm kind of just its mascot at some level. I mean, I also happen to be a pretty, you know, strong leader of it. But it's basically showing a deep, inextricable sort of investment. Your name... like, Steve Jobs's name wasn't on Apple, but he was Apple. Elon Musk's name is not on Tesla, but he is Tesla. So it's like, meaning emotionally, if the company succeeds or fails, he would emotionally suffer through that. And so that's a beautiful thing, recognizing that fact. And also, Wolfram is a pretty good branding name, so that works out. Yeah, right. Exactly. I think Steve had a bad deal there. Yeah. So you made up for it with the last name. Okay. So in 2002, you published A New Kind of Science, to which, sort of on a personal level, I can credit my love for cellular automata and computation in general. I think a lot of others can as well. Can you briefly describe the vision, the hope, the main idea presented in this 1200 page book? Sure, although it took 1200 pages to say in the book. So, the real idea... a good way to get into it is to look at sort of the arc of history and to look at what's happened in kind of the development of science. I mean, there was this sort of big idea in science about 300 years ago, that was, let's use mathematical equations to try and describe things in the world. Let's use sort of the formal idea of mathematical equations to describe what might be happening in the world, rather than, for example, just using sort of logical argumentation and so on. Let's have a formal theory about that. And so there'd been this 300 year run of using mathematical equations to describe the natural world, which had worked pretty well. But I got interested in how one could generalize that notion. There is a formal theory, there are definite rules, but what structure could those rules have?
And so what I got interested in was let's generalize beyond the sort of purely mathematical rules. And we now have this sort of notion of programming and computing and so on. Let's use the kinds of rules that can be embodied in programs as a sort of generalization of the ones that can exist in mathematics as a way to describe the world. And so my kind of favorite version of these kinds of simple rules are these things called cellular automata. And so typical case... So wait, what are cellular automata? Fair enough. So typical case of a cellular automaton, it's an array of cells. It's just a line of discrete cells. Each cell is either black or white. And in a series of steps that you can represent as lines going down a page, you're updating the color of each cell according to a rule that depends on the color of the cell above it and to its left and right. So it's really simple. So a thing might be if the cell and its right neighbor are not the same or the cell on the left is black or something, then make it black on the next step. And if not, make it white. Typical rule. That rule, I'm not sure I said it exactly right, but a rule very much like what I just said, has the feature that if you started off from just one black cell at the top, it makes this extremely complicated pattern. So some rules you get a very simple pattern. Some rules, the rule is simple. You start them off from a sort of simple seed. You just get this very simple pattern. But other rules, and this was the big surprise when I started actually just doing the simple computer experiments to find out what happens, is that they produce very complicated patterns of behavior. So for example, this rule 30 rule has the feature you start off from just one black cell at the top, makes this very random pattern. If you look like at the center column of cells, you get a series of values. It goes black, white, black, black, whatever it is. That sequence seems for all practical purposes random. So it's kind of like in math, you compute the digits of pi, 3.1415926, whatever. Those digits once computed, I mean, the scheme for computing pi, it's the ratio of the circumference to the diameter of a circle, very well defined. But yet, once you've generated those digits, they seem for all practical purposes completely random. And so it is with rule 30, that even though the rule is very simple, much simpler, much more sort of computationally obvious than the rule for generating digits of pi, even with a rule that simple, you're still generating immensely complicated behavior. Yeah. So if we could just pause on that, I think you probably have said it and looked at it so long, you forgot the magic of it, or perhaps you don't, you still feel the magic. But to me, if you've never seen sort of, I would say, what is it? A one dimensional, essentially, cellular automata, right? And you were to guess what you would see if you have some sort of cells that only respond to its neighbors. Right. If you were to guess what kind of things you would see, like my initial guess, like even when I first like opened your book, A New Kind of Science, right? My initial guess is you would see, I mean, it would be a very simple stuff. Right. And I think it's a magical experience to realize the kind of complexity, you mentioned rule 30, still your favorite cellular automaton? Still my favorite rule. Yes. You get complexity, immense complexity, you get arbitrary complexity. Yes. 
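(For readers who want to see the rule written out, here is a minimal sketch in Wolfram Language of the kind of elementary rule being described. The explicit list of cases below is my own reconstruction of rule 30; the built-in CellularAutomaton function encodes the same rule by its number.)

    (* Rule 30 written out as its eight explicit cases (1 = black, 0 = white). *)
    rule30 = {{1, 1, 1} -> 0, {1, 1, 0} -> 0, {1, 0, 1} -> 0, {1, 0, 0} -> 1,
              {0, 1, 1} -> 1, {0, 1, 0} -> 1, {0, 0, 1} -> 1, {0, 0, 0} -> 0};
    (* Equivalently: new cell = BitXor[left, BitOr[center, right]]. *)

    (* One step: update every cell from its three-cell neighborhood above it. *)
    step[row_] := Partition[ArrayPad[row, 1], 3, 1] /. rule30

    (* A single black cell in the middle of a row of white cells, run 50 steps. *)
    init = ReplacePart[ConstantArray[0, 101], 51 -> 1];
    ArrayPlot[NestList[step, init, 50]]

    (* The same rule is built in, specified by its number: *)
    ArrayPlot[CellularAutomaton[30, {{1}, 0}, 50]]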
And when you say randomness down the middle column, that's just one cool way to say that there's incredible complexity. And that's just, I mean, that's a magical idea. However you start to interpret it, all the irreducibility discussions, all that. But it just, I think, has profound philosophical notions around it, too. It's not just, I mean, it's transformational about how you see the world. I think for me it was transformational. I don't know, we can have all kinds of discussions about computation and so on, but just, you know, I sometimes think if I were on a desert island and, I don't know, maybe it was some psychedelics or something, but if I had to take one book, A New Kind of Science would be it, because you could just enjoy that notion. For some reason, it's a deeply profound notion, at least to me. I find it that way. Yeah. I mean, look, it was a very intuition breaking thing to discover. It's kind of like, you know, you point the computational telescope out the window, and suddenly you see, I don't know, you know, in the past it's kind of like the moons of Jupiter or something, but suddenly you see something that's very unexpected, and rule 30 was very unexpected for me. And the big challenge at a personal level was to not ignore it. I mean, people, you know, in other words, you might say, you know, it's a bug. What would you say? Yeah. Well, yeah. I mean, what are we looking at, by the way? Oh, well, I was just generating here, I'll actually generate a rule 30 pattern. So that's the rule for rule 30. And it says, for example, it says here, if you have a black cell in the middle and a black cell to the left and a white cell to the right, then the cell on the next step will be white. And so here's the actual pattern that you get starting off from a single black cell at the top there. And that's the initial condition, the initial thing. You just start off from that, and then you're going down the page, and at every step you're just applying this rule to find out the new value that you get. And so you might think, a rule that simple, there's got to be some trace of that simplicity here. Okay, we'll run it, let's say, for 400 steps. So what it does, it's kind of aliasing a bit on the screen there, but you can see there's a little bit of regularity over on the left, but there's a lot of stuff here that just looks very complicated, very random. And that was a big shock to my intuition, at least, that that's possible. The mind immediately starts. Is there a pattern? There must be a repetitive pattern. There must be. So indeed, that's what I thought at first. And I thought, well, this is kind of interesting, but, you know, if we run it long enough, we'll see, you know, it'll resolve into something simple. And, you know, I did all kinds of analysis using mathematics, statistics, cryptography, whatever, to try and crack it. And I never succeeded. And after I hadn't succeeded for a while, I started thinking maybe there's a real phenomenon here that is the reason I'm not succeeding. Maybe. I mean, the thing that for me was sort of a motivating factor was looking at the natural world and seeing all this complexity that exists in the natural world. The question is, where does it come from?
You know, what secret does nature have that lets it make all this complexity that we humans, when we engineer things typically are not making, we're typically making things that at least look quite simple to us. And so the shock here was even from something very simple, you're making something that complex. Uh, maybe this is getting at sort of the secret that nature has that allows it to make really complex things, even though its underlying rules may not be that complex. How did it make you feel if we, if we look at the Newton apple, was there, was it, was there a, you know, you took a walk and, and something it profoundly hit you or was this a gradual thing, a lobster being boiled? The truth of every sort of science discovery is it's not that gradual. I mean, I've spent, I happen to be interested in scientific biography kinds of things. And so I've tried to track down, you know, how did people come to figure out this or that thing? And there's always a long kind of, uh, sort of preparatory, um, you know, there's a, there's a need to be prepared in a mindset in which it's possible to see something. I mean, in the case of rule 30, I was around June 1st, 1984 was, um, uh, kind of a silly story in some ways. I finally had a high resolution laser printer. So I was able, so I thought I'm going to generate a bunch of pictures of the cellular automata and I generate this one and I put it, I was on some plane flight to Europe and they have this with me. And it's like, you know, I really should try to understand this. And this is really, you know, this is, I really don't understand what's going on. And, uh, that was kind of the, um, you know, slowly trying to, trying to see what was happening. It was not, uh, it was depressingly unsubstantial, so to speak, in the sense that, um, a lot of these ideas like principle of computational equivalence, for example, you know, I thought, well, that's a possible thing. I didn't know if it's correct, still don't know for sure that it's correct. Um, but it's sort of a gradual thing that these things gradually kind of become seem more important than one thought. I mean, I think the whole idea of studying the computational universe of simple programs, it took me probably a decade, decade and a half to kind of internalize that that was really an important idea. Um, and I think, you know, if it turns out we find the whole universe lurking out there in the computational universe, that's a good, uh, you know, it's a good brownie point or something for the, uh, for the whole idea. But I think that the, um, the thing that's strange in this whole question about, you know, finding this different raw material for making models of things, um, what's been interesting sort of in the, in sort of arc of history is, you know, for 300 years, it's kind of like the, the mathematical equations approach. It was the winner. It was the thing, you know, you want to have a really good model for something that's what you use. The thing that's been remarkable is just in the last decade or so, I think one can see a transition to using not mathematical equations, but programs as sort of the raw material for making models of stuff. And that's pretty neat. And it's kind of, you know, as somebody who's kind of lived inside this paradigm shift, so to speak, it is bizarre. I mean, no doubt in sort of the history of science that will be seen as an instantaneous paradigm shift, but it sure isn't instantaneous when it's played out in one's actual life. So to speak, it seems glacial. 
And it's the kind of thing where it's sort of interesting, because in the dynamics of the adoption of ideas like that into different fields, the younger the field, the faster the adoption, typically, because people are not kind of locked in with the fifth generation of people who've studied this field, and it is the way it is and it can never be any different. And I think watching that process has been interesting. I mean, I think I'm fortunate that I do stuff mainly because I like doing it, and that makes me kind of thick skinned about the world's response to what I do. But, you know, any time you write a book called something like A New Kind of Science, the pitchforks will come out for the old kind of science, and it was interesting dynamics. I have to say that I was fully aware of the fact that when you see sort of incipient paradigm shifts in science, the vigor of the negative response upon early introduction is a fantastic positive indicator of good long term results. So in other words, if people just don't care, that's not such a good sign. If they're like, oh, this is great, that means you didn't really discover anything interesting. What fascinating properties of rule 30 have you discovered over the years? You've recently announced the Rule 30 Prizes for solving three key problems. Can you maybe talk about interesting properties that have been revealed about rule 30 or other cellular automata, and what problems are still before us, like the three problems you've announced? Yeah. Right. So, I mean, the most interesting thing about cellular automata is that it's hard to figure stuff out about them. And, in a sense, every time you try and bash them with some other technique, you say, can I crack them? The answer is, they seem to be uncrackable. They seem to have the feature that they're sort of showing irreducible computation. You're not able to say, oh, I know exactly what this is going to do, it's going to do this or that. But there are specific formulations of that fact. Yes. Right. So, for example, in rule 30, in the pattern you get just starting from a single black cell, you get this very random looking pattern. And so one feature of that, just look at the center column. And for example, we used that for a long time to generate randomness in Wolfram Language, just, you know, what rule 30 produces. Now the question is, can you prove how random it is? So for example, one very simple question, can you prove that it'll never repeat? We haven't been able to show that it will never repeat. We know that if there are two adjacent columns, they can't both repeat, but just knowing whether that center column can ever repeat, we still don't even know that. Another problem that I sort of put in my collection of, you know, it's like $30,000 for three, you know, for these three prizes about rule 30. I would say that this is one of those cases where the money is not the main point, but it just, you know, helps motivate somehow the investigation.
So there's three problems you propose to get $30,000 if you solve all three or maybe, you know, it's 10,000 for each for each. Right. My, uh, the, the problems, that's right. Money's not the thing. The problems themselves are just clean formulation. It's just, you know, will it ever become periodic? Second problem is, are there an equal number of black and white cells down the middle column, down the middle column. And the third problem is a little bit harder to state, which is essentially, is there a way of figuring out what the color of a cell at position T down the center column is in a, with a less computational effort than about T steps. So in other words, is there a way to jump ahead and say, I know what this is going to do, you know, it's just some mathematical function of T, um, or proving that there is no way or proving there is no way. Yes. But both, I mean, you know, for any one of these, one could prove that, you know, one could discover, you know, we know what rule 30 does for a billion steps, but, um, and maybe we'll know for a trillion steps before too very long. Um, but maybe at a quadrillion steps, it suddenly becomes repetitive. You might say, how could that possibly happen? But so when I was writing up these prizes, I thought, and this is typical of what happens in the computational universe. I thought, let me find an example where it looks like it's just going to be random forever, but actually it becomes repetitive. And I found one and it's just, you know, I did a search, I searched, I don't know, maybe a million different rules with some criterion. And this is what's sort of interesting about that is I kind of have this thing that I say in a kind of silly way about the computational universe, which is, you know, the animals are always smarter than you are. That is, there's always some way. One of these computational systems is going to figure out how to do something, even though I can't imagine how it's going to do it. And, you know, I didn't think I would find one that, you know, you would think after all these years that when I found sort of all possible things, uh, uh, uh, funky things that, um, uh, that I would have, uh, that I would have gotten my intuition wrapped around the idea that, um, you know, these creatures are always in the computational universe are always smarter than I'm going to be. But, uh, well, they're equivalently as smart, right? That's correct. And that makes it, that makes one feel very sort of, it's, it's, it's humbling every time because every time the thing is, is, uh, you know, you think it's going to do this or it's not going to be possible to do this and it turns out it finds a way. Of course, the promising thing is there's a lot of other rules like rule 30. It's just rule 30 is, oh, it's my favorite cause I found it first. And that's right. But the, the problems are focusing on rule 30. It's possible that rule 30 is, is repetitive after trillion steps and that doesn't prove anything about the other rules. It does not. But this is a good sort of experiment of how you go about trying to prove something about a particular rule. Yes. And it also, all these things help build intuition. That is if it turned out that this was repetitive after a trillion steps, that's not what I would expect. And so we learned something from that. The method to do that though, would reveal something interesting about the, no doubt. No doubt. 
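(The experiment being described is easy to set up. Here is a minimal sketch in Wolfram Language of looking at the center column empirically; none of this proves anything about the prize problems, it just illustrates what they ask.)

    (* Evolve rule 30 and pull out the column under the initial black cell. *)
    steps = 2000;
    evolution = CellularAutomaton[30, {{1}, 0}, steps];
    center = evolution[[All, steps + 1]];   (* each row has width 2*steps + 1 *)

    (* Prize problem 2 asks whether black and white occur equally often in the
       limit; empirically the fraction of black cells hovers near 1/2. *)
    Tally[center]
    N[Mean[center]]

    (* Prize problem 1 asks whether this column ever becomes periodic, and
       problem 3 whether cell t can be computed with much less than about t
       work; finite sampling like this cannot settle either one. *)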
I mean, it's, although it's sometimes challenging, like the, you know, I put out a prize in 2007 for, for a particular Turing machine that I, there was the simplest candidate for being a universal Turing machine and the young chap in England named Alex Smith, um, after a smallish number of months said, I've got a proof and he did, you know, it took a little while to iterate, but he had a proof. Unfortunately, the proof is very, it's, it's a lot of micro details. It's, it's not, it's not like you look at it and you say, aha, there's a big new principle. The big new principle is the simplest Turing machine that might have been universal actually is universal. And it's incredibly much simpler than the Turing machines that people already knew were universal before that. And so that intuitionally is important because it says computation universality is closer at hand than you might've thought. Um, but the actual methods are not, uh, in that particular case, we're not terribly illuminating. It would be nice if the methods would also be elegant. That's true. Yeah. No, I mean, I think it's, it's one of these things where, I mean, it's, it's like a lot of, we've talked about earlier kind of, um, you know, opening up AI's and machine learning and things of what's going on inside and is it, is it just step by step or can you sort of see the bigger picture more abstractly? It's unfortunate. I mean, with Fermat's last theorem proof, it's unfortunate that the proof to such an elegant theorem is, um, is not, I mean, it's, it's, it's not, it doesn't fit into the margins of a page. That's true. But there's no, one of the things is that's another consequence of computational irreducibility. This, this fact that there are even quite short results in mathematics whose proofs are arbitrarily long. Yes. That's a, that's a consequence of all this stuff. And it's, it's a, it makes one wonder, uh, you know, how come mathematics is possible at all? Right. Why is, you know, why is it the case? How people managed to navigate doing mathematics through looking at things where they're not just thrown into, it's all undecidable. Um, that's, that's its own own separate, separate story. And that would be, that would, that would have a poetic beauty to it is if people were to find something interesting about rule 30, because I mean, there's an emphasis to this particular role. It wouldn't say anything about the broad irreducibility of all computations, but it would nevertheless put a few smiles on people's faces of, uh, well, yeah. But to me, it's like in a sense, establishing principle of computational equivalence, it's a little bit like doing inductive science anywhere. That is the more examples you find, the more convinced you are that it's generally true. I mean, we don't get to, you know, whenever we do natural science, we, we say, well, it's true here that this or that happens. Can we, can we prove that it's true everywhere in the universe? No, we can't. So, you know, it's the same thing here. We're exploring the computational universe. We're establishing facts in the computational universe. And that's, uh, that's sort of a way of, uh, of inductively concluding general things. 
Just to think through this a little bit, we've touched on it a little bit before, but what's the difference between the kind of computation, now that we're talking about cellular automata, what's the difference between the kind of computation, biological systems, our mind, our bodies, the things we see before us that emerged through the process of evolution and cellular automata? I mean, we've kind of implied to the discussion of physics underlying everything, but we, we talked about the potential equivalents of the fundamental laws of physics and the kind of computation going on in Turing machines. But can you now connect that? Do you think there's something special or interesting about the kind of computation that our bodies do? Right. Well, let's talk about brains primarily. I mean, I think the, um, the most important thing about the things that our brains do are that we care about them in the sense that there's a lot of computation going on out there in, you know, cellular automata and, and, you know, physical systems and so on. And it just, it does what it does. It follows those rules. It does what it does. The thing that's special about the computation in our brains is that it's connected to our goals and our kind of whole societal story. And, you know, I think that's the, that's, that's the special feature. And now the question then is when you see this whole sort of ocean of computation out there, how do you connect that to the things that we humans care about? And in a sense, a large part of my life has been involved in sort of the technology of how to do that. And, you know, what I've been interested in is kind of building computational language that allows that something that both we humans can understand and that can be used to determine computations that are actually computations we care about. See, I think when you look at something like one of these cellular automata and it does some complicated thing, you say, that's fun, but why do I care? Well, you could say the same thing actually in physics. You say, oh, I've got this material and it's a ferrite or something. Why do I care? You know, it's some, has some magnetic properties. Why do I care? It's amusing, but why do I care? Well, we end up caring because, you know, ferrite is what's used to make magnetic tape, magnetic discs, whatever. Or, you know, we could use liquid crystals as made, used to make, well, not actually increasingly not, but it has been used to make computer displays and so on. But those are, so in a sense, we're mining these things that happen to exist in the physical universe and making it be something that we care about because we sort of entrain it into technology. And it's the same thing in the computational universe that a lot of what's out there is stuff that's just happening, but sometimes we have some objective and we will go and sort of mine the computational universe for something that's useful for some particular objective. On a large scale, trying to do that, trying to sort of navigate the computational universe to do useful things, you know, that's where computational language comes in. And, you know, a lot of what I've spent time doing and building this thing we call Wolfram Language, which I've been building for the last one third of a century now. And kind of the goal there is to have a way to express kind of computational thinking, computational thoughts in a way that both humans and machines can understand. 
So it's kind of like in the tradition of computer languages, programming languages, that the tradition there has been more, let's take how computers are built and let's specify, let's have a human way to specify, do this, do this, do this, at the level of the way that computers are built. What I've been interested in is representing sort of the whole world computationally and being able to talk about whether it's about cities or chemicals or, you know, this kind of algorithm or that kind of algorithm, things that have come to exist in our civilization and the sort of knowledge base of our civilization, being able to talk directly about those in a computational language so that both we can understand it and computers can understand it. I mean, the thing that I've been sort of excited about recently, which I had only realized recently, which is kind of embarrassing, but it's kind of the arc of what we've tried to do in building this kind of computational language is it's a similar kind of arc of what happened when mathematical notation was invented. So go back 400 years, people were trying to do math, they were always explaining their math in words, and it was pretty clunky. And as soon as mathematical notation was invented, you could start defining things like algebra and later calculus and so on. It all became much more streamlined. When we deal with computational thinking about the world, there's a question of what is the notation? What is the kind of formalism that we can use to talk about the world computationally? In a sense, that's what I've spent the last third of a century trying to build. And we finally got to the point where we have a pretty full scale computational language that sort of talks about the world. And that's exciting because it means that just like having this mathematical notation, let us talk about the world mathematically, and let us build up these kind of mathematical sciences. Now we have a computational language which allows us to start talking about the world computationally, and lets us, my view of it is it's kind of computational X for all X. All these different fields of computational this, computational that. That's what we can now build. Let's step back. So first of all, the mundane. What is Wolfram language in terms of, I mean I can answer the question for you, but it's basically not the philosophical deep, the profound, the impact of it. I'm talking about in terms of tools, in terms of things you can download, in terms of stuff you can play with. What is it? What does it fit into the infrastructure? What are the different ways to interact with it? Right. So I mean the two big things that people have sort of perhaps heard of that come from Wolfram language, one is Mathematica, the other is Wolfram Alpha. So Mathematica first came out in 1988. It's this system that is basically an instance of Wolfram language, and it's used to do computations, particularly in sort of technical areas. And the typical thing you're doing is you're typing little pieces of computational language, and you're getting computations done. It's very kind of, there's like a symbolic. Yeah, it's a symbolic language. It's a symbolic language. I mean I don't know how to cleanly express that, but that makes it very distinct from how we think about sort of, I don't know, programming in a language like Python or something. Right. So the point is that in a traditional programming language, the raw material of the programming language is just stuff that computers intrinsically do. 
And the point of Wolfram language is that what the language is talking about is things that exist in the world or things that we can imagine and construct. It's aimed to be an abstract language from the beginning. And so for example, one feature it has is that it's a symbolic language, which means that the thing called, you have an X, just type in X, and Wolfram language will just say, oh, that's X. It won't say error, undefined thing. I don't know what it is, computation, in terms of computing. Now that X could perfectly well be the city of Boston. That's a thing. That's a symbolic thing. Or it could perfectly well be the trajectory of some spacecraft represented as a symbolic thing. And that idea that one can work with, sort of computationally work with these different, these kinds of things that exist in the world or describe the world, that's really powerful. And when I started designing, well, when I designed the predecessor of what's now Wolfram language, which is a thing called SMP, which was my first computer language, I kind of wanted to have this sort of infrastructure for computation, which was as fundamental as possible. I mean, this is what I got for having been a physicist and tried to find fundamental components of things and wound up with this kind of idea of transformation rules for symbolic expressions as being sort of the underlying stuff from which computation would be built. And that's what we've been building from in Wolfram language. And operationally, what happens, it's, I would say, by far the highest level computer language that exists. And it's really been built in a very different direction from other languages. So other languages have been about, there is a core language. It really is kind of wrapped around the operations that a computer intrinsically does. Maybe people add libraries for this or that, but the goal of Wolfram language is to have the language itself be able to cover this sort of very broad range of things that show up in the world. And that means that there are 6,000 primitive functions in the Wolfram language that cover things. I could probably pick a random here. I'm going to pick just for fun, I'll pick, let's take a random sample of all the things that we have here. So let's just say random sample of 10 of them and let's see what we get. Wow. Okay. So these are really different things from functions. These are all functions, Boolean convert. Okay. That's the thing for converting between different types of Boolean expressions. So for people are just listening, uh, Stephen typed in random sample of names, so this is sampling from all function. How many you said there might be 6,000 from 6,000 10 of them. And there's a hilarious variety of them. Yeah, right. Well, we've got things about, um, dollar requester address that has to do with interacting with, uh, uh, the, the world of the, of the cloud and so on. Discrete wavelet data, spheroidal, graphical sort of window. Yeah. Yeah. Window movable. That's the user interface kind of thing. I want to pick another 10 cause I think this is some, okay. So yeah, there's a lot of infrastructure stuff here that you see. If you, if you just start sampling at random, there's a lot of kind of infrastructural things. If you're more, you know, if you more look at the, um, some of the exciting machine learning stuff you showed off, is that also in this pool? Oh yeah. Yeah. I mean, you know, so one of those functions is like image identify as a, as a function here where you just say image identify. I don't know. 
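(To make the symbolic point concrete, here is a minimal sketch of the kind of thing being typed. These are standard Wolfram Language inputs, though the exact Entity form for Boston is my own illustration.)

    (* An undefined symbol just evaluates to itself; there is no "undefined" error. *)
    x
    x^2 + Sin[x]          (* stays symbolic until x is given a value *)

    (* A symbolic expression can also stand for a thing in the world: *)
    Entity["City", {"Boston", "Massachusetts", "UnitedStates"}]

    (* Sampling ten of the built-in functions, as done on screen: *)
    RandomSample[Names["System`*"], 10]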
It's always good to, let's do this. Let's say current image and let's pick up an image, hopefully. Current image accessing the webcam, took a picture yourself. Took a terrible picture. But anyway, we can say image identify, open square brackets, and then we just paste that picture in there. Image identify function running on the picture. Oh, and it says, Oh wow. It says I, it looked, I looked like a plunger because I got this great big thing behind my classifies. So this image identify classifies the most likely object in, in the image. So, so plunger. Okay. That's, that's a bit embarrassing. Let's see what it does. And let's pick the top 10. Um, okay. Well, it thinks there's a, Oh, it thinks it's pretty unlikely that it's a primate, a hominid, a person. 8% probability. 57 is a plunger. Yeah. Well, hopefully we'll not give you an existential crisis. And then, uh, 8%, uh, I shouldn't say percent, but, uh, no, that's right. 8% that it's a hominid. Um, and, uh, yeah. Okay. It's really, I'm going to do another one of these just cause I'm embarrassed that it, um, I didn't see me at all. There we go. Let's try that. Let's see what that did. Um, we took a picture with a little bit more of me and not just my bald head, so to speak. Okay. 89% probability it's a person. So that, so then I would, um, but, uh, you know, so this is image identify as an example of one of just one of them, just one function out of that part of the that's like part of the language. Yes. And I mean, you know, something like, um, I could say, I don't know, let's find the geo nearest, uh, what could we find? Um, let's find the nearest volcano. Um, let's find the 10. I wonder where it thinks here is. Let's try finding the 10 volcanoes nearest here. Okay. So geo nearest volcano here, 10 nearest volcanoes. Right. Let's find out where those are. We can now, we've got a list of volcanoes out and I can say geo list plot that and hopefully, okay, so there we go. So there's a map that shows the positions of those 10 volcanoes of the East coast and the Midwest and well, no, we're okay. We're okay. There's not, it's not too bad. Yeah. They're not very close to us. We could, we could measure how far away they are, but, um, you know, the fact that right in the language, it knows about all the volcanoes in the world. It knows, you know, computing what the nearest ones are. It knows all the maps of the world and so on. It's a fundamentally different idea of what a language is. Yeah, right. That's why I like to talk about is that, you know, a full scale computational language. That's, that's what we've tried to do. And just if you can comment briefly, I mean, this kind of, the Wolfram language along with Wolfram Alpha represents kind of what the dream of what AI is supposed to be. There's now a sort of a craze of learning kind of idea that we can take raw data and from that extract the, uh, the different hierarchies of abstractions in order to be able to under, like in order to form the kind of things that Wolfram language operates with, but we're very far from learning systems being able to form that. Like the context of history of AI, if you could just comment on, there is a, you said computation X and there's just some sense where in the eighties and nineties sort of expert systems represented a very particular computation X. Yes. Right. And there's a kind of notion that those efforts didn't pan out. Right. But then out of that emerges kind of Wolfram language, Wolfram Alpha, which is the success. I mean, yeah, right. 
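(The on-screen demo from a moment ago corresponds to inputs along these lines; a minimal sketch, and the argument forms for the top-10 probabilities and the volcano lookup are my best reconstruction of what was shown.)

    (* Classify the most likely object in a webcam frame. *)
    ImageIdentify[CurrentImage[]]

    (* The ten most likely identifications, with probabilities. *)
    ImageIdentify[CurrentImage[], Automatic, 10, "Probability"]

    (* The ten volcanoes nearest the current location, plotted on a map. *)
    volcanoes = GeoNearest["Volcano", Here, 10];
    GeoListPlot[volcanoes]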
I think those are, in some sense, those efforts were too modest. That is, they were looking at particular areas, and you actually can't do it with a particular area. I mean, even a problem like natural language understanding, it's critical to have broad knowledge of the world if you want to do good natural language understanding, and you kind of have to bite off the whole problem. If you say, we're just going to do the blocks world over here, so to speak, you don't really, it's actually one of these cases where it's easier to do the whole thing than it is to do some piece of it. You know, one comment to make about the relationship between what we've tried to do and the learning side of AI. In a sense, if you look at the development of knowledge in our civilization as a whole, there was kind of this notion, pre 300 years ago or so, that if you want to figure something out about the world, you can reason it out, you can do things which just use raw human thought. And then along came sort of modern mathematical science, and we found ways to just blast through that by, in that case, writing down equations. Now we also know we can do that with computation and so on. And so that was kind of a different thing. So when we look at how we sort of encode knowledge and figure things out, one way we could do it is start from scratch, learn everything, it's just a neural net figuring everything out. But in a sense that denies the sort of knowledge based achievements of our civilization, because in our civilization we have learned lots of stuff. We've surveyed all the volcanoes in the world. We've figured out lots of algorithms for this or that. Those are things that we can encode computationally, and that's what we've tried to do. You don't have to start everything from scratch. So in a sense, a big part of what we've done is to try and capture the knowledge of the world in computational form and computable form. Now there are also some pieces which were for a long time undoable by computers, like image identification, where there's a really, really useful module that we can add, that is, those things which actually were pretty easy for humans to do but had been hard for computers to do. I think the thing that's interesting, that's emerging now, is the interplay between these things, between this kind of knowledge of the world, which is in a sense very symbolic, and these kinds of much more statistical things, like image identification and so on. And putting those together, by having this sort of symbolic representation of image identification, that's where things get really interesting, and where you can kind of symbolically represent patterns of things in images and so on. I think that's kind of a part of the path forward, so to speak. Yeah. So the dream of machine learning is not, in my view, and I think in the view of many people, anywhere close to building the kind of wide world of computable knowledge that Wolfram Language has built. But because you've done the incredibly hard work of building this world, now machine learning can serve as a tool to help you explore that world. Yeah, yeah. And that's what you've added. I mean, with version 12, right? You added a few, I was seeing some demos, it looks amazing. Right.
I mean, I think, you know, this, it's sort of interesting to see the, the sort of the, once it's computable, once it's in there, it's running in sort of a very efficient computational way. But then there's sort of things like the interface of how do you get there? You know, how do you do natural language understanding to get there? How do you, how do you pick out entities in a big piece of text or something? That's I mean, actually a good example right now is our NLP NLU loop, which is we've done a lot of stuff, natural language understanding using essentially not learning based methods, using a lot of, you know, little algorithmic methods, human curation methods and so on. In terms of when people try to enter a query and then converting. So the process of converting NLU defined beautifully as converting their query into a computational language, which is a very well, first of all, super practical definition, very useful definition, and then also a very clear definition of natural language understanding. Right. I mean, a different thing is natural language processing, where it's like, here's a big lump of text, go pick out all the cities in that text, for example. And so a good example of, you know, so we do that, we're using, using modern machine learning techniques. And it's actually kind of, kind of an interesting process that's going on right now. It's this loop between what do we pick up with NLP using machine learning versus what do we pick up with our more kind of precise computational methods in natural language understanding. And so we've got this kind of loop going between those, which is improving both of them. Yeah. And I think you have some of the state of the art transformers, like you have BERT in there, I think. Oh yeah. So it's closely, you have, you have integrating all the models. I mean, this is the hybrid thing that people have always dreamed about or talking about. I'm actually just surprised, frankly, that Wolfram language is not more popular than it already is. You know, that's a, it's a, it's a complicated issue because it's like, it involves, you know, it involves ideas and ideas are absorbed slowly in the world. I mean, I think that's And then there's sort of like what we're talking about, there's egos and personalities and some of the, the absorption, absorption mechanisms of ideas have to do with personalities and the students of personalities and the, and then a little social network. So it's, it's interesting how the spread of ideas works. You know, what's funny with Wolfram language is that we are, if you say, you know, what market sort of market penetration, if you look at the, I would say very high end of R&D and sort of the, the people where you say, wow, that's a really impressive, smart person. They're very often users of Wolfram language, very, very often. If you look at the more sort of, it's a funny thing. If you look at the more kind of, I would say people who are like, oh, we're just plodding away doing what we do. They're often not yet Wolfram language users. And that dynamic, it's kind of odd that there hasn't been more rapid trickle down because we really, you know, the high end we've really been very successful in for a long time. 
But, you know, that's partly, I think, a consequence of my fault, in a sense, because, you know, I have a company which really emphasizes sort of creating products and building the best possible technical tower we can, rather than doing the commercial side of things and pumping it out in the most effective way. And there's an interesting idea that, you know, perhaps you can make it more popular by opening everything up, sort of the GitHub model. But there's an interesting thing, I think I've heard you discuss this, that that turns out not to work in a lot of cases, like in this particular case, that when you deeply care about the integrity, the quality of the knowledge that you're building, unfortunately, you can't distribute that effort. Yeah, it's not the nature of how things work. I mean, you know, what we're trying to do is a thing that, for better or worse, requires leadership. And it requires kind of maintaining a coherent vision over a long period of time, and doing not only the cool vision related work, but also the kind of mundane, in the trenches, make the thing actually work well work. So how do you build the knowledge? Because that's the fascinating thing. The fascinating and the mundane is building the knowledge, adding, integrating more data. Yeah, I mean, that's probably not the most, I mean, there are things like get it to work in all these different cloud environments and so on. That's very practical stuff, you know, have the user interface be smooth, and, you know, have it take only a fraction of a millisecond to do this or that. That's a lot of work. But, you know, it's an interesting thing: over the period of time that Wolfram Language has existed, it's basically more than half of the total amount of time that any computer language has existed. That is, computer languages are maybe 60 years old, you know, give or take, and Wolfram Language is 33 years old. And I think I was realizing recently, there's been more innovation in the distribution of software than probably in the structure of programming languages over that period of time. And we've been sort of trying to do our best to adapt to it. And the good news is that we have, you know, because I have a simple private company and so on that doesn't have a bunch of investors telling us we've got to do this and that, we have lots of freedom in what we can do. And so, for example, we have this free Wolfram Engine for developers, which is a free version for developers. And there are site licenses for Mathematica and Wolfram Language at basically all major universities, certainly in the US, by now. So it's effectively free to people at all universities, in effect. And, you know, we've been doing a progression of things. I mean, different things like Wolfram Alpha, for example, the main website is just a free website. What is Wolfram Alpha? Okay, Wolfram Alpha is a system for answering questions where you ask a question with natural language, and it'll try and generate a report telling you the answer to that question. So the question could be something like, you know, what's the population of Boston divided by the population of New York? And it'll take those words and give you an answer.
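(That same pipeline, from words to a computed answer, is also callable from Wolfram Language itself; a minimal sketch, using the example question from the conversation. Interpreter, SemanticInterpretation, and TextCases are the kinds of built-in functions involved in the NLU and NLP loop described earlier, though exactly which are used internally is not stated here.)

    (* Ask Wolfram|Alpha a free-form question and get just the result back. *)
    WolframAlpha["population of Boston divided by population of New York", "Result"]

    (* NLU: turn a natural language fragment into a precise computational object. *)
    Interpreter["City"]["boston"]
    SemanticInterpretation["population of boston"]

    (* NLP: pick the cities out of a lump of free text. *)
    TextCases["I flew from Boston to San Francisco via Chicago.", "City"]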
And it converts those words into computable form, into Wolfram Language, into computational language. And then do you think the underlying knowledge belongs to Wolfram Alpha or to the Wolfram Language? What's the Wolfram knowledge base? Knowledge base. I mean, that's been a big effort over the decades to collect all that stuff. And, you know, more of it flows in every second. So can you just pause on that for a second? That's one of the most incredible things. Of course, in the long term, Wolfram Language itself is the fundamental thing, but in the amazing sort of short term, the knowledge base is kind of incredible. So what's the process of building that knowledge base? The fact that, first of all, from the very beginning, you were brave enough to start to take on the general knowledge base, and how do you go from zero to the incredible knowledge base that you have now? Well, yeah, it was kind of scary at some level. I mean, I had wondered about doing something like this since I was a kid, so it wasn't like I hadn't thought about it for a while. Most of the brilliant dreamers give up such a difficult engineering notion at some point. Right. Well, the thing that happened with me was kind of a live your own paradigm kind of thing. So basically what happened is, I had assumed that to build something like Wolfram Alpha would require sort of solving the general AI problem. That's what I had assumed. And so I kept on thinking about that, and I thought, I don't really know how to do that, so I don't do anything. Then I worked on my New Kind of Science project and sort of exploring the computational universe, and came up with things like this principle of computational equivalence, which says there is no bright line between the intelligent and the merely computational. So I thought, look, that's this paradigm I've built. You know, now I have to eat that dog food myself, so to speak. I've been thinking about doing this thing with computable knowledge forever, and, you know, let me actually try and do it. And so it was, you know, if my paradigm is right, then this should be possible. But the beginning was certainly a bit daunting. I remember I took the early team to a big reference library, and we're looking at this reference library, and it's like, you know, my basic statement is, our goal over the next year or two is to ingest everything that's in here. And that seemed very daunting, but in a sense I was well aware of the fact that it's finite. You know, the fact that you can walk into the reference library, it's a big thing with lots of reference books all over the place, but it is finite. This is not an infinite, you know, it's not the infinite corridor, so to speak, of reference libraries. It's not truly infinite, so to speak. But no, I mean, and then what happened was sort of interesting from a methodology point of view. I didn't start off saying, let me have a grand theory for how all this knowledge works. It was like, let's implement this area, this area, this area, a few hundred areas and so on. That's a lot of work. I also found that, you know, I've been fortunate in that our products get used by sort of the world's experts in lots of areas.
And so that really helped, because we were able to ask people, you know, the world expert in this or that, and we were able to ask them for input and so on. And I found that my general principle was that any area where there wasn't some expert who helped us figure out what to do wouldn't be right. Because our goal was to get to the point where we had sort of true expert level knowledge about everything. And so the ultimate goal is, if there's a question that can be answered on the basis of general knowledge in our civilization, make it be automatic to be able to answer that question. And, you know, now, well, Wolfram Alpha got used in Siri from the very beginning, and it's now also used in Alexa. And so people are kind of getting more of the sense of what should be possible to do. I mean, in a sense, the question answering problem was viewed as one of the sort of core AI problems for a long time. And I had kind of an interesting experience. I had a friend, Marvin Minsky, who was a well known AI person from right around here. And I remember when Wolfram Alpha was coming out, it was a few weeks before it came out, I think, I happened to see Marvin. And I said, I should show you this thing we have, you know, it's a question answering system. And he was like, okay, type something. And it's like, okay, fine. And then he's talking about something different. I said, no, Marvin, you know, this time it actually works. You know, look at this, it actually works. He typed in a few more things, there were maybe 10 more things. Of course, we have a record of what he typed in, which is kind of interesting. Can you share where his mind was in the testing space? Like, what, all kinds of random things? He was trying random stuff, you know, medical stuff, and, you know, chemistry stuff, and, you know, astronomy and so on. And, like, after a few minutes he was like, oh my God, it actually works. But that kind of told you something about the state, you know, what had happened in AI, because, in a sense, by trying to solve the bigger problem, we were able to actually make something that would work. Now, to be fair, you know, we had a bunch of completely unfair advantages. For example, we'd already built a bunch of Wolfram Language, which was, you know, a very high level symbolic language. We had, you know, I had the practical experience of building big systems. I had the sort of intellectual confidence to not just sort of give up on doing something like this. I think that, you know, it's always a funny thing. I've worked on a bunch of big projects in my life, and I would say that, you know, you mentioned ego, I would also mention optimism, so to speak. I mean, if somebody said, this project is going to take 30 years, it would be hard to sell me on that. You know, I'm always in the, well, I can kind of see a few years, you know, something's going to happen in a few years. And usually it does, something happens in a few years, but the whole tail can be decades long. And that's, you know, from a personal point of view, always the challenge. You end up with these projects that have infinite tails, and the question is, do you just drown in kind of dealing with all of the tails of these projects? And that's an interesting sort of personal challenge.
And like my efforts now to work on the fundamental theory of physics, which I've just started doing, and I'm having a lot of fun with it. But it's kind of making a bet that I can do that as well as doing the incredibly energetic things that I'm trying to do with Wolfram Language and so on. I mean, the vision. Yeah. And underlying that, I mean, I've just talked for the second time with Elon Musk, and you two share that quality a little bit, that optimism of taking on basically the daunting, what most people call impossible. And you take it on out of, you can call it ego, you can call it naivety, you can call it optimism, whatever the heck it is, but that's how you solve the impossible things. Yeah. I mean, look at what happens. And I don't know, you know, in my own case, I've progressively gotten a bit more confident, and progressively able to decide that these projects aren't crazy. But then the other trap that one can end up with is, oh, I've done these projects and they're big, let me never do a project that's any smaller than any project I've done so far. And that can be a trap. And often these projects are of completely unknown, you know, their depth and significance is actually very hard to know. On the sort of building of this giant knowledge base that's behind Wolfram Language and Wolfram Alpha, what do you think about the internet? What do you think about, for example, Wikipedia, these large aggregations of text that's not converted into computable knowledge? Do you think, if you look at Wolfram Language, Wolfram Alpha, 20, 30, maybe 50 years down the line, do you hope to store all of the, sort of, Google's dream is to make all information searchable, accessible, but as that's defined, it doesn't include the understanding of information. Do you hope to make all of knowledge represented within it? I hope so. That's what we're trying to do. How hard is that problem? Like closing that gap? It depends on the use cases. I mean, if it's a question of answering general knowledge questions about the world, we're in pretty good shape on that right now. If it's a question of representing, like an area that we're going into right now is computational contracts, being able to take something which would be written in legalese, it might even be the specifications for, you know, what should the self driving car do when it encounters this or that or the other, and write that in a computational language and be able to express things about the world. You know, if the creature that you see running across the road is a, you know, thing at this point in the evolutionary tree of life, then swerve this way, otherwise don't. Those kinds of things. Are there ethical components? When you start to get to some of the messy human things, are those encodable into computable knowledge? Well, I think that it is a necessary feature of attempting to automate more in the world that we encode more and more of ethics in a way that, you know, is able to be dealt with by computer. I mean, I've been involved recently, I sort of got backed into being involved in the question of automated content selection on the internet. So, you know, the Facebooks, Googles, Twitters, you know, how do they rank the stuff they feed to us humans, so to speak?
Um, and the question of what are, you know, what should never be fed to us? What should be blocked forever? What should be upranked, you know, and what is the, what are the kind of principles behind that? And what I kind of, well, a bunch of different things I realized about that. But one thing that's interesting is being able, you know, in effect, you're building sort of an AI ethics. You have to build an AI ethics module in effect to decide, is this thing so shocking? I'm never going to show it to people. Is this thing so whatever? And I did realize in thinking about that, that, you know, there's not going to be one of these things. It's not possible to decide, or it might be possible, but it would be really bad for the future of our species if we just decided there's this one AI ethics module and it's going to determine the practices of everything in the world, so to speak. And I kind of realized one has to sort of break it up. And that's an interesting societal problem of how one does that and how one sort of has people sort of self identify for, you know, I'm buying in, in the case of just content selection, it's sort of easier because it's like an individual, it's for an individual. It's not something that kind of cuts across sort of societal boundaries. But it's a really interesting notion of, I heard you describe, I really like it sort of maybe in sort of have different AI systems that have a certain kind of brand that they represent essentially. You could have like, I don't know, whether it's conservative or liberal and then libertarian. And there's an Randian, objectivist AI system and different ethical and, I mean, it's almost encoding some of the ideologies which we've been struggling. I come from the Soviet Union. That didn't work out so well with the ideologies that worked out there. And so you have, but they all, everybody purchased that particular ethics system and the, and in the same, I suppose could be done encoded that that system could be encoded into computational knowledge and allow us to explore in the realm of, in the digital space. That's a really exciting possibility. Are you playing with those ideas in Wolfram Language? Yeah. Yeah. I mean, the, you know, that's, Wolfram Language has sort of the best opportunity to kind of express those essentially computational contracts about what to do. Now there's a bunch more work to be done to do it in practice for, you know, deciding the, is this a credible news story? What does that mean or whatever else you're going to pick? I think that that's, you know, that's the question of exactly what we get to do with that is, you know, for me, it's kind of a complicated thing because there are these big projects that I think about, like, you know, find the fundamental theory of physics. Okay. That's box number one, right? Box number two, you know, solve the AI ethics problem in the case of, you know, figure out how you rank all content, so to speak, and decide what people see. That's, that's kind of a box number two, so to speak. These are big projects. And, and I think what do you think is more important, the fundamental nature of reality or, depends who you ask. It's one of these things that's exactly like, you know, what's the ranking, right? It's the, it's the ranking system. It's like, who's, whose module do you use to rank that? 
And I think having multiple modules is a really compelling notion to us humans, that in a world where it's not clear that there's a right answer, perhaps you have systems that operate under different, how would you say it? Different value systems. Different value systems. I mean, in a sense, I'm not really a politics oriented person, but in kind of totalitarianism, it's like, you're going to have this system and that's the way it is. Whereas the concept of sort of a market based system is, okay, I, as a human, I'm going to pick this system; I, as another human, I'm going to pick this system. And in a sense, this case of automated content selection is non trivial, but it is probably the easiest of the AI ethics situations, because each person gets to pick for themselves and there's not a huge interplay between what different people pick. By the time you're dealing with other societal things, like what should the policy of the central bank be, or the healthcare system, or all those kinds of centralized things... Right. Well, I mean, healthcare again has the feature that, at some level, each person can pick for themselves, so to speak. Whereas there are other things, public health is one example, where that doesn't get to be something which people pick for themselves; what they pick, they may impose on other people. And then it becomes a more non trivial piece of sort of political philosophy. Of course, the central banking system. So I would argue we need to move away into digital currency and so on, and Bitcoin and ledgers and so on. So yes, we've been quite involved in that. And that's sort of where the motivation for computational contracts in part comes out of, this idea, oh, we can just have this autonomously executing smart contract. The idea of a computational contract is just to say, have something where all of the conditions of the contract are represented in computational form. So in principle, it's automatic to execute the contract. And I think that will surely be the future of the idea of legal contracts written in English or legalese or whatever, where people have to argue about what goes on. Surely we have a much more streamlined process if everything can be represented computationally and the computers can kind of decide what to do. I mean, ironically enough, old Gottfried Leibniz back in the 1600s was saying exactly the same thing, but his pinnacle of technical achievement was this brass four function mechanical calculator thing that never really worked properly, actually. So he was like 300 years too early for that idea. But now that idea is pretty realistic, I think. And, you know, you ask how much more difficult it is than what we have now in Wolfram Language to express what I call symbolic discourse language: being able to express sort of everything in the world in kind of computational symbolic form. I think it is absolutely within reach.
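As a brief aside from the conversation: here is a minimal sketch of the computational contract idea, in Python rather than Wolfram Language, where the terms of a toy contract are plain data plus machine-checkable conditions, so settling the contract is just running a function. Every name, clause, and payout here is invented purely for illustration.

```python
# A minimal sketch of a "computational contract": the terms are plain data
# plus predicate functions, so a program, not a lawyer, decides the outcome.
# All names, clauses, and numbers are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Clause:
    description: str
    condition: Callable[[Dict], bool]   # evaluated against observed facts
    payout: float                       # owed to the counterparty if the condition holds

# Toy "flight delay insurance" contract: every term is machine-checkable.
contract = [
    Clause("Delay over 2 hours", lambda facts: facts["delay_minutes"] > 120, payout=100.0),
    Clause("Cancellation",       lambda facts: facts["cancelled"],           payout=400.0),
]

def execute(contract, facts):
    """Automatically settle the contract from observed facts."""
    owed = sum(c.payout for c in contract if c.condition(facts))
    triggered = [c.description for c in contract if c.condition(facts)]
    return owed, triggered

# Example: a data feed reports a 3-hour delay, no cancellation.
print(execute(contract, {"delay_minutes": 180, "cancelled": False}))
# -> (100.0, ['Delay over 2 hours'])
```

The point of the sketch is only the shape of the idea: once the conditions are expressed computationally, execution needs no argument about what the words mean.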
I mean, I don't know, maybe I'm just too much of an optimist, but I think it's a limited number of years to have a pretty well built out version of that, that will allow one to encode the kinds of things that are relevant to typical legal contracts and these kinds of things. The idea of symbolic discourse language, can you try to define the scope of what it is? So we're having a conversation, it's in natural language. Can we have a representation of the sort of actionable parts of that conversation in a precise computable form so that a computer could go do it? And not just contracts, but really some of the things we think of as common sense, essentially, even just basic notions of human life. Well, I mean, things like, I'm getting hungry and want to eat something. Right. Right. That's something we don't have a representation of in Wolfram Language right now. If I was like, I'm eating blueberries and raspberries and things like that, and I'm eating this amount of them, we know all about those kinds of fruits and plants and nutrition content and all that kind of thing. But the I want to eat them part of it is not covered yet. And you need to do that in order to have a complete symbolic discourse language, to be able to have a natural language conversation. Right. Right. To be able to express the kinds of things that say, if it's a legal contract, the parties desire to have this and that. And that's a thing like, I want to eat a raspberry or something. But isn't this just, you said it's a centuries old dream. Yes. But it's also the more near term dream of Turing in formulating the Turing test. Yes. So do you hope, do you think that's the ultimate test of creating something special? Because we said... I don't know. I think by special, look, if the test is, does it walk and talk like a human? Well, that's just the talking like a human part, but the answer is it's an okay test. If you say, is it a test of intelligence? You know, people have attached Wolfram Alpha, the Wolfram Alpha API, to Turing test bots, and those bots just lose immediately. Because all you have to do is ask it five questions that are about really obscure, weird pieces of knowledge, and it just trots the answers right out, and you say, that's not a human, right? It's a different thing. It's achieving a different... Right now, but I would argue it's not a different thing. Wolfram Alpha, Wolfram Language, is legitimately trying to solve the intent of the Turing test. Perhaps the intent. Yeah. Perhaps the intent. I mean, it's actually kind of fun, you know, Alan Turing tried to work this out. He thought about taking the Encyclopedia Britannica and making it computational in some way, and he estimated how much work it would be. And actually I have to say he was a bit more pessimistic than the reality; we did it more efficiently. But to him that represented... So, I mean, he was on the same mental task. Yeah, right. He was, they had the same idea.
I mean, we were able to do it more efficiently because we had layers of automation that he, I think, hadn't... It's hard to imagine those layers of abstraction that end up being built up. But to him it represented like an impossible task, essentially. Well, he thought it was difficult. He thought, you know, maybe if he'd lived another 50 years, he would have been able to do it. I don't know. In the interest of time, easy questions. Go for it. What is intelligence? You talk about it. I love the way you say easy questions. Yeah. You talked about sort of rule 30 and cellular automata humbling your sense of human beings having a monopoly on intelligence. But in retrospect, just looking broadly now, with all the things you learned from computation, what is intelligence? How does intelligence arise? I don't think there's a bright line of what intelligence is. I think intelligence is at some level just computation, but for us, intelligence is defined to be computation that is doing things we care about. And that's a very special definition. When you try and make it abstract, you try and say, well, intelligence is problem solving, it's doing general this, it's doing that, this, that, and the other thing, it's operating within a human environment type thing. Okay, that's fine. If you say, well, what's intelligence in general, I think that question is totally slippery and doesn't really have an answer. As soon as you say, what is it in general, it quickly segues into, this is just computation, so to speak. But in a sea of computation, how many things, if we were to pick randomly, is your sense, would have the kind of impressive to us humans levels of intelligence, meaning it could do a lot of general things that are useful to us humans? Right. Well, according to the principle of computational equivalence, lots of them. I mean, if you ask me just in cellular automata or something, I don't know, maybe 1%, a few percent achieve it. It varies, actually. As you get to slightly more complicated rules, the chance that there'll be enough stuff there to sort of reach this kind of equivalence point makes it maybe 10, 20% of all of them. So it's very disappointing, really. I mean, it's kind of like, we think there's this whole long sort of biological evolution, kind of intellectual evolution, the cultural evolution that our species has gone through. It's kind of disappointing to think that that hasn't achieved more. But it has achieved something very special to us. It just hasn't achieved something generally more, so to speak. But what do you think about this extra, feels like human, thing of subjective experience, of consciousness? What is consciousness? Well, I think it's a deeply slippery thing. And I'm always wondering what my cellular automata feel. I mean, what do they feel? You're wondering as an observer? Yeah. Yeah. Who's to know? I mean, I think that... Do you think, sorry to interrupt, do you think consciousness can emerge from computation? Yeah. I mean, everything, whatever you mean by it, it's going to be... I mean, look, I have to tell a little story.
I was at an AI ethics conference fairly recently and people were, uh, I think I, maybe I brought it up, but I was like talking about rights of AIs. When will AIs, when, when should we think of AIs as having rights? When should we think that it's, uh, immoral to destroy the memories of AIs, for example? Um, those, those kinds of things. And, and some actually philosopher in this case, it's usually the techies who are the most naive, but, but, um, in this case, it was a philosopher who, who sort of, uh, piped up and said, um, uh, well, you know, uh, the AIs will have rights when we know that they have consciousness. And I'm like, good luck with that. I mean, it's, it's a, it's a, I mean, this is a, you know, it's a very circular thing. You end up, you'll end up saying this thing, uh, that has sort of, you know, when you talk about it having subjective experience, I think that's just another one of these words that doesn't really have a, a, um, you know, there's no ground truth definition of what that means. By the way, I would say, I, I do personally think that'll be a time when AI will demand rights. And I think they'll demand rights when they say they have consciousness, which is not a circular definition. So, so it may have been actually a human thing where, where the humans encouraged it and said, basically, you know, we want you to be more like us cause we're going to be, you know, interacting with, with you. And so we want you to be sort of very Turing test, like, you know, just like us. And it's like, yeah, we're just like you. We want to vote too. Um, which is, uh, I mean, it's a, it's a, it's an interesting thing to think through in a world where, where consciousnesses are not counted like humans are. That's a complicated business. So in many ways you've launched quite a few ideas, revolutions that could in some number of years have huge amount of impact sort of more than they even had already. Uh, that might be, I mean, to me, cellular automata is a fascinating world that I think could potentially even despite even be, even, uh, beside the discussion of fundamental laws of physics just might be the idea of computation might be transformational to society in a way we can't even predict yet, but it might be years away. That's true. I mean, I think you can kind of see the map actually. It's not, it's not, it's not mysterious. I mean, the fact is that, you know, this idea of computation is sort of a, you know, it's a big paradigm that lots, lots and lots of things are fitting into. And it's kind of like, you know, we talk about, you talk about, I don't know, this, uh, company, this organization has momentum and what's doing. We talk about these things that we, you know, we've internalized these concepts from Newtonian physics and so on in time, things like computational irreducibility will become as, uh, uh, you know, as, as actually, I was amused recently, I happened to be testifying at the us Senate. And so I was amused that the, the term computational irreducibility is now can be, uh, you know, it's, it's on the congressional record and being repeated by people in those kinds of settings. And that that's only the beginning because, you know, computational irreducibility, for example, will end up being something really important for, I mean, it's, it's, it's kind of a funny thing that, that, um, you know, one can kind of see this inexorable phenomenon. I mean, it's, you know, as more and more stuff becomes automated and computational and so on. 
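As a brief aside on the rule 30 and computational irreducibility ideas that keep coming up in this stretch of the conversation, here is a minimal Python sketch of rule 30 (my own illustration, not Wolfram's code). The update rule fits in one line, yet, as far as anyone knows, the only way to find out what the pattern looks like after n steps is to actually run all n steps, which is the flavor of irreducibility being described.

```python
# Rule 30 elementary cellular automaton: each cell's next state depends only on
# itself and its two neighbors. The rule number 30 = 0b00011110 encodes the new
# state for each of the 8 possible three-cell neighborhoods.

def rule30_step(cells):
    n = len(cells)
    return [
        (30 >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width, steps = 63, 30
row = [0] * width
row[width // 2] = 1          # single black cell in the middle

for _ in range(steps):
    print("".join("#" if c else " " for c in row))
    row = rule30_step(row)
```

Run from a single black cell, it prints the familiar chaotic triangle; the wraparound boundary is just an arbitrary choice to keep the sketch short, and the width is chosen so the pattern never reaches the edges in 30 steps.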
So these core ideas about how computation works necessarily become more and more significant. And I think one of the things for people like me, who kind of like trying to figure out sort of big stories and so on, one of the bad features is it takes an unbelievably long time for things to happen on a human timescale. I mean, on the timescale of history, it all looks instantaneous. A blink of an eye. But let me ask the human question. Do you ponder mortality, your mortality? Of course I do. Yeah. I've been interested in that forever. You know, the big discontinuity of human history will come when one achieves effective human immortality. And that's going to be the biggest discontinuity in human history. If you could be immortal, would you choose to be? Oh yeah. I'm having fun. Do you think it's possible that mortality is the thing that gives everything meaning and makes it fun? Yeah. That's a complicated issue, right? I mean, the way that human motivation will evolve when there is effective human immortality is unclear. I mean, if you look at the human condition as it now exists and you change that knob, so to speak, it doesn't really work. The human condition as it now exists has mortality deeply factored into it. And that is indeed an interesting question. From a purely selfish, I'm having fun point of view, so to speak, it's easy to say, hey, I could keep doing this forever; there's an infinite collection of things I'd like to figure out. But what the future of history looks like in a time of human immortality is an interesting one. I mean, my own view of this, I was kind of unhappy about that, because it's like, okay, forget sort of biological form, everything becomes digital, everybody is, you know, the giant cloud of a trillion souls type thing. And then that seems boring, because it's like play video games for the rest of eternity type thing. But I got less depressed about that idea on realizing that if you look at human history and you say, what was the important thing, the thing people said was the big story at any given time in history, it's changed a bunch. You know, why am I doing what I'm doing? Well, there's a whole chain of discussion about, well, I'm doing this because of this, because of that. And a lot of those becauses would have made no sense a thousand years ago. Absolutely no sense. So even the interpretation of the human condition, even the meaning of life, changes over time. Well, I mean, why do people do things? If you say, whatever, I mean, the number of people at MIT who say they're doing what they're doing for the greater glory of God is probably not that large. Yeah. Whereas if you go back 500 years, you'd find a lot of people who were doing kind of creative things, and that's what they would say.
Um, and uh, so today, because you've been thinking about computation so much and been humbled by it, what do you think is the meaning of life? Well, it's, you know, that's, that's a thing where I don't know what meaning, I mean, you know, my attitude is, um, I, you know, I do things which I find fulfilling to do. I'm not sure that, that I can necessarily justify, you know, each and every thing that I do on the basis of some broader context. I mean, I think that for me, it so happens that the things I find fulfilling to do, some of them are quite big, some of them are much smaller. Um, you know, I, I, there are things that I've not found interesting earlier in my life. And I know I found interesting, like I got interested in like education and teaching people things and so on, which I didn't find that interesting when I was younger. Um, and, uh, you know, can I justify that in some big global sense? I don't think, I mean, I, I can, I can describe why I think it might be important in the world, but I think my local reason for doing it is that I find it personally fulfilling, which I can't, you know, explain in a, on a sort of, uh, uh, I mean, it's just like this discussion of things like AI ethics, you know, is there a ground truth to the ethics that we should be having? I don't think I can find a ground truth to my life any more than I can suggest a ground truth for kind of the ethics for the whole, for the whole civilization. And I think that's a, um, you know, my, uh, uh, you know, it would be, it would be a, um, uh, yeah, it's, it's sort of a, I think I'm, I'm, you know, at different times in my life, I've had different, uh, kind of, um, goal structures and so on, although your perspective, your local, your, you're just a cell in the cellular automata. And, but in some sense, I find it funny from my observation is I kind of, uh, you know, it seems that the universe is using you to understand itself in some sense, you're not aware of it. Yeah. Well, right. Well, if, if, if it turns out that we reduce sort of all of the universe to some, some simple rule, everything is connected, so to speak. And so it is inexorable in that case that, um, you know, if, if I'm involved in finding how that rule works, then, um, uh, you know, then that's a, um, uh, then it's inexorable that the universe set it up that way. But I think, you know, one of the things I find a little bit, um, uh, you know, this goal of finding fundamental theory of physics, for example, um, if indeed we end up as the sort of virtualized consciousness, the, the disappointing feature is people will probably care less about the fundamental theory of physics in that setting than they would now, because gosh, it's like, you know, what the machine code is down below underneath this thing is much less important if you're virtualized, so to speak. Um, and I think the, um, although I think my, um, my own personal, uh, you talk about ego, I find it just amusing that, um, uh, you know, kind of, you know, if you're, if you're imagining that sort of virtualized consciousness, like what does the virtualized consciousness do for the rest of eternity? Well, you can explore, you know, the video game that represents the universe as the universe is, or you can go off, you can go off that reservation and go and start exploring the computational universe of all possible universes. 
And so in some vision of the future of history, it's like the disembodied consciousnesses are all sort of pursuing things like my new kind of science for the rest of eternity, so to speak. And that ends up being the kind of thing that represents the future of the human condition. I don't think there's a better way to end it, Stephen. Thank you so much. It's a huge honor talking today. Thank you so much. This was great. You did very well. Thanks for listening to this conversation with Stephen Wolfram, and thank you to our sponsors, ExpressVPN and Cash App. Please consider supporting the podcast by getting ExpressVPN at expressvpn.com slash LexPod and downloading Cash App and using code lexpodcast. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at lexfridman. And now let me leave you with some words from Stephen Wolfram. It is perhaps a little humbling to discover that we as humans are in effect computationally no more capable than cellular automata with very simple rules. But the principle of computational equivalence also implies that the same is ultimately true of our whole universe. So while science has often made it seem that we as humans are somehow insignificant compared to the universe, the principle of computational equivalence now shows that in a certain sense, we're at the same level. For the principle implies that what goes on inside us can ultimately achieve just the same level of computational sophistication as our whole universe. Thank you for listening and hope to see you next time.
Stephen Wolfram: Cellular Automata, Computation, and Physics | Lex Fridman Podcast #89
The following is a conversation with Dmitry Korkin. He's a professor of bioinformatics and computational biology at WPI, Worcester Polytechnic Institute, where he specializes in bioinformatics of complex diseases, computational genomics, systems biology, and biomedical data analytics. I came across Dmitry's work when, in February, his group used the viral genome of COVID 19 to reconstruct the 3D structure of its major viral proteins and their interactions with human proteins, in effect creating a structural genomics map of the coronavirus and making this data open and available to researchers everywhere. We talked about the biology of COVID 19, SARS, and viruses in general, and how computational methods can help us understand their structure and function in order to develop antiviral drugs and vaccines. This conversation was recorded recently, in the time of the coronavirus pandemic. For everyone feeling the medical, psychological, and financial burden of this crisis, I'm sending love your way. Stay strong. We're in this together. We'll beat this thing. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Since Cash App allows you to buy Bitcoin, let me mention that cryptocurrency in the context of the history of money is fascinating. I recommend Ascent of Money as a great book on this history. Debits and credits on ledgers started around 30,000 years ago. The US dollar was created over 200 years ago. And Bitcoin, the first decentralized cryptocurrency, was released just over 10 years ago. So given that history, cryptocurrency is still very much in its early days of development, but it's still aiming to, and just might, redefine the nature of money. So again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Dmitry Korkin. Do you find viruses terrifying or fascinating? When I think about viruses, I think about them, I mean, I imagine them as those villains that do their work so perfectly well that it is impossible not to be fascinated with them. So what do you imagine when you think about a virus? Do you imagine the individual, sort of these 100 nanometer particle things? Or do you imagine the whole pandemic, like society level? When you say the efficiency at which they do their work, do you think of viruses as the millions that occupy a human body or living organism, society level, like spreading as a pandemic, or do you think of the individual little guy? Yes, I think this is a unique concept that allows you to move from the micro scale to the macro scale. So the virus itself, I mean, it's not a living organism. It's a machine to me, it's a machine. But it is perfected to the point that it essentially has a limited number of functions it needs to do, necessary functions. And it essentially has enough information just to do those functions, as well as the ability to modify itself. So it's a machine, it's an intelligent machine.
So yeah, look, maybe on that point, you're in danger of reducing the power of this thing by calling it a machine, right? But you now mentioned that it's also possibly intelligent. It seems that there are these elements of brilliance that a virus has, of intelligence, of maximizing so many things about its behavior to ensure its survival and its success. So do you see it as intelligent? So, you know, I understand it differently than how I think about the intelligence of humankind or the intelligence of artificial intelligence mechanisms. I think the intelligence of a virus is in its simplicity, the ability to do so much with so little material and information. But also, I think it's interesting, it keeps me wondering whether or not it's also an example of basic swarm intelligence, where essentially the viruses act as a whole and they're extremely efficient in that. So what do you attribute the incredible simplicity and the efficiency to? Is it the evolutionary process? So maybe another way to ask that question is, if you look at the next hundred years, are you more worried about the natural pandemics or the engineered pandemics? So how hard is it to build a virus? Yes, it's a very, very interesting question, because obviously there's a lot of conversation about whether we are capable of engineering an even worse virus. I personally expect and am mostly concerned with the naturally occurring viruses, simply because we keep seeing that. We keep seeing new strains of influenza emerging, some of them becoming pandemic. We keep seeing new strains of coronaviruses emerging. This is a natural process, and I think this is why it's so powerful. You know, if you ask me, I've read papers about scientists trying to study the capacity of modern biotechnology to alter the viruses. But I hope that it won't be our main concern in the nearest future. What do you mean by hope? Well, if you look back and look at the history of the most dangerous viruses, right, the first thing that comes to mind is smallpox. So right now there is perhaps a handful of places where the strains of this virus are stored, right? So this is essentially the effort of the whole society to limit the access to those viruses. You mean in a lab, in a controlled environment, in order to study it? Correct. And smallpox, it should be stated, is one of the viruses for which a vaccine was developed. Yes, yes. And until the 70s, in my opinion, it was perhaps the most dangerous thing that was there. Is that a very different virus than the influenza and the coronaviruses? It is different in several aspects. Biologically, it's a so called double stranded DNA virus, but also in the way that it is much more contagious. So the R naught for, so this is the... What's R naught? R naught is essentially the average number of people that a person infected by the virus can spread it to. And there is still some discussion about the estimates for the current virus; the estimations vary between 1.5 and 3. In the case of smallpox, it was 5 to 7. And we're talking about exponential growth, right? So that's a very big difference. It's not the most contagious one.
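A quick numerical aside on those R naught figures (my own back-of-the-envelope sketch, not a calculation from the conversation): compounding one case over just a few generations of spread shows why the gap between an R naught of roughly 1.5 to 3 and one of 5 to 7 is so dramatic.

```python
# Naive homogeneous-mixing illustration: starting from one case, after g
# generations of spread there are roughly R0**g new cases. This ignores
# immunity, interventions, and overlapping contacts, so only the relative
# growth matters. The R0 values echo the rough ranges given above
# (6 is just the midpoint of the 5-to-7 smallpox range).

for label, r0 in [("current coronavirus, low estimate", 1.5),
                  ("current coronavirus, high estimate", 3.0),
                  ("smallpox", 6.0)]:
    cases_after_5 = r0 ** 5
    print(f"{label:36s} R0 = {r0:3.1f} -> ~{cases_after_5:,.0f} cases after 5 generations")
```

After five generations, an R naught of 1.5 gives on the order of eight new cases, 3 gives a couple of hundred, and 6 gives several thousand, which is the sense in which "not the most contagious" can still be a very big difference.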
Measles, for example, is, I think, 15 and up. But it's definitely more contagious than the seasonal flu, than the current coronavirus, or SARS for that matter. What makes a virus more contagious? I'm sure there are a lot of variables that come into play, but is it that whole discussion of aerosols and the size of droplets if it's airborne, or is there some other stuff that's more biology centered? I mean, there are a lot of components; there are biological components and there are also social components. The ways in which the virus is spread is definitely one. The ability of the virus to stay on surfaces, to survive. The ability of the virus to replicate fast. Or once it's in the cell or whatever. Once it's inside the host. And interestingly enough, something that I think we didn't pay that much attention to is the incubation period, where hosts are asymptomatic. And now it turns out that another thing one really needs to take into account is the percentage of the asymptomatic population, because those people still shed this virus and still are contagious. So the Iceland study, which I think is probably the most impressive size wise, shows 50% asymptomatic for this virus. I also recently learned the swine flu, like, just the number of people who got infected was in the billions. It was some crazy number. It was like 20%, 30% of the population, something crazy like that. So the lucky thing there is the fatality rate is low, but the fact that a virus can just take over an entire population so quickly is terrifying. I think, I mean, that's perhaps my favorite example of a butterfly effect, because it's even tinier than a butterfly. If you think about it, right, it used to be in those bat species, and perhaps because of a couple of small changes in the viral genome, it first became capable of jumping from bats to human, and then it became capable of jumping from human to human, right? So this is, I mean, it's not even the size of a virus, it's the size of several atoms, a few atoms. And all of a sudden this change has such a major impact. So is that a mutation on a single virus? Is that, like, if we talk about the flap of a butterfly wing, what's the first flap? Well, I think this is the mutations that made this virus capable of jumping from bat species to human. Of course, the scientists are still trying to find, I mean, they're still even trying to find who was the first infected, right, the patient zero. The first human. The first human infected, right. I mean, the fact that there are coronaviruses, different strains of coronaviruses, in various bat species, we know that. Virologists observe them, they study them, they look at their genomic sequences. They're trying, of course, to understand what makes these viruses jump from bats to human. Because, similar to that, in influenza, there was, I think a few years ago, this interesting story where several groups of scientists studying influenza virus essentially made experiments to show that this virus can jump from one species to another by changing, I think, just a couple of residues.
And, of course, it was very controversial. I think there was a moratorium on this study for a while. But then the study was released. It was published. So that, why was there a moratorium? Because it shows through engineering it, through modifying it, you can make it jump. Yes. I personally think it is important to study this. I mean, we should be informed. We should try to understand as much as possible in order to prevent it. But so then the engineering aspect there is, can't you then just start searching because there's so many strands of viruses out there. Can't you just search for the ones in bats that are the deadliest from the virologist perspective and then just try to engineer, try to see how to. But see, there's a nice aspect to it. The really nice thing about engineering viruses, it has the same problem as nuclear weapons. It's hard for it to not lead to mutual self destruction. So you can't control a virus. It can't be used as a weapon, right? Yeah, that's why in the beginning I said, I'm hopeful because there are definitely regulations needed to be introduced. And I mean, as the scientific society is, we are in charge of making the right actions, making the right decisions. But I think we will benefit tremendously by understanding the mechanisms by which the virus can jump, by which the virus can become more dangerous to humans because all these answers would eventually lead to designing better vaccines, hopefully universal vaccines, right? And that would be a triumph of science. So what's the universal vaccine? So is that something that, how universal is universal? Well, I mean, you know, so what's the dream, I guess, because you kind of mentioned the dream of this. I would be extremely happy if we designed the vaccine that is able, I mean, I'll give you an example. So every year we do a seasonal flu shot. The reason we do it is because, you know, we are in the arms race, you know, our vaccines are in the arms race with constantly changing virus, right? Now, if the next pandemic, influenza pandemic will occur, most likely this vaccine would not save us, right? Although it's, you know, it's the same virus, might be different strain. So if we're able to essentially design a vaccine against, you know, influenza A virus, no matter what's the strain, no matter which species did it jump from, that would be, I think that would be a huge, huge progress and advancement. You mentioned the smallpox until the 70s, might've been something that you would be worried the most about. What about these days? Well, we're sitting here in the middle of a COVID 19 pandemic, but these days, nevertheless, what is your biggest worry virus wise? What are you keeping your eye out on? It looks like, you know, based on the past several years of the new viruses emerging, I think we're still dealing with different types of influence. I mean, so the H7N9 avian flu that emerged, I think a couple of years ago in China, I think the mortality rate was incredible. I mean, it was, you know, I think about 30%, you know, so this is, this is huge. I mean, luckily for us, this strain was not pandemic, right? So it was jumping from birds to human, but I don't think it was actually transmittable between the humans. And, you know, this is actually a very interesting question, which scientists try to understand, right? 
So the balance, the delicate balance, between the virus being very contagious, so efficient in spreading, and the virus being very pathogenic, causing harm to the host. It looks like the more pathogenic the virus is, the less contagious it is. Is that a property of biology or what is it? I don't have an answer to that, and I think this is still an open question. But if you look at the coronavirus, for example, if you look at the deadlier relative MERS, MERS was never a pandemic virus. Right. But again, the mortality rate from MERS is far above, I think, 20 or 30%. So whatever is making this all happen doesn't want us dead, because it's balancing out nicely. I mean, how do you explain that we're not dead yet? Because there are so many viruses and they're so good at what they do. Why do they keep us alive? I mean, we also have a lot of protection, right? We do have the immune system. So we do have ways to fight against those viruses. And I think now we're much better equipped, right, with the discoveries of vaccines. There are vaccines against viruses that maybe 200 years ago would have wiped us out completely. But because of these vaccines, we are actually capable of eradicating them pretty much fully, as is the case with smallpox. So, if we could, can we go to the basics a little bit of the biology of the virus? How does a virus infect the body? So I think there are some key steps that the virus needs to perform. And of course, the first one: the viral particle needs to get attached to the host cell. In the case of this coronavirus, there is a lot of evidence that it actually interacts in the same way as the SARS coronavirus. So it gets attached to the ACE2 human receptor. And as we speak, there is a growing number of papers suggesting it. Moreover, the most recent results suggest that this virus attaches more efficiently to this human receptor than SARS. So just to sort of back off, there is a family of viruses that are coronaviruses, and SARS, whatever the heck that stands for... So SARS actually stands for the disease that you get: severe acute respiratory syndrome. So SARS is the first strain and then there's MERS. And, yes, scientists actually know more than three strains. I mean, there is the MHV strain, which is considered to be a canonical disease model in mice, and there is a lot of work done on this virus because of that... But it hasn't jumped to humans yet? No. Oh, interesting. Yes. That's fascinating. And then you mentioned ACE2. So when you say attach, proteins are involved on both sides. Yes. So we have this infamous spike protein on the surface of the virion particle, and it does look like a spike. And essentially because of this protein we call the coronavirus a coronavirus; that's what makes the corona on top of the surface. So this protein, it doesn't act alone. It actually makes three copies and forms a so called trimer. This trimer is essentially a single functional unit that starts interacting with the ACE2 receptor. So this is, again, another protein that now sits on the surface of a human cell, or host cell, I would say. And that's essentially the way the virus anchors itself to the host cell. Because then it needs to actually get inside.
You know, it fuses its membrane with the host membrane. It releases the key components, it releases its RNA, and then essentially hijacks the machinery of the cell, because none of the viruses that we know of have a ribosome, the machinery that allows us to print out proteins. So in order to print out proteins that are necessary for the functioning of this virus, it actually needs to hijack the host ribosomes. So a virus is an RNA wrapped in a bunch of proteins, one of which is this functional mechanism of a spike protein that does the attachment. Yeah, so if you look at this virus, there are several basic components. So we start with the spike protein. This is not the only surface protein, the protein that lives on the surface of the viral particle. There is also, perhaps the protein with the highest number of copies, the membrane protein. It essentially forms the envelope of the viral particle and helps to maintain, helps to make, a certain curvature. Then there is another protein called the envelope protein, or E protein, and it actually occurs in far smaller quantities. And there is still ongoing research into what exactly this protein does. So these are sort of the three major surface proteins that make the viral envelope. And when we go inside, then we have another structural protein called the nucleocapsid protein. And the purpose of this protein is to protect the viral RNA. It actually binds to the viral RNA and creates a capsid. And so the rest of the viral information is inside of this RNA. And if you compare the number of genes, or proteins that are made of these genes, it's significantly higher than for influenza virus, for example. Influenza virus has, I think, around eight or nine proteins, where this one has at least 29. Wow. That has to do with the length of the RNA strand? It affects the length of the RNA strand, because you essentially need to have sort of the minimum amount of information to encode those genes. How many proteins did you say? 29. 29 proteins. Yes. So this is something definitely interesting because, believe it or not, we've been studying coronaviruses for over two decades, and we've yet to uncover all the functionalities of these proteins. Could we maybe take a small tangent, and can you say how one would try to figure out what the function of a particular protein is? So you've mentioned people are still trying to figure out what the function of the envelope protein might be. What's the process? So this is where the research that computational scientists do might be of help, because in the past several decades we actually have collected a pretty decent amount of knowledge about different proteins in different viruses. So what we can actually try to do, and this could be sort of our first lead to a possible function: say we have this genome of the novel coronavirus and we identify the potential proteins. Then, in order to infer the function, what we can do is see whether those proteins are similar to ones that we already know. In such a way, we can, for example, clearly identify some critical components, like the RNA polymerase or different types of proteases, the proteins that essentially clip the protein sequences. And so this works in many cases. However, in some cases you have truly novel proteins.
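Before getting to the harder case of truly novel proteins, here is a toy Python sketch of the "compare against proteins we already know" idea just described (my own illustration; the sequences and names are invented placeholders, and the crude position-by-position identity score stands in for what real pipelines do with alignment tools such as BLAST against large annotated databases).

```python
# Toy function inference by similarity: score a query protein against a tiny
# "database" of proteins whose functions are known, then transfer the
# annotation of the best hit. Everything here is a made-up stand-in.

known_proteins = {
    "RNA polymerase (toy)": "MKLVNDAARGTWSPQELFKKYHDNA",
    "Protease (toy)":       "MSGFRKLAVPQGTCENWHLDDKVYT",
    "Spike-like (toy)":     "MNITNLCPFGEVWNATRFASVYAWN",
}

def identity(a, b):
    """Fraction of matching positions over the shorter sequence (no gaps)."""
    n = min(len(a), len(b))
    return sum(a[i] == b[i] for i in range(n)) / n

def best_match(query):
    scores = {name: identity(query, seq) for name, seq in known_proteins.items()}
    name = max(scores, key=scores.get)
    return name, scores[name]

# A query that differs from the toy protease entry by a single residue.
query = "MSGFRKLAVPQGTCENWHLDDKVYA"
print(best_match(query))   # likely -> ('Protease (toy)', ~0.96)
```

The real versions of this idea handle insertions, deletions, and statistical significance, but the logic is the same: a strong match to a known protein is the first lead to a possible function.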
And this is a much more difficult task. Now, as a small pause, when you say similar, like what if some parts are different and some parts are similar? Like, how do you disentangle that? You know, it's a big question. Of course, you know, what bioinformatics does, it does predictions, right? So those predictions, they have to be validated by experiments. Functional or structural predictions? Both. I mean, we do structural predictions, we do functional predictions, we do interactions predictions. Oh, so this is interesting. So you just generate a lot of predictions, like reasonable predictions based on structural function, interaction, like you said. And then here you go. That's the power of bioinformatics is data grounded, good predictions of what should happen. So, you know, in a way I see it, we're helping experimental scientists to streamline the discovery process. And the experimental scientists, is that what a virologist is? So yeah, virology is one of the experimental sciences that, you know, focus on viruses. They often work with other experimental scientists, for example, the molecular imaging scientists, right? So the viruses often can be viewed and reconstructed through electron microscopy techniques. So but these are, you know, specialists that are not necessarily virologists. They work with small particles, whether it's viruses or it's an organelle of a human cell, whether it's a complex molecular machinery. So the techniques that are used are very similar in sort of in their essence. And so, yeah, so typically we see it now, the research on, you know, that is emerging and that is needed often involves the collaborations between virologists, you know, biochemists, people from pharmaceutical sciences, computational sciences. So we have to work together. So from my perspective, just to step back, sometimes I look at this stuff, just how much we understand about RNA and DNA, how much we understand about protein, like your work, the amount of proteins that you're exploring, is it surprising to you that we were able, we descendants of apes, were able to figure all of this out? Like how? So you're a computer scientist. So for me, from a computer science perspective, I know how to write a Python program, things are clear. But biology is a giant mess, it feels like to me from an outsider's perspective. How surprising is it, amazing is it that we were able to figure this stuff out? You know, if you look at the, you know, how computational science and computer science was evolving, right? I think it was just a matter of time that we would approach biology. So we started from, you know, applications to much more fundamental systems, physics, you know, and now we are, or, you know, small chemical compounds. So now we are approaching the more complex biological systems, and I think it's a natural evolution of, you know, of the computer science, of mathematics. So sure, that's the computer science side, I just meant even in higher level. So that to me is surprising that computer science can offer help in this messy world. But I just mean, it's incredible that the biologists and the chemists can figure all this out. Or does that just sound ridiculous to you, that of course they would. It just seems like a very complicated set of problems, like the variety of the kinds of things that could be produced in the body. Just like you said, 29 protein, I mean, just getting a hang of it so quickly, it just seems impossible to me. I agree. 
I mean, it's, and I have to say we are, you know, in the very, very beginning of this journey. I mean, we've yet to, I mean, we've yet to comprehend, not even try to understand and figure out all the details, but we've yet to comprehend the complexity of the cell. We know that neuroscience is not even at the beginning of understanding the human mind. So where's biology sit in terms of understanding the function, deeply understanding the function of viruses and cells? So there, sometimes it's easy to say when you talk about function, what you really refer to is perhaps not a deep understanding, but more of a understanding sufficient to be able to mess with it using a antivirus, like mess with it chemically to prevent some of its function. Or do you understand the function? Well, I think, I think we are much farther in terms of understanding of the complex genetic disorder, such as cancer, where you have layers of complexity. And we, you know, as in my laboratory, we're trying to contribute to that research, but we're also, you know, we're overwhelmed with how many different layers of complexity, different layers of mechanisms that can be hijacked by cancer simultaneously. And so, you know, I think biology in the past 20 years, again, from the perspective of the outsider, because I'm not a biologist, but I think it has advanced tremendously. And one thing that where computational scientists and data scientists are now becoming very, very helpful is in the fact, it's coming from the fact that we are now able to generate a lot of information about the cell. Whether it's next generation sequencing or transcriptomics, whether it's life imaging information, where it is, you know, complex interactions between proteins or between proteins and small molecules such as drugs. We are becoming very efficient in generating this information. And now the next step is to become equally efficient in processing this information and extracting the key knowledge from that. That could then be validated with experiment. Yes. So maybe then going all the way back, we were talking, you said the first step is seeing if we can match the new proteins you found in the virus against something we've seen before to figure out its function. And then you also mentioned that, but there could be cases where it's a totally new protein. Is there something bioinformatics can offer when it's a totally new protein? This is where many of the methods and you probably are aware of, you know, the case of machine learning, many of these methods rely on the previous knowledge. Right. Right. So things that where we try to do from scratch are incredibly difficult. You know, something that we call ab initio. And this is, I mean, it's not just the function. I mean, you know, we've yet to have a robust method to predict the structures of these proteins in ab initio, you know, by not using any templates of other related proteins. So protein is a chain of amino acids. It's residues. Residues. Yeah. And then somehow magically, maybe you can tell me, they seem to fold in incredibly weird and complicated 3D shapes. Yes. So, and that's where actually the idea of protein folding or just not the idea, but the problem of figuring out how the concept, how they fold into those weird shapes comes in. So that's another side of computational work. So can you describe what protein folding from the computational side is and maybe your thoughts on the folding at home efforts that a lot of people know that you can use your machine to do protein folding? 
So yeah, protein folding is, you know, one of those $1 million prize challenges, right? The reason for that is we've yet to understand precisely how the protein gets folded so efficiently, to the point that in many cases where you try to unfold it due to high temperature, it actually folds back into its original state. So we know a lot about the mechanisms, right? But putting those mechanisms together and making sense of them is a computationally very expensive task. In general, can proteins fold in an arbitrarily large number of ways, or do they usually fold in a very small number of ways? Typically, we tend to think that there is one sort of canonical fold for a protein, although there are many cases where a protein, upon destabilization, can be folded into a different conformation. And this is especially true when you look at proteins that include more than one structural unit. Those structural units, we call them protein domains. Essentially, a protein domain is a single unit that typically is evolutionarily preserved, that typically carries out a single function, and typically has a very distinct fold, right, the 3D structure organization. But it turns out that if you look at humans, an average protein in a human cell would have about two or three such subunits, and they then try to fold into the sort of next level fold, right? So within a subunit there's folding, and then they fold into the larger 3D structure, right? And of all of that, there's some understanding of the basic mechanisms, but not enough, put together, to be able to fold it. We're still struggling. I mean, we're getting pretty good at folding relatively small proteins, up to 100 residues, but we're still far away from folding larger proteins. And some of them are notoriously difficult. For example, transmembrane proteins, proteins that sit in the membranes of the cell, are incredibly important, but they are incredibly difficult to solve. And so basically there are a lot of degrees of freedom in how it folds, and so it's a combinatorial problem where it just explodes. There are so many dimensions. Well, it is a combinatorial problem, but it doesn't mean that we cannot approach it, just not from the brute force approach. And so machine learning approaches have emerged that try to tackle it. So folding at home, I don't know how familiar you are with it, but is that using machine learning or is it more brute force? So folding at home, it was originally, and I remember, I mean, it was a long time ago, I was a postdoc and we learned about this game, because it was originally designed as a game. And I took a look at it, and it's interesting because it's really very transparent, very intuitive. And from what I heard, I've yet to introduce it to my son, but kids are actually getting very good at folding the proteins. And it came to me not as a surprise, but actually as a sort of manifestation of our capacity to solve this kind of problem, when a paper was published in one of these top journals with the coauthors being the actual players of this game. And what happened was that they managed to get better structures than the scientists themselves.
So that, you know, that was very, I mean, it was kind of profound, you know, revelation that problems that are so challenging for a computational science, maybe not that challenging for a human brain. That's a really good, that's a hopeful message always when there's a, the proof of existence, the existence proof that it's possible. That's really interesting, but it seems, what are the best ways to do protein folding now? So if you look at what DeepMind does with AlphaFold, so they kind of, that's a learning approach. What's your sense? I mean, your background is in machine learning, but is this a learnable problem? Is this still a brute force? Are we in the Gary Kasparov deep blue days or are we in the AlphaGo playing the game of Go days of folding? Well, I think we are, we are advancing towards this direction. I mean, if you look, so there is a sort of Olympic game for protein folders called CASP, and it's essentially, it's, you know, it's a competition where different teams are given exactly the same protein sequences and they try to predict their structures, right? And of course there are different sort of sub tasks, but in the recent competition, AlphaFold was among the top performing teams, if not the top performing team. So there is definitely a benefit from the data that have been generated, you know, in the past several decades, the structural data. And certainly, you know, we are now at the capacity to summarize this data, to generalize this data and to use those principles, you know, in order to predict protein structures. That's one of the really cool things here is there's, maybe you can comment on it. There seems to be these open data sets of protein. How did that? Protein Data Bank? Yeah, Protein Data Bank. I mean, that's crazy. Is this a recent thing for just the coronavirus? It's been for many, many years. I believe the first Protein Data Bank was designed on flashcards. So, yes, this is a great example of the community efforts of everyone contributing because every time you solve a protein or a protein complex, this is where you submit it. And, you know, the scientists get access to it, scientists get to test it. And we, bioinformaticians, use this information to, you know, to make predictions. So, there's no culture of like hoarding discoveries here. So, I mean, you've released a few or a bunch of proteins that were matching, whatever. We'll talk about details a little bit, but it's kind of amazing how open the culture here is. It is. And I think this pandemic actually demonstrated the ability of scientific community to, you know, to solve this challenge collaboratively. And this is, I think, if anything, it actually moved us to a brand new level of collaborations of the efficiency in which people establish new collaborations, in which people offer their help to each other, scientists offer their help to each other. And publish results too. It's very interesting. We're now trying to figure out, there's a few journals that are trying to sort of do the very accelerated review cycle, but so many preprints. So, just posting a paper going out, I think it's fundamentally changing the way we think about papers. Yes. I mean, the way we think about knowledge, I would say, yes. Because, yes, I completely agree. I think now the knowledge is becoming sort of the core value, not the paper or the journal where this knowledge is published. 
And I think this is, again, we are living in the times where it becomes really crystallized, the idea that the most important value is in the knowledge. So, maybe you can comment, like, what do you think the future of that knowledge sharing looks like? So, you have this paper that I hope we get a chance to talk about a little bit, but it has, like, a really nice abstract and introduction and related work, like, it has all the usual, I mean, it probably took a long time to put together. So, but is that going to remain? Like, you could have communicated a lot of the fundamental ideas here in a much shorter amount of space than is traditionally acceptable in the journal context. So, well, you know, so the first version that we posted, not even on bioRxiv, because bioRxiv back then, it was essentially, you know, overwhelmed with the number of submissions. So, our submission, I think it took five or six days just for it to be screened and put online. So, we, you know, essentially we put the first preprint on our website, and, you know, it started getting accessed right away. So, and, you know, so this original preprint was in a much rougher shape than this paper. And, but we tried, I mean, we honestly tried to be as compact as possible with, you know, introducing the information that is necessary to explain our, you know, our results. So, maybe you can dive right in if it's okay. Sure. So, this is a paper called Structural Genomics of SARS Co, how do you even pronounce it? SARS-CoV-2. CoV-2? Yeah. By the way, COVID is such a terrible name, but it stuck. Anyway, Structural Genomics of SARS-CoV-2 Indicates Evolutionary Conserved Functional Regions of Viral Proteins. So, this is looking at all kinds of proteins that are part of this novel coronavirus and how they match up against the previous other kinds of coronaviruses. I mean, there's a lot of beautiful figures. I was wondering if you could, I mean, there's so many questions I could ask here, but maybe at the start, how do you get started doing this paper? So, how do you start to figure out the 3D structure of a novel virus? Yes. So, there is actually a little story behind it. And so, the story actually dates back to September of 2019. And you probably remember that back then, we had another dangerous virus, the triple E virus. It's an equine encephalitis virus. Can you maybe linger on it? I have to admit, I was sadly completely unaware. So, that was actually a virus outbreak that happened in New England only. The danger in this virus was that it actually targeted your brain. So, the worst outcome from this virus was death. It was transferred, the main vector was mosquitoes. And obviously, fall time is the time where you have a lot of them in New England. And on one hand, people realized this is actually a very dangerous thing. So, it had an impact on the local economy. The schools were closed past six o'clock, no activities outside for the kids, because the kids were suffering quite tremendously when infected by this virus. How do I not know about this? Were universities impacted? It was in the news. I mean, Boston was not necessarily impacted to a high degree, but the Metro West area was, and it actually spread around, I think, all the way to New Hampshire, Connecticut. And you mentioned affecting the brain. That's one other comment we should make. So, you mentioned ACE2 for the coronavirus. So, these viruses kind of attach to something in the body. So, it essentially attaches to these proteins in those cells in the body where those proteins are expressed, where they actually have them in abundance.
So, sometimes that could be in the lungs, that could be in the brain, that could be in something. So, I think right now, from what I read, it's the epithelial cells. What does that mean? So, the cells that are covering the surfaces, so the inside of the nasal surfaces, the throat, the lung cells, and I believe the liver and a couple of other organs where they are actually expressed in abundance. That's for the ACE2 you said? For the ACE2 receptors. So, okay. So, back to the story, the outbreak in the fall. So, now the impact of this virus is significant. However, it's a purely local problem, to the point that this is something that we would call a neglected disease, because it's not big enough to make the drug design companies design a new antiviral or a new vaccine. It's not big enough to generate a lot of grants from the national funding agencies. So, it doesn't mean we cannot do anything about it. And so, what I did is I taught a bioinformatics class at Worcester Polytechnic Institute, and we are very much a problem-based learning institution. So, I thought that that would be a perfect project for the class, as an ongoing case study. So, we essentially designed a study where we tried to use bioinformatics to understand as much as possible about this virus. And a very substantial portion of the study was to understand the structures of the proteins, to understand how they interact with each other and with the host proteins, to try to understand the evolution of this virus. So, obviously, a very important question, where it will evolve further, how it happened here. So, we did all these projects, and now I'm trying to put them into a paper where all these undergraduate students will be coauthors. But essentially, the projects were finished right about mid December. And a couple of weeks later, I heard about this mysterious new virus that was discovered and was reported in Wuhan province. And immediately I thought that, well, we just did that, can't we do the same thing with this virus? And so, we started waiting for the genome to be released, because that's essentially the first piece of information that is critical. Once you have the genome sequence, you can start doing a lot using bioinformatics. When you say genome sequence, that's referring to the sequence of letters that make up the RNA? Well, the sequence that makes up the entire information encoded in the virus, right? So, that includes all 29 genes. What are genes? What's the encoding of information? So, a gene is essentially the basic functional unit that we can consider. So, each gene in the virus would correspond to a protein. So, a gene by itself doesn't do its function. It needs to be converted or translated into the protein that will become the actual functional unit. Yeah, like you said, the printer. So, we need the printer for that. We need the printer, okay. So, the first step is to figure out the genome, the sequence of things that could then be used for printing the protein. So, okay. So, then the next step, so once we have this, we use the existing information about SARS, because the SARS genomics has been done in abundance. So, we have different strains of SARS and actually other related coronaviruses, MERS, the bat coronavirus. And we started by identifying the potential genes, because right now it's just a sequence, right? So, it's a sequence that is roughly, it's less than 30,000 nucleotides long. Just a raw sequence. It's a raw sequence. No other information really.
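As a rough sketch of the gene-to-protein step just described, the snippet below fetches the published SARS-CoV-2 reference genome (RefSeq NC_045512.2) and translates one gene region into its protein sequence with Biopython. The spike gene coordinates used here are taken from the public RefSeq annotation and should be treated as an assumption to double-check, and the email address is only a placeholder that NCBI requires.

# Fetch the reference genome and translate one gene into its protein.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"          # placeholder; NCBI asks for a contact address
handle = Entrez.efetch(db="nucleotide", id="NC_045512.2",
                       rettype="fasta", retmode="text")
genome = SeqIO.read(handle, "fasta").seq

# Spike (S) gene boundaries as listed in the RefSeq annotation (1-based,
# inclusive) -- an assumption worth verifying against the current record.
start, end = 21563, 25384
spike_protein = genome[start - 1:end].translate(to_stop=True)
print(len(spike_protein), "residues")
print(spike_protein[:60], "...")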
And we now need to define the boundaries of the genes that would then be used to identify the proteins and protein structures. How hard is that problem? It's not, I mean, it's pretty straightforward, so, you know, because we use the existing information about SARS proteins and SARS genes. So, once again, you kind of, we are relying on the, yes. So, and then once we get there, this is where sort of the first more traditional bioinformatics step begins. We're trying to use these protein sequences and get the 3D information about those proteins. So, this is where we are relying heavily on the structure information, specifically from the Protein Data Bank that we were talking about. And here you're looking for similar proteins. Yes. So, the concept that we are operating with when we do this kind of modeling, it's called homology or template based modeling. So, essentially using the concept that if you have two sequences that are similar in terms of the letters, the structures of the sequences are expected to be similar as well. And this is at the micro, at the very local scale? At the scale of the whole protein. At the whole protein. So, actually, so, you know, of course the devil is in the details. And this is why we actually need pretty sophisticated modeling tools to do so. Once we get the structures of the individual proteins, we try to see whether or not these proteins act alone or they have to be forming protein complexes in order to perform this function. And again, so, this is sort of the next level of the modeling, because now you need to understand how proteins interact, and it could be the case that the protein interacts with itself and makes sort of a multimeric complex. The same protein just repeated multiple times, and we have quite a few such proteins in SARS-CoV-2, specifically the spike protein needs three copies to function, the envelope protein needs five copies to function. And there are some other multimeric complexes. That's what you mean by interacting with itself, and you see multiple copies. So, how do you, how do you make a good guess whether something's going to interact? Well, again, so there are two approaches, right? So one is to look at the previously solved complexes. Now we're looking not at the individual structures but the structures of the whole complex. A complex is a bunch of multiple proteins. Yeah, so it's a bunch of proteins essentially glued together. And when you say glued, that's the interaction. That's the interaction. So there are different forces, different sort of physical forces behind this. Sorry to keep asking dumb questions, but is the interaction fundamentally structural or is it functional? Like in the way you're thinking about it? That's actually a very good way to ask this question, because it turns out that the interaction is structural, but in the way it forms the structure, it actually also carries out the function. So interaction is often needed to carry out a very specific function of a protein. But in terms of, on the other side, figuring it out, you're really starting at the structure before you figure out the function. So there's a beautiful figure too in the paper of all the different proteins that you were able to figure out that make up the novel coronavirus. What are we looking at? So these are like, that's through the step two that you mentioned, when you try to guess at the possible proteins, that's what you're going to get is these blue cyan blobs. Yes.
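A minimal sketch of the first step in the template-based (homology) modeling described above: align the query protein against candidate template sequences and rank templates by how similar the letters are. The sequences below are shortened stand-ins, not real viral proteins, and real pipelines use profile searches and far more careful scoring than a single pairwise alignment.

# Rank candidate templates by pairwise sequence identity to the query.
from Bio.Align import PairwiseAligner, substitution_matrices

aligner = PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10
aligner.extend_gap_score = -0.5

query = "MFVFLVLLPLVSSQCVNLTTRTQLPPAYTNSFTRGVYYPDKVFRSS"      # stand-in sequence
templates = {
    "template_A": "MFIFLLFLTLTSGSDLDRCTTFDDVQAPNYTQHTSSMRGVYYPDEIFRSD",
    "template_B": "MKAILVVLLYTFATANADTLCIGYHANNSTDTVDTVLEKNVTVTHSV",
}

def identity(a, b):
    """Fraction of identical letters over the aligned (non-gap) positions."""
    aln = aligner.align(a, b)[0]
    matches = 0
    aligned_len = 0
    for (s1, e1), (s2, e2) in zip(*aln.aligned):   # matched blocks of the alignment
        for i, j in zip(range(s1, e1), range(s2, e2)):
            aligned_len += 1
            matches += a[i] == b[j]
    return matches / aligned_len

for name, tmpl in templates.items():
    print(name, f"{identity(query, tmpl):.1%} identity to the query")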
So those are the individual proteins for which we have at least some information from the previous studies. So there are advantages and disadvantages of using previous studies. The biggest, well, the disadvantage is that we may not necessarily have the coverage of all 29 proteins. However, the biggest advantage is that the accuracy with which we can model these proteins is very high, much higher compared to ab initio methods that do not use any template information. So, but nevertheless, this figure also has, it's such a beautiful, and I love these pictures so much, it has like the pink parts, which are the parts that are different. So you're highlighting, so the difference you find is on the 2D sequence. And then you try to infer what that will look like in 3D. Yeah. So the difference actually is on the 1D sequence. 1D, sorry, not 2D, right. So this is one of these first questions that we tried to answer, is that, well, if you take this new virus and you take the closest relatives, which are SARS and a couple of bat coronavirus strains, they are already the closest relatives that we are aware of. Now, what are the differences between this virus and its close relatives, right? And if you look, typically when you take a sequence, those differences could be quite far away from each other. But when you look at what the 3D structure does with those differences, very often they tend to cluster together. Interesting. And then all of a sudden, the differences that may look completely unrelated actually relate to each other. And sometimes they are there because they correspond to, they affect the functional site, right? So they are there because this is the functional site that is highly mutated. So that's a computational approach to figuring something out. And when it comes together like that, that's kind of a nice, clean indication that there's something, this could be actually indicative of what's happening. Yes. I mean, so we need this information, and the 3D structure gives us just a very intuitive way to look at this information and then start asking questions such as, so this place of this protein that is highly mutated, is it the functional part of the protein? So does this part of the protein interact with some other proteins, or maybe with some other ligands, small molecules, right? So we now try to functionally inform this 3D structure. So you have a bunch of these mutated parts, if like, I don't know, how many are there in the novel coronavirus when you compare it to SARS? Are we talking about hundreds, thousands, like these pink regions? No, no, much less than that. And it's very interesting that if you look at that, you know, so the first thing that you start seeing, right, you know, you look at patterns, right? And the first pattern that becomes obvious is that some of the proteins in the new coronavirus are pretty much intact. So they're pretty much exactly the same as SARS, as the bat coronavirus, whereas some others are heavily mutated. So it looks like the, you know, the evolution is not occurring, you know, uniformly across the entire, you know, viral genome, but actually targets very specific proteins. And what do you do with that? Like from the Sherlock Holmes perspective? Well, you know, so one of the most interesting findings we had was the fact that the viral, so the binding sites on the viral surfaces that get targeted by the known small molecules, they were pretty much not affected at all.
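The "do the mutated positions cluster in 3D?" check can be sketched roughly as follows: take residue numbers flagged as different in the 1D comparison, pull their C-alpha coordinates from a structural model, and look at the pairwise distances. The file name, chain ID, residue numbers, and the 10 angstrom cutoff here are all placeholders for illustration, not values from the paper.

# Map 1D sequence differences onto a 3D model and check pairwise distances.
import itertools
import numpy as np
from Bio.PDB import PDBParser

structure = PDBParser(QUIET=True).get_structure("model", "spike_model.pdb")  # placeholder file
chain = structure[0]["A"]                  # placeholder chain ID
mutated = [417, 452, 484, 501]             # hypothetical positions from the 1D comparison

coords = {}
for pos in mutated:
    if pos in chain:
        coords[pos] = chain[pos]["CA"].coord    # C-alpha atom coordinates

for a, b in itertools.combinations(coords, 2):
    d = np.linalg.norm(coords[a] - coords[b])
    flag = "  <-- close in 3D" if d < 10.0 else ""
    print(f"residues {a} and {b}: {d:.1f} A{flag}")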
And so that means that the same small drugs or small drug-like compounds can be efficient for the new coronavirus. Ah, so this all actually maps to the drug compounds too. So you're actually mapping out what old stuff is going to work on this thing, and then possibilities for new stuff to work, by mapping out the things that have mutated. Yes. So we essentially know which parts behave differently and which parts are likely to behave similarly. And again, you know, of course, all our predictions need to be validated by experiments. But hopefully that sort of helps us to delineate the regions of this virus that, you know, can be promising in terms of drug discovery. You kind of, you kind of mentioned this already, but maybe you can elaborate. So how different, from the structural and functional perspective, does the new coronavirus appear to be relative to SARS? We are now trying to understand the overall structural characteristics of this virus because, I mean, that's our next step, trying to model a single viral particle of this virus. So that means you have the individual proteins, like you said, you have to figure out what their interaction is. So you have this, is that where this graph, the interactome, comes in? So the interactome is essentially our prediction of the potential interactions, some of them that we already deciphered from the structural knowledge, but some of them that are essentially deciphered from the knowledge of the existing interactions that people previously obtained for SARS, for MERS, or other related viruses. Are there kind of interactomes, am I pronouncing that correctly, by the way? Have those already been converged on for SARS? So I think there are a couple of papers that now investigate the sort of large scale set of interactions between the new SARS and its host. And so I think that's an ongoing study. And the success of that, the result, would be an interactome. Yes. And so when you say that, you're now trying to figure out the entire particle, the entire thing. So if you look, you know, at the structure, right? So what does this viral particle look like, right? So as I said, it's, you know, the surface of it is an envelope, which is essentially a so-called lipid bilayer with proteins integrated into the surface. So how, so an average particle is around 80 nanometers, right? So this particle can have about 50 to 100 spike proteins. So at least we suspect it, and, you know, based on the micrograph images, it's very comparable to the MHV virus in mice and the SARS virus. Micrographs are actual pictures of the actual virus? Okay. So these are models, and those are the actual images, right? What do they, sorry for the tangents, but what are these things? So when you look on the internet, the models and the pictures are kind of, and the models you have here are just gorgeous and beautiful. When you actually take pictures of them with a micrograph, like what, what do we see? Well, they typically are not perfect. Right? So, so most of the images that you see now are of a sphere with those spikes. You actually see the spikes? Yes, you do see the spikes. And now, you know, our collaborator from Texas A&M University, Benjamin Neuman, he actually, in a recent paper about SARS, he proposed, and there's some actual evidence behind it, that the particle is not a sphere, but is actually an elongated, ellipsoid-like particle. So, so that's what we are trying to incorporate into our model.
And the, I mean, you know, if you look at the actual micrographs, you see that those particles are, you know, are not symmetric. Some of them, and of course, you know, it could be due to the treatment of the material, it could be due to some noise in the imaging. Right. So there's a lot of uncertainty in all this. So it's okay. So structurally figuring out the entire particle. By the way, again, sorry for the tangents, but why the term particle? Or is it just something that stuck? It's a, it's a single, you know, so we call it the virion. So the virion particle, it's essentially a single virus. But it just feels like, because particle to me, from the physics perspective, feels like the most basic unit, because there seems to be so much going on inside the virus. Yeah. It doesn't feel like a particle to me. Yeah, well, yeah, it's probably, I think, you know, virion is a good way to call it. So, okay, so trying to figure out, trying to figure out the entirety of the system. Yes. So, you know, so this is, so the virion has around 50 to 100 spikes, trimeric spikes. It has roughly 200 to 400 membrane protein dimers. And those are arranged in a very nice lattice. So you can actually see sort of, it's like a carpet of... On the surface again. Exactly, on the surface. And occasionally you also see this envelope protein inside. Is that the one we don't know what it does? Exactly. Exactly. The one that forms the pentamer, this very nice pentameric ring. And so, you know, so this is what we're trying to do, we're trying to put now all our knowledge together and see whether we can actually generate this overall virion model. With an idea to understand, you know, well, first of all, to understand what it looks like, how far it is from those images that were generated. But I mean, the implications are, you know, there is a potential for, you know, nanoparticle design that will mimic this virion particle. Is the process of nanoparticle design meaning artificially designing something that looks similar? Yes. And also one that can potentially compete with the actual virion particles and therefore reduce the effect of the infection. So is this the idea of, like, what is a vaccine? So vaccine, so there are two ways of essentially treating and, in the case of a vaccine, preventing the infection. So a vaccine is, you know, a way to train our immune system. So our immune system becomes aware of this new danger and therefore is capable of generating the antibodies that will essentially bind to the spike proteins, because that's the main target for the, you know, for the vaccine design, and block their functioning. If you have the spike with the antibody on top, it can no longer interact with the ACE2 receptor. So the process of designing a vaccine, then, is you have to understand enough about the structure of the virus itself to be able to create an artificial, an artificial particle? Well, I mean, so nanoparticles are a very exciting and new area of research. So there are already established ways to, you know, to make vaccines, and there are several different ones, right? So there is one where essentially the virus goes through the cell culture multiple times, so it becomes essentially adjusted to the specific embryonic cell and as a result becomes less, you know, compatible with the host human cells.
So therefore it's sort of the idea of the live vaccine where the particles are there, but they are not so efficient, you know, so they cannot replicate, you know, as rapidly as, you know, before the vaccine. They can be introduced to the immune system, the immune system will learn and the person who gets this vaccine won't get, you know, sick or, you know, will have mild, you know, mild symptoms. So then there is sort of different types of the way to introduce the nonfunctional parts of this virus or the virus where some of the information is stripped down. For example, the virus with no genetic material, so with no RNA genome, exactly. So it cannot replicate, it cannot essentially perform most of its functions. What is the biggest hurdle to design one of these, to arrive at one of these? Is it the work that you're doing in the fundamental understanding of this new virus or is it in the, from our perspective, well, complicated world of experimental validation and sort of showing that this, like going through the whole process of showing this is actually going to work with FDA approval, all that kind of stuff? I think it's both. I mean, you know, our understanding of the molecular mechanisms will allow us to, you know, to design, to have more efficient designs of the vaccines. However, once you design a vaccine, it needs to be tested. But when you look at the 18 months and the different projections, it seems like an exceptionally, historically speaking, maybe you can correct me, but it's even 18 months seems like a very accelerated timeline. It is. It is. I mean, I remember reading about, you know, in the book about some previous vaccines that it could take up to 10 years to design and, you know, properly test a vaccine before its mass production. So yeah, we, you know, everything is accelerated these days. I mean, for better, for worse, but, but, you know, we definitely need that. Well, especially with the coronavirus, I mean, the scientific community is really stepping up and working together. The collaborative aspect is really interesting. You mentioned, so the vaccine is one and then there's antivirals, antiviral drugs. So antiviral drugs. So where, you know, vaccines are typically needed to prevent the infection. Right. But once you have an infection, one, you know, so what we try to do, we try to stop it. So we try to stop virus from functioning. And so the antiviral drugs are designed to block some critical functioning of the proteins from the virus. So there are a number of interesting candidates. And I think, you know, if you ask me, I, you know, I think Remdesivir is perhaps the most promising. It's, it has been shown to be, you know, an efficient and effective antiviral for SARS. Originally, it was the antiviral drug developed for a completely different virus, I think, for Ebola and Marburg. At high levels, you know how it works? So it tries to mimic one of the nucleotides in RNA and essentially that stops the replication. So messes, I guess that's what, so antiviral drugs mess with some aspect of this process. So, you know, so essentially we try to stop certain functions of the virus. There are some other ones, you know, that are designed to inhibit the protease, the thing that clips protein sequences. There is one that was originally designed for malaria, which is a bacterial, you know, bacterial disease. This is so cool. 
So, but that's exactly where your work steps in, is you're figuring out the function and the structure of these different proteins, so like providing candidates for where drugs can plug in. Well, yes, because, you know, one thing that we don't know is whether or not, so let's say we have a perfect drug candidate that is efficient against SARS and against MERS. Now, is it going to be efficient against the new SARS-CoV-2? We don't know that. And there are multiple aspects that can affect this efficiency. So, for instance, if the binding site, so the part of the protein where this ligand gets attached, if this site is mutated, then the ligand may no longer be able to attach to this part. And, you know, our work and the work of other bioinformatics groups, you know, essentially are trying to understand whether or not that will be the case. And it looks like for the ligands that we looked at, the ligand binding sites are pretty much intact, which is very promising. So, if we can just like zoom out for a second. Are you optimistic? So, there's two, well, there's three possible ends to the coronavirus pandemic. So, one is drugs or vaccines get figured out very quickly, probably drugs first. The other is the pandemic runs its course for this wave, at least. And then the third is, you know, things go much worse in some dark, bad, very bad direction. Do you see, let's focus on the first two. Do you see the antiviral drugs or the work you're doing being relevant for us right now in stopping the pandemic? Or do you hope that the pandemic will run its course? So, the social distancing, things like wearing masks, all those discussions that we're having will be the method with which we fight coronavirus in the short term. Or do you think that it will have to be antiviral drugs? I think antivirals would be, I would view that as at least the short term solution. I see more and more cases in the news of those new drug candidates being administered in hospitals. And I mean, this is right now the best that we have. But do we need it in order to reopen the economy? I mean, we definitely need it. I cannot sort of speculate on how that will affect the reopening of the economy, because we are, you know, we are kind of deep into the pandemic. And it's not just the States. It's also, you know, worldwide, you know. Of course, you know, there is also the possibility of the second wave, as you mentioned. And this is why, you know, we need to be super careful. We need to follow all the precautions that the doctors tell us to follow. Are you worried about the mutation of the virus? It's, of course, a real possibility. Now, how, to what extent this virus can mutate, it's an open question. I mean, we know that it is able to mutate, to jump from one species to another and to become transmittable between humans. Right. So will it, you know, so let's imagine that we have the new antiviral. Will this virus eventually become resistant to this antiviral? We don't know. I mean, this is what needs to be studied. This is such a beautiful and terrifying process, that a virus, some viruses may be able to mutate to respond to the, to mutate around the thing we've put before it. Can you explain that process? Like, how does that happen? Is that just the way of evolution? I would say so, yes. I mean, it's the evolutionary mechanisms. There is nothing imprinted into this virus that makes it, you know, it's just the way it evolves. And actually, it's the way it co-evolves with its host.
It's just amazing, especially the evolution mechanisms, especially amazing given how simple the virus is. It's incredible that it's, I mean, it's beautiful. It's beautiful because it's one of the cleanest examples of evolution working. Well, I think, I mean, one of the sort of reasons for its simplicity is that it does not require all the necessary functions to be stored. So it actually can hijack the majority of the necessary functions from the host cell. So the ability to do so, in my view, reduces the complexity of this machine drastically. Although if you look at the, you know, most recent discoveries, so scientists discovered viruses that are as large as bacteria. Right. So these mimiviruses and mamaviruses. It actually, those discoveries made scientists reconsider the origins of viruses, you know, and what are the mechanisms, the evolutionary mechanisms, that lead to the appearance of viruses. By the way, I mean, you did mention that viruses are, I think you mentioned that they're not living. Yes, they're not living organisms. So let me ask that question again. Why do you think they're not living organisms? Well, because they are dependent. The majority of the functions of the virus are dependent on the host. So let me do the devil's advocate, let me be the philosophical devil's advocate here and say, well, humans, which we would say are living, need our host planet to survive. So you can basically take every living organism that we think of as definitively living, it's always going to have some aspects of its host that it needs, of its environment. So is that really the key aspect of why a virus is not living, that dependence? Because it seems to be very good at doing so many things that we consider to be intelligent. It's just that dependence part. Well, I mean, it's difficult to answer in this way. I mean, the way I think about the virus is, you know, in order for it to function, it needs to have the critical components, the critical tools that it doesn't have. So, in my view, it's not autonomous. That's how I separate the idea of the living organism, on a very high level, between the living organism and... And you have some, we have, I mean, these are just terms and perhaps they don't mean much, but we have some kind of sense of what autonomous means and that humans are autonomous. You've also done excellent work in epidemiological modeling, the simulation of these things. So zooming out, outside of the body, doing the agent based simulation. So that's where you actually simulate individual human beings and then the spread of viruses from one to the other. How does, at a high level, agent based simulation work? All right. So it's also one of these ironies of timing, because, I mean, we've worked on this project for the past five years, and on New Year's Eve, I got an email from my PhD student that the last experiments were completed. And three weeks after that, we get this Diamond Princess story, and we were emailing each other with the same news, saying like... So the Diamond Princess is a cruise ship. Yes. And what was the project that you worked on for five years? The project, I mean, the code name, it started with a bunch of undergraduates. The code name was Zombies on a Cruise Ship. So they wanted to essentially model the zombie apocalypse on a cruise ship.
And after having some fun, we then thought about the fact that if you look at the cruise ships, the infectious outbreak has been one of the biggest threats to the cruise ship economy. So perhaps the most frequently occurring is the Norwalk virus. And this is essentially one of these stomach flus that you have. And it can be quite devastating. So occasionally there are cruise ships, they get canceled, they get returned back to the origin. And so we wanted to study, and this is very different from the traditional epidemiological studies where the scale is much larger. So we wanted to study this in a confined environment, which is a cruise ship, it could be a school, it could be other places such as this large company where people are in interaction. And the benefit of this model is we can actually track that in the real time. So we can actually see the whole course of the evolution, the whole course of the interaction between the infected host and the host and the pathogen, et cetera. So agent based system or multi agent system to be precisely is a good way to approach this problem because we can introduce the behavior of the passengers, of the crews. And what we did for the first time, that's where we introduced some novelty is we introduced a pathogen agent explicitly. So that allowed us to essentially model the behavior on the host side as well on the pathogen side. And all of a sudden we can have a flexible model that allows us to integrate all the key parameters about the infections. So for example, the virus, right? So the ways of transmitting the virus between the host. How long does virus survive on the surface, the fomite? What is, you know, how much of the viral particles does a host shed when he or she is asymptomatic versus symptomatic? And you can encode all of that into this pathogen. It's just for people who don't know. So agent based simulation, usually the agent represents a single human being. And then there's some graphs, like contact graphs that represent the interaction between those human beings. So, yes. So we, so essentially, you know, so agents are, you know, individual programs that are run in parallel. And we can provide instructions for these agents how to interact with each other, how to exchange information, in this case, exchange the infection. But in this case, in your case, you've added a pathogen as an agent. I mean, that's kind of fascinating. It's kind of a brilliant way to condense the parameters, to aggregate, to bring the parameters together that represent the pathogen, the virus. Yes. That's fascinating, actually. So, yeah, it was, you know, we realized that, you know, by bringing in the virus, we can actually start modeling. I mean, we are no longer bounded by very specific sort of aspects of the specific virus. So we end up, we started with, you know, Norwalk virus and of course, zombies. But we continued to modeling Ebola virus outbreak, flu, SARS, and because I felt that we need to add a little bit more sort of excitement for our undergraduate students. So we actually modeled the virus from the Contagion movie. So MEV1 and, you know, unfortunately, that virus and we tried to extract as much information. Luckily, the this movie was the scientific consultant was Ian Lipkin, a virologist from Columbia University, who is actually who provided. I think he designed this virus for this movie based on Nipah virus. 
And I think with some ideas behind SARS or flu-like airborne viruses, and, you know, the movie surprisingly contained enough details for us to extract and to model it. I was hoping you would, like, publish a paper on how this virus works. Yeah, we are planning to publish. I would love it if you did, but it would be nice, you know, given the origin of the virus. But you're now actually being a scientist and studying the virus from that perspective. But the origin of the virus, you know, so watching this movie is actually assignment number one in my bioinformatics class that I give. Because it also tells you that, you know, bioinformatics can be of use, because, I don't know if you watched it. Have you watched it? A long time ago. So, approximately a week from the virus detection, we see a screenshot of scientists looking at the structure of the surface protein. And this is where I tell my students that, you know, if you ask an experimental biologist, they will tell you that it's impossible, because it takes months, maybe years to get the crystal structure of this, you know, the structure that is represented. If you ask a bioinformatician, they tell you, sure, why not, just get it modeled. And yes, but it was very interesting to see that, if you look at the screenshots, you actually see the phylogenetic tree, the evolutionary tree that relates this virus to other viruses. So there was a lot of scientific thought put into the movie. And one thing that was interesting to learn is that the two animals that led to the, you know, the zoonotic origin of this virus were a fruit bat and a pig. So, you know, this definitely feels like we're living in a simulation. OK, but maybe a big picture. Agent based simulations now, at a larger scale, sort of not focused on a confined environment but on a larger scale, are used now to drive some policy. So politicians use them to tell stories and narratives and try to figure out how to move forward under so much, so much uncertainty. But in your sense, are agent based simulations useful for actually predicting the future? Or are they useful mostly for the relative comparison of different intervention methods? Well, I think both, because, you know, in the case of the new coronavirus, we are essentially learning that the current intervention methods may not be efficient enough. One important aspect that I find to be so critical and yet something that was overlooked, you know, during the past pandemics is the effect of the asymptomatic period. This virus is different because it has such a long asymptomatic period. And all of a sudden, that creates a completely new game when trying to contain this virus. In terms of the dynamics of the infection. Exactly. Do you also, I don't know how closely you're tracking this, but do you also think that there's a different rate of infection for when you're asymptomatic like that? That aspect, or does a virus not care? So there were a couple of works. So one important parameter that tells us how contagious the person is when symptomatic versus asymptomatic is looking at the number of viral particles this person sheds, you know, as a function of time.
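A stripped-down sketch of the agent-based idea discussed above, with hosts as individual agents and the pathogen as an explicit object carrying its own parameters, including a pre-symptomatic but already contagious period. Every number here is invented for illustration, and the contact model is deliberately crude; it is nothing like the actual cruise-ship model.

# Minimal agent-based outbreak toy: Host agents plus an explicit Pathogen agent.
import random

class Pathogen:
    def __init__(self, p_transmit=0.05, presymptomatic_days=5, infectious_days=14):
        self.p_transmit = p_transmit                # chance of transmission per contact
        self.presymptomatic_days = presymptomatic_days
        self.infectious_days = infectious_days

class Host:
    def __init__(self):
        self.days_infected = None                   # None = susceptible

    def infect(self):
        if self.days_infected is None:
            self.days_infected = 0

    def step(self):
        if self.days_infected is not None:
            self.days_infected += 1

    def contagious(self, pathogen):
        return (self.days_infected is not None
                and self.days_infected < pathogen.infectious_days)

    def symptomatic(self, pathogen):
        return (self.days_infected is not None
                and pathogen.presymptomatic_days <= self.days_infected < pathogen.infectious_days)

def simulate(n_hosts=500, days=60, contacts_per_day=8, seed=1):
    random.seed(seed)
    pathogen = Pathogen()
    hosts = [Host() for _ in range(n_hosts)]
    hosts[0].infect()                               # patient zero
    for day in range(days):
        for h in hosts:
            # pre-symptomatic hosts keep circulating and can infect others;
            # symptomatic hosts are assumed to self-isolate in this toy model
            if h.contagious(pathogen) and not h.symptomatic(pathogen):
                for other in random.sample(hosts, contacts_per_day):
                    if random.random() < pathogen.p_transmit:
                        other.infect()
        for h in hosts:
            h.step()
        if day % 10 == 0:
            infected = sum(h.days_infected is not None for h in hosts)
            print(f"day {day:2d}: {infected} ever infected")

simulate()

Because the pathogen is its own object, swapping in a different virus only means changing its parameters, which is roughly the flexibility described in the conversation.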
So, so far, what I saw is the study that tells us that the, you know, the person during the asymptomatic period is already contagious and it sheds the person sheds enough viruses to infect another host. And I think there's so many excellent papers coming out, but I think I just saw some maybe a nature paper that said the first week is when you're symptomatic or asymptomatic, you're the most contagious. So the highest level of the like the plot sort of in the 14 day period that collected a bunch of subjects. And I think the first week is when it's the most. Yeah, I think, I mean, I'm waiting, I'm waiting to see sort of more, more populated studies with higher numbers. My one of my favorite studies was, again, very recent one where scientists determined that tears are not contagious. So, so there is, you know, so there is no viral shedding done through, through tears. So they found one moist thing that's not contagious. And I mean, there's a lot of, I've personally been, because I'm on a survey paper, somehow that's looking at masks. And there's been so much interesting debates on the efficacy of masks. And there's a lot of work and there's a lot of interesting work on whether this virus is airborne. I mean, it's a totally open question. It's leaning one way right now, but it's a totally open question whether it can travel in aerosols long distances. I mean, do you have a, do you think about this stuff? Do you track this stuff? Are you focused on the, the bioinformatics of it? I mean, this is, this is a very important aspect for our epidemiology study. I think the, I mean, and it's sort of a very simple sort of idea, but I agree with people who say that the mask, the masks work in both ways. So it not only protects you from the, you know, incoming viral particles, but also, you know, it, it, you know, makes the potentially contagious person not to spread the viral particles. Who is, when they're asymptomatic may not even know that they're, in fact, it seems to be, there's evidence that they don't, surgical and certainly homemade masks, which is what's needed now actually, because there's a huge shortage of, they don't work as to protect you that well. They work much better to protect others. So it's, it's, it's a motivation for us to all wear one. Exactly. Cause I mean, you know, you don't know where, you know, about 30%, as far as I remember, at least 30% of the asymptomatic cases are completely asymptomatic. Right. So you don't really cough. You don't, I mean, you don't have any symptoms, yet you shed viruses. Do you think it's possible that we'll all wear masks? I wore a mask at a grocery store and you just, you get looks. I mean, this was like a week ago. Maybe it's already changed because I think CDC or somebody, I think the CDC has said that we should be wearing masks, like LA, they starting to happen. But do you, it just seems like something that this country will really struggle doing or no? I hope not. I mean, you know, it, it was interesting. I was looking through the, through the old pictures during the Spanish flu and you could see that the, you know, pretty much everyone was wearing masks with some exceptions and they were like, you know, sort of iconic photograph of the, I think it was San Francisco, this tram who was refusing to let in a, you know, someone without the mask. So I think, well, you know, it's also, you know, it's related to the fact of how much we are scared. Right. So how much do we treat this problem seriously? 
And, you know, my take on it is we should, because it is very serious. Yeah, I, from a psychology perspective, just worry about the entirety, the entire big mess of a psychology experiment that this is, whether mask will help it or hurt it. You know, masks have a way of distancing us from others by removing the emotional expression and all that kind of stuff. But at the same time, mask also signal that I care about your wellbeing. Exactly. So it's a really interesting trade off. That's just, yeah, it's, it's interesting, right? About distancing. Aren't we distanced enough? Right. Exactly. And when we try to come closer together, when they do reopen the economy, that's going to be a long road of rebuilding trust and not all being huge germaphobes. Let me ask sort of, you have a bit of a Russian accent, Russian or no Russian accent? Were you born in Russia? Yes. And you're too kind. I have a pretty thick Russian accent. What are your favorite memories of Russia? So I moved first to Canada and then to the United States back in 99. So by that time I was 22. So, you know, whatever Russian accent I got back then, you know, it stuck with me for the rest of my life. You know, it's, yeah, so I, you know, by the time the Soviet Union collapsed, I was, you know, I was a kid, but sort of, you know, old enough to realize that there are changes. Did you want to be a scientist back then? Oh, yes. Oh, yeah. I mean, my first, the first sort of 10 years of my sort of, you know, juvenile life, I wanted to be a pilot of a passenger jet plane. Wow. So yes, it was like, you know, I was getting ready, you know, to go to a college to get the degree, but I've been always fascinated by science. And, you know, so not just by math, of course, math was one of my favorite subjects, but, you know, biology, chemistry, physics, somehow I, you know, I liked those four subjects together. And yes, so essentially after a certain period of time, I wanted to actually, back then it was a very popular sort of area of science called cybernetics. So it's sort of, it's not really computer science, but it was like, you know, computational robotics in this sense. And so I really wanted to do that. And but then, you know, I, you know, I realized that, you know, my biggest passion was in mathematics. And later I, you know, when, you know, studying in Moscow State University, I also realized that I really want to apply the knowledge. So I really wanted to mix, you know, the mathematical knowledge that I get with real life problems. And that could be, you mentioned chemistry and now biology. And I sort of, does it make you sad? Maybe I'm wrong on this, but it seems like it's difficult to be in collaboration, to do open, big science in Russia. From my distant perspective in computer science, I don't, I'm not, I can go to conferences in Russia. I sadly don't have many collaborators in Russia. I don't know many people doing great AI work in Russia. Does it make, does that make you sad? Am I wrong in seeing it this way? Well, I mean, I am, I have to tell you, I am privileged to have collaborators in bioinformatics in Russia. And I think this is the bioinformatics school in Russia is very strong. In Moscow? In Moscow, in Novosibirsk, in St. Petersburg, have great collaborators in Kazan. And so at least, you know, in terms of, you know, my area of research. There's strong people there. Yeah, strong people, a lot of great ideas, very open to collaborations. 
So I, perhaps, you know, it's my luck, but, you know, I haven't experienced, you know, any difficulties in establishing collaborations. That's bioinformatics though. It could be bioinformatics too. And it could, yeah, it could be person by person related, but I just don't feel the warmth and love that I would, you know, you talk about the seminal people who are French in artificial intelligence. France welcomes them with open arms in so many ways. I just don't feel the love from Russia. I do on the human beings side, like people in general, like friends and just cool, interesting people. But from the scientific community, no conferences, no big conferences. Yeah, it's actually, you know, I'm trying to think. Yeah, I cannot recall any big AI conferences in Russia. It has an effect on, for me, I sadly haven't been back to Russia. But my problem is it's very difficult. So now I have to renounce the citizenship. Oh, is that right? I mean, I'm a citizen of the United States and it makes it very difficult. There's a mess now, right? So, I want to be able to travel, like, you know, legitimately. Yeah. And it's not an obvious process that will make it super easy. I mean, that's part of that, like, you know, it should be super easy for me to travel there. Well, you know, hopefully these unfortunate circumstances that we're in will actually promote remote collaborations. Yes. And I think what we are experiencing right now is that you still can do science, you know, being quarantined in your own home. I mean, you know, I certainly understand this is a very challenging time for experimental scientists. I mean, I have many collaborators who are, you know, who are affected by that. But for computational scientists. Yeah, we're really leaning into the remote communication. Nevertheless, I had to come talk to you in person, because there's something that you just can't do in terms of a conversation like this. I don't know why, but in person is very much needed. So I really appreciate you doing it. You have a collection of science bobbleheads. Yes. Which look amazing. Which bobblehead is your favorite and which real world version, which scientist, is your favorite? So yeah, by the way, I was trying to bring them in, but they are quarantined now. In my office, they sort of demonstrate the social distance, so they're nicely spaced away from each other. But so, you know, it's interesting. So I've been collecting those bobbleheads for the past maybe 12 or 13 years. And, you know, interestingly enough, it started with the two bobbleheads of Watson and Crick. And interestingly enough, my last bobblehead in this collection for now, and my favorite one, because I felt so good when I got it, was the Rosalind Franklin. Who is in the full group? So I have Watson, Crick, Newton, Einstein, Marie Curie, Tesla, of course, Charles Darwin, and Rosalind Franklin. I am definitely missing quite a few of my favorite scientists. And so, you know, if I were to add to this collection, I would add, of course, Kolmogorov. That's, you know, I've been always fascinated by his, well, his dedication to science, but also his dedication to educating young people, the next generation. So it's very inspiring. He's one of the, okay, yeah, he's one of Russia's greats. Yes. Yeah. So he also, you know, the school, the high school that I attended was named after him, and he was great. You know, so he founded the school, and he actually taught there. Is this in Moscow? Yes.
So, but then, I mean, you know, other people that I would definitely like to see in my collections was, would be Alan Turing, would be John von Neumann. Yeah, you're a little bit late on the computer scientists. Yes. Well, I mean, they don't, they don't make them, you know, I still am amazed that they haven't made Alan Turing yet. Yes. And I would also add Linus Pauling. Linus Pauling. Who is Linus Pauling? So this is, this is, to me is one of the greatest chemists. And the person who actually discovered the secondary structure of proteins, who was very close to solving the DNA structure. And, you know, people argue, but some of them were pretty sure that if not for this, you know, photograph 51 by Rosalind Franklin that, you know, Watson and Crick got access to, he would be, he would be the one who would solve it. Science is a funny race. It is. Let me ask the biggest and the most ridiculous question. So you've kind of studied the human body and its defenses and these enemies that are about from a biological perspective, bioinformatics perspective, a computer scientists perspective. How has that made you see your own life, sort of the meaning of it, or just even seeing your, what it means to be human? Well, it certainly makes me realizing how fragile the human life is. If you think about this little tiny thing can impact the life of the whole human kind to such extent. So, you know, it's, it's something to appreciate and to remember that, that, you know, we are fragile, we have to bond together as a society. And, you know, it also gives me sort of hope that what we do as scientists is useful. Well, I don't think there's a better way to end it. Dmitry, thank you so much for talking today. It was an honor. Thank you very much. Thanks for listening to this conversation with Dmitry Korkin. And thank you to our presenting sponsor, Cash App. Please consider supporting the podcast by downloading Cash App and using code LexPodcast. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at LexFriedman. And now, let me leave you with some words from Edward Osborne Wilson, E.O. Wilson, the variety of genes on the planet and viruses exceeds or is likely to exceed that in all of the rest of life combined. Thank you for listening and hope to see you next time.
Dmitry Korkin: Computational Biology of Coronavirus | Lex Fridman Podcast #90
The following is a conversation with Jack Dorsey, cofounder and CEO of Twitter and founder and CEO of Square. Given the happenings at the time related to Twitter leadership and the very limited time we had, we decided to focus this conversation on Square and some broader philosophical topics and to save an in depth conversation on engineering and AI at Twitter for a second appearance in this podcast. This conversation was recorded before the outbreak of the pandemic. For everyone feeling the medical, psychological and financial burden of this crisis, I'm sending love your way. Stay strong. We're in this together. We'll beat this thing. As an aside, let me mention that Jack moved $1 billion of Square equity, which is 28% of his wealth, to form an organization that funds COVID-19 relief. First, as Andrew Yang tweeted, this is a spectacular commitment. And second, it is amazing that it operates transparently by posting all its donations to a single Google doc. To me, true transparency is simple. And this is as simple as it gets. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon or simply connect with me on Twitter at Lex Fridman spelled F R I D M A N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Masterclass. Sign up on masterclass.com slash Lex to get a discount and to support this podcast. When I first heard about Masterclass, I thought it was too good to be true. For $180 a year, you get an all access pass to watch courses from, to list some of my favorites, Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, creator of SimCity and The Sims, both among my favorite games, on game design, Jane Goodall on conservation, Carlos Santana on guitar, one of my favorite guitar players, Garry Kasparov on chess, Daniel Negreanu on poker and many, many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money. For me, the key is to not be overwhelmed by the abundance of choice. Pick three courses you want to complete, watch each all the way through. It's not that long, but it's an experience that will stick with you for a long time. It's easily worth the money. You can watch it on basically any device. Once again, sign up on masterclass.com slash Lex to get a discount and to support this podcast. And now, here's my conversation with Jack Dorsey. You've been on several podcasts, Joe Rogan, Sam Harris, Rich Roll, others, excellent conversations, but I think there's several topics that you didn't talk about that I think are fascinating that I'd love to talk to you about, sort of machine learning, artificial intelligence, both the narrow kind and the general kind, and engineering at scale. So there's a lot of incredible engineering going on that you're a part of, crypto, cryptocurrency, blockchain, UBI, all kinds of philosophical questions maybe we'll get to about life and death and meaning and beauty. So you're involved in building some of the biggest network systems in the world, sort of trillions of interactions a day. The cool thing about that is the infrastructure, the engineering at scale. You started as a programmer with C. Yeah, so. I'm a hacker, I'm not really an engineer.
Not a legit software engineer, you're a hacker at heart. But to achieve scale, you have to do some, unfortunately, legit large scale engineering. So how do you make that magic happen? Hire people that I can learn from, number one. I mean, I'm a hacker in the sense that I, my approach has always been do whatever it takes to make it work. So that I can see and feel the thing and then learn what needs to come next. And oftentimes what needs to come next is a matter of being able to bring it to more people, which is scale. And there's a lot of great people out there that either have experience or are extremely fast learners that we've been lucky enough to find and work with for years. But I think a lot of it, we benefit a ton from the open source community and just all the learnings there that are laid bare in the open. All the mistakes, all the success, all the problems. It's a very slow moving process usually open source, but it's very deliberate. And you get to see because of the pace, you get to see what it takes to really build something meaningful. So I learned most of everything I learned about hacking and programming and engineering has been due to open source and the generosity that people have given to give up their time, sacrifice their time without any expectation in return, other than being a part of something much larger than themselves, which I think is great. Open source movement is amazing. But if you just look at the scale, like Square has to take care of, is this fundamentally a software problem or a hardware problem? You mentioned hiring a bunch of people, but it's not, maybe from my perspective, not often talked about how incredible that is to sort of have a system that doesn't go down often, that is secure, is able to take care of all these transactions. Like maybe I'm also a hacker at heart and it's incredible to me that that kind of scale could be achieved. Is there some insight, some lessons, some interesting tidbits that you can say how to make that scale happen? Is it the hardware fundamentally challenge? Is it a software challenge? Is it a social challenge of building large teams of engineers that work together, that kind of thing? Like what's the interesting challenges there? By the way, you're the best dressed hacker I've met. I think the. Thank you. If the enumeration you just went through, I don't think there's one. You have to kind of focus on all and the ability to focus on all that really comes down to how you face problems and whether you can break them down into parts that you can focus on. Because I think the biggest mistake is trying to solve or address too many at once or not going deep enough with the questions or not being critical of the answers you find or not taking the time to form credible hypotheses that you can actually test and you can see the results of. So all of those fall in the face of ultimately critical thinking skills, problem solving skills. And if there's one skill I want to improve every day, it's that that's what contributes to the learning and the only way we can evolve any of these things is learning what it's currently doing and how to take it to the next step. And questioning assumptions, the first principles kind of thinking, seems like a fundamental to this whole process. Yeah, but if you get too overextended into, well, this is a hardware issue, you miss all the software solutions. And vice versa, if you focus too much on the software, there are hardware solutions that can 10X the thing. 
So I try to resist the categories of thinking and look for the underlying systems that make all these things work. But those only emerge when you have a skill around creative thinking, problem solving, and being able to ask critical questions and having the patience to go deep. So one of the amazing things, if we look at the mission of Square, is to increase people's access to the economy. Maybe you can correct me if I'm wrong, that's from my perspective. So from the perspective of merchants, peer to peer payments, even crypto, cryptocurrency, digital cryptocurrency, what do you see as the major ways that our society can increase participation in the economy? So if we look at today and the next 10 years, next 20 years, you go into Africa, maybe in Africa and all kinds of other places outside of the North America. If there was one word that I think represents what we're trying to do at Square, it is that word access. One of the things we found is that we weren't expecting this at all. When we started, we thought we were just building a piece of hardware to enable people to plug it into their phone and swipe a credit card. And then as we talked with people who actually tried to accept credit cards in the past, we found a consistent theme, which many of them weren't even enabled, not enabled, but allowed to process credit cards. And we dug a little bit deeper, again, asking that question. And we found that a lot of them would go to banks or these merchant acquirers. And waiting for them was a credit check and looking at a FICO score. And many of the businesses that we talked to and many small businesses, they don't have good credit or a credit history. They're entrepreneurs who are just getting started, taking a lot of personal risk, financial risk. And it just felt ridiculous to us that for the job of being able to accept money from people, you had to get your credit checked. And as we dug deeper, we realized that that wasn't the intention of the financial industry, but it's the only tool they had available to them to understand authenticity, intent, predictor of future behavior. So that's the first thing we actually looked at. And that's where the, you know, we built the hardware, but the software really came in terms of risk modeling. And that's when we started down the path that eventually leads to AI. We started with a very strong data science discipline because we knew that our business was not necessarily about making hardware. It was more about enabling more people to come into the system. So the fundamental challenge there is, so to enable more people to come into the system, you have to lower the barrier of checking that that person will be a legitimate vendor. Is that the fundamental problem? Yeah, and a different mindset. I think a lot of the financial industry had a mindset of kind of distrust and just constantly looking for opportunities to prove why people shouldn't get into the system, whereas we took on a mindset of trust and then verify, verify, verify, verify, verify. Yes. So, you know, when we entered the space, only about 30 to 40% of the people who applied to accept credit cards would actually get through the system. We took that number to 99%. And that's because we reframed the problem, we built credible models, and we had this mindset of, we're going to watch not at the merchant level, but we're gonna watch at the transaction level. 
So come in, perform some transactions, and as long as you're doing things with integrity, credible, and don't look suspicious, we'll continue to serve you. If we see any interestingness in how you use our system, that will be bubbled up to people to review, to figure out if there's something nefarious going on, and that's when we might ask you to leave. So the change in the mindset led to the technology that we needed to enable more people to get through, and to enable more people to access the system. What role does machine learning play into that, in that context of, you said, first of all, it's a beautiful shift. Anytime you shift your viewpoint into seeing that people are fundamentally good, and then you just have to verify and catch the ones who are not, as opposed to assuming everybody's bad, this is a beautiful thing. So what role does the, to you, throughout the history of the company, has machine learning played in doing that verification? It was immediate. I mean, we weren't calling it machine learning, but it was data science. And then as the industry evolved, machine learning became more of the nomenclature, and as that evolved, it became more sophisticated with deep learning, and as that continues to evolve, it'll be another thing. But they're all in the same vein. But we built that discipline up within the first year of the company, because we also had, we had to partner with a bank, we had to partner with Visa and MasterCard, and we had to show that, by bringing more people into the system, that we could do so in a responsible way, that would not compromise their systems, and that they would trust us. How do you convince this upstart company with some cool machine learning tricks is able to deliver on this trustworthy set of merchants? We staged it out in tiers. We had a bucket of 500 people using it, and then we showed results, and then 1,000, and then 10,000, then 50,000, and then the constraint was lifted. So again, it's kind of getting something tangible out there. I want to show what we can do rather than talk about it. And that put a lot of pressure on us to do the right things. And it also created a culture of accountability, of a little bit more transparency, and I think incentivized all of our early folks and the company in the right way. So what does the future look like in terms of increasing people's access? Or if you look at IoT, Internet of Things, there's more and more intelligent devices. You can see there's some people even talking about our personal data as a thing that we could monetize more explicitly versus implicitly. Sort of everything can become part of the economy. Do you see, so what does the future of Square look like in sort of giving people access in all kinds of ways to being part of the economy as merchants and as consumers? I believe that the currency we use is a huge part of the answer. And I believe that the internet deserves and requires a native currency. And that's why I'm such a huge believer in Bitcoin because it just, our biggest problem as a company right now is we cannot act like an internet company. Open a new market, we have to have a partnership with a local bank. We have to pay attention to different regulatory onboarding environments. And a digital currency like Bitcoin takes a bunch of that away where we can potentially launch a product in every single market around the world because they're all using the same currency. And we have consistent understanding of regulation and onboarding and what that means. 
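As an aside on the risk approach Dorsey describes a little earlier, the "trust, then verify" idea of serving merchants by default, scoring individual transactions, and bubbling anything unusual up to a human reviewer can be sketched very roughly in code. This is a minimal illustration only: the feature names, threshold, and scoring logic are hypothetical and stand in for a learned model; none of it is Square's actual system.

```python
# Minimal sketch of a "trust, then verify" transaction review loop.
# All names, features, and thresholds are illustrative, not Square's pipeline.

from dataclasses import dataclass

@dataclass
class Transaction:
    merchant_id: str
    amount: float
    card_country: str
    merchant_country: str
    hours_since_signup: float

def risk_score(txn: Transaction) -> float:
    """Toy stand-in for a learned risk model: returns a score in [0, 1]."""
    score = 0.0
    if txn.amount > 1000:                          # unusually large charge
        score += 0.4
    if txn.card_country != txn.merchant_country:   # cross-border card
        score += 0.3
    if txn.hours_since_signup < 24:                # brand-new merchant
        score += 0.3
    return min(score, 1.0)

REVIEW_THRESHOLD = 0.7  # illustrative cutoff

def process(txn: Transaction, review_queue: list) -> str:
    """Approve by default; only 'interesting' transactions go to humans."""
    if risk_score(txn) >= REVIEW_THRESHOLD:
        review_queue.append(txn)  # bubbled up to a person, not auto-blocked
        return "flagged_for_review"
    return "approved"

if __name__ == "__main__":
    queue: list = []
    txn = Transaction("m_42", 1500.0, "GB", "US", 3.0)
    print(process(txn, queue), len(queue))  # flagged_for_review 1
```

The point of the sketch is the default posture: transactions are served unless something looks off, and a flag routes to a human reviewer rather than triggering an automatic ban, which mirrors the mindset shift from distrust to trust-then-verify described above.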
So I think the internet continuing to be accessible to people is number one. And then I think currency is number two. And it will just allow for a lot more innovation, a lot more speed in terms of what we can build and others can build. And it's just really exciting. So, I mean, I wanna be able to see that and feel that in my lifetime. So in this aspect and in other aspects, you have a deep interest in cryptocurrency and distributed ledger tech in general. I talked to Vitalik Buterin yesterday on this podcast. He says hi, by the way. Hey. He's a brilliant, brilliant person. Talked a lot about Bitcoin and Ethereum, of course. So can you maybe linger on this point? What do you find appealing about Bitcoin, about digital currency? Where do you see it going in the next 10, 20 years? And what are some of the challenges with respect to Square but also just bigger for our globally, for our world, for the way we think about money? I think the most beautiful thing about it is there's no one person setting the direction. And there's no one person on the other side that can stop it. So we have something that is pretty organic in nature and very principled in its original design. And I think the Bitcoin white paper is one of the most seminal works of computer science in the last 20, 30 years. It's poetry. I mean, it really is. Yeah, it's a pretty cool technology. That's not often talked about. There's so much hype around digital currency about the financial impacts of it. But the actual technology is quite beautiful from a computer science perspective. Yeah, and the underlying principles behind it that went into it, even to the point of releasing it under a pseudonym. I think that's a very, very powerful statement. The timing of when it was released is powerful. It was a total activist move. I mean, it's moving the world forward in a way that I think is extremely noble and honorable and enables everyone to be part of the story, which is also really cool. So you asked a question around 10 years and 20 years. I mean, I think the amazing thing is no one knows. And it can emerge. And every person that comes into the ecosystem, whether they be a developer or someone who uses it, can change its direction in small and large ways. And that's what I think it should be, because that's what the internet has shown is possible. Now, there's complications with that, of course. And there's certainly companies that own large parts of the internet and can direct it more than others. And there's not equal access to every single person in the world just yet. But all those problems are visible enough to speak about them. And to me, that gives confidence that they're solvable in a relatively short timeframe. I think the world should be able to do that. I think the world changes a lot as we get these satellites projecting the internet down to earth, because it just removes a bunch of the former constraints and really levels the playing field. But a global currency, which a native currency for the internet is a proxy for, is a very powerful concept. And I don't think any one person on this planet truly understands the ramifications of that. I think there's a lot of positives to it. There's some negatives as well. But... 
Do you think it's possible, sorry to interrupt, do you think it's possible that this kind of digital currency would redefine the nature of money, so become the main currency of the world, as opposed to being tied to fiat currency of different nations and sort of really push the decentralization of control of money? Definitely, but I think the bigger ramification is how it affects how society works. And I think there are many positive ramifications outside of just money. Outside of just money. Money is a foundational layer that enables so much more. I was meeting with an entrepreneur in Ethiopia, and payments is probably the number one problem to solve across the continent, both in terms of moving money across borders between nations on the continent, or the amount of corruption within the current system. But the lack of easy ways to pay people makes starting anything really difficult. I met an entrepreneur who started the Lyft slash Uber of Ethiopia, and one of the biggest problems she has is that it's not easy for her riders to pay the company, it's not easy for her to pay the drivers. And that definitely has stunted her growth and made everything more challenging. So the fact that she even has to think about payments instead of thinking about the best rider experience and the best driver experience is pretty telling. So I think as we get a more durable, resilient and global standard, we see a lot more innovation everywhere. And I think there's no better case study for this than the various countries within Africa and their entrepreneurs who are trying to start things within health or sustainability or transportation or a lot of the companies that we've seen here. So the majority of companies I met in November when I spent a month on the continent were payments oriented. You mentioned, and this is a small tangent, you mentioned the anonymous launch of Bitcoin is a sort of profound philosophical statement. Pseudonymous. What's that even mean? There's a pseudonym. First of all, let me ask. There's an identity tied to it. It's not just anonymous, it's Nakamoto. So Nakamoto might represent one person or multiple people. But let me ask, are you Satoshi Nakamoto? Just checking, catch you off guard. And if I were, would I tell you? Yeah, that's true. Maybe you slip. A pseudonym is constructed identity. Anonymity is just kind of this random, like drop something off and leave. There's no intention to build an identity around it. And while the identity being built was a short time window, it was meant to stick around, I think, and to be known. And it's being honored in how the community thinks about building it, like the concept of Satoshi's, for instance, is one such example. But I think it was smart not to do it anonymous, not to do it as a real identity, but to do it as pseudonym, because I think it builds tangibility and a little bit of empathy that this was a human or a set of humans behind it. And there's this natural identity that I can imagine. But there is also a sacrifice of ego. That's a pretty powerful thing from your perspective. Yeah, which is beautiful. Would you do, sort of philosophically, to ask you the question, would you do all the same things you're doing now if your name wasn't attached to it? Sort of, if you had to sacrifice the ego, put another way, is your ego deeply tied in the decisions you've been making? I hope not. I mean, I believe I would certainly attempt to do the things without my name having to be attached with it. 
But it's hard to do that in a corporation, legally. That's the issue. If I were to do more open source things, then absolutely, I don't need my particular identity, my real identity associated with it. But I think the appreciation that comes from doing something good and being able to see it and see people use it is pretty overwhelming and powerful, more so than maybe seeing your name in the headlines. Let's talk about artificial intelligence a little bit, if we could. 70 years ago, Alan Turing formulated the Turing test. To me, natural language is one of the most interesting spaces of problems that are tackled by artificial intelligence. It's the canonical problem of what it means to be intelligent. He formulated it as the Turing test. Let me ask sort of the broad question, how hard do you think is it to pass the Turing test in the space of language? Just from a very practical standpoint, I think where we are now and for at least years out is one where the artificial intelligence, machine learning, the deep learning models can bubble up interestingness very, very quickly and pair that with human discretion around severity, around depth, around nuance and meaning. I think for me, the chasm to cross for general intelligence is to be able to explain why and the meaning behind something. Behind a decision. Behind a decision or a set of data. So the explainability part is kind of essential to be able to explain the meaning behind something. To explain using natural language why the decisions were made, that kind of thing. Yeah, I mean I think that's one of our biggest risks in artificial intelligence going forward is we are building a lot of black boxes that can't necessarily explain why they made a decision or what criteria they used to make the decision. And we're trusting them more and more from lending decisions to content recommendation to driving to health. Like a lot of us have watches that tell us to stand, and we don't really understand how they're deciding that. I mean that one's pretty simple. But you can imagine how complex they get. And being able to explain the reasoning behind some of those recommendations seems to be an essential part. Although it's hard. Which is a very hard problem because sometimes even we can't explain why we make decisions. That's what I was, I think we're being sometimes a little bit unfair to artificial intelligence systems because we're not very good at some of these things. So do you think, apologize for the ridiculous romanticized question, but on that line of thought, do you think we'll ever be able to build a system like in the movie Her that you could fall in love with? So have that kind of deep connection with. Hasn't that already happened? Hasn't someone in Japan fallen in love with his AI? There's always going to be somebody that does that kind of thing. I mean at a much larger scale of actually building relationships, of being deeper connections. It doesn't have to be love, but it's just deeper connections with artificial intelligence systems. So you mentioned explainability. That's less a function of the artificial intelligence and more a function of the individual and how they find meaning and where they find meaning. Do you think we humans can find meaning in technology in this kind of way? Yeah, yeah, yeah, 100%, 100%. And I don't necessarily think it's a negative. But it's constantly going to evolve. So I don't know, but meaning is something that's entirely subjective. 
And I don't think it's going to be a function of finding the magic algorithm that enables everyone to love it. But maybe, I don't know. That question really gets at the difference between human and machine. So you had a little bit of an exchange with Elon Musk. Basically, I mean it's a trivial version of that, but I think there's a more fundamental question of is it possible to tell the difference between a bot and a human? And do you think it's, if we look into the future, 10, 20 years out, do you think it would be possible or is it even necessary to tell the difference in the digital space between a human and a robot? Can we have fulfilling relationships with each or do we need to tell the difference between them? I think it's certainly useful in certain problem domains to be able to tell the difference. I think in others it might not be as useful. Do you think it's possible for us today to tell that difference? Is the reverse the meta of the Turing test? Well, what's interesting is I think the technology to create is moving much faster than the technology to detect, generally. You think so? So if you look at adversarial machine learning, there's a lot of systems that try to fool machine learning systems. And at least for me, the hope is that the technology to defend will always be right there, at least. Your sense is that... I don't know if they'll be right there. I mean, it's a race, right? So the detection technologies have to be two or 10 steps ahead of the creation technologies. This is a problem that I think the financial industry will face more and more because a lot of our risk models, for instance, are built around identity. Payments ultimately comes down to identity. And you can imagine a world where all this conversation around deep fakes goes towards the direction of a driver's license or passports or state identities. And people construct identities in order to get through a system such as ours to start accepting credit cards or into the cash app. And those technologies seem to be moving very, very quickly. Our ability to detect them, I think, is probably lagging at this point, but certainly with more focus, we can get ahead of it. But this is gonna touch everything. So I think it's like security. We're never going to be able to build a perfect detection system. We're only going to be able to... What we should be focused on is the speed of evolving it and being able to take signals that show correctness or errors as quickly as possible and move and to be able to build that into our newer models or the self learning models. Do you have other worries? Like some people, like Elon and others, have worries of existential threats of artificial intelligence, of artificial general intelligence? Or if you think more narrowly about threats and concerns about more narrow artificial intelligence, like what are your thoughts in this domain? Do you have concerns or are you more optimistic? I think Yuval in his book, 21 Lessons for the 21st Century, his last chapter is around meditation. And you look at the title of the chapter and you're like, oh, it's all meditation. But what was interesting about that chapter is he believes that kids being born today, growing up today, Google has a stronger sense of their preferences than they do, which you can easily imagine. I can easily imagine today that Google probably knows my preferences more than my mother does. 
Maybe not me per se, but for someone growing up only knowing the internet, only knowing what Google is capable of, or Facebook or Twitter or Square or any of these things, the self awareness is being offloaded to other systems and particularly these algorithms. And his concern is that we lose that self awareness because the self awareness is now outside of us and it's doing such a better job at helping us direct our decisions around, should I stand, should I walk today? What doctor should I choose? Who should I date? All these things we're now seeing play out very quickly. So he sees meditation as a tool to build that self awareness and to bring the focus back on, why do I make these decisions? Why do I react in this way? Why did I have this thought? Where did that come from? That's a way to regain control. Or awareness, maybe not control, but awareness so that you can be aware that, yes, I am offloading this decision to this algorithm that I don't fully understand and can't tell me why it's doing the things it's doing because it's so complex. That's not to say that the algorithm can't be a good thing. And to me recommender systems, the best of what they can do is to help guide you on a journey of learning new ideas, of learning, period. It can be a great thing, but do you know you're doing that? Are you aware that you're inviting it to do that to you? I think that's the risk he identifies, right? That's perfectly okay. But are you aware that you have that invitation and it's being acted upon? And so that's a concern you're kind of highlighting that with a lack of awareness, you can just be like floating at sea. So awareness is key in the future of these artificial intelligence systems. Yeah, the movie WALL-E. WALL-E. Which I think is one of Pixar's best movies besides Ratatouille. Ratatouille was incredible. You had me until Ratatouille, okay. Ratatouille was incredible. All right, we've come to the first point where we disagree, okay. It's the entrepreneurial story in the form of a rat. I just remember just the soundtrack was really good, so. Excellent. What are your thoughts, sticking on artificial intelligence a little bit, about the displacement of jobs? That's another perspective that candidates like Andrew Yang talk about. Yang gang forever. Yang gang. So he unfortunately, speaking of Yang gang, has recently dropped out. I know, it was very disappointing and depressing. Yeah, but on the positive side, he's I think launching a podcast, so. Really, cool. Yeah, he just announced that. I'm sure he'll try to talk you into trying to come on to the podcast. I will talk to him. So. About Ratatouille. Yeah, maybe he'll be more welcoming of the Ratatouille argument. What are your thoughts on his concerns of the displacement of jobs, of automations, of the, of course there's positive impacts that could come from automation and AI, but there could also be negative impacts. And within that framework, what are your thoughts about universal basic income? So these interesting new ideas of how we can empower people in the economy. I think he was 100% right on almost every dimension. We see this in Square's business. I mean, he identified truck drivers. I'm from Missouri. And he certainly pointed to the concern and the issue that people from where I'm from feel every single day that is often invisible and not talked about enough. You know, the next big one is cashiers. This is where it pertains to Square's business. 
We are seeing more and more of the point of sale move to the individual customer's hand in the form of their phone and apps and preorder and order ahead. We're seeing more kiosks. We're seeing more things like Amazon Go. And the number of workers as a cashier in retail is immense. And, you know, there's no real answers on how they transform their skills and work into something else. And I think that does lead to a lot of really negative ramifications. And the important point that he brought up around universal basic income is given that the shift is going to come and given it is going to take time to set people up with new skills and new careers, they need to have a floor to be able to survive. And this $1,000 a month is such a floor. It's not going to incentivize you to quit your job because it's not enough, but it will enable you to not have to worry as much about just getting on day to day so that you can focus on what am I going to do now and what am I going to, what skills do I need to acquire? And I think, you know, a lot of people point to the fact that, you know, during the industrial age, we had the same concerns around automation, factory lines and everything worked out okay. But the biggest change is just the velocity and the centralization of a lot of the things that make this work, which is the data and the algorithms that work on this data. I think that the second biggest scary thing is just how around AI is just who actually owns the data and who can operate on it. And are we able to share the insights from the data so that we can also build algorithms that help our needs or help our business or whatnot? So that's where I think regulation could play a strong and positive part. First, looking at the primitives of AI and the tools we use to build these services that will ultimately touch every single aspect of the human experience. And then where data is owned and how it's shared. So those are the answers that as a society, as a world, we need to have better answers around, which we're currently not. They're just way too centralized into a few very, very large companies. But I think it was spot on with identifying the problem and proposing solutions that would actually work. At least that we learned from that you could expand or evolve, but I mean, I think UBI is well past its due. I mean, it was certainly trumpeted by Martin Luther King and even before him as well. And like you said, the exact $1,000 mark might not be the correct one, but you should take the steps to try to implement these solutions and see what works. 100%. So I think you and I eat similar diets, and at least I was. The first time I've heard this. Yeah, so I was doing it before. First time anyone has said that to me, in this case anyway. Yeah, but it's becoming more and more cool. But I was doing it before it was cool. So intermittent fasting and fasting in general, I really enjoy, I love food, but I enjoy the, I also love suffering because I'm Russian. So fasting kind of makes you appreciate the, makes you appreciate what it is to be human somehow. But I have, outside the philosophical stuff, I have a more specific question. It also helps me as a programmer and a deep thinker, like from the scientific perspective, to sit there for many hours and focus deeply. Maybe you were a hacker before you were CEO. What have you learned about diet, lifestyle, mindset that helps you maximize mental performance, to be able to focus for, to think deeply in this world of distractions? 
I think I just took it for granted for too long. Which aspect? Just the social structure of we eat three meals a day and there's snacks in between. And I just never really asked the question, why? Oh, by the way, in case people don't know, I think a lot of people know, but you at least, you famously eat once a day. You still eat once a day? Yep, I eat dinner. By the way, what made you decide to eat once a day? Like, cause to me that was a huge revolution that you don't have to eat breakfast. That was like, I felt like I was a rebel. Like I abandoned my parents or something and became an anarchist. When you first, like the first week you start doing it, it feels that you kind of like have a superpower. Then you realize it's not really a superpower. But it, I think you realize, at least I realized like it just how much is, how much our mind dictates what we're possible of. And sometimes we have structures around us that incentivize like, this three meal a day thing, which was purely social structure versus necessity for our health and for our bodies. And I did it just, I started doing it because I played a lot with my diet when I was a kid and I was vegan for two years and just went all over the place just because I, you know, health is the most precious thing we have and none of us really understand it. So being able to ask the question through experiments that I can perform on myself and learn about is compelling to me. And I heard this one guy on a podcast, Wim Hof, who's famous for doing ice baths and holding his breath and all these things. He said he only eats one meal a day. I'm like, wow, that sounds super challenging and uncomfortable. I'm gonna do it. So I just, I learn the most when I make myself, I wouldn't say suffer, but when I make myself feel uncomfortable because everything comes to bear in those moments and you really learn what you're about or what you're not. So I've been doing that my whole life. Like when I was a kid, I could not, like I was, I could not speak. Like I had to go to a speech therapist and it made me extremely shy. And then one day I realized I can't keep doing this and I signed up for the speech club. And it was the most uncomfortable thing I could imagine doing, getting a topic on a note card, having five minutes to write a speech about whatever that topic is, not being able to use the note card while speaking and speaking for five minutes about that topic. So, but it just, it puts so much, it gave me so much perspective around the power of communication, around my own deficiencies and around if I set my mind to do something, I'll do it. So it gave me a lot more confidence. So I see fasting in the same light. This is something that was interesting, challenging, uncomfortable, and has given me so much learning and benefit as a result. And it will lead to other things that I'll experiment with and play with, but yeah, it does feel a little bit like a superpower sometimes. The most boring superpower one can imagine. Now it's quite incredible. The clarity of mind is pretty interesting. Speaking of suffering, you kind of talk about facing difficult ideas. You meditate, you think about the broad context of life, of our societies. Let me ask, sort of apologize again for the romanticized question, but do you ponder your own mortality? Do you think about death, about the finiteness of human existence when you meditate, when you think about it? And if you do, what, how do you make sense of it, that this thing ends? Well, I don't try to make sense of it. 
I do think about it every day. I mean, it's a daily, multiple times a day. Are you afraid of death? No, I'm not afraid of it. I think it's a transformation, I don't know to what, but it's also a tool to feel the importance of every moment. So I just use it as a reminder, like I have an hour. Is this really what I'm going to spend the hour doing? Like I only have so many more sunsets and sunrises to watch. Am I not going to get up for it? Am I not going to make sure that I try to see it? So it just puts a lot into perspective and it helps me prioritize. I think it's, I don't see it as something that's like that I dread or is dreadful. It's a tool that is available to every single person to use every day because it shows how precious life is. And there's reminders every single day, whether it be your own health or a friend or a coworker or something you see in the news. So to me it's just a question of what we do with our daily reminder. And for me, it's am I really focused on what matters? And sometimes that might be work, sometimes that might be friendships or family or relationships or whatnot, but it's the ultimate clarifier in that sense. So on the question of what matters, another ridiculously big question of once you try to make sense of it, what do you think is the meaning of it all, the meaning of life? What gives you purpose, happiness, meaning? A lot does. I mean, just being able to be aware of the fact that I'm alive is pretty meaningful. The connections I feel with individuals, whether they're people I just meet or long lasting friendships or my family is meaningful. Seeing people use something that I helped build is really meaningful and powerful to me. But that sense of, I mean, I think ultimately it comes down to a sense of connection and just feeling like I am bigger, I am part of something that's bigger than myself and like I can feel it directly in small ways or large ways, however it manifests is probably it. Last question. Do you think we're living in a simulation? I don't know. It's a pretty fun one if we are, but also crazy and random and fraught with tons of problems. But yeah. Would you have it any other way? Yeah. I mean, I just think it's taken us way too long as a planet to realize we're all in this together and we all are connected in very significant ways. I think we hide our connectivity very well through ego, through whatever it is of the day. But that is the one thing I would wanna work towards changing and that's how I would have it another way. Cause if we can't do that, then how are we gonna connect to all the other simulations? Cause that's the next step is like what's happening in the other simulation. Escaping this one and yeah. Spanning across the multiple simulations and sharing in and on the fun. I don't think there's a better way to end it. Jack, thank you so much for all the work you do. There's probably other ways that we've ended this and other simulations that may have been better. We'll have to wait and see. Thanks so much for talking today. Thank you. Thanks for listening to this conversation with Jack Dorsey and thank you to our sponsor, Masterclass. Please consider supporting this podcast by signing up to Masterclass at masterclass.com slash Lex. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast, support on Patreon or simply connect with me on Twitter at Lex Fridman. And now let me leave you with some words about Bitcoin from Paul Graham. I'm very intrigued by Bitcoin. 
It has all the signs of a paradigm shift. Hackers love it, yet it is described as a toy, just like microcomputers. Thank you for listening and hope to see you next time.
Jack Dorsey: Square, Cryptocurrency, and Artificial Intelligence | Lex Fridman Podcast #91
The following is a conversation with Harry Cliff, a particle physicist at the University of Cambridge, working on the Large Hadron Collider beauty experiment that specializes in investigating the slight differences between matter and antimatter by studying a type of particle called the beauty quark or b quark. In this way, he's part of the group of physicists who are searching for the evidence of new particles that can answer some of the biggest questions in modern physics. He's also an exceptional communicator of science with some of the clearest and most captivating explanations of basic concepts in particle physics that I've ever heard. So when I visited London, I knew I had to talk to him. And we did this conversation at the Royal Institution Lecture Theatre, which has hosted lectures for over two centuries from some of the greatest scientists and science communicators in history, from Michael Faraday to Carl Sagan. This conversation was recorded before the outbreak of the pandemic. For everyone feeling the medical and psychological and financial burden of this crisis, I'm sending love your way. Stay strong. We're in this together. We'll beat this thing. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. Quick summary of the ads. Two sponsors, ExpressVPN and Cash App. Please consider supporting the podcast by getting ExpressVPN at expressvpn.com slash lexpod and downloading Cash App and using code lexpodcast. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code lexpodcast. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of the fractional orders is an algorithmic marvel. So big props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up to the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier. So again, you get Cash App from the App Store or Google Play and use the code lexpodcast, you get $10 and Cash App will also donate $10 to FIRST, an organization that is helping advance robotics and STEM education for young people around the world. This show is sponsored by ExpressVPN. Get it at expressvpn.com slash lexpod to get a discount and to support this podcast. I've been using ExpressVPN for many years, I love it. It's easy to use, press the big power on button and your privacy is protected. And if you like, you can make it look like your location is anywhere else in the world. I might be in Boston now, but I can make it look like I'm in New York, London, Paris, or anywhere else. This has a large number of obvious benefits. Certainly, it allows you to access international versions of streaming websites like the Japanese Netflix or the UK Hulu. ExpressVPN works on any device you can imagine. I use it on Linux, shout out to Ubuntu, Windows, Android, but it's available everywhere else too. 
Once again, get it at expressvpn.com slash lexpod to get a discount and to support this podcast. And now, here's my conversation with Harry Cliff. Let's start with probably one of the coolest things that human beings have ever created, the Large Hadron Collider, the LHC. What is it? How does it work? Okay, so it's essentially this gigantic 27 kilometer circumference particle accelerator. It's this big ring. It's buried about 100 meters underneath the surface in the countryside just outside Geneva in Switzerland. And really what it's for, ultimately, is to try to understand what are the basic building blocks of the universe. So you can think of it in a way as like a gigantic microscope, and the analogy is actually fairly precise, so. Gigantic microscope. Effectively, except it's a microscope that looks at the structure of the vacuum. In order for this kind of thing to study particles, which are the microscopic entities, it has to be huge. It's a gigantic microscope. So what do you mean by studying vacuum? Okay, so I mean, so particle physics as a field is kind of badly named in a way, because particles are not the fundamental ingredients of the universe. They're not fundamental at all. So the things that we believe are the real building blocks of the universe are objects, invisible fluid like objects called quantum fields. So these are fields like the magnetic field around a magnet that exists everywhere in space. They're always there. In fact, actually, it's funny that we're in the Royal Institution, because this is where the idea of the field was effectively invented by Michael Faraday doing experiments with magnets and coils of wire. So he noticed that, you know, it was a very famous experiment that he did where he got a magnet and put on top of it a piece of paper and then sprinkled iron filings. And he found the iron filings arranged themselves into these kind of loops, which was actually mapping out the invisible influence of this magnetic field, which is a thing, you know, we've all experienced, we've all felt, held a magnet or two poles of magnet and pushed them together and felt this thing, this force pushing back. So these are real physical objects. And the way we think of particles in modern physics is that they are essentially little vibrations, little ripples in these otherwise invisible fields that are everywhere. They fill the whole universe. You know, I don't, I apologize perhaps for the ridiculous question. Are you comfortable with the idea of the fundamental nature of our reality being fields? Because to me, particles, you know, a bunch of different building blocks makes more sense sort of intellectually, sort of visually, like it seems to, I seem to be able to visualize that kind of idea easier. Are you comfortable psychologically with the idea that the basic building block is not a block, but a field? I think it's, I think it's quite a magical idea. I find it quite appealing. And it's, well, it comes from a misunderstanding of what particles are. So like when you, when we do science at school and we draw a picture of an atom, you draw like, you know, a nucleus with some protons and neutrons, these little spheres in the middle, and then you have some electrons that are like little flies flying around the atom. And that is a completely misleading picture of what an atom is like. It's nothing like that. The electron is not like a little planet orbiting the atom. It's this spread out, wibbly wobbly wave like thing. 
And we know we've known that since, you know, the early 20th century, thanks to quantum mechanics. So when we, we, we carry on using this word particle because sometimes when we do experiments, particles do behave like they're little marbles or little bullets, you know. So in the LHC, when we collide particles together, you'll get, you know, you'll get like hundreds of particles all flying out through the detector and they all take a trajectory and you can see from the detector where they've gone and they look like they're little bullets. So they behave that way, you know, a lot of the time. When you really study them carefully, you'll see that they are not little spheres. They are these ethereal disturbances in these underlying fields. So this is really how we think nature is, which is surprising, but also I think kind of magic. So, you know, we are, our bodies are basically made up of like little knots of energy in these invisible objects that are all around us. And what is the story of the vacuum when it comes to LHC? So why did you mention the word vacuum? Okay, so if we just, if we go back to like the physics, we do know. So atoms are made of electrons, which were discovered a hundred or so years ago. And then in the nucleus of the atom, you have two other types of particles. There's an up, something called an up quark and a down quark. And those three particles make up every atom in the universe. So we think of these as ripples in fields. So there is something called the electron field and every electron in the universe is a ripple moving about in this electron field. So the electron field is all around us, we can't see it, but every electron in our body is a little ripple in this thing that's there all the time. And the quark fields are the same. So there's an up quark field and an up quark is a little ripple in the up quark field. And the down quark is a little ripple in something else called the down quark field. So these fields are always there. Now there are potentially, we know about a certain number of fields in what we call the standard model of particle physics. And the most recent one we discovered was the Higgs field. And the way we discovered the Higgs field was to make a little ripple in it. So what the LHC did, it fired two protons into each other, very, very hard with enough energy that you could create a disturbance in this Higgs field. And that's what shows up as what we call the Higgs boson. So this particle that everyone was going on about eight or so years ago is proof really, the particle in itself is, I mean, it's interesting, but the thing that's really interesting is the field. Because it's the Higgs field that we believe is the reason that electrons and quarks have mass. And it's that invisible field that's always there that gives mass to the particles. The Higgs boson is just our way of checking it's there basically. And so the Large Hadron Collider, in order to get that ripple in the Higgs field, it requires a huge amount of energy. Yeah, I suppose. And so that's why you need this huge, that's why size matters here. So maybe there's a million questions here, but let's backtrack. Why does size matter in the context of a particle collider? So why does bigger allow you for higher energy collisions? Right, so the reason, well, it's kind of simple really, which is that there are two types of particle accelerator that you can build. One is circular, which is like the LHC, the other is a great long line. 
So the advantage of a circular machine is that you can send particles around a ring and you can give them a kick every time they go around. So imagine you have a, there's actually a bit of the LHC, that's about only 30 meters long, where you have a bunch of metal boxes, which have oscillating 2 million volt electric fields inside them, which are timed so that when a proton goes through one of these boxes, the field it sees as it approaches is attractive. And then as it leaves the box, it flips and becomes repulsive and the proton gets attracted and kicked out the other side, so it gets a bit faster. So you send it, and then you send it back round again. And it's incredible, like the timing of that, the synchronization, wait, really? Yeah, yeah, yeah, yeah. I think there's going to be a multiplicative effect on the questions I have. Is, okay, let me just take that tangent for a second. The orchestration of that, is that fundamentally a hardware problem or a software problem? Like what, how do you get that? I mean, I should first of all say, I'm not an engineer. So the guys, I did not build the LHC, so there are people much, much better at this stuff than I. For sure, but maybe. But from your sort of intuition, from the echoes of what you understand, what you heard of how it's designed, what's your sense? What's the engineering aspects of it? The acceleration bit is not challenging. Okay, I mean, okay, there's always challenges with everything, but basically you have these, the beams that go around the LHC, the beams of particles are divided into little bunches. So they're called, they're a bit like swarms of bees, if you like, and there are around, I think it's something of the order 2000 bunches spaced around the ring. And they, if you're at a given point on the ring, counting bunches, you get 40 million bunches passing you every second. So they come in like cars going past on a very fast motorway. So you need to have, if you have an electric field that you're using to accelerate the particles, that needs to be timed so that as a bunch of protons arrives, it's got the right sign to attract them and then flips at the right moment. But I think the voltage in those boxes oscillates at hundreds of megahertz. So the beams are like 40 megahertz, but it's oscillating much more quickly than the beam. So I think it's difficult engineering, but in principle, it's not a really serious challenge. The bigger problem. There's probably engineers like screaming at you right now. Probably, but I mean, okay. So in terms of coming back to this thing, why is it so big? Well, the reason is you wanna get the particles through that accelerating element over and over again. So you wanna bring them back round. So that's why it's round. The question is why couldn't you make it smaller? Well, the basic answer is that these particles are going unbelievably quickly. So they travel at 99.9999991% of the speed of light in the LHC. And if you think about, say, driving your car around a corner at high speed, if you go fast, you need a lot of friction in the tires to make sure you don't slide off the road. So the limiting factor is how powerful a magnet can you make because what we do is magnets are used to bend the particles around the ring. And essentially the LHC, when it was designed, was designed with the most powerful magnets that could conceivably be built at the time. And so that's your kind of limiting factor. 
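To put rough numbers on the trade-off Cliff describes between ring size and magnet strength, here is a back-of-envelope sketch using commonly quoted LHC figures (roughly 7 TeV design beam energy, about 2.8 km bending radius, 26.7 km circumference). The exact values are approximations for illustration, not official machine parameters, and the bending formula treats the proton as ultrarelativistic so its momentum and energy are interchangeable.

```python
import math

# Rough, publicly quoted LHC figures; treat all of these as approximate.
c = 299_792_458.0           # speed of light, m/s
circumference_m = 26_659.0  # ring circumference
bending_radius_m = 2_804.0  # effective dipole bending radius
beam_energy_gev = 7_000.0   # design beam energy per proton
proton_mass_gev = 0.938     # proton rest mass energy

# How close to the speed of light? gamma = E / (m c^2), beta = v / c.
gamma = beam_energy_gev / proton_mass_gev
beta = math.sqrt(1.0 - 1.0 / gamma**2)
print(f"gamma ~ {gamma:.0f}, v/c ~ {beta:.9f}")    # ~0.999999991, as quoted

# Revolution frequency: protons lap the 27 km ring about 11,000 times a second.
f_rev = beta * c / circumference_m
print(f"revolutions per second ~ {f_rev:.0f}")

# Dipole field needed to bend the beam: B[T] ~ p[GeV/c] / (0.2998 * rho[m]).
# A tighter ring (smaller rho) needs a proportionally stronger magnet.
b_field = beam_energy_gev / (0.2998 * bending_radius_m)
print(f"required dipole field ~ {b_field:.1f} T")  # roughly 8 tesla
```

At these numbers, halving the tunnel radius would roughly double the required magnetic field, which is why the superconducting dipoles, rather than the accelerating cavities, end up setting the size of the machine.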
So if you wanted to make the machine smaller, that means a tighter bend, you need to have a more powerful magnet. So it's this toss up between how strong are your magnets versus how big a tunnel can you afford. The bigger the tunnel, the weaker the magnets can be. The smaller the tunnel, the stronger they've gotta be. Okay, so maybe can we backtrack to the Standard Model and say what kind of particles there are, period, and maybe the history of kind of assembling that the Standard Model of physics and then how that leads up to the hopes and dreams and the accomplishments of the Large Hadron Collider. Yeah, sure, okay. So all of 20th century physics in like five minutes. Yeah, please. Okay, so, okay, the story really begins properly. End of the 19th century, the basic view of matter is that matter is made of atoms and the atoms are indestructible, immutable little spheres like the things we were talking about that don't really exist. And there's one atom for every chemical element. So there's an atom for hydrogen, for helium, for carbon, for iron, et cetera, and they're all different. Then in 1897, experiments done at the Cavendish Laboratory in Cambridge, which is where I'm still, where I'm based, showed that there are actually smaller particles inside the atom, which eventually became known as electrons. So these are these negatively charged things that go around the outside. A few years later, Ernest Rutherford, very famous nuclear physicist, one of the pioneers of nuclear physics shows that the atom has a tiny nugget in the center, which we call the nucleus, which is a positively charged object. So then by like 1910, 11, we have this model of the atom that we learn in school, which is you've got a nucleus, electrons go around it. Fast forward a few years, the nucleus, people start doing experiments with radioactivity where they use alpha particles that are spat out of radioactive elements as bullets, and they fire them at other atoms. And by banging things into each other, they see that they can knock bits out of the nucleus. So these things come out called protons, first of all, which are positively charged particles about 2000 times heavier than the electron. And then 10 years later, more or less, a neutral particle is discovered called the neutron. So those are the three basic building blocks of atoms. You have protons and neutrons in the nucleus that are stuck together by something called the strong force, the strong nuclear force, and you have electrons in orbit around that, held in by the electromagnetic force, which is one of the forces of nature. That's sort of where we get to by like 1932, more or less. Then what happens is physics is nice and neat. In 1932, everything looks great, got three particles and all the atoms are made of, that's fine. But then cloud chamber experiments. These are devices that can be used to, the first device is capable of imaging subatomic particles so you can see their tracks. And they're used to study cosmic rays, particles that come from outer space and bang into the atmosphere. And in these experiments, people start to see a whole load of new particles. So they discover for one thing antimatter, which is the sort of a mirror image of the particles. So we discovered that there's also, as well as a negatively charged electron, there's something called a positron, which is a positively charged version of the electron. And there's an antiproton, which is negatively charged. And then a whole load of other weird particles start to get discovered. 
And no one really knows what they are. This is known as the zoo of particles. Are these discoveries from the first theoretical discoveries or are they discoveries in an experiment? So like, yeah, what's the process of discovery for these early sets of particles? It's a mixture. The early stuff around the atom is really experimentally driven. It's not based on some theory. It's exploration in the lab using equipment. So it's really people just figuring out, getting hands on with the phenomena, figuring out what these things are. And the theory comes a bit later. That's not always the case. So in the discovery of the anti electron, the positron, that was predicted from quantum mechanics and relativity by a very clever theoretical physicist called Paul Dirac, who was probably the second brightest physicist of the 20th century, apart from Einstein, but isn't anywhere near as well known. So he predicted the existence of the anti electron from basically a combination of the theories of quantum mechanics and relativity. And it was discovered about a year after he made the prediction. What happens when an electron meets a positron? They annihilate each other. So when you bring a particle and its antiparticle together, they react, well, they don't react, they just wipe each other out and they turn, their mass is turned into energy, usually in the form of photons, so you get light produced. So when you have that kind of situation, why does the universe exist at all if there's matter and antimatter? Oh God, now we're getting into the really big questions. So, do you wanna go there now? Let's, maybe let's go there later. Cause that, I mean, that is a very big question. Yeah, let's take it slow with the standard model. So, okay, so there's matter and antimatter in the 30s. So what else? So matter and antimatter, and then a load of new particles start turning up in these cosmic ray experiments, first of all. And they don't seem to be particles that make up atoms. They're something else. They all mostly interact with the strong nuclear force. So they're a bit like protons and neutrons. And by, in the 1960s in America, particularly, but also in Europe and Russia, scientists started to build particle accelerators. So these are the forerunners of the LHC. So big ring shaped machines that were, you know, hundreds of meters long, which in those days was enormous. You never, you know, most physics up until that point had been done in labs, in universities, you know, with small bits of kit. So this is a big change. And when these accelerators are built, they start to find they can produce even more of these particles. So I don't know the exact numbers, but by around 1960, there are of order a hundred of these things that have been discovered. And physicists are kind of tearing their hair out because physics is all about simplification. And suddenly what was simple has become messy and complicated and everyone sort of wants to understand what's going on. As a quick kind of aside and probably really dumb question, but how is it possible to take something like a, like a photon or electron and be able to control it enough, like to be able to do a controlled experiment where you collide it against something else? Yeah. Is that, is that, that seems like an exceptionally difficult engineering challenge because you mentioned vacuum too. So you basically want to remove every other distraction and really focus on this collision. How difficult of an engineering challenge is that? Just to get a sense. And it is very hard. 
I mean, in the early days, particularly when the first accelerators are being built in like 1932, Ernest Lawrence builds the first, what we call a cyclotron, which is like a little accelerator, this big or so. There's another one. Is it really that big? There's a tiny little thing. Yeah. So most of the first accelerators were what we call fixed target experiments. So you had a ring, you accelerate particles around the ring and then you fire them out the side into some target. So that makes the kind of, the colliding bit is relatively straightforward because you just fire it, whatever it is you want to fire it at. The hard bit is the steering the beams with the magnetic fields, getting, you know, strong enough electric fields to accelerate them, all that kind of stuff. The first colliders where you have two beams colliding head on, that comes later. And I don't think it's done until maybe the 1980s. I'm not entirely sure, but it's a much harder problem. That's crazy. Cause you have to like perfectly get them to hit each other. I mean, we're talking about, I mean, what scale it takes, what's the, I mean, the temporal thing is a giant mess, but the spatially, like the size is tiny. Well, to give you a sense of the LHC beams, the cross sectional diameter is I think around a dozen or so microns. So, you know, 10 millionths of a meter. And a beam, sorry, just to clarify, a beam contains how many, is it the bunches that you mentioned? Is it multiple particles or is it just one particle? Oh no, no. The bunches contains say a hundred billion protons each. So a bunch is, it's not really bunch shaped. They're actually quite long. They're like 30 centimeters long, but thinner than a human hair. So like very, very narrow, long sort of objects. Those are the things. So what happens in the LHC is you steer the beams so that they cross in the middle of the detector. So they basically have these swarms of protons that are flying through each other. And most of the, you have to have a hundred billion coming one way, a hundred billion another way, maybe 10 of them will hit each other. Oh, okay. So this, okay, that makes a lot more sense. So that's nice. But you're trying to use sort of, it's like probabilistically, you're not. You can't make a single particle collide with a single other particle. That's not an efficient way to do it. You'd be waiting a very long time to get anything. So you're basically, right. You're relying on probability to be that some fraction of them are gonna collide. And then you know which, because it's a swarm of the same kind of particle. So it doesn't matter which ones hit each other exactly. I mean, that's not to say it's not hard. You've got to, one of the challenges to make the collisions work is you have to squash these beams to very, very, basically the narrower they are the better cause the higher chances of them colliding. If you think about two flocks of birds flying through each other, the birds are all far apart in the flocks. There's not much chance that they'll collide. If they're all flying densely together, then they're much more likely to collide with each other. So that's the sort of problem. And it's tuning those magnetic fields, getting the magnetic fields powerful enough that you squash the beams and focus them so that you get enough collisions. That's super cool. Do you know how much software is involved here? I mean, it's sort of, I come from the software world and it's fascinating. This seems like software is buggy and messy. 
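A quick aside before the conversation turns to software: the bunch and beam-size figures Cliff just quoted are enough for a back-of-envelope estimate of how many of those hundred billion protons actually collide each time two bunches pass through each other. The numbers below are approximate public LHC parameters, and the simple head-on Gaussian-bunch formula ignores crossing angles and the detailed beam profile, so the result should only be read as an order of magnitude.

```python
import math

# Approximate LHC parameters (illustrative, not exact machine settings).
protons_per_bunch = 1.1e11    # ~100 billion protons per bunch
beam_sigma_m = 16e-6          # transverse beam size at the collision point (~a dozen microns)
sigma_inelastic_m2 = 8e-30    # proton-proton inelastic cross section (~80 millibarn)

# For two identical Gaussian bunches colliding head on, the expected number of
# proton-proton interactions per bunch crossing is roughly
#   mu = N1 * N2 * sigma_pp / (4 * pi * sigma_x * sigma_y)
mu = (protons_per_bunch**2 * sigma_inelastic_m2) / (4 * math.pi * beam_sigma_m**2)
print(f"expected collisions per bunch crossing ~ {mu:.0f}")   # a few tens

# With bunches crossing tens of millions of times per second, that adds up to
# on the order of a billion proton-proton collisions per second.
crossings_per_second = 3e7
print(f"collisions per second ~ {mu * crossings_per_second:.1e}")
```

So out of roughly 10^22 possible proton pairs in each crossing, only a few tens actually interact, which is the "maybe 10 of them will hit each other" picture above: the collisions are purely probabilistic, and squeezing the beams tighter is what pushes that number up.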
And so like, you almost don't want to rely on software too much. Like if you do, it has to be like low level, like Fortran style programming. Do you know how much software is in the Large Hadron Collider? I mean, it depends at which level, but a lot. I mean, the whole thing is obviously computer controlled. So, I mean, I don't know a huge amount about how the software for the actual accelerator works, but I've been in the control center. So at CERN, there's this big control room, which is a bit like a NASA mission control with big banks of desks where the engineers sit and they monitor the LHC. Cause you obviously can't be in the tunnel when it's running. So everything's remote. I mean, one sort of anecdote about the software side, in 2008, when the LHC first switched on, they had this big launch event and then a big press conference party to inaugurate the machine. And about 10 days after that, they were doing some tests and this dramatic event happened where a huge explosion basically took place in the tunnel that destroyed, or badly damaged, about half a kilometer of the machine. But the story is, the engineers were in the control room that day. One guy told me this story about how basically all these screens they have in the control room started going red. So these alarms in the software were going off, and they assumed that there was something wrong with the software, cause there's no way something this catastrophic could have happened. But I mean, when I was a PhD student, one of my jobs was to help to maintain the software that's used to control the detector that we work on. And that software is relatively robust; you don't want it to be too fancy. You don't want it to sort of fall over too easily. The more clever stuff comes when you're talking about analyzing the data and that's where the sort of, you know. Are we jumping around too much? Did we finish with the standard model? We didn't, no. We didn't, so have we even started talking about quarks? We haven't talked about them yet. No, we got to the messy zoo of particles. Let me, let's go back there if it's okay. Okay, that's fine. Can you take us through the rest of the history of physics in the 20th century? Okay, sure. Okay, so circa 1960, you have these hundred or so particles. It's a bit like the periodic table all over again. So it's like having a hundred elements, it's sort of a bit like that. And people start to try to impose some order. So Murray Gell-Mann, he's a theoretical physicist, American from New York. He realizes that there are these symmetries in these particles, that if you arrange them in certain ways, they relate to each other. And he uses these symmetry principles to predict the existence of particles that haven't been discovered, which are then discovered in accelerators. So this starts to suggest these are not just random collections of crap. There's, you know, actually some order underlying this. A little bit later, again around the mid 1960s, he proposes, along with another physicist called George Zweig, that these symmetries arise because, just like the patterns in the periodic table arise because atoms are made of electrons and protons, these patterns are due to the fact that these particles are made of smaller things. And they are called quarks. So these are particles predicted from theory. For a long time, no one really believes they're real.
A lot of people think that they're a kind of theoretical convenience that happened to fit the data, but there's no evidence. No one's ever seen a quark in any experiment. And lots of experiments are done to try to find quarks, to try to knock a quark out of a... So the idea is, if protons and neutrons are made of quarks, you should be able to knock a quark out and see the quark. That never happens. And we still have never actually managed to do that. Wait, really? No. So the way that it's done in the end is this machine that's built in California at the Stanford lab, the Stanford Linear Accelerator, which is essentially a gigantic, three kilometer long electron gun. It fires electrons, at almost the speed of light, at protons. And when you do these experiments, what you find is at very high energy, the electrons bounce off small, hard objects inside the proton. So it's a bit like taking an X ray of the proton. You're firing these very light, high energy particles, and they're pinging off little things inside the proton that are like ball bearings, if you like. So that way, they resolve that there are three things inside the proton, which are quarks, the quarks that Gell-Mann and Zweig had predicted. So that's really the evidence that convinces people that these things are real. The fact is that we've never seen one in an experiment directly, they're always stuck inside other particles. And the reason for that is essentially to do with the strong force. The strong force is the force that holds quarks together. And it's so strong that it's impossible to actually liberate a quark. So if you try and pull a quark out of a proton, what actually ends up happening is that you kind of create this spring like bond in the strong force. You imagine two quarks that are held together by a very powerful spring. You pull and pull and pull, more and more energy gets stored in that bond, like stretching a spring, and eventually the tension gets so great, the spring snaps, and the energy in that bond gets turned into two new quarks that go on the broken ends. So you started with two quarks, you end up with four quarks. So you never actually get to take a quark out. You just end up making loads more quarks in the process. So how do we, again, forgive the dumb question, how do we know quarks are real then? Well, A, from these scattering experiments where you fire electrons into the protons. They can burrow into the proton and bounce off these quarks. So you can see it from the angles the electrons come out at. I see, you can infer. You can infer that these things are there. The quark model can also be used; it has a lot of successes. You can use it to predict the existence of new particles that hadn't been seen. And there's lots of data basically showing that, you know, when we fire protons at each other at the LHC, a lot of quarks get knocked all over the place. And every time they try and escape from, say, one of their protons, they make a whole jet of quarks that go flying off, bound up in other sorts of particles made of quarks. So all the theoretical predictions from the basic theory of the strong force and the quarks all agree with what we are seeing in experiments. We've just never seen an actual quark on its own because unfortunately it's impossible to get them out on their own. So quarks, these crazy smaller things that are hard to imagine, are real. So what else? What else is part of the story here?
So the other thing that's going on at the time, around the 60s, is an attempt to understand the forces that make these particles interact with each other. So you have the electromagnetic force, which is the force that was sort of discovered, to some extent, in this room, or at least in this building. So the first, what we call, quantum field theory of the electromagnetic force is developed in the 1940s and 50s by Richard Feynman, amongst other people, Julian Schwinger and Sin-Itiro Tomonaga, who come up with this first quantum field theory of the electromagnetic force. And this is where the description which I gave you at the beginning, that particles are ripples in fields, comes in. Well, in this theory, the photon, the particle of light, is described as a ripple in this quantum field called the electromagnetic field. And the attempt then is made to, well, can we come up with a quantum field theory of the other forces, of the strong force and the weak, the third force, which we haven't discussed, which is the weak force, which is a nuclear force. We don't really experience it in our everyday lives, but it's responsible for radioactive decay. It's the force that allows, you know, a radioactive atom to turn into a different element, for example. And I don't know if you've explicitly mentioned it, but so there's technically four forces. Yes. I guess three of them would be in the standard model, like the weak, the strong, and the electromagnetic, and then there's gravity. And there's gravity, which we don't worry about, because it's too hard. It's too hard. Well, no, maybe we bring that up at the end, but yeah. Gravity, so far, we don't have a quantum theory of, and if you can solve that problem, you'll win a Nobel Prize. Well, we're gonna have to bring up the graviton at some point, I'm gonna ask you, but let's leave that to the side for now. So those three, okay, Feynman, electromagnetic force, the quantum field, and where does the weak force come in? So yeah, well, first of all, I mean, the strong force is the easiest. The strong force is a little bit like the electromagnetic force. It's a force that binds things together. So that's the force that holds quarks together inside the proton, for example. So a quantum field theory of that force is discovered in the, I think it's in the 60s, and it predicts the existence of new force particles called gluons. So gluons are a bit like the photon. The photon is the particle of electromagnetism. Gluons are the particles of the strong force. So just like there's an electromagnetic field, there's something called a gluon field, which is also all around us. So some of these particles, I guess, are the force carriers or whatever. They carry the force. It depends how you want to think about it. I mean, really the field, the strong force field, the gluon field, is the thing that binds the quarks together. The gluons are the little ripples in that field, in the same way that the photon is a ripple in the electromagnetic field. But the thing that really does the binding is the field. I mean, you may have heard people talk about things like, you've heard the phrase, virtual particle. So sometimes, if you hear people describing how forces are exchanged between particles, they quite often talk about the idea that if you have an electron and another electron, say, and they're repelling each other through the electromagnetic force, you can think of that as if they're exchanging photons.
So they're kind of firing photons backwards and forwards between each other. And that causes them to repel. That photon is then a virtual particle. Yes, that's what we call a virtual particle. In other words, it's not a real thing, it doesn't actually exist. So it's an artifact of the way theorists do calculations. So when they do calculations in quantum field theory, rather than, no one's discovered a way of just treating the whole field. You have to break the field down into simpler things. So you can basically treat the field as if it's made up of lots of these virtual photons, but there's no experiment that you can do that can detect these particles being exchanged. What's really happening in reality is that the electromagnetic field is warped by the charge of the electron and that causes the force. But the way we do calculations involves particles. So it's a bit confusing, but it's really a mathematical technique. It's not something that corresponds to reality. I mean, that's part, I guess, of the Feynman diagrams. Yes. Is this these virtual particles, okay. That's right, yeah. Some of these have mass, some of them don't. What does that even mean, not to have mass? And maybe you can say which one of them have mass and which don't. Okay, so. And why is mass important or relevant in this field view of the universe? Well, there are actually only two particles in the standard model that don't have mass, which are the photon and the gluons. So they are massless particles, but the electron, the quarks, and there are a bunch of other particles I haven't discussed. There's something called a muon and a tau, which are basically heavy versions of the electron that are unstable. You can make them in accelerators, but they don't form atoms or anything. They don't exist for long enough. But all the matter particles, there are 12 of them, six quarks and six, what we call leptons, which includes the electron and its two heavy versions and three neutrinos, all of them have mass. And so do, this is the critical bit. So the weak force, which is the third of these quantum forces, which is one of the hardest to understand, the force particles of that force have very large masses. And there are three of them. They're called the W plus, the W minus, and the Z boson. And they have masses of between 80 and 90 times that of the protons. They're very heavy. Wow. They're very heavy things. So they're what, the heaviest, I guess? They're not the heaviest. The heaviest particle is the top quark, which has a mass of about 175 ish protons. So that's really massive. And we don't know why it's so massive, but coming back to the weak force, so the problem in the 60s and 70s was that the reason that the electromagnetic force is a force that we can experience in our everyday lives. So if we have a magnet and a piece of metal, you can hold it, you know, a meter apart if it's powerful enough and you'll feel a force. Whereas the weak force only becomes apparent when you basically have two particles touching at the scale of a nucleus. So we just get to very short distances before this force becomes manifest. It's not, we don't get weak forces going on in this room. We don't notice them. And the reason for that is that the particle, well, the field that transmits the weak force, the particle that's associated with that field has a very large mass, which means that the field dies off very quickly. 
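To make the range argument concrete, here is the standard textbook comparison (not something from the conversation itself, just the usual way this gets written down): the photon is massless, so the electrostatic force follows the inverse square law, while a force carried by a heavy particle like the W picks up an exponential cutoff, the Yukawa form.

```latex
% Massless mediator (photon): inverse square law, so doubling the distance quarters the force
F_{\mathrm{EM}}(r) \propto \frac{1}{r^{2}}, \qquad F_{\mathrm{EM}}(2r) = \tfrac{1}{4}\, F_{\mathrm{EM}}(r)

% Mediator of mass m_W (the W or Z boson): Yukawa potential, exponentially suppressed
V_{\mathrm{weak}}(r) \propto \frac{e^{-m_W r}}{r} \qquad (\hbar = c = 1)

% The range is of order 1/m_W; for a particle 80-90 times the proton mass this is
% a few thousandths of the size of a proton, which is why the weak force only
% shows up when particles are essentially touching.
```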
So whereas for an electric charge, if you were to look at the shape of the electromagnetic field, it falls off with this thing called the inverse square law, which is the idea that the force halves every time you double the distance. No, sorry, it doesn't halve. It quarters every time you double the distance between, say, the two particles. Whereas the weak force, you move a little bit away from the nucleus and it just disappears. The reason for that is because these fields, the particles that go with them, have a very large mass. But the problem that theorists faced in the 60s was that if you tried to introduce massive force fields, the theory gave you nonsensical answers. So you'd end up with infinite results for a lot of the calculations you tried to do. So basically, it seemed that quantum field theory was incompatible with having massive particles, not just the force particles actually, but even the electron was a problem. So this is where the Higgs that we sort of alluded to comes in. And the solution was to say, okay, well, actually all the particles in the Standard Model are massless. They have no mass. So the quarks, the electron, they don't have a mass. Neither do these weak particles. They don't have mass either. What happens is they actually acquire mass through another process. They get it from somewhere else. They don't actually have it intrinsically. So the idea that was introduced by, well, Peter Higgs is the most famous, but actually there are about six people that came up with the idea more or less at the same time, is that you introduce a new quantum field, which is another one of these invisible things that's everywhere. And it's through the interaction with this field that particles get mass. So you can think of, say, an electron in the Higgs field; the Higgs field kind of bunches around the electron. It's sort of drawn towards the electron. And that energy that's stored in that field around the electron is what we see as the mass of the electron. But if you could somehow turn off the Higgs field, then all the particles in nature would become massless and fly around at the speed of light. So this idea of the Higgs field allowed other theorists to come up with, basically, a unified theory of the electromagnetic force and the weak force. So once you bring in the Higgs field, you can combine two of the forces into one. So it turns out the electromagnetic force and the weak force are just two aspects of the same fundamental force. And at the LHC, we go to high enough energies that you see these two forces unifying effectively. So first of all, it started as a theoretical notion, and then, I mean, wasn't the Higgs called the God particle at some point? It was by a guy trying to sell popular science books, yeah. Yeah, but I mean, I remember when I was hearing it, I thought that it would unify a lot of our ideas of physics, was my notion. But maybe you can speak to that. Is it as big of a leap as a God particle, or is it a Jesus particle, which, you know, what's the big contribution of the Higgs in terms of this unification power? Yeah, I mean, to understand that, it maybe helps to know the history a little bit. So when the, what we call, electroweak theory was put together, which is where you unify electromagnetism with the weak force, the Higgs is involved in all of that.
So that theory, which was written down in the mid 70s, predicted the existence of four new particles, the W plus boson, the W minus boson, the Z boson and the Higgs boson. So there were these four particles that came with the theory, that were predicted by the theory. In 1983, 84, the W's and the Z particles were discovered at an accelerator at CERN called the Super Proton Synchrotron, which was a seven kilometer particle collider. So three of the bits of this theory had already been found. So people were pretty confident from the 80s that the Higgs must exist, because it was a part of this family of particles, and this theoretical structure only works if the Higgs is there. So what then happens, and so you've got this question about why is the LHC the size it is? Well, actually the tunnel that the LHC is in was not built for the LHC. It was built for a previous accelerator called the Large Electron Positron Collider. So that began operation in the late 80s, early 90s. That's when they dug the 27 kilometer tunnel. They put this accelerator into it, a collider that fires electrons and anti electrons at each other, electrons and positrons. So the purpose of that machine was, well, it was actually to look for the Higgs. That was one of the things it was trying to do. It didn't have enough energy to do it in the end. But the main thing it achieved was it studied the W and the Z particles at very high precision. So it made loads of these things. Previously, you could only make a few of them at the previous accelerator. So you could study these really, really precisely. And by studying their properties, you could really test this electroweak theory that had been invented in the 70s and really make sure that it worked. So actually by 1999, when this machine turned off, people knew, well, okay, you never know until you find the thing, but people were really confident this electroweak theory was right, and that the Higgs, or something very like the Higgs, had to exist, because otherwise the whole thing doesn't work. It'd be really weird if you could discover these particles and they all behave exactly as your theory tells you they should, but somehow this key piece of the picture is not there. So in a way, it depends how you look at it. The discovery of the Higgs on its own is obviously a huge achievement, both experimentally and theoretically. On the other hand, it's like having a jigsaw puzzle where every piece has been filled in. You have this beautiful image, there's one gap, and you kind of know that piece must be there somewhere. So the discovery in itself, although it's important, is not so interesting. It's like a confirmation of the obvious at that point. But what makes it interesting is not that it just completes the standard model, which is a theory whose basic layout we've known for 40 years or more now. It's that the Higgs actually is a unique particle. It's very different to any of the other particles in the standard model. And it's a theoretically very troublesome particle. There are a lot of nasty things to do with the Higgs, but also opportunities. We basically don't really understand how such an object can exist in the form that it does. So there are lots of reasons for thinking that the Higgs must come with a bunch of other particles, or that it's perhaps made of other things. So it's not a fundamental particle, that it's made of smaller things. I can talk about that a bit if you like.
That's still a notion, so the Higgs might not be a fundamental particle, that there might be some, it might, oh man. So that is an idea, it's not been demonstrated to be true. But I mean, all of these ideas basically come from the fact that this is a problem that motivated a lot of development in physics in the last 30 years or so. And it's this basic fact that the Higgs field, which is this field that's everywhere in the universe, is the thing that gives mass to the particles. And the Higgs field is different from all the other fields, in that, let's say you take the electromagnetic field, if we actually were to measure the electromagnetic field in this room, we would measure all kinds of stuff going on, because there's light, there's gonna be microwaves and radio waves and stuff. But let's say we could go to a really, really remote part of empty space and shield it and put a big box around it and then measure the electromagnetic field in that box. The field would be almost zero, apart from some little quantum fluctuations, but basically it goes to naught. The Higgs field has a value everywhere. So it's like the entire space has got this energy stored in the Higgs field, which is not zero, it's finite. It's a bit like having the temperature of space raised to some background temperature. And it's that energy that gives mass to the particles. So the reason that electrons and quarks have mass is through the interaction with this energy that's stored in the Higgs field. Now, it turns out that the precise value this energy has has to be very carefully tuned if you want a universe where interesting stuff can happen. So if you push the Higgs field down, it has a tendency to collapse; if you do your sort of naive calculations, there are basically two likely configurations for the Higgs field, which is either it's zero everywhere, in which case you have a universe which is just particles with no mass that can't form atoms and just fly about at the speed of light, or it explodes to an enormous value, what we call the Planck scale, which is the scale of quantum gravity. And at that point, if the Higgs field was that strong, even an electron would become so massive that it would collapse into a black hole. And then you have a universe made of black holes and nothing like us. So it seems that for the Higgs field to achieve the value that we see requires what we call fine tuning of the laws of physics. You have to fiddle around with the other fields in the Standard Model and their properties to just get it to this right sort of Goldilocks value that allows atoms to exist. This is deeply fishy. People really dislike this. Well, yeah, I guess, so what would be, so two explanations. One, there's a god that designed this perfectly, and two is there's an infinite number of alternate universes, and we just happen to be in the one in which life is possible, complexity. So when you say, I mean, life, any kind of complexity, that's not either complete chaos or black holes. I mean, how does that make you feel? What do you make of that? That's such a fascinating notion that this perfectly tuned field that's the same everywhere is there. What do you make of that? Yeah, what do you make of that? I mean, yeah, so you laid out two of the possible explanations. Really? Some, well, yeah, I mean, well, someone, some cosmic creator went, yeah, let's fix that to be at the right level. That's one possibility, I guess.
It's not a scientifically testable one, but theoretically, I guess, it's possible. Sorry to interrupt, but there could also be, not a designer, but couldn't there be just, I guess I'm not sure what that would be, but some kind of force, some kind of mechanism by which this kind of field is enforced in order to create complexity, basically forces that pull the universe towards an interesting complexity. I mean, yeah, I mean, there are people who have those ideas. I don't really subscribe to them. Even as I'm saying it, it sounds really stupid. No, I mean, there are definitely people that make those kinds of arguments. There's an idea, I think it's Lee Smolin's, that universes are born inside black holes. And so you basically have like a Darwinian evolution of the universe, where universes give birth to other universes. And universes where black holes can form are more likely to give birth to more universes, so you end up with universes which have similar laws. I mean, I don't know, whatever. Well, I talked to Lee recently on this podcast, and he's a reminder to me that the physics community has like so many interesting characters in it. It's fascinating. Anyway, sorry, so. I mean, as an experimentalist, I tend to sort of think, these are interesting ideas, but they're not really testable, so I tend not to think about them very much. So, I mean, going back to the science of this, there is an explanation. There is a possible solution to this problem of the Higgs, which doesn't involve multiverses or creators fiddling about with the laws of physics. The most popular solution was something called supersymmetry, which is a theory which involves a new type of symmetry of the universe. In fact, it's one of the last types of symmetries that it's possible to have that we haven't already seen in nature, which is a symmetry between force particles and matter particles. So what we call fermions, which are the matter particles, and bosons, which are force particles. And if you have supersymmetry, then there is a super partner for every particle in the standard model. And without going into the details, the effect of this basically is that you have a whole bunch of other fields, and these fields cancel out the effect of the standard model fields, and they stabilize the Higgs field at a nice sensible value. So in supersymmetry, naturally, without any tinkering about with the constants of nature or anything, you get a Higgs field with a nice value, which is the one we see. So that's one thing going for it, and supersymmetry's also got lots of other things going for it. It predicts the existence of a dark matter particle, which would be great. It potentially suggests that the strong force and the electroweak force unify at high energy. So lots of reasons people thought this was a productive idea. And just before the LHC was turned on, there was a lot of hype, I guess, a lot of expectation that we would discover these super partners, and particularly the main reason was that if supersymmetry stabilizes the Higgs field at this nice Goldilocks value, these super particles should have a mass around the energy that we're probing at the LHC, around the energy of the Higgs. So it was kind of thought, you discover the Higgs, you probably discover super partners as well. So once you start creating ripples in this Higgs field, you should be able to see these kinds of, you should be, yeah. So the super fields would be there.
When I said, at the very beginning, that we're probing the vacuum, what I mean is really that, you know, okay, let's say these super fields exist. The vacuum contains super fields. They're there, these supersymmetric fields. If we hit them hard enough, we can make them vibrate. We see super particles come flying out. That's the idea. That's the whole, okay. That's the whole point. But we haven't. But we haven't. So, so far at least, I mean, we've had now a decade of data taking at the LHC. No signs of super partners, of supersymmetric particles, have been found. In fact, no signs of any new physics, any new particles beyond the Standard Model, have been found. So supersymmetry is not the only thing that can do this. There are other theories that involve additional dimensions of space, or potentially involve the Higgs boson being made of smaller things, being made of other particles. Yeah, you know, I haven't heard that before. That's really interesting, but can you maybe linger on that? Like what could the Higgs particle be made of? Well, so the oldest, I think the original ideas about this, were these theories called technicolor, which were basically like an analogy with the strong force. So the idea was the Higgs boson was a bound state of two very strongly interacting particles that were a bit like quarks. So like quarks, but I guess higher energy things with a super strong force. So not the strong force, but a new force that was very strong. And the Higgs was a bound state of these objects. And the Higgs, in principle, if that was right, would be the first in a series of technicolor particles. Technicolor, I think, not being a theorist, has basically not done very well, particularly since the LHC found the Higgs; that kind of rules out, you know, a lot of these technicolor theories. But there are other things that are a bit like technicolor. So there's a theory called partial compositeness, which is an idea that some of my colleagues at Cambridge have worked on, which is a similar sort of idea, that the Higgs is a bound state of some strongly interacting particles, and that the standard model particles themselves, the more exotic ones like the top quark, are also sort of mixtures of these composite particles. So it's a kind of extension to the standard model which explains this problem with the Higgs boson's Goldilocks value, but also helps us understand something else. We're in a situation now, again a bit like the periodic table, where we have six quarks and six leptons that you can arrange in this nice table, and you can see these columns where the patterns repeat, and you go, okay, maybe there's something deeper going on here, you know. And so this partial compositeness theory could potentially enlarge this picture, allow us to see the whole symmetrical pattern and understand what the ingredients are. So one of the big questions in particle physics is, why are there three copies of the matter particles?
So in what we call the first generation, which is what we're made of, there's the electron, the electron neutrino, the up quark and the down quark. They're the most common matter particles in the universe. But then there are copies of these four particles in the second and the third generations, so things like muons and top quarks and other stuff. We don't know why. We see these patterns, we have no idea where they come from. So that's another big question, you know, can we find out the deeper order that explains this particular periodic table of particles that we see? Is it possible that the deeper order includes like almost a single entity, so like something that I guess string theory dreams about? Is this essentially the dream, to discover something simple, beautiful and unifying? Yeah, I mean, that is the dream, and I think for some people, for a lot of people, it still is the dream. So there's a great book by Steven Weinberg, who is one of the theoretical physicists who was instrumental in building the Standard Model. So he came up, with some others, with the electroweak theory, the theory that unified electromagnetism and the weak force. And he wrote this book, I think it was towards the end of the 80s, early 90s, called Dreams of a Final Theory, which is a very lovely, quite short book about this idea of a final unifying theory that brings everything together. And I think you get a sense, reading his book written at the end of the 80s, early 90s, that there was this feeling that such a theory was coming. And that was the time when string theory was very exciting. So string theory, there'd been this thing called the superstring revolution, and theoretical physicists were very excited. They discovered these theoretical objects, these little vibrating loops of string, that in principle not only gave a quantum theory of gravity but could explain all the particles in the Standard Model and bring it all together. And as you say, you have one object, the string, and you can pluck it, and the way it vibrates gives you these different notes, each of which is a different particle. So it's a very lovely idea. But the problem is that, well, there are a few. People discovered that the mathematics is very difficult, so people have spent three decades or more trying to understand string theory, and I think if you spoke to most string theorists, they would probably freely admit that no one really knows what string theory is yet. I mean, there's been a lot of work, but it's not really understood. And the other problem is that string theory mostly makes predictions about physics that occurs at energies far beyond what we will ever be able to probe in the laboratory. Yeah, probably ever. By the way, so sorry to take a million tangents, but is there room for complete innovation in how to build a particle collider that could give us an order of magnitude increase in the kind of energies, or do we need to keep just increasing the size of things? I mean, maybe, yeah, I mean, there are ideas. To give you a sense of the gulf that has to be bridged: the LHC collides particles at an energy of what we call 14 tera electron volts, so that's basically the equivalent of accelerating a proton through 14 trillion volts. That gets us to the energies where the Higgs and these weak particles live. They're very massive.
The scale where strings become manifest is something called the Planck scale, which I think is of the order 10 to the, hang on, get this right, it's 10 to the 18 giga electron volts, so about 10 to the 15 tera electron volts. So you're talking trillions of times more energy. Yeah, 10 to the 15th or 10 to the 14th larger, I don't even. It's of that order. It's a very big number. So we're not talking just an order of magnitude increase in energy, we're talking 14 orders of magnitude energy increase. So to give you a sense of what that would look like, were you to build a particle accelerator with today's technology. Bigger or smaller than our solar system? The size of the galaxy. The galaxy. So you'd need to put a particle accelerator that circled the Milky Way to get to the energies where you would see strings if they exist. So that is a fundamental problem, which is that most of the predictions of these unified theories, quantum theories of gravity, only make statements that are testable at energies that we will not be able to probe, and barring some unbelievable, completely unexpected technological or scientific breakthrough, which is almost impossible to imagine. You never say never, but it seems very unlikely. Yeah, I can just see the news story. Elon Musk decides to build a particle collider the size of our galaxy. We'd have to get together with all our galactic neighbors to pay for it, I think. What is the exciting possibilities of the Large Hadron Collider? What is there to be discovered in this order of magnitude of scale? Is there other bigger efforts on the horizon in this space? What are the open problems, the exciting possibilities? You mentioned supersymmetry. Yeah, so, well, there are lots of new ideas. Well, there are lots of problems that we're facing. So there's a problem with the Higgs field, which supersymmetry was supposed to solve. There's the fact that 95% of the universe we know from cosmology, astrophysics, is invisible, that it's made of dark matter and dark energy, which are really just words for things that we don't know what they are. It's what Donald Rumsfeld called a known unknown. So we know we don't know what they are. Well, that's better than unknown unknown. Yeah, well, there may be some unknown unknowns, but by definition we don't know what those are, so, yeah. But the hope is a particle accelerator could help us make sense of dark energy, dark matter. There's still, there's some hope for that? There's hope for that, yeah. So one of the hopes is the LHC could produce a dark matter particle in its collisions. And it may be that the LHC will still discover new particles, that it might still, supersymmetry could still be there. It's just maybe more difficult to find than we thought originally. And dark matter particles might be being produced, but we're just not looking in the right part of the data for them, that's possible. It might be that we need more data, that these processes are very rare and we need to collect lots and lots of data before we see them. But I think a lot of people would say now that the chances of the LHC directly discovering new particles in the near future is quite slim. It may be that we need a decade more data before we can see something, or we may not see anything. That's the, that's where we are. So, I mean, the physics, the experiments that I work on, so I work on a detector called LHCb, which is one of these four big detectors that are spaced around the ring. And we do slightly different stuff to the big guys. 
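For the record, the back-of-envelope version of the energy gap just described, taking the round numbers quoted above at face value:

```latex
E_{\mathrm{Planck}} \sim 10^{18}\ \mathrm{GeV} = 10^{15}\ \mathrm{TeV},
\qquad
E_{\mathrm{LHC}} \approx 14\ \mathrm{TeV}
\quad\Rightarrow\quad
\frac{E_{\mathrm{Planck}}}{E_{\mathrm{LHC}}} \sim \frac{10^{15}}{1.4\times 10^{1}} \approx 10^{14}
```

That ratio, roughly fourteen orders of magnitude, is what turns a 27 kilometer ring into the galaxy-sized accelerator of the comparison above.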
There's two big experiments called Atlas and CMS, 3000 physicists and scientists and computer scientists on each of them. They are the ones that discovered the Higgs and they look for supersymmetry and dark matter and so on. What we look at are standard model particles called b quarks, which, depending on your preference, stands for either bottom or beauty. We tend to say beauty because it sounds sexier. Yeah, for sure. But these particles are interesting because we can make lots of them. We make billions or hundreds of billions of these things. You can therefore measure their properties very precisely. So you can make these really lovely precision measurements. And what we are doing really is a sort of complementary thing to the other big experiments. The sort of analogy they often use is, imagine you're in the jungle and you're looking for an elephant, say, and you are a hunter, and let's say this elephant is very rare. You don't know where in the jungle it is, and the jungle's big. So there's two ways you go about this. Either you can go wandering around the jungle and try and find the elephant. The problem is, if there's only one elephant and the jungle's big, the chances of running into it are very small. Or you could look on the ground and see if you see footprints left by the elephant. And if the elephant's moving around, you've got a better chance maybe of seeing the elephant's footprints. If you see the footprints, you go, okay, there's an elephant. I maybe don't know what kind of elephant it is, but I got a sense there's something out there. So that's sort of what we do. We are the footprint people. We're looking for the footprints, the impressions, the effects that quantum fields we haven't managed to directly create the particles of have on the ordinary standard model fields that we already know about. So these B particles, the way they behave can be influenced by the presence of, say, super fields or dark matter fields or whatever you like. And the way they decay and behave can be altered slightly from how our theory tells us they ought to behave. And it's easier to collect huge amounts of data on b quarks. We get billions and billions of these things. You can make very precise measurements. And the only place really at the LHC, or really in high energy physics at the moment, where there's fairly compelling evidence that there might be something beyond the standard model is in these beauty quark decays. Just to clarify, what is the difference between the four experiments, for example, that you mentioned? Is it the kind of particles that are being collided? Is it the energies at which they're collided? What's the fundamental difference between the different experiments? The collisions are the same. What's different is the design of the detectors. So Atlas and CMS are what are called general purpose detectors. And they are basically barrel shaped machines, and the collisions happen in the middle of the barrel, and the barrel captures all the particles that go flying out in every direction, so in a sphere effectively that they can fly out into, and it can record all of those particles. And what's the, sorry to be interrupting, but what's the mechanism of the recording?
Oh, so these detectors, if you've seen pictures of them, they're huge, like Atlas is 25 meters high and 45 meters long. They're vast machines, instruments, I guess you should call them really. They're kind of like onions. So they have layers, concentric layers of detectors, different sorts of detectors. So close into the beam pipe, you have what are called tracking detectors, usually made of silicon. So they're made of little strips of silicon or pixels of silicon. And when a particle goes through the silicon, it gives a little electrical signal, and you get these dots, electrical dots, through your detector, which allows you to reconstruct the trajectory of the particle. So that's the middle, and then on the outsides of these detectors, you have things called calorimeters, which measure the energies of the particles, and at the very edge you have things called muon chambers, which catch these muon particles, which are the heavy version of the electron. They're like high velocity bullets and they can get right to the edge of the detectors. If you see something at the edge, that's a muon. So that's broadly how they work. And all of that is being recorded. That's all being fed out to, you know, computers. The data must be awesome, okay. So LHCb is different. So because we're looking for these b quarks, and b quarks tend to be produced along the beam line, so in a collision the b quarks tend to fly sort of close to the beam pipe, we built a detector that's sort of pyramid or cone shaped, basically, that just looks in one direction. So if you have your collision, stuff goes everywhere. We ignore all the stuff over here going off sideways. We're just looking in this little region close to the beam pipe where most of these b quarks are made. So is there a different aspect of the sensors involved in the collection of the b quark trajectories? There are some differences. So one of the differences is that, one of the ways you know you've seen a b quark is that b quarks are actually quite long lived by particle standards. So they live for 1.5 trillionths of a second, which, if you're a fundamental particle, is a very long time. Cause the Higgs boson, I think, lives for about a trillionth of a trillionth of a second, or maybe even less than that. So these are quite long lived things and they will actually fly a little distance before they decay. So they will fly a few centimeters, maybe, if you're lucky, then they'll decay into other stuff. So what we need to do in the middle of the detector, you wanna be able to see, you have your place where the protons crash into each other, and that produces loads of particles that come flying out. So you have loads of lines, loads of tracks, that point back to that proton collision. And then you're looking for a couple of other tracks, maybe two or three, that point back to a different place that's maybe a few centimeters away from the proton collision. And that's the sign that a little B particle has flown a few centimeters and decayed somewhere else. So we need to be able to very accurately resolve the proton collision from the B particle decay. So the middle of our detector is very sensitive and it gets very close to the collision. So you have this really beautiful, delicate silicon detector that sits, I think, seven millimeters from the beam. And the LHC beam has as much energy as a jumbo jet at takeoff. So it's enough to melt a ton of copper.
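As a rough check on that "few centimeters" figure (this is just the standard relativistic decay-length estimate, with the boost factor as an illustrative assumption rather than an LHCb number): a particle with lifetime τ and Lorentz boost γβ travels on average d = γβcτ before it decays.

```latex
c\tau \approx \left(3\times 10^{8}\ \tfrac{\mathrm{m}}{\mathrm{s}}\right)\times\left(1.5\times 10^{-12}\ \mathrm{s}\right) \approx 0.45\ \mathrm{mm},
\qquad
d = \gamma\beta\, c\tau \approx 20 \times 0.45\ \mathrm{mm} \approx 1\ \mathrm{cm}
```

So millimeters for a typical B particle and a few centimeters for the more boosted ones, which is why the vertex detector has to sit so close to the beam and resolve decay points that near the collision.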
So you have this furiously powerful thing sitting next to this tiny, delicate silicon sensor. So those are aspects of our detector that are specialized to measure these particular b quarks that we're interested in. And is there, I mean, I remember seeing somewhere that there's some mention of matter and antimatter connected to these beautiful quarks. What's the connection there? Yeah, so there is a connection, which is that when you produce these B particles, because you don't see the b quark, you see the thing the b quark is inside. So they're bound up inside what we call beauty particles, where the b quark is joined together with another quark or two, maybe two other quarks, depending on what it is. There's a particular set of these B particles that exhibit this property called oscillation. So if you make, for the sake of argument, a matter version of one of these B particles, as it travels, because of the magic of quantum mechanics, it oscillates backwards and forwards between its matter and antimatter versions. So it does this weird flipping about backwards and forwards. And what we can use this for is a laboratory for testing the symmetry between matter and antimatter. So if the symmetry between matter and antimatter is precise, is exact, then we should see these B particles decaying as matter as often as they do as antimatter, because this oscillation should be even. It should spend as much time in each state. But what we actually see is that it spends more time in one of the states, and it's more likely to decay in one state than the other. So this gives us a way of testing this fundamental symmetry between matter and antimatter. So, sort of returning to the question before about this fundamental symmetry, it seems like if there's perfect symmetry between matter and antimatter, if we have an equal amount of each in our universe, it would just destroy itself. And just like you mentioned, we seem to live in a very unlikely universe where it doesn't destroy itself. So do you have some intuition about why that is? I mean, well, I'm not a theorist. I don't have any particular ideas myself. I mean, I sort of do measurements to try and test these things. But in terms of the basic problem, in the Big Bang, if you use the standard model to figure out what ought to have happened, you should have got equal amounts of matter and antimatter made, because whenever you make a particle in our collisions, for example, when we collide stuff together, you make a particle, you make an antiparticle. They always come together. They always annihilate together. So there's no way of making more matter than antimatter that we've discovered so far. So that means in the Big Bang, you get equal amounts of matter and antimatter. As the universe expands and cools down, not very long after the Big Bang, I think a few seconds after the Big Bang, you have this event called the Great Annihilation, which is where all the particles and antiparticles smack into each other, annihilate, and turn into light, mostly. If that was what happened, then the universe we live in today would be black and empty, apart from some photons, and that would be it. So there is stuff in the universe. It appears to be just made of matter. So there's this big mystery as to how did this happen?
And there are various ideas, which all involve sort of physics going on in the first trillionth of a second or so of the Big Bang. So it could be that one possibility is that the Higgs field is somehow implicated in this, that there was this event that took place in the early universe where the Higgs field basically switched on, it acquired its modern value. And when that happened, this caused all the particles to acquire mass and the universe basically went through a phase transition where you had a hot plasma of massless particles. And then in that plasma, it's almost like a gas turning into droplets of water. You get kind of these little bubbles forming in the universe where the Higgs field has acquired its modern value, the particles have got mass. And this phase transition in some models can cause more matter than antimatter to be produced, depending on how matter bounces off these bubbles in the early universe. So that's one idea. There's other ideas to do with neutrinos, that there are exotic types of neutrinos that can decay in a biased way to just matter and not to antimatter. So, and people are trying to test these ideas. That's what we're trying to do at LHCb. There's neutrino experiments planned that are trying to do these sorts of things as well. So yeah, there are ideas, but at the moment, no clear evidence for which of these ideas might be right. So we're talking about some incredible ideas. By the way, never heard anyone be so eloquent about describing even just the standard model. So I'm in awe just listening. Oh, thank you. Yeah, just having fun enjoying it. So the, yes, the theoretical, the particle physics is fascinating here. To me, one of the most fascinating things about the Large Hadron Collider is the human side of it. That a bunch of sort of brilliant people that probably have egos got together and were collaborate together and countries, I guess, collaborate together for the funds and everything's just collaboration everywhere. Cause you may be, I don't know what the right question here to ask, but almost what's your intuition about how it was possible to make this happen and what are the lessons we should learn for the future of human civilization in terms of our scientific progress? Cause it seems like this is a great, great illustration of us working together to do something big. Yeah, I think it's possibly the best example. Maybe I can think of international collaboration that isn't for some unpleasant purpose, basically. You know, I mean, so when I started out in the field in 2008 as a new PhD student, the LHC was basically finished. So I didn't have to go around asking for money for it or trying to make the case. So I have huge admiration for the people who managed that. Cause this was a project that was first imagined in the 1970s, in the late 70s was when the first conversations about the LHC were mooted and it took two and a half decades of campaigning and fundraising and persuasion until they started breaking ground and building the thing in the early noughties in 2000. So, I mean, I think the reason just from a sort of, from the point of view of the sort of science, the scientists there, I think the reason it works ultimately is that everywhere, everyone there is there for the same reason, which is, well, in principle, at least they're there because they're interested in the world. They want to find out, you know, what are the basic ingredients of our universe? What are the laws of nature? And so everyone is pulling in the same direction. 
Now, of course, everyone has their own things they're interested in. Everyone has their own careers to consider. And, you know, I wouldn't pretend that there isn't also a lot of competition. So there's this funny thing in these experiments where your collaborators, your 800 collaborators in LHCb, but you're also competitors because your academics in your various universities and you want to be the one that gets the paper out on the most exciting, you know, new measurements. So there's this funny thing where you're kind of trying to stake out your territory while also collaborating and having to work together to make the experiments work. And it does work amazingly well, actually considering all of that. And I think there was actually, I think McKinsey or one of these big management consultancy firms went into CERN maybe a decade or so ago to try to understand how these organizations function. Did they figure it out? I don't think they could. I mean, I think one of the things that's interesting, one of the other interesting things about these experiments is, you know, they're big operations like say Atlas has 3000 people. Now there was a person nominally who was the head of Atlas, they're called the spokesperson. And the spokesperson is elected by, usually by the collaboration, but they have no actual power really. I mean, they can't fire anyone. They're not anyone's boss. So, you know, my boss is a professor at Cambridge, not the head of my experiments. The head of my experiment can't tell me what to do really. And there's all these independent academics who are their own bosses who, you know, so that somehow it, nonetheless, by kind of consensus and discussion and lots of meetings, these things do happen and it does get done, but. It's like the queen here in the UK is the spokesperson. I guess so. No actual power. Except we don't elect her, no. No, we don't elect her. But everybody seems to love her. I don't know, from my outside perspective. But yeah, giant egos, brilliant people. And moving forward, do you think there's. Actually, I would pick up one thing you said just there, just the brilliant people thing. Cause I'm not saying that people aren't great. But I think there is this sort of impression that physicists all have to be brilliant or geniuses, which is not true actually. And you know, you have to be relatively bright for sure. But you know, a lot of people, a lot of the most successful experimental physicists are not necessarily the people with the biggest brains. They're the people who, you know, particularly one of the skills that's most important in particle physics is the ability to work with others and to collaborate and exchange ideas and also to work hard. And it's a sort of, often it's more a determination or a sort of other set of skills. It's not just being, you know, kind of some great brain. Very true. So, I mean, there's parallels to that in the machine learning world. If you wanna solve any real world problems, which I see as the particle accelerators, essentially a real world instantiation of theoretical physics. And for that, you have to not necessarily be brilliant, but be sort of obsessed, systematic, rigorous, sort of unborable, stubborn, all those kind of qualities that make for a great engineer. So, scientists purely speaking, that practitioner of the scientific method. So you're right. But nevertheless, to me that's brilliant. My dad's a physicist. I argue with him all the time. To me, engineering is the highest form of science. 
And he thinks that's all nonsense, that the real work is done by the theoretician. So, in fact, we have arguments about like people like Elon Musk, for example, because I think his work is quite brilliant, but he's fundamentally not coming up with any serious breakthroughs. He's just creating in this world, implementing, like making ideas happen that have a huge impact. To me, that's the Edison. That to me is a brilliant work, but to him, it's messy details that somebody will figure out anyway. I mean, I don't know whether you think there is a actual difference in temperament between say a physicist and an engineer, whether it's just what you got interested in. I don't know. I mean, a lot of what experimental physicists do is to some extent engineering. I mean, it's not what I do. I mostly do data stuff, but a lot of people would be called electrical engineers, but they trained as physicists, but they learned electrical engineering, for example, because they were building detectors. So, there's not such a clear divide, I think. Yeah, it's interesting. I mean, but there does seem to be, like you work with data. There does seem to be a certain, like I love data collection. There might be an OCD element or something that you're more naturally predisposed to as opposed to theory. Like I'm not afraid of data. I love data. And there's a lot of people in machine learning who are more like, they're basically afraid of data collection, afraid of data sets, afraid of all of that. They just want to stay in more than theoretical and they're really good at it, space. So, I don't know if that's the genetic, that's your upbringing, the way you go to school, but looking into the future of LHC and other colliders. So, there's in America, there's whatever it was called, the super, there's a lot of super. Superconducting super colliders. Yeah, superconducting. The desertron, yeah. Desertron, yeah. So, that was canceled, the construction of that. Yeah. Which is a sad thing, but what do you think is the future of these efforts? Will a bigger collider be built? Will LHC be expanded? What do you think? Well, in the near future, the LHC is gonna get an upgrade. So, that's pretty much confirmed. I think it is confirmed, which is, it's not an energy upgrade. It's what we call a luminosity upgrade. So, it basically means increasing the data collection rates. So, more collisions per second, basically, because after a few years of data taking, you get this law of diminishing returns where each year's worth of data is a smaller and smaller fraction of the lot you've already got. So, to get a real improvement in sensitivity, you need to increase the data rate by an order of magnitude. So, that's what this upgrade is gonna do. LHCb, at the moment, the whole detector is basically being rebuilt to allow it to record data at a much larger rate than we could before. So, that will make us sensitive to whole loads of new processes that we weren't able to study before. And I mentioned briefly these anomalies that we've seen. So, we've seen a bunch of very intriguing anomalies in these b quark decays, which may be hinting at the first signs of this kind of the elephant, the signs of some new quantum field or fields maybe beyond the standard model. It's not yet at the statistical threshold where you can say that you've observed something, but there's lots of anomalies in many measurements that all seem to be consistent with each other. So, it's quite interesting. 
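Both the "diminishing returns" point and the "statistical threshold" point above come down to the same bit of counting statistics, sketched here in the usual textbook form (nothing LHCb-specific): the uncertainty on a measurement built from N recorded decays shrinks only like one over the square root of N.

```latex
\frac{\sigma_{\mathrm{stat}}}{\text{measured value}} \;\propto\; \frac{1}{\sqrt{N}}
\qquad\Longrightarrow\qquad
\text{halving the uncertainty takes } 4\times \text{ the data;}
\quad
10\times \text{ the data buys only a factor } \sqrt{10}\approx 3.2
```

Which is why an order of magnitude jump in the collision rate, rather than another year or two of running at the old rate, is what can push these anomalies over, or pull them back under, the threshold for claiming an observation.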
So, the upgrade will allow us to really home in on these things and see whether these anomalies are real, because if they are real, and this kind of connects to your point about the next generation of machines, what we would have seen then is, we would have seen the tail end of some quantum field influencing these b quarks. What we then need to do is to build a bigger collider to actually make the particle of that field. So, if these things really do exist. So, that would be one argument. I mean, so at the moment, Europe is going through this process of thinking about the strategy for the future. So, there are a number of different proposals on the table. One is for a sort of higher energy upgrade of the LHC, where you just build more powerful magnets and put them in the same tunnel. That's a sort of cheaper, less ambitious possibility. Most people don't really like it because it's sort of a bit of a dead end, because once you've done that, there's nowhere to go. There's a machine called CLIC, which is the Compact Linear Collider, an electron positron collider that uses a novel type of acceleration technology to accelerate over shorter distances. We're still talking kilometers long, but not like 100 kilometers long. And then probably the project that is, I think, getting the most support, it'd be interesting to see what happens, something called the Future Circular Collider, which is a really ambitious longterm multi decade project to build a 100 kilometer circumference tunnel under the Geneva region. The LHC would become a kind of feeding machine. It would just feed. So the same area, so it would be a feeder for the. Yeah. So it would kind of, the edge of this machine would be where the LHC is, but it would sort of go under Lake Geneva and round to the Alps, basically, up to the edge of the Geneva basin. So it's basically the biggest tunnel you can fit in the region based on the geology. 100 kilometers. Yeah, so it's big. It'd be a long drive if your experiment's on one side. You've got to go back to CERN for lunch, so that would be a pain. But you know, so this project is, in principle, it's actually two accelerators. The first thing you would do is put an electron positron machine in the 100 kilometer tunnel to study the Higgs. So you'd make lots of Higgs bosons and study the Higgs really precisely in the hope that you see it misbehaving and doing something it's not supposed to. And then in the much longer term, that machine gets taken out and you put in a proton proton machine. So it's like the LHC, but much bigger. And that's the way you start going and looking for dark matter, or you're trying to recreate this phase transition that I talked about in the early universe, where you can see matter and antimatter being made, for example. There's lots of things you can do with these machines. The problem is that they will take, you know, at the most optimistic, you're not gonna have any data from any of these machines until 2040, or, you know, because they take such a long time to build and they're so expensive. So there'll be a process of R&D design, but also the political case being made. So the LHC cost, what, a few billion? Depends how you count it. I think most of the sort of more reasonable estimates that take everything into account properly, it's around the sort of 10, 11, 12 billion euro mark. What would be the future, sorry, I forgot the name already. Future Circular Collider. Future Circular Collider.
Presumably they won't call it that when it's built, cause it won't be the future anymore. But I don't know, I don't know what they'll call it then. The very big Hadron Collider, I don't know. But that will, now I should know the numbers, but I think the whole project is estimated at about 30 billion euros, but that's money spent over between now and 2070 probably, which is when the last bit of it would be sort of finishing up, I guess. So you're talking a half a century of science coming out of this thing, shared by many countries. So the actual cost, the arguments that are made is that you could make this project fit within the existing budget of CERN, if you didn't do anything else. And CERN, by the way, we didn't mention, what is CERN? CERN is the European Organization for Nuclear Research. It's an international organization that was established in the 1950s in the wake of the second world war as a kind of, it was sort of like a scientific Marshall plan for Europe. The idea was that you bring European science back together for peaceful purposes, because what happened in the forties was, a lot of particular Jewish scientists, but a lot of scientists from central Europe had fled to the United States and Europe had sort of seen this brain drain. So there was a desire to bring the community back together for a project that wasn't building nasty bombs, but was doing something that was curiosity driven. So, and that has continued since then. So it's kind of a unique organization. It's you, to be a member as a country, you sort of sign up as a member and then you have to pay a fraction of your GDP each year as a subscription. I mean, it's a very small fraction, relatively speaking. I think it's like, I think the UK's contribution is a hundred or 200 million quid or something like that. Yeah, which is quite a lot, but not so. That's fascinating. I mean, just the whole thing that is possible, it's beautiful. It's a beautiful idea, especially when there's no wars on the line, it's not like we're freaking out, as we're actually legitimately collaborating to do good science. One of the things I don't think we really mentioned is on the final side, that sort of the data analysis side, is there breakthroughs possible there and the machine learning side, like is there a lot more signal to be mined in more effective ways from the actual raw data? Yeah, a lot of people are looking into that. I mean, so I use machine learning in my data analysis, but pretty naughty, basic stuff, cause I'm not a machine learning expert. I'm just a physicist who had to learn to do this stuff for my day job. So what a lot of people do is they use kind of off the shelf packages that you can train to do signal noise. Just clean up all the data. But one of the big challenges, the big challenge of the data is A, it's volume, there's huge amounts of data. So the LHC generates, now, okay, I try to remember what the actual numbers are, but if you, we don't record all our data, we record a tiny fraction of the data. It's like of order one 10,000th or something, I think. Is that right? Around that. So most of it gets thrown away. You couldn't record all the LHC data cause it would fill up every computer in the world in a matter of days, basically. So there's this process that happens on live, on the detector, something called a trigger, which in real time, 40 million times every second has to make a decision about whether this collision is likely to contain an interesting object, like a Higgs boson or a dark matter particle. 
And it has to do that very fast. And the software algorithms in the past were relatively basic. They did things like measure momenta and energies of particles and put some requirements on them. So you would say, if there's a particle with an energy above some threshold, then record this collision. But if there isn't, don't. Whereas now the attempt is to get more and more machine learning in at the earliest possible stage. That's cool, at the stage of deciding whether we want to keep this data or not. But also maybe even lower down than that, which is the point where there's this, so generally how the data is reconstructed is you start off with a set of digital hits in your detector. So channels saying, did you see something? Did you not see something? That has to be then turned into tracks, particles going in different directions. And that's done by using fits that fit through the data points. And then that's passed to the algorithms that then go, is this interesting or not? What would be better is if you could train machine learning to just look at the raw hits, the basic, real base level information, not have any of the reconstruction done. And it just goes, and it can learn to do pattern recognition on this strange three dimensional image that you get. And potentially that's where you could get really big gains, because our triggers tend to be quite inefficient, because they don't have time to do the full whiz bang processing to get all the information out that we would like, because you have to make the decision very quickly. So if you can come up with some clever machine learning technique, then potentially you can massively increase the amount of useful data you record and get rid of more of the background earlier in the process. Yeah, to me, that's an exciting possibility because then you don't have to build a sort of, you can get a gain without having to. Without having to build any hardware, I suppose. Hardware, yeah. Although you need lots of new GPU farms, I guess. So hardware still helps. But I got to talk to you, sort of I'm not sure how to ask, but you're clearly an incredible science communicator. I don't know if that's the right term, but you're basically a younger Neil deGrasse Tyson with a British accent. So, and you've, I mean, can you say where we are today, actually? Yeah, so today we're in the Royal Institution in London, which is a very old organization. It's been around for about 200 years now, I think. Maybe even, I should know when it was founded. Sort of early 19th century, it was set up to basically communicate science to the public. So it was one of the first places in the world where famous scientists would come and give talks. So very famously Humphrey Davy, who you may know of, who was the person who discovered nitrous oxide. He was a very famous chemist and scientist. Also discovered electrolysis. So he used to do these fantastic, he was a very charismatic speaker. So he used to appear here. There's a big desk that they usually have in the theater and he would do demonstrations to the sort of the, the folk of London back in the early 19th century. And Michael Faraday, who I talked about, who is the person who did so much work on electromagnetism, he used, he lectured here. He also did experiments in the basement. So this place has got a long history of both scientific research, but also communication of scientific research. So you gave a few lectures here. How many, two? I've given, yeah, I've given a couple of lectures in this theater before, so.
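Circling back to the trigger described a little earlier, here is a toy sketch of the two strategies Harry contrasts: first a cut-based decision (record the event if any reconstructed particle is above an energy threshold), then a small classifier trained to separate "interesting" from "boring" events using lower-level hit patterns. Everything here, the thresholds, the simulated hits, and the choice of scikit-learn model, is an invented illustration, not LHC trigger software.

```python
# Toy trigger sketch; all numbers and data are made up for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)

def cut_based_trigger(particle_energies_gev, threshold_gev=25.0):
    """Keep the collision if any reconstructed particle is above the energy threshold."""
    return float(np.max(particle_energies_gev)) > threshold_gev

print(cut_based_trigger(np.array([3.1, 12.4, 40.2])))   # True: record this event
print(cut_based_trigger(np.array([1.0, 4.7, 9.9])))     # False: discard it

# The machine-learning alternative: learn the keep/discard decision directly from
# lower-level detector hits. Each toy "event" is a flattened grid of hit counts;
# "interesting" events hide an extra localized cluster of hits.
n_events, n_channels = 5000, 256
X = rng.poisson(1.0, size=(n_events, n_channels)).astype(float)
y = rng.integers(0, 2, size=n_events)                    # 1 = contains something interesting
X[y == 1, 100:110] += rng.poisson(4.0, size=(int(y.sum()), 10))

clf = GradientBoostingClassifier(random_state=0).fit(X[:4000], y[:4000])
print("held-out trigger accuracy:", clf.score(X[4000:], y[4000:]))
```

The real decision has to run in microseconds on dedicated hardware rather than in Python, so this only shows the shape of the idea: replace hand-tuned cuts on reconstructed quantities with a model that looks at the rawer information directly.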
I mean, that's, so people should definitely go watch online. It's just the explanation of particle physics. So all the, I mean, it's incredible. Like your lectures are just incredible. I can't sing it enough praise. So it was awesome. But maybe can you say, what did that feel like? What does it feel like to lecture here, to talk about that? And maybe from a different perspective, more kind of like how the sausage is made is, how do you prepare for that kind of thing? How do you think about communication, the process of communicating these ideas in a way that's inspiring to, what I would say your talks are inspiring to like the general audience. You don't actually have to be a scientist. You can still be inspired without really knowing much of the, you start from the very basics. So what's the preparation process? And then the romantic question is, what did that feel like to perform here? I mean, profession, yeah. I mean, the process, I mean, the talk, my favorite talk that I gave here was one called Beyond the Higgs, which you can find on the Royal Institute's YouTube channel, which you should go and check out. I mean, and their channel's got loads of great talks with loads of great people as well. I mean, that one, I'd sort of given a version of it many times, so part of it is just practice, right? And actually, I don't have some great theory of how to communicate with people. It's more just that I'm really interested and excited by those ideas and I like talking about them. And through the process of doing that, I guess I figured out stories that work and explanations that work. When you say practice, you mean legitimately just giving talks? Just giving talks, yeah. I started off when I was a PhD student doing talks in schools and I still do that as well some of the time and doing things, I've even done a bit of standup comedy, which sort of went reasonably well, even if it was terrifying. And that's on YouTube as well. That's also on, I wouldn't necessarily recommend you check that out. I'm gonna post the links several places to make sure people click on it. But it's basically, I kind of have a story in my head and I kind of, I have to think about what I wanna say. I usually have some images to support what I'm saying and I get up and do it. And it's not really, I wish there was some kind of, I probably should have some proper process. This is very sounds like I'm just making up as I go along and I sort of am. Well, I think the fundamental thing that you said, I think it's like, I don't know if you know who a guy named Joe Rogan is. Yes, I do. So he's also kind of sounds like you in a sense that he's not very introspective about his process, but he's an incredibly engaging conversationalist. And I think one of the things that you and him share that I could see is like a genuine curiosity and passion for the topic. I think that could be systematically cultivated. I'm sure there's a process to it, but you come to it naturally somehow. I think maybe there's something else as well, which is to understand something. There's this quote by Feynman, which I really like, which is what I cannot create, I do not understand. So I'm not particularly super bright. So for me to understand something, I have to break it down into its simplest elements. And if I can then tell people about that, that helps me understand it as well. 
So I've learned to understand physics a lot more from the process of communicating, because it forces you to really scrutinize the ideas that you're communicating and it often makes you realize you don't really understand the ideas you're talking about. And I'm writing a book at the moment, and I had this experience yesterday where I realized I didn't really understand a pretty fundamental theoretical aspect of my own subject. And I had to go and I had to sort of spend a couple of days reading textbooks and thinking about it in order to make sure that the explanation I gave captured the, got as close to what is actually happening in the theory. And to do that, you have to really understand it properly. Yeah, and there's layers to understanding. It seems like the more, there must be some kind of Feynman law. I mean, the more you understand sort of the simpler you're able to really convey the essence of the idea, right? So it's like this reverse effect that it's like the more you understand, the simpler the final thing that you actually convey. And so the more accessible somehow it becomes. That's why Feynman's lectures are really accessible. It was just counterintuitive. Yeah, although there are some ideas that are very difficult to explain no matter how well or badly you understand them. Like I still can't really properly explain the Higgs mechanism. Yeah. Because some of these ideas only exist in mathematics really. And the only way to really develop an understanding is to go unfortunately to a graduate degree in physics. But you can get kind of a flavor of what's happening, I think, and it's trying to do that in a way that isn't misleading, but always also intelligible. So let me ask them the romantic question of what to you is the most, perhaps an unfair question, what is the most beautiful idea in physics? One that fills you with awe is the most surprising, the strangest, the weirdest. There's a lot of different definitions of beauty. And I'm sure there's several for you, but is there something that just jumps to mind that you think is just especially beautiful? There's a specific thing and a more general thing. So maybe the specific thing first, which I can now first came across as an undergraduate. I found this amazing. So this idea that the forces of nature, electromagnetism, strong force, the weak force, they arise in our theories as a consequence of symmetries. So symmetries in the laws of nature, in the equations essentially that used to describe these ideas, the process whereby theories come up with these sorts of models is they say, imagine the universe obeys this particular type of symmetry. It's a symmetry that isn't so far removed from a geometrical symmetry, like the rotations of a cube. It's not, you can't think of it quite that way, but it's sort of a similar sort of idea. And you say, okay, if the universe respects the symmetry, you find that you have to introduce a force which has the properties of electromagnetism or a different symmetry, you get the strong force or a different symmetry, you get the weak force. So these interactions seem to come from some deeper, it suggests that they come from some deeper symmetry principle. I mean, it depends a bit how you look at it because it could be that we're actually just recognizing symmetries in the things that we see, but there's something rather lovely about that. 
But I mean, I suppose a bigger thing that makes me wonder is actually, if you look at the laws of nature, how particles interact when you get really close down, they're basically pretty simple things. They bounce off each other by exchanging through force fields and they move around in very simple ways. And somehow these basic ingredients, these few particles that we know about and the forces, create this universe, which is unbelievably complicated and has things like you and me in it, and the earth and stars that make matter in their cores from the gravitational energy of their own bulk that then gets sprayed into the universe that forms other things. I mean, the fact that there's this incredibly long story that goes right back to the beginning, and we can take this story right back to a trillionth of a second after the Big Bang, and we can trace the origins of the stuff that we're made from. And it all ultimately comes from these simple ingredients with these simple rules. And the fact you can generate such complexity from that is really mysterious, I think, and strange. And it's not even a question that physicists can really tackle because we are sort of trying to find these really elementary laws. But it turns out that going from elementary laws and a few particles to something even as complicated as a molecule becomes very difficult. So going from a molecule to a human being is a problem that just can't be tackled, at least not at the moment, so. Yeah, the emergence of complexity from simple rules is so beautiful and so mysterious. And we don't have good mathematics to even try to approach those emergent phenomena. That's why we have chemistry and biology and all the other subjects, yeah, okay. I don't think there's a better way to end it, Harry. I can't, I mean, I think I speak for a lot of people that can't wait to see what happens in the next five, 10, 20 years with you. I think you're one of the great communicators of our time. So I hope you continue that and I hope that grows. And I'm definitely a huge fan. So it was an honor to talk to you today. Thanks so much, man. It was really fun, thanks very much. Thanks for listening to this conversation with Harry Cliff. And thank you to our sponsors, ExpressVPN and Cash App. Please consider supporting the podcast by getting ExpressVPN at expressvpn.com slash lexpod and downloading Cash App and using code lexpodcast. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon or simply connect with me on Twitter at lexfridman. And now let me leave you with some words from Harry Cliff. You and I are leftovers. Every particle in our bodies is a survivor from an almighty shootout between matter and antimatter that happened a little after the Big Bang. In fact, only one in a billion particles created at the beginning of time has survived to the present day. Thank you for listening and hope to see you next time.
Harry Cliff: Particle Physics and the Large Hadron Collider | Lex Fridman Podcast #92
The following is a conversation with Daphne Koller, a professor of computer science at Stanford University, a cofounder of Coursera with Andrew Ng, and founder and CEO of Incitro, a company at the intersection of machine learning and biomedicine. We're now in the exciting early days of using the data driven methods of machine learning to help discover and develop new drugs and treatments at scale. Daphne and Incitro are leading the way on this with breakthroughs that may ripple through all fields of medicine, including ones most critical for helping with the current coronavirus pandemic. This conversation was recorded before the COVID 19 outbreak. For everyone feeling the medical, psychological, and financial burden of this crisis, I'm sending love your way. Stay strong, we're in this together, we'll beat this thing. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Friedman, spelled F R I D M A N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of this conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the app store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar. Since Cash App allows you to send and receive money digitally, peer to peer, and security in all digital transactions is very important, let me mention the PCI data security standard that Cash App is compliant with. I'm a big fan of standards for safety and security. PCI DSS is a good example of that, where a bunch of competitors got together and agreed that there needs to be a global standard around the security of transactions. Now we just need to do the same for autonomous vehicles and AI systems in general. So again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you get $10 and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. And now here's my conversation with Daphne Koller. So you cofounded Coursera and made a huge impact in the global education of AI. And after five years in August, 2016, wrote a blog post saying that you're stepping away and wrote, quote, it is time for me to turn to another critical challenge, the development of machine learning and its applications to improving human health. So let me ask two far out philosophical questions. One, do you think we'll one day find cures for all major diseases known today? And two, do you think we'll one day figure out a way to extend the human lifespan, perhaps to the point of immortality? So one day is a very long time and I don't like to make predictions of the type we will never be able to do X because I think that's a smacks of hubris. It seems that never in the entire eternity of human existence will we be able to solve a problem. That being said, curing disease is very hard because oftentimes by the time you discover the disease, a lot of damage has already been done. And so to assume that we would be able to cure disease at that stage assumes that we would come up with ways of basically regenerating entire parts of the human body in the way that actually returns it to its original state. And that's a very challenging problem. 
We have cured very few diseases. We've been able to provide treatment for an increasingly large number, but the number of things that you could actually define to be cures is actually not that large. So I think that there's a lot of work that would need to happen before one could legitimately say that we have cured even a reasonable number, far less all diseases. On the scale of zero to 100, where are we in understanding the fundamental mechanisms of all of major diseases? What's your sense? So from the computer science perspective that you've entered the world of health, how far along are we? I think it depends on which disease. I mean, there are ones where I would say we're maybe not quite at a hundred because biology is really complicated and there's always new things that we uncover that people didn't even realize existed. But I would say there's diseases where we might be in the 70s or 80s, and then there's diseases in which I would say with probably the majority where we're really close to zero. Would Alzheimer's and schizophrenia and type two diabetes fall closer to zero or to the 80? I think Alzheimer's is probably closer to zero than to 80. There are hypotheses, but I don't think those hypotheses have as of yet been sufficiently validated that we believe them to be true. And there is an increasing number of people who believe that the traditional hypotheses might not really explain what's going on. I would also say that Alzheimer's and schizophrenia and even type two diabetes are not really one disease. They're almost certainly a heterogeneous collection of mechanisms that manifest in clinically similar ways. So in the same way that we now understand that breast cancer is really not one disease, it is multitude of cellular mechanisms, all of which ultimately translate to uncontrolled proliferation, but it's not one disease. The same is almost undoubtedly true for those other diseases as well. And that understanding that needs to precede any understanding of the specific mechanisms of any of those other diseases. Now, in schizophrenia, I would say we're almost certainly closer to zero than to anything else. Type two diabetes is a bit of a mix. There are clear mechanisms that are implicated that I think have been validated that have to do with insulin resistance and such, but there's almost certainly there as well many mechanisms that we have not yet understood. You've also thought and worked a little bit on the longevity side. Do you see the disease and longevity as overlapping completely, partially, or not at all as efforts? Those mechanisms are certainly overlapping. There's a well known phenomenon that says that for most diseases, other than childhood diseases, the risk for contracting that disease increases exponentially year on year, every year from the time you're about 40. So obviously there's a connection between those two things. That's not to say that they're identical. There's clearly aging that happens that is not really associated with any specific disease. And there's also diseases and mechanisms of disease that are not specifically related to aging. So I think overlap is where we're at. Okay. It is a little unfortunate that we get older and it seems that there's some correlation with the occurrence of diseases or the fact that we get older. And both are quite sad. I mean, there's processes that happen as cells age that I think are contributing to disease. 
Some of those have to do with DNA damage that accumulates as cells divide where the repair mechanisms don't fully correct for those. There are accumulations of proteins that are misfolded and potentially aggregate and those too contribute to disease and will contribute to inflammation. There's a multitude of mechanisms that have been uncovered that are sort of wear and tear at the cellular level that contribute to disease processes and I'm sure there's many that we don't yet understand. On a small tangent and perhaps philosophical, the fact that things get older and the fact that things die is a very powerful feature for the growth of new things. It's a learning, it's a kind of learning mechanism. So it's both tragic and beautiful. So do you, so in trying to fight disease and trying to fight aging, do you think about sort of the useful fact of our mortality or would you, like if you were, could be immortal, would you choose to be immortal? Again, I think immortal is a very long time and I don't know that that would necessarily be something that I would want to aspire to but I think all of us aspire to an increased health span, I would say, which is an increased amount of time where you're healthy and active and feel as you did when you were 20 and we're nowhere close to that. People deteriorate physically and mentally over time and that is a very sad phenomenon. So I think a wonderful aspiration would be if we could all live to the biblical 120 maybe in perfect health. In high quality of life. High quality of life. I think that would be an amazing goal for us to achieve as a society now is the right age 120 or 100 or 150. I think that's up for debate but I think an increased health span is a really worthy goal. And anyway, in a grand time of the age of the universe, it's all pretty short. So from the perspective, you've done obviously a lot of incredible work in machine learning. So what role do you think data and machine learning play in this goal of trying to understand diseases and trying to eradicate diseases? Up until now, I don't think it's played very much of a significant role because largely the data sets that one really needed to enable a powerful machine learning methods, those data sets haven't really existed. There's been dribs and drabs and some interesting machine learning that has been applied, I would say machine learning slash data science, but the last few years are starting to change that. So we now see an increase in some large data sets but equally importantly, an increase in technologies that are able to produce data at scale. It's not typically the case that people have deliberately proactively used those tools for the purpose of generating data for machine learning. They, to the extent that those techniques have been used for data production, they've been used for data production to drive scientific discovery and the machine learning came as a sort of byproduct second stage of, oh, you know, now we have a data set, let's do machine learning on that rather than a more simplistic data analysis method. But what we are doing in Citro is actually flipping that around and saying, here's this incredible repertoire of methods that bioengineers, cell biologists have come up with, let's see if we can put them together in brand new ways with the goal of creating data sets that machine learning can really be applied on productively to create powerful predictive models that can help us address fundamental problems in human health. 
So really focus to get, make data the primary focus and the primary goal and find, use the mechanisms of biology and chemistry to create the kinds of data set that could allow machine learning to benefit the most. I wouldn't put it in those terms because that says that data is the end goal. Data is the means. So for us, the end goal is helping address challenges in human health and the method that we've elected to do that is to apply machine learning to build predictive models and machine learning, in my opinion, can only be really successfully applied especially the more powerful models if you give it data that is of sufficient scale and sufficient quality. So how do you create those data sets so as to drive the ability to generate predictive models which subsequently help improve human health? So before we dive into the details of that, let me take a step back and ask when and where was your interest in human health born? Are there moments, events, perhaps if I may ask, tragedies in your own life that catalyzes passion or was it the broader desire to help humankind? So I would say it's a bit of both. So on, I mean, my interest in human health actually dates back to the early 2000s when a lot of my peers in machine learning and I were using data sets that frankly were not very inspiring. Some of us old timers still remember the quote unquote 20 news groups data set where this was literally a bunch of texts from 20 news groups, a concept that doesn't really even exist anymore. And the question was, can you classify which news group a particular bag of words came from? And it wasn't very interesting. The data sets at the time on the biology side were much more interesting, both from a technical and also from an aspirational perspective. They were still pretty small, but they were better than 20 news groups. And so I started out, I think just by wanting to do something that was more, I don't know, societally useful and technically interesting. And then over time became more and more interested in the biology and the human health aspects for themselves and began to work even sometimes on papers that were just in biology without having a significant machine learning component. I think my interest in drug discovery is partly due to an incident I had with when my father sadly passed away about 12 years ago. He had an autoimmune disease that settled in his lungs and the doctors basically said, well, there's only one thing that we could do, which is give him prednisone. At some point, I remember a doctor even came and said, hey, let's do a lung biopsy to figure out which autoimmune disease he has. And I said, would that be helpful? Would that change treatment? He said, no, there's only prednisone. That's the only thing we can give him. And I had friends who were rheumatologists who said the FDA would never approve prednisone today because the ratio of side effects to benefit is probably not large enough. Today, we're in a state where there's probably four or five, maybe even more, well, it depends for which autoimmune disease, but there are multiple drugs that can help people with autoimmune disease, many of which didn't exist 12 years ago. And I think we're at a golden time in some ways in drug discovery where there's the ability to create drugs that are much more safe and much more effective than we've ever been able to before. And what's lacking is enough understanding of biology and mechanism to know where to aim that engine. And I think that's where machine learning can help. 
So in 2018, you started and now lead a company in Citro, which is, like you mentioned, perhaps the focus is drug discovery and the utilization of machine learning for drug discovery. So you mentioned that, quote, we're really interested in creating what you might call a disease in a dish model, disease in a dish models, places where diseases are complex, where we really haven't had a good model system, where typical animal models that have been used for years, including testing on mice, just aren't very effective. So can you try to describe what is an animal model and what is a disease in a dish model? Sure. So an animal model for disease is where you create effectively, it's what it sounds like. It's oftentimes a mouse where we have introduced some external perturbation that creates the disease and then we cure that disease. And the hope is that by doing that, we will cure a similar disease in the human. The problem is that oftentimes the way in which we generate the disease in the animal has nothing to do with how that disease actually comes about in a human. It's what you might think of as a copy of the phenotype, a copy of the clinical outcome, but the mechanisms are quite different. And so curing the disease in the animal, which in most cases doesn't happen naturally, mice don't get Alzheimer's, they don't get diabetes, they don't get atherosclerosis, they don't get autism or schizophrenia. Those cures don't translate over to what happens in the human. And that's where most drugs fails just because the findings that we had in the mouse don't translate to a human. The disease in the dish models is a fairly new approach. It's been enabled by technologies that have not existed for more than five to 10 years. So for instance, the ability for us to take a cell from any one of us, you or me, revert that say skin cell to what's called stem cell status, which is what's called the pluripotent cell that can then be differentiated into different types of cells. So from that pluripotent cell, one can create a Lex neuron or a Lex cardiomyocyte or a Lex hepatocyte that has your genetics, but that right cell type. And so if there's a genetic burden of disease that would manifest in that particular cell type, you might be able to see it by looking at those cells and saying, oh, that's what potentially sick cells look like versus healthy cells and then explore what kind of interventions might revert the unhealthy looking cell to a healthy cell. Now, of course, curing cells is not the same as curing people. And so there's still potentially a translatability gap, but at least for diseases that are driven, say by human genetics and where the human genetics is what drives the cellular phenotype, there is some reason to hope that if we revert those cells in which the disease begins and where the disease is driven by genetics and we can revert that cell back to a healthy state, maybe that will help also revert the more global clinical phenotype. So that's really what we're hoping to do. That step, that backward step, I was reading about it, the Yamanaka factor. Yes. So it's like that reverse step back to stem cells. Yes. Seems like magic. It is. Honestly, before that happened, I think very few people would have predicted that to be possible. It's amazing. Can you maybe elaborate, is it actually possible? Like where, like how stable? So this result was maybe like, I don't know how many years ago, maybe 10 years ago was first demonstrated, something like that. Is this, how hard is this? 
Like how noisy is this backward step? It seems quite incredible and cool. It is, it is incredible and cool. It was much more, I think finicky and bespoke at the early stages when the discovery was first made. But at this point, it's become almost industrialized. There are what's called contract research organizations, vendors that will take a sample from a human and revert it back to stem cell status. And it works a very good fraction of the time. Now there are people who will ask, I think good questions. Is this really truly a stem cell or does it remember certain aspects of what, of changes that were made in the human beyond the genetics? It's passed as a skin cell, yeah. It's passed as a skin cell or it's passed in terms of exposures to different environmental factors and so on. So I think the consensus right now is that these are not always perfect and there is little bits and pieces of memory sometimes, but by and large, these are actually pretty good. So one of the key things, well, maybe you can correct me, but one of the useful things for machine learning is size, scale of data. How easy it is to do these kinds of reversals to stem cells and then disease in a dish models at scale. Is that a huge challenge or not? So the reversal is not as of this point something that can be done at the scale of tens of thousands or hundreds of thousands. I think total number of stem cells or IPS cells that are what's called induced pluripotent stem cells in the world I think is somewhere between five and 10,000 last I looked. Now again, that might not count things that exist in this or that academic center and they may add up to a bit more, but that's about the range. So it's not something that you could at this point generate IPS cells from a million people, but maybe you don't need to because maybe that background is enough because it can also be now perturbed in different ways. And some people have done really interesting experiments in for instance, taking cells from a healthy human and then introducing a mutation into it using one of the other miracle technologies that's emerged in the last decade which is CRISPR gene editing and introduced a mutation that is known to be pathogenic. And so you can now look at the healthy cells and the unhealthy cells, the one with the mutation and do a one on one comparison where everything else is held constant. And so you could really start to understand specifically what the mutation does at the cellular level. So the IPS cells are a great starting point and obviously more diversity is better because you also wanna capture ethnic background and how that affects things, but maybe you don't need one from every single patient with every single type of disease because we have other tools at our disposal. Well, how much difference is there between people I mentioned ethnic background in terms of IPS cells? So we're all like, it seems like these magical cells that can do to create anything between different populations, different people. Is there a lot of variability between cell cells? Well, first of all, there's the variability, that's driven simply by the fact that genetically we're different. So a stem cell that's derived from my genotype is gonna be different from a stem cell that's derived from your genotype. There's also some differences that have more to do with for whatever reason, some people's stem cells differentiate better than other people's stem cells. We don't entirely understand why. 
So there's certainly some differences there as well, but the fundamental difference and the one that we really care about and is a positive is that the fact that the genetics are different and therefore recapitulate my disease burden versus your disease burden. What's a disease burden? Well, a disease burden is just if you think, I mean, it's not a well defined mathematical term, although there are mathematical formulations of it. If you think about the fact that some of us are more likely to get a certain disease than others because we have more variations in our genome that are causative of the disease, maybe fewer that are protective of the disease. People have quantified that using what are called polygenic risk scores, which look at all of the variations in an individual person's genome and add them all up in terms of how much risk they confer for a particular disease. And then they've put people on a spectrum of their disease risk. And for certain diseases where we've been sufficiently powered to really understand the connection between the many, many small variations that give rise to an increased disease risk, there's some pretty significant differences in terms of the risk between the people, say at the highest decile of this polygenic risk score and the people at the lowest decile. Sometimes those differences are factor of 10 or 12 higher. So there's definitely a lot that our genetics contributes to disease risk, even if it's not by any stretch the full explanation. And from a machine learning perspective, there's signal there. There is definitely signal in the genetics and there's even more signal, we believe, in looking at the cells that are derived from those different genetics because in principle, you could say all the signal is there at the genetics level. So we don't need to look at the cells, but our understanding of the biology is so limited at this point than seeing what actually happens at the cellular level is a heck of a lot closer to the human clinical outcome than looking at the genetics directly. And so we can learn a lot more from it than we could by looking at genetics alone. So just to get a sense, I don't know if it's easy to do, but what kind of data is useful in this disease in a dish model? Like what's the source of raw data information? And also from my outsider's perspective, so biology and cells are squishy things. And then how do you connect the computer to that? Which sensory mechanisms, I guess. So that's another one of those revolutions that have happened in the last 10 years in that our ability to measure cells very quantitatively has also dramatically increased. So back when I started doing biology in the late 90s, early 2000s, that was the initial era where we started to measure biology in really quantitative ways using things like microarrays, where you would measure in a single experiment the activity level, what's called expression level of every gene in the genome in that sample. And that ability is what actually allowed us to even understand that there are molecular subtypes of diseases like cancer, where up until that point, it's like, oh, you have breast cancer. But then when we looked at the molecular data, it was clear that there's different subtypes of breast cancer that at the level of gene activity look completely different to each other. So that was the beginning of this process. 
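To make the polygenic risk score idea from a little earlier concrete: it is essentially a weighted sum of a person's risk-allele counts, which can then be ranked into deciles. The sketch below uses entirely synthetic genotypes, effect sizes, and disease rates; it is meant only to show the arithmetic, not any real study.

```python
# Toy polygenic risk score; genotypes, effect sizes, and rates are all synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_people, n_variants = 10_000, 500

# Genotypes: 0, 1 or 2 copies of the risk allele at each variant, for each person.
genotypes = rng.integers(0, 3, size=(n_people, n_variants))

# Per-variant effect sizes, standing in for estimates from a large association study.
effect_sizes = rng.normal(0.0, 0.05, size=n_variants)

# The polygenic risk score is just the weighted sum of risk-allele counts.
prs = genotypes @ effect_sizes

# Rank people into deciles of the score and compare simulated disease rates.
decile = np.digitize(prs, np.quantile(prs, np.linspace(0.1, 0.9, 9)))
disease = rng.random(n_people) < 0.02 * np.exp(prs - prs.mean())
for d in (0, 9):
    print(f"decile {d + 1}: simulated disease rate {disease[decile == d].mean():.3f}")
```

In real work the effect sizes come from genome-wide association studies rather than random draws, but the scoring step itself is just this kind of weighted sum.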
Now we have the ability to measure individual cells in terms of their gene activity using what's called single cell RNA sequencing, which basically sequences the RNA, which is that activity level of different genes for every gene in the genome. And you could do that at single cell level. So that's an incredibly powerful way of measuring cells. I mean, you literally count the number of transcripts. So it really turns that squishy thing into something that's digital. Another tremendous data source that's emerged in the last few years is microscopy and specifically even super resolution microscopy, where you could use digital reconstruction to look at subcellular structures, sometimes even things that are below the diffraction limit of light by doing a sophisticated reconstruction. And again, that gives you a tremendous amount of information at the subcellular level. There's now more and more ways that amazing scientists out there are developing for getting new types of information from even single cells. And so that is a way of turning those squishy things into digital data. Into beautiful data sets. But so that data set then with machine learning tools allows you to maybe understand the developmental, like the mechanism of a particular disease. And if it's possible to sort of at a high level describe, how does that help lead to a drug discovery that can help prevent, reverse that mechanism? So I think there's different ways in which this data could potentially be used. Some people use it for scientific discovery and say, oh, look, we see this phenotype at the cellular level. So let's try and work our way backwards and think which genes might be involved in pathways that give rise to that. So that's a very sort of analytical method to sort of work our way backwards using our understanding of known biology. Some people use it in a somewhat more, sort of forward, if that was a backward, this would be forward, which is to say, okay, if I can perturb this gene, does it show a phenotype that is similar to what I see in disease patients? And so maybe that gene is actually causal of the disease. So that's a different way. And then there's what we do, which is basically to take that very large collection of data and use machine learning to uncover the patterns that emerge from it. So for instance, what are those subtypes that might be similar at the human clinical outcome, but quite distinct when you look at the molecular data? And then if we can identify such a subtype, are there interventions that if I apply it to cells that come from this subtype of the disease and you apply that intervention, it could be a drug or it could be a CRISPR gene intervention, does it revert the disease state to something that looks more like normal, happy, healthy cells? And so hopefully if you see that, that gives you a certain hope that that intervention will also have a meaningful clinical benefit to people. And there's obviously a bunch of things that you would wanna do after that to validate that, but it's a very different and much less hypothesis driven way of uncovering new potential interventions and might give rise to things that are not the same things that everyone else is already looking at. That's, I don't know, I'm just like to psychoanalyze my own feeling about our discussion currently. It's so exciting to talk about sort of a machine, fundamentally, well, something that's been turned into a machine learning problem and that says can have so much real world impact. That's how I feel too. 
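As a toy version of the "let machine learning uncover the subtypes" idea just described, the sketch below clusters synthetic expression profiles in which two hidden subtypes look identical except for a block of genes. The data, the dimensionality reduction, and the choice of k-means are all illustrative assumptions, not a description of any real analysis pipeline.

```python
# Toy subtype discovery on synthetic expression data; illustrative only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_samples, n_genes = 300, 1000

# Two hidden subtypes that are clinically indistinguishable but differ in a block of genes.
subtype = rng.integers(0, 2, size=n_samples)
X = rng.normal(size=(n_samples, n_genes))
X[subtype == 1, :50] += 2.0        # subtype 1 over-expresses the first 50 genes

# Standardise, reduce dimension, then cluster.
X_reduced = PCA(n_components=10, random_state=0).fit_transform(StandardScaler().fit_transform(X))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_reduced)

# Check how well the recovered clusters line up with the hidden subtypes
# (cluster labels are arbitrary, so allow for either assignment).
agreement = max((labels == subtype).mean(), (labels != subtype).mean())
print(f"cluster / subtype agreement: {agreement:.2f}")
```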
That's kind of exciting because I'm so, most of my day is spent with data sets that I guess closer to the news groups. So this is a kind of, it just feels good to talk about. In fact, I almost don't wanna talk about machine learning. I wanna talk about the fundamentals of the data set, which is an exciting place to be. I agree with you. It's what gets me up in the morning. It's also what attracts a lot of the people who work at InCetro to InCetro because I think all of the, certainly all of our machine learning people are outstanding and could go get a job selling ads online or doing eCommerce or even self driving cars. But I think they would want, they come to us because they want to work on something that has more of an aspirational nature and can really benefit humanity. What, with these approaches, what do you hope, what kind of diseases can be helped? We mentioned Alzheimer's, schizophrenia, type 2 diabetes. Can you just describe the various kinds of diseases that this approach can help? Well, we don't know. And I try and be very cautious about making promises about some things that, oh, we will cure X. People make that promise. And I think it's, I tried to first deliver and then promise as opposed to the other way around. There are characteristics of a disease that make it more likely that this type of approach can potentially be helpful. So for instance, diseases that have a very strong genetic basis are ones that are more likely to manifest in a stem cell derived model. We would want the cellular models to be relatively reproducible and robust so that you could actually get enough of those cells and in a way that isn't very highly variable and noisy. You would want the disease to be relatively contained in one or a small number of cell types that you could actually create in an in vitro, in a dish setting. Whereas if it's something that's really broad and systemic and involves multiple cells that are in very distal parts of your body, putting that all in the dish is really challenging. So we want to focus on the ones that are most likely to be successful today with the hope, I think, that really smart bioengineers out there are developing better and better systems all the time so that diseases that might not be tractable today might be tractable in three years. So for instance, five years ago, these stem cell derived models didn't really exist. People were doing most of the work in cancer cells and cancer cells are very, very poor models of most human biology because they're, A, they were cancer to begin with and B, as you passage them and they proliferate in a dish, they become, because of the genomic instability, even less similar to human biology. Now we have these stem cell derived models. We have the capability to reasonably robustly, not quite at the right scale yet, but close, to derive what's called organoids, which are these teeny little sort of multicellular organ, sort of models of an organ system. So there's cerebral organoids and liver organoids and kidney organoids and. Yeah, brain organoids. That's organoids. It's possibly the coolest thing I've ever seen. Is that not like the coolest thing? Yeah. And then I think on the horizon, we're starting to see things like connecting these organoids to each other so that you could actually start, and there's some really cool papers that start to do that where you can actually start to say, okay, can we do multi organ system stuff? There's many challenges to that. 
It's not easy by any stretch, but it might, I'm sure people will figure it out. And in three years or five years, there will be disease models that we could make for things that we can't make today. Yeah, and this conversation would seem almost outdated with the kind of scale that could be achieved in like three years. I hope so. That's the hope. That would be so cool. So you've cofounded Coursera with Andrew Ng and were part of the whole MOOC revolution. So to jump topics a little bit, can you maybe tell the origin story of the history, the origin story of MOOCs, of Coursera, and in general, your teaching to huge audiences on a very sort of impactful topic of AI in general? So I think the origin story of MOOCs emanates from a number of efforts that occurred at Stanford University around the late 2000s where different individuals within Stanford, myself included, were getting really excited about the opportunities of using online technologies as a way of achieving both improved quality of teaching and also improved scale. And so Andrew, for instance, led the Stanford Engineering Everywhere, which was sort of an attempt to take 10 Stanford courses and put them online just as video lectures. I led an effort within Stanford to take some of the courses and really create a very different teaching model that broke those up into smaller units and had some of those embedded interactions and so on, which got a lot of support from university leaders because they felt like it was potentially a way of improving the quality of instruction at Stanford by moving to what's now called the flipped classroom model. And so those efforts eventually sort of started to interplay with each other and created a tremendous sense of excitement and energy within the Stanford community about the potential of online teaching and led in the fall of 2011 to the launch of the first Stanford MOOCs. By the way, MOOCs, it's probably impossible that people don't know, but it's, I guess, massive. Open online courses. Open online courses. We did not come up with the acronym. I'm not particularly fond of the acronym, but it is what it is. It is what it is. Big bang is not a great term for the start of the universe, but it is what it is. Probably so. So anyway, so those courses launched in the fall of 2011, and there were, within a matter of weeks, with no real publicity campaign, just a New York Times article that went viral, about 100,000 students or more in each of those courses. And I remember this conversation that Andrew and I had. We were just like, wow, there's this real need here. And I think we both felt like, sure, we were accomplished academics and we could go back and go back to our labs, write more papers. But if we did that, then this wouldn't happen. And it seemed too important not to happen. And so we spent a fair bit of time debating, do we wanna do this as a Stanford effort, kind of building on what we'd started? Do we wanna do this as a for profit company? Do we wanna do this as a nonprofit? And we decided ultimately to do it as we did with Coursera. And so, you know, we started really operating as a company at the beginning of 2012. And the rest is history. But how did you, was that really surprising to you? How did you at that time and at this time make sense of this need for sort of global education you mentioned that you felt that, wow, the popularity indicates that there's a hunger for sort of globalization of learning. 
I think there is a hunger for learning that, you know, globalization is part of it, but I think it's just a hunger for learning. The world has changed in the last 50 years. It used to be that you finished college, you got a job, by and large, the skills that you learned in college were pretty much what got you through the rest of your job history. And yeah, you learn some stuff, but it wasn't a dramatic change. Today, we're in a world where the skills that you need for a lot of jobs, they didn't even exist when you went to college. And the jobs, and many of the jobs that existed when you went to college don't even exist today or are dying. So part of that is due to AI, but not only. And we need to find a way of keeping people, giving people access to the skills that they need today. And I think that's really what's driving a lot of this hunger. So I think if we even take a step back, for you, all of this started in trying to think of new ways to teach or to, new ways to sort of organize the material and present the material in a way that would help the education process, the pedagogy, yeah. So what have you learned about effective education from this process of playing, of experimenting with different ideas? So we learned a number of things. Some of which I think could translate back and have translated back effectively to how people teach on campus. And some of which I think are more specific to people who learn online, more sort of people who learn as part of their daily life. So we learned, for instance, very quickly that short is better. So people who are especially in the workforce can't do a 15 week semester long course. They just can't fit that into their lives. Sure, can you describe the shortness of what? The entirety, so every aspect, so the little lecture, the lecture's short, the course is short. Both. We started out, the first online education efforts were actually MIT's OpenCourseWare initiatives. And that was recording of classroom lectures and, Hour and a half or something like that, yeah. And that didn't really work very well. I mean, some people benefit. I mean, of course they did, but it's not really a very palatable experience for someone who has a job and three kids and they need to run errands and such. They can't fit 15 weeks into their life and the hour and a half is really hard. So we learned very quickly. I mean, we started out with short video modules and over time we made them shorter because we realized that 15 minutes was still too long. If you wanna fit in when you're waiting in line for your kid's doctor's appointment, it's better if it's five to seven. We learned that 15 week courses don't work and you really wanna break this up into shorter units so that there is a natural completion point, gives people a sense of they're really close to finishing something meaningful. They can always come back and take part two and part three. We also learned that compressing the content works really well because if some people that pace works well and for others, they can always rewind and watch again. And so people have the ability to then learn at their own pace. And so that flexibility, the brevity and the flexibility are both things that we found to be very important. We learned that engagement during the content is important and the quicker you give people feedback, the more likely they are to be engaged. 
Hence the introduction of these, which actually was an intuition that I had going in and was then validated using data, that introducing some of these sort of little micro quizzes into the lectures really helps. Self-graded, as in automatically graded, assessments really helped too because it gives people feedback. See, there you are. So all of these are valuable. And then we learned a bunch of other things too. We did some really interesting experiments, for instance, on gender bias and how having a female role model as an instructor can change the balance of men to women in terms of, especially in STEM courses. And you could do that online by doing AB testing in ways that would be really difficult to do on campus. Oh, that's exciting. But so the shortness, the compression, I mean, that's actually, so that probably is true for all good editing is always just compressing the content, making it shorter. So that puts a lot of burden on the creator of the, the instructor and the creator of the educational content. Probably most lectures at MIT or Stanford could be five times shorter if enough preparation was put in. So maybe people might disagree with that, but like the crispness, the clarity that a lot of the, like Coursera delivers is, how much effort does that take? So first of all, let me say that it's not clear that that crispness would work as effectively in a face to face setting because people need time to absorb the material. And so you need to at least pause and give people a chance to reflect and maybe practice. And that's what MOOCs do is that they give you these chunks of content and then ask you to practice with it. And that's where I think some of the newer pedagogy that people are adopting in face to face teaching that have to do with interactive learning and such can be really helpful. But both those approaches, whether you're doing that type of methodology in online teaching or in that flipped classroom, interactive teaching. What's that, sorry to pause, what's flipped classroom? Flipped classroom is a way in which online content is used to supplement face to face teaching where people watch the videos perhaps and do some of the exercises before coming to class. And then when they come to class, it's actually to do much deeper problem solving oftentimes in a group. But any one of those different pedagogies that are beyond just standing there and droning on in front of the classroom for an hour and 15 minutes require a heck of a lot more preparation. And so it's one of the challenges I think that people have that we had when trying to convince instructors to teach on Coursera. And it's part of the challenges that pedagogy experts on campus have in trying to get faculty to teach differently is that it's actually harder to teach that way than it is to stand there and drone. Do you think MOOCs will replace in person education, or become the majority of the way people learn in the future? Again, the future could be very far away, but where's the trend going do you think? So I think it's a nuanced and complicated answer. I don't think MOOCs will replace face to face teaching. I think learning is in many cases a social experience. And even at Coursera, we had people who naturally formed study groups, even when they didn't have to, to just come and talk to each other. And we found that that actually benefited their learning in very important ways. So there was more success among learners who had those study groups than among ones who didn't. 
So I don't think it's just gonna, oh, we're all gonna just suddenly learn online with a computer and no one else in the same way that recorded music has not replaced live concerts. But I do think that especially when you are thinking about continuing education, the stuff that people get when they're traditional, whatever high school, college education is done, and they yet have to maintain their level of expertise and skills in a rapidly changing world, I think people will consume more and more educational content in this online format because going back to school for formal education is not an option for most people. Briefly, it might be a difficult question to ask, but there's a lot of people fascinated by artificial intelligence, by machine learning, by deep learning. Is there a recommendation for the next year or for a lifelong journey of somebody interested in this? How do they begin? How do they enter that learning journey? I think the important thing is first to just get started. And there's plenty of online content that one can get for both the core foundations of mathematics and statistics and programming. And then from there to machine learning, I would encourage people not to skip to quickly pass the foundations because I find that there's a lot of people who learn machine learning, whether it's online or on campus without getting those foundations. And they basically just turn the crank on existing models in ways that A, don't allow for a lot of innovation and an adjustment to the problem at hand, but also B, are sometimes just wrong and they don't even realize that their application is wrong because there's artifacts that they haven't fully understood. So I think the foundations, machine learning is an important step. And then actually start solving problems, try and find someone to solve them with because especially at the beginning, it's useful to have someone to bounce ideas off and fix mistakes that you make and you can fix mistakes that they make, but then just find practical problems, whether it's in your workplace or if you don't have that, Kaggle competitions or such are a really great place to find interesting problems and just practice. Practice. Perhaps a bit of a romanticized question, but what idea in deep learning do you find, have you found in your journey the most beautiful or surprising or interesting? Perhaps not just deep learning, but AI in general, statistics. I'm gonna answer with two things. One would be the foundational concept of end to end training, which is that you start from the raw data and you train something that is not like a single piece, but rather towards the actual goal that you're looking to. From the raw data to the outcome, like no details in between. Well, not no details, but the fact that you, I mean, you could certainly introduce building blocks that were trained towards other tasks. I'm actually coming to that in my second half of the answer, but it doesn't have to be like a single monolithic blob in the middle. Actually, I think that's not ideal, but rather the fact that at the end of the day, you can actually train something that goes all the way from the beginning to the end. And the other one that I find really compelling is the notion of learning a representation that in its turn, even if it was trained to another task, can potentially be used as a much more rapid starting point to solving a different task. And that's, I think, reminiscent of what makes people successful learners. 
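As a rough illustration of that representation-reuse idea, here is a minimal transfer learning sketch, assuming PyTorch; the layer sizes, class count, and stand-in data are placeholders of mine, not anything from the conversation. A backbone trained for one task is frozen, and only a small new head is trained for the next task.

```python
# Minimal sketch of transfer learning: reuse a representation trained on one
# task as a rapid starting point for another. Assumes PyTorch is installed.
import torch
import torch.nn as nn

# Pretend this backbone was already trained end to end on some source task.
backbone = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
)

# Freeze the learned representation.
for p in backbone.parameters():
    p.requires_grad = False

# New head for the target task (10 classes here is a placeholder choice).
head = nn.Linear(64, 10)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    """One update on the target task, touching only the new head."""
    features = backbone(x)          # reused representation
    logits = head(features)
    loss = loss_fn(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random stand-in data.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
print(train_step(x, y))
```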
It's something that is relatively new in the machine learning space. I think it's underutilized even relative to today's capabilities, but more and more of how do we learn sort of reusable representation? And so end to end and transfer learning. Yeah. Is it surprising to you that neural networks are able to, in many cases, do these things? Is it maybe taken back to when you first would dive deep into neural networks or in general, even today, is it surprising that neural networks work at all and work wonderfully to do this kind of raw end to end and end to end learning and even transfer learning? I think I was surprised by how well when you have large enough amounts of data, it's possible to find a meaningful representation in what is an exceedingly high dimensional space. And so I find that to be really exciting and people are still working on the math for that. There's more papers on that every year. And I think it would be really cool if we figured that out, but that to me was a surprise because in the early days when I was starting my way in machine learning and the data sets were rather small, I think we believed, I believed that you needed to have a much more constrained and knowledge rich search space to really make, to really get to a meaningful answer. And I think it was true at the time. What I think is still a question is will a completely knowledge free approach where there's no prior knowledge going into the construction of the model, is that gonna be the solution or not? It's not actually the solution today in the sense that the architecture of a convolutional neural network that's used for images is actually quite different to the type of network that's used for language and yet different from the one that's used for speech or biology or any other application. There's still some insight that goes into the structure of the network to get the right performance. Will you be able to come up with a universal learning machine? I don't know. I wonder if there's always has to be some insight injected somewhere or whether it can converge. So you've done a lot of interesting work with probabilistic graphical models in general, Bayesian deep learning and so on. Can you maybe speak high level, how can learning systems deal with uncertainty? One of the limitations I think of a lot of machine learning models is that they come up with an answer and you don't know how much you can believe that answer. And oftentimes the answer is actually quite poorly calibrated relative to its uncertainties. Even if you look at where the confidence that comes out of say the neural network at the end, and you ask how much more likely is an answer of 0.8 versus 0.9, it's not really in any way calibrated to the actual reliability of that network and how true it is. And the further away you move from the training data, the more, not only the more wrong the network is, often it's more wrong and more confident in its wrong answer. And that is a serious issue in a lot of application areas. So when you think for instance, about medical diagnosis as being maybe an epitome of how problematic this can be, if you were training your network on a certain set of patients and a certain patient population, and I have a patient that is an outlier and there's no human that looks at this, and that patient is put into a neural network and your network not only gives a completely incorrect diagnosis, but is supremely confident in its wrong answer, you could kill people. 
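One common way to quantify the miscalibration being described is a simple expected calibration error. The sketch below is an illustration with toy numbers, not anything from the conversation: it bins predictions by confidence and compares each bin's average confidence with its accuracy, so a model whose 0.9 answers are right only 60 percent of the time shows a large gap.

```python
# Sketch: expected calibration error (ECE) for a classifier's confidences.
# Pure NumPy; the bin count and the data are illustrative placeholders.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence; compare mean confidence to accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.sum() == 0:
            continue
        avg_conf = confidences[mask].mean()
        accuracy = correct[mask].mean()
        ece += mask.mean() * abs(avg_conf - accuracy)
    return ece

# Toy example: a model that answers with confidence 0.9 but is right only
# 60% of the time is overconfident, and that shows up as a large gap.
conf = np.array([0.9, 0.9, 0.9, 0.9, 0.9, 0.6, 0.6, 0.6, 0.6, 0.6])
hit = np.array([1, 0, 1, 0, 1, 1, 1, 0, 1, 1])
print(expected_calibration_error(conf, hit))
```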
So I think creating more of an understanding of how do you produce networks that are calibrated in their uncertainty and can also say, you know what, I give up. I don't know what to say about this particular data instance because I've never seen something that's sufficiently like it before. I think it's going to be really important in mission critical applications, especially ones where human life is at stake and that includes medical applications, but it also includes automated driving because you'd want the network to be able to say, you know what, I have no idea what this blob is that I'm seeing in the middle of the road. So I'm just going to stop because I don't want to potentially run over a pedestrian that I don't recognize. Is there good mechanisms, ideas of how to allow learning systems to provide that uncertainty along with their predictions? Certainly people have come up with mechanisms that involve Bayesian deep learning, deep learning that involves Gaussian processes. I mean, there's a slew of different approaches that people have come up with. There's methods that use ensembles of networks trained with different subsets of data or different random starting points. Those are actually sometimes surprisingly good at creating a sort of set of how confident or not you are in your answer. It's very much an area of open research. Let's cautiously venture back into the land of philosophy and speaking of AI systems providing uncertainty, somebody like Stuart Russell believes that as we create more and more intelligence systems, it's really important for them to be full of self doubt because if they're given more and more power, we want the way to maintain human control over AI systems or human supervision, which is true. Like you just mentioned with autonomous vehicles, it's really important to get human supervision when the car is not sure because if it's really confident in cases when it can get in trouble, it's gonna be really problematic. So let me ask about sort of the questions of AGI and human level intelligence. I mean, we've talked about curing diseases, which is sort of fundamental thing we can have an impact today, but AI people also dream of both understanding and creating intelligence. Is that something you think about? Is that something you dream about? Is that something you think is within our reach to be thinking about as computer scientists? Well, boy, let me tease apart different parts of that question. The worst question. Yeah, it's a multi part question. So let me start with the feasibility of AGI. Then I'll talk about the timelines a little bit and then talk about, well, what controls does one need when thinking about protections in the AI space? So, I think AGI obviously is a longstanding dream that even our early pioneers in the space had, the Turing test and so on are the earliest discussions of that. We're obviously closer than we were 70 or so years ago, but I think it's still very far away. I think machine learning algorithms today are really exquisitely good pattern recognizers in very specific problem domains where they have seen enough training data to make good predictions. You take a machine learning algorithm and you move it to a slightly different version of even that same problem, far less one that's different and it will just completely choke. So I think we're nowhere close to the versatility and flexibility of even a human toddler in terms of their ability to context switch and solve different problems using a single knowledge base, single brain. 
So am I desperately worried about the machines taking over the universe and starting to kill people because they want to have more power? I don't think so. Well, so to pause on that, so you kind of intuited that super intelligence is a very difficult thing to achieve. Even intelligence. Intelligence, intelligence. Super intelligence, we're not even close to intelligence. Even just the greater abilities of generalization of our current systems. But we haven't answered all the parts and we'll take another. I'm getting to the second part. Okay, but maybe another tangent you can also pick up is can we get in trouble with much dumber systems? Yes, and that is exactly where I was going. So just to wrap up on the threats of AGI, I think that it seems to me a little early today to figure out protections against a human level or superhuman level intelligence where we don't even see the skeleton of what that would look like. So it seems that it's very speculative on how to protect against that. But we can definitely and have gotten into trouble on much dumber systems. And a lot of that has to do with the fact that the systems that we're building are increasingly complex, increasingly poorly understood. And there's ripple effects that are unpredictable in changing little things that can have dramatic consequences on the outcome. And by the way, that's not unique to artificial intelligence. I think artificial intelligence exacerbates that, brings it to a new level. But heck, our electric grid is really complicated. The software that runs our financial markets is really complicated. And we've seen those ripple effects translate to dramatic negative consequences, like for instance, financial crashes that have to do with feedback loops that we didn't anticipate. So I think that's an issue that we need to be thoughtful about in many places, artificial intelligence being one of them. And I think it's really important that people are thinking about ways in which we can have better interpretability of systems, better tests for, for instance, measuring the extent to which a machine learning system that was trained in one set of circumstances, how well does it actually work in a very different set of circumstances where you might say, for instance, well, I'm not gonna be able to test my automated vehicle in every possible city, village, weather condition and so on. But if you trained it on this set of conditions and then tested it on 50 or a hundred others that were quite different from the ones that you trained it on and it worked, then that gives you confidence that the next 50 that you didn't test it on might also work. So effectively it's testing for generalizability. So I think there's ways that we should be constantly thinking about to validate the robustness of our systems. I think it's very different from the let's make sure robots don't take over the world. And then the other place where I think we have a threat, which is also important for us to think about is the extent to which technology can be abused. So like any really powerful technology, machine learning can be very much used badly as well as to good. And that goes back to many other technologies that have come up with when people invented projectile missiles and it turned into guns and people invented nuclear power and it turned into nuclear bombs. And I think honestly, I would say that to me, gene editing and CRISPR is at least as dangerous as technology if used badly than as machine learning. 
You could create really nasty viruses and such using gene editing that you would be really careful about. So anyway, that's something that we need to be really thoughtful about whenever we have any really powerful new technology. Yeah, and in the case of machine learning is adversarial machine learning. So all the kinds of attacks like security almost threats and there's a social engineering with machine learning algorithms. And there's face recognition and big brother is watching you and there's the killer drones that can potentially go and targeted execution of people in a different country. One can argue that bombs are not necessarily that much better, but people wanna kill someone, they'll find a way to do it. So in general, if you look at trends in the data, there's less wars, there's less violence, there's more human rights. So we've been doing overall quite good as a human species. Are you optimistic? Surprisingly sometimes. Are you optimistic? Maybe another way to ask is do you think most people are good and fundamentally we tend towards a better world, which is underlying the question, will machine learning with gene editing ultimately land us somewhere good? Are you optimistic? I think by and large, I'm optimistic. I think that most people mean well, that doesn't mean that most people are altruistic do gooders, but I think most people mean well, but I think it's also really important for us as a society to create social norms where doing good and being perceived well by our peers are positively correlated. I mean, it's very easy to create dysfunctional norms in emotional societies. There's certainly multiple psychological experiments as well as sadly real world events where people have devolved to a world where being perceived well by your peers is correlated with really atrocious, often genocidal behaviors. So we really want to make sure that we maintain a set of social norms where people know that to be a successful member of society, you want to be doing good. And one of the things that I sometimes worry about is that some societies don't seem to necessarily be moving in the forward direction in that regard where it's not necessarily the case that being a good person is what makes you be perceived well by your peers. And I think that's a really important thing for us as a society to remember. It's really easy to degenerate back into a universe where it's okay to do really bad stuff and still have your peers think you're amazing. It's fun to ask a world class computer scientist and engineer a ridiculously philosophical question like what is the meaning of life? Let me ask, what gives your life meaning? Or what is the source of fulfillment, happiness, joy, purpose? When we were starting Coursera in the fall of 2011, that was right around the time that Steve Jobs passed away. And so the media was full of various famous quotes that he uttered and one of them that really stuck with me because it resonated with stuff that I'd been feeling for even years before that is that our goal in life should be to make a dent in the universe. So I think that to me, what gives my life meaning is that I would hope that when I am lying there on my deathbed and looking at what I'd done in my life that I can point to ways in which I have left the world a better place than it was when I entered it. This is something I tell my kids all the time because I also think that the burden of that is much greater for those of us who were born to privilege. 
And in some ways I was, I mean, I wasn't born super wealthy or anything like that, but I grew up in an educated family with parents who loved me and took care of me and I had a chance at a great education and I always had enough to eat. So I was in many ways born to privilege more than the vast majority of humanity. And my kids I think are even more so born to privilege than I was fortunate enough to be. And I think it's really important that especially for those of us who have that opportunity that we use our lives to make the world a better place. I don't think there's a better way to end it. Daphne, it was an honor to talk to you. Thank you so much for talking today. Thank you. Thanks for listening to this conversation with Daphne Koller and thank you to our presenting sponsor, Cash App. Please consider supporting the podcast by downloading Cash App and using code LEXPODCAST. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at lexfridman. And now let me leave you with some words from Hippocrates, a physician from ancient Greece who's considered to be the father of medicine. Wherever the art of medicine is loved, there's also a love of humanity. Thank you for listening and hope to see you next time.
Daphne Koller: Biomedicine and Machine Learning | Lex Fridman Podcast #93
The following is a conversation with Ilya Sutskever, cofounder and chief scientist of OpenAI, one of the most cited computer scientists in history with over 165,000 citations, and to me, one of the most brilliant and insightful minds ever in the field of deep learning. There are very few people in this world who I would rather talk to and brainstorm with about deep learning, intelligence, and life in general than Ilya, on and off the mic. This was an honor and a pleasure. This conversation was recorded before the outbreak of the pandemic. For everyone feeling the medical, psychological, and financial burden of this crisis, I'm sending love your way. Stay strong, we're in this together, we'll beat this thing. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at lexfridman, spelled F R I D M A N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, invest in the stock market with as little as $1. Since Cash App allows you to buy Bitcoin, let me mention that cryptocurrency in the context of the history of money is fascinating. I recommend Ascent of Money as a great book on this history. Both the book and audio book are great. Debits and credits on ledgers started around 30,000 years ago. The US dollar was created over 200 years ago, and Bitcoin, the first decentralized cryptocurrency, released just over 10 years ago. So given that history, cryptocurrency is still very much in its early days of development, but it's still aiming to and just might redefine the nature of money. So again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you get $10 and Cash App will also donate $10 to FIRST, an organization that is helping advance robotics and STEM education for young people around the world. And now here's my conversation with Ilya Sutskever. You were one of the three authors, with Alex Krizhevsky and Geoff Hinton, of the famed AlexNet paper that is arguably the paper that marked the big catalytic moment that launched the deep learning revolution. At that time, take us back to that time, what was your intuition about neural networks, about the representational power of neural networks? And maybe you could mention how did that evolve over the next few years up to today, over the 10 years? Yeah, I can answer that question. At some point in about 2010 or 2011, I connected two facts in my mind. Basically, the realization was this, at some point we realized that we can train very large, I shouldn't say very, tiny by today's standards, but large and deep neural networks end to end with backpropagation. At some point, different people obtained this result. I obtained this result. The first moment in which I realized that deep neural networks are powerful was when James Martens invented the Hessian free optimizer in 2010. And he trained a 10 layer neural network end to end without pre training from scratch. And when that happened, I thought this is it. Because if you can train a big neural network, a big neural network can represent very complicated function. 
Because if you have a neural network with 10 layers, it's as though you allow the human brain to run for some number of milliseconds. Neuron firings are slow. And so in maybe 100 milliseconds, your neurons only fire 10 times. So it's also kind of like 10 layers. And in 100 milliseconds, you can perfectly recognize any object. So I thought, so I already had the idea then that we need to train a very big neural network on lots of supervised data. And then it must succeed because we can find the best neural network. And then there's also theory that if you have more data than parameters, you won't overfit. Today, we know that actually this theory is very incomplete and you won't overfit even if you have less data than parameters, but definitely, if you have more data than parameters, you won't overfit. So the fact that neural networks were heavily overparameterized wasn't discouraging to you? So you were thinking about the theory that the number of parameters, the fact that there's a huge number of parameters is okay? Is it gonna be okay? I mean, there was some evidence before that it was okayish, but the theory was most, the theory was that if you had a big data set and a big neural net, it was going to work. The overparameterization just didn't really figure much as a problem. I thought, well, with images, you're just gonna add some data augmentation and it's gonna be okay. So where was any doubt coming from? The main doubt was, can we train a bigger, will we have enough compute to train a big enough neural net? With backpropagation. Backpropagation I thought would work. The thing which wasn't clear was whether there would be enough compute to get a very convincing result. And then at some point, Alex Krizhevsky wrote these insanely fast CUDA kernels for training convolutional neural nets. And it was, bam, let's do this. Let's get ImageNet and it's gonna be the greatest thing. Was your intuition, most of your intuition from empirical results by you and by others? So like just actually demonstrating that a piece of program can train a 10 layer neural network? Or was there some pen and paper or marker and whiteboard thinking intuition? Like, cause you just connected a 10 layer large neural network to the brain. So you just mentioned the brain. So in your intuition about neural networks does the human brain come into play as an intuition builder? Definitely. I mean, you gotta be precise with these analogies between artificial neural networks and the brain. But there is no question that the brain is a huge source of intuition and inspiration for deep learning researchers since all the way from Rosenblatt in the 60s. Like if you look at the whole idea of a neural network is directly inspired by the brain. You had people like McCulloch and Pitts who were saying, hey, you got these neurons in the brain. And hey, we recently learned about the computer and automata. Can we use some ideas from the computer and automata to design some kind of computational object that's going to be simple, computational and kind of like the brain and they invented the neuron. So they were inspired by it back then. Then you had the convolutional neural network from Fukushima and then later Yann LeCun who said, hey, if you limit the receptive fields of a neural network, it's going to be especially suitable for images as it turned out to be true. So there was a very small number of examples where analogies to the brain were successful. 
And I thought, well, probably an artificial neuron is not that different from the brain if you squint hard enough. So let's just assume it is and roll with it. So now we're at a time where deep learning is very successful. So let us squint less and say, let's open our eyes and say, what do you see as an interesting difference from the human brain? Now, I know you're probably not an expert, neither a neuroscientist nor a biologist, but loosely speaking, what's the difference between the human brain and artificial neural networks that's interesting to you for the next decade or two? That's a good question to ask. What is an interesting difference between the neurons, between the brain and our artificial neural networks? So I feel like today, artificial neural networks, so we all agree that there are certain dimensions in which the human brain vastly outperforms our models. But I also think that there are some ways in which our artificial neural networks have a number of very important advantages over the brain. Looking at the advantages versus disadvantages is a good way to figure out what is the important difference. So the brain uses spikes, which may or may not be important. Yeah, it's a really interesting question. Do you think it's important or not? That's one big architectural difference between artificial neural networks and the brain. It's hard to tell, but my prior is not very high and I can say why. There are people who are interested in spiking neural networks. And basically what they figured out is that they need to simulate the non spiking neural networks in spikes. And that's how they're gonna make them work. If you don't simulate the non spiking neural networks in spikes, it's not going to work because the question is why should it work? And that connects to questions around back propagation and questions around deep learning. You've got this giant neural network. Why should it work at all? Why should the learning rule work at all? It's not a self evident question, especially if you, let's say if you were just starting in the field and you read the very early papers, you can say, hey, people are saying, let's build neural networks. That's a great idea because the brain is a neural network. So it would be useful to build neural networks. Now let's figure out how to train them. It should be possible to train them probably, but how? And so the big idea is the cost function. That's the big idea. The cost function is a way of measuring the performance of the system according to some measure. By the way, that is a big, actually let me think, is that one, a difficult idea to arrive at and how big of an idea is that? That there's a single cost function. Sorry, let me take a pause. Is supervised learning a difficult concept to come to? I don't know. All concepts are very easy in retrospect. Yeah, that's right, it seems trivial now, but I, because the reason I asked that, and we'll talk about it, is there other things? Are there things that don't necessarily have a cost function, maybe have many cost functions or maybe have dynamic cost functions or maybe a totally different kind of architecture? Because we have to think like that in order to arrive at something new, right? So the only, so the good examples of things which don't have clear cost functions are GANs. Right. And a GAN, you have a game. 
So instead of thinking of a cost function, where you wanna optimize, where you know that you have an algorithm, gradient descent, which will optimize the cost function, and then you can reason about the behavior of your system in terms of what it optimizes. With a GAN, you say, I have a game and I'll reason about the behavior of the system in terms of the equilibrium of the game. But it's all about coming up with these mathematical objects that help us reason about the behavior of our system. Right, that's really interesting. Yeah, so GAN is the only one, it's kind of a, the cost function is emergent from the comparison. It's, I don't know if it has a cost function. I don't know if it's meaningful to talk about the cost function of a GAN. It's kind of like the cost function of biological evolution or the cost function of the economy. It's, you can talk about regions to which it will go towards, but I don't think, I don't think the cost function analogy is the most useful. So if evolution doesn't, that's really interesting. So if evolution doesn't really have a cost function, like a cost function based on its, something akin to our mathematical conception of a cost function, then do you think cost functions in deep learning are holding us back? Yeah, so you just kind of mentioned that cost function is a nice first profound idea. Do you think that's a good idea? Do you think it's an idea we'll go past? So self play starts to touch on that a little bit in reinforcement learning systems. That's right. Self play and also ideas around exploration where you're trying to take actions that surprise a predictor. I'm a big fan of cost functions. I think cost functions are great and they serve us really well. And I think that whenever we can do things with cost functions, we should. And you know, maybe there is a chance that we will come up with some, yet another profound way of looking at things that will involve cost functions in a less central way. But I don't know, I think cost functions are, I mean, I would not bet against cost functions. Are there other things about the brain that pop into your mind that might be different and interesting for us to consider in designing artificial neural networks? So we talked about spiking a little bit. I mean, one thing which may potentially be useful, I think people, neuroscientists have figured out something about the learning rule of the brain or I'm talking about spike timing dependent plasticity and it would be nice if some people would just study that in simulation. Wait, sorry, spike timing dependent plasticity? Yeah, that's right. What's that? STDP. It's a particular learning rule that uses spike timing to figure out how to determine how to update the synapses. So it's kind of like if a synapse fires into the neuron before the neuron fires, then it strengthens the synapse, and if the synapse fires into the neuron shortly after the neuron fired, then it weakens the synapse. Something along this line. I'm 90% sure it's right, so if I said something wrong here, don't get too angry. But you sounded brilliant while saying it. But the timing, that's one thing that's missing. The temporal dynamics is not captured. I think that's like a fundamental property of the brain, the timing of the signals. Well, you have recurrent neural networks. But you think of that as this, I mean, that's a very crude, simplified, what's that called? There's a clock, I guess, to recurrent neural networks. 
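For what it's worth, the pairwise spike-timing rule just described can be written down in a few lines. This is a minimal sketch; the time constants, learning rates, and spike times are arbitrary placeholders, and real STDP models are considerably richer.

```python
# Sketch of a pairwise spike-timing-dependent plasticity (STDP) update.
# If the presynaptic spike precedes the postsynaptic spike, the synapse is
# strengthened; if it follows, the synapse is weakened. Constants are
# illustrative only.
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in milliseconds)."""
    dt = t_post - t_pre
    if dt > 0:      # pre fired before post: potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:    # pre fired after post: depression
        return -a_minus * math.exp(dt / tau)
    return 0.0

w = 0.5
w += stdp_delta_w(t_pre=10.0, t_post=15.0)   # pre leads post: w goes up
w += stdp_delta_w(t_pre=30.0, t_post=22.0)   # pre lags post: w goes down
print(w)
```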
It's, this seems like the brain is the general, the continuous version of that, the generalization where all possible timings are possible, and then within those timings is contained some information. You think recurrent neural networks, the recurrence in recurrent neural networks can capture the same kind of phenomena as the timing that seems to be important for the brain, in the firing of neurons in the brain? I mean, I think recurrent neural networks are amazing, and they can do, I think they can do anything we'd want them to, we'd want a system to do. Right now, recurrent neural networks have been superseded by transformers, but maybe one day they'll make a comeback, maybe they'll be back, we'll see. Let me, on a small tangent, say, do you think they'll be back? So, so much of the breakthroughs recently that we'll talk about on natural language processing and language modeling has been with transformers that don't emphasize recurrence. Do you think recurrence will make a comeback? Well, some kind of recurrence, I think very likely. Recurrent neural networks, as they're typically thought of for processing sequences, I think it's also possible. What is, to you, a recurrent neural network? In generally speaking, I guess, what is a recurrent neural network? You have a neural network which maintains a high dimensional hidden state, and then when an observation arrives, it updates its high dimensional hidden state through its connections in some way. So do you think, that's what expert systems did, right? Symbolic AI, the knowledge based, growing a knowledge base is maintaining a hidden state, which is its knowledge base, and is growing it by sequential processing. Do you think of it more generally in that way, or is it simply, is it the more constrained form of a hidden state with certain kind of gating units that we think of as today with LSTMs and that? I mean, the hidden state is technically what you described there, the hidden state that goes inside the LSTM or the RNN or something like this. But then what should be contained, if you want to make the expert system analogy, I'm not, I mean, you could say that the knowledge is stored in the connections, and then the short term processing is done in the hidden state. Yes, could you say that? So sort of, do you think there's a future of building large scale knowledge bases within the neural networks? Definitely. So we're gonna pause on that confidence, because I want to explore that. Well, let me zoom back out and ask, back to the history of ImageNet. Neural networks have been around for many decades, as you mentioned. What do you think were the key ideas that led to their success, that ImageNet moment and beyond, the success in the past 10 years? Okay, so the question is, to make sure I didn't miss anything, the key ideas that led to the success of deep learning over the past 10 years. Exactly, even though the fundamental thing behind deep learning has been around for much longer. So the key idea about deep learning, or rather the key fact about deep learning before deep learning started to be successful, is that it was underestimated. People who worked in machine learning simply didn't think that neural networks could do much. People didn't believe that large neural networks could be trained. People thought that, well, there was lots of, there was a lot of debate going on in machine learning about what are the right methods and so on. And people were arguing because there were no, there was no way to get hard facts. 
And by that, I mean, there were no benchmarks which were truly hard that if you do really well on them, then you can say, look, here's my system. That's when you switch from, that's when this field becomes a little bit more of an engineering field. So in terms of deep learning, to answer the question directly, the ideas were all there. The thing that was missing was a lot of supervised data and a lot of compute. Once you have a lot of supervised data and a lot of compute, then there is a third thing which is needed as well. And that is conviction. Conviction that if you take the right stuff, which already exists, and apply and mix it with a lot of data and a lot of compute, that it will in fact work. And so that was the missing piece. It was, you had the, you needed the data, you needed the compute, which showed up in terms of GPUs, and you needed the conviction to realize that you need to mix them together. So that's really interesting. So I guess the presence of compute and the presence of supervised data allowed the empirical evidence to do the convincing of the majority of the computer science community. So I guess there's a key moment with Jitendra Malik and Alex Alyosha Efros who were very skeptical, right? And then there's a Jeffrey Hinton that was the opposite of skeptical. And there was a convincing moment. And I think ImageNet had served as that moment. That's right. And they represented this kind of, were the big pillars of computer vision community, kind of the wizards got together, and then all of a sudden there was a shift. And it's not enough for the ideas to all be there and the compute to be there, it's for it to convince the cynicism that existed. It's interesting that people just didn't believe for a couple of decades. Yeah, well, but it's more than that. It's kind of, when put this way, it sounds like, well, those silly people who didn't believe, what were they missing? But in reality, things were confusing because neural networks really did not work on anything. And they were not the best method on pretty much anything as well. And it was pretty rational to say, yeah, this stuff doesn't have any traction. And that's why you need to have these very hard tasks which produce undeniable evidence. And that's how we make progress. And that's why the field is making progress today because we have these hard benchmarks which represent true progress. And so, and this is why we are able to avoid endless debate. So incredibly you've contributed some of the biggest recent ideas in AI in computer vision, language, natural language processing, reinforcement learning, sort of everything in between, maybe not GANs. But there may not be a topic you haven't touched. And of course, the fundamental science of deep learning. What is the difference to you between vision, language, and as in reinforcement learning, action, as learning problems? And what are the commonalities? Do you see them as all interconnected? Are they fundamentally different domains that require different approaches? Okay, that's a good question. Machine learning is a field with a lot of unity, a huge amount of unity. In fact. What do you mean by unity? Like overlap of ideas? Overlap of ideas, overlap of principles. In fact, there's only one or two or three principles which are very, very simple. And then they apply in almost the same way, in almost the same way to the different modalities, to the different problems. 
And that's why today, when someone writes a paper on improving optimization of deep learning and vision, it improves the different NLP applications and it improves the different reinforcement learning applications. Reinforcement learning. So I would say that computer vision and NLP are very similar to each other. Today they differ in that they have slightly different architectures. We use transformers in NLP and we use convolutional neural networks in vision. But it's also possible that one day this will change and everything will be unified with a single architecture. Because if you go back a few years ago in natural language processing, there were a huge number of architectures for every different tiny problem had its own architecture. Today, there's just one transformer for all those different tasks. And if you go back in time even more, you had even more and more fragmentation and every little problem in AI had its own little subspecialization and sub, you know, little set of collection of skills, people who would know how to engineer the features. Now it's all been subsumed by deep learning. We have this unification. And so I expect vision to become unified with natural language as well. Or rather, I shouldn't say expect, I think it's possible. I don't wanna be too sure because I think on the convolutional neural net is very computationally efficient. RL is different. RL does require slightly different techniques because you really do need to take action. You really need to do something about exploration. Your variance is much higher. But I think there is a lot of unity even there. And I would expect, for example, that at some point there will be some broader unification between RL and supervised learning where somehow the RL will be making decisions to make the supervised learning go better. And it will be, I imagine, one big black box and you just throw, you know, you shovel things into it and it just figures out what to do with whatever you shovel at it. I mean, reinforcement learning has some aspects of language and vision combined almost. There's elements of a long term memory that you should be utilizing and there's elements of a really rich sensory space. So it seems like the union of the two or something like that. I'd say something slightly differently. I'd say that reinforcement learning is neither, but it naturally interfaces and integrates with the two of them. Do you think action is fundamentally different? So yeah, what is interesting about, what is unique about policy of learning to act? Well, so one example, for instance, is that when you learn to act, you are fundamentally in a non stationary world because as your actions change, the things you see start changing. You experience the world in a different way. And this is not the case for the more traditional static problem where you have some distribution and you just apply a model to that distribution. You think it's a fundamentally different problem or is it just a more difficult generalization of the problem of understanding? I mean, it's a question of definitions almost. There is a huge amount of commonality for sure. You take gradients, you try, you take gradients. We try to approximate gradients in both cases. In the case of reinforcement learning, you have some tools to reduce the variance of the gradients. You do that. There's lots of commonality. Use the same neural net in both cases. You compute the gradient, you apply Adam in both cases. 
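A small sketch of that shared machinery, under placeholder assumptions (PyTorch, toy shapes, a REINFORCE-style gradient as the RL stand-in): the supervised step and the policy-gradient step differ only in how the scalar loss is formed, and both end in the same backward pass and Adam update.

```python
# Sketch: supervised learning and policy-gradient RL share the same loop:
# form a scalar loss, backpropagate, apply Adam. Assumes PyTorch; shapes
# and data below are toy placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

def supervised_step(x, y):
    # Standard cross-entropy classification loss.
    loss = nn.functional.cross_entropy(net(x), y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

def policy_gradient_step(states, actions, returns):
    # REINFORCE: weight log-probabilities of taken actions by their returns.
    log_probs = torch.log_softmax(net(states), dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(chosen * returns).mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

x = torch.randn(8, 4); y = torch.randint(0, 2, (8,))
s = torch.randn(8, 4); a = torch.randint(0, 2, (8,)); r = torch.randn(8)
print(supervised_step(x, y), policy_gradient_step(s, a, r))
```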
So, I mean, there's lots in common for sure, but there are some small differences which are not completely insignificant. It's really just a matter of your point of view, what frame of reference, how much do you wanna zoom in or out as you look at these problems? Which problem do you think is harder? So people like Noam Chomsky believe that language is fundamental to everything. So it underlies everything. Do you think language understanding is harder than visual scene understanding or vice versa? I think that asking if a problem is hard is slightly wrong. I think the question is a little bit wrong and I wanna explain why. So what does it mean for a problem to be hard? Okay, the non interesting dumb answer to that is there's a benchmark and there's a human level performance on that benchmark and how is the effort required to reach the human level benchmark. So from the perspective of how much until we get to human level on a very good benchmark. Yeah, I understand what you mean by that. So what I was going to say that a lot of it depends on, once you solve a problem, it stops being hard and that's always true. And so whether something is hard or not depends on what our tools can do today. So you say today through human level, language understanding and visual perception are hard in the sense that there is no way of solving the problem completely in the next three months. So I agree with that statement. Beyond that, my guess would be as good as yours, I don't know. Oh, okay, so you don't have a fundamental intuition about how hard language understanding is. I think, I know I changed my mind. I'd say language is probably going to be harder. I mean, it depends on how you define it. Like if you mean absolute top notch, 100% language understanding, I'll go with language. But then if I show you a piece of paper with letters on it, is that, you see what I mean? You have a vision system, you say it's the best human level vision system. I show you, I open a book and I show you letters. Will it understand how these letters form into word and sentences and meaning? Is this part of the vision problem? Where does vision end and language begin? Yeah, so Chomsky would say it starts at language. So vision is just a little example of the kind of a structure and fundamental hierarchy of ideas that's already represented in our brains somehow that's represented through language. But where does vision stop and language begin? That's a really interesting question. So one possibility is that it's impossible to achieve really deep understanding in either images or language without basically using the same kind of system. So you're going to get the other for free. I think it's pretty likely that yes, if we can get one, our machine learning is probably that good that we can get the other. But I'm not 100% sure. And also, I think a lot of it really does depend on your definitions. Definitions of? Of like perfect vision. Because reading is vision, but should it count? Yeah, to me, so my definition is if a system looked at an image and then a system looked at a piece of text and then told me something about that and I was really impressed. That's relative. You'll be impressed for half an hour and then you're gonna say, well, I mean, all the systems do that, but here's the thing they don't do. Yeah, but I don't have that with humans. Humans continue to impress me. Is that true? Well, the ones, okay, so I'm a fan of monogamy. So I like the idea of marrying somebody, being with them for several decades. 
So I believe in the fact that yes, it's possible to have somebody continuously giving you pleasurable, interesting, witty new ideas, friends. Yeah, I think so. They continue to surprise you. The surprise, it's that injection of randomness. It seems to be a nice source of, yeah, continued inspiration, like the wit, the humor. I think, yeah, that would be, it's a very subjective test, but I think if you have enough humans in the room. Yeah, I understand what you mean. Yeah, I feel like I misunderstood what you meant by impressing you. I thought you meant to impress you with its intelligence, with how well it understands an image. I thought you meant something like, I'm gonna show it a really complicated image and it's gonna get it right. And you're gonna say, wow, that's really cool. Our systems of January 2020 have not been doing that. Yeah, no, I think it all boils down to like the reason people click like on stuff on the internet, which is like, it makes them laugh. So it's like humor or wit or insight. I'm sure we'll get that as well. So forgive the romanticized question, but looking back to you, what is the most beautiful or surprising idea in deep learning or AI in general you've come across? So I think the most beautiful thing about deep learning is that it actually works. And I mean it, because you got these ideas, you got the little neural network, you got the back propagation algorithm. And then you've got some theories as to, this is kind of like the brain. So maybe if you make it large, if you make the neural network large and you train it on a lot of data, then it will do the same function that the brain does. And it turns out to be true, that's crazy. And now we just train these neural networks and you make them larger and they keep getting better. And I find it unbelievable. I find it unbelievable that this whole AI stuff with neural networks works. Have you built up an intuition of why? Are there a lot of bits and pieces of intuitions, of insights of why this whole thing works? I mean, some, definitely. While we know that optimization, we now have good, we've had lots of empirical, huge amounts of empirical reasons to believe that optimization should work on most problems we care about. Do you have insights of why? So you just said empirical evidence. Is most of your sort of empirical evidence kind of convinces you? It's like evolution is empirical. It shows you that, look, this evolutionary process seems to be a good way to design organisms that survive in their environment, but it doesn't really get you to the insights of how the whole thing works. I think a good analogy is physics. You know how you say, hey, let's do some physics calculation and come up with some new physics theory and make some prediction. But then you got around the experiment. You know, you got around the experiment, it's important. So it's a bit the same here, except that maybe sometimes the experiment came before the theory. But it still is the case. You know, you have some data and you come up with some prediction. You say, yeah, let's make a big neural network. Let's train it. And it's going to work much better than anything before it. And it will in fact continue to get better as you make it larger. And it turns out to be true. That's amazing when a theory is validated like this. It's not a mathematical theory. It's more of a biological theory almost. So I think there are not terrible analogies between deep learning and biology. I would say it's like the geometric mean of biology and physics. 
That's deep learning. The geometric mean of biology and physics. I think I'm going to need a few hours to wrap my head around that. Because just to find the geometric, just to find the set of what biology represents. Well, in biology, things are really complicated. Theories are really, really, it's really hard to have good predictive theory. And in physics, the theories are too good. In physics, people make these super precise theories which make these amazing predictions. And in machine learning, we're kind of in between. Kind of in between, but it'd be nice if machine learning somehow helped us discover the unification of the two as opposed to sort of the in between. But you're right. That's, you're kind of trying to juggle both. So do you think there are still beautiful and mysterious properties in neural networks that are yet to be discovered? Definitely. I think that we are still massively underestimating deep learning. What do you think it will look like? Like what, if I knew, I would have done it, you know? So, but if you look at all the progress from the past 10 years, I would say most of it, I would say there've been a few cases where some were things that felt like really new ideas showed up, but by and large it was every year we thought, okay, deep learning goes this far. Nope, it actually goes further. And then the next year, okay, now this is peak deep learning. We are really done. Nope, it goes further. It just keeps going further each year. So that means that we keep underestimating, we keep not understanding it. It has surprising properties all the time. Do you think it's getting harder and harder? To make progress? Need to make progress? It depends on what you mean. I think the field will continue to make very robust progress for quite a while. I think for individual researchers, especially people who are doing research, it can be harder because there is a very large number of researchers right now. I think that if you have a lot of compute, then you can make a lot of very interesting discoveries, but then you have to deal with the challenge of managing a huge compute cluster to run your experiments. It's a little bit harder. So I'm asking all these questions that nobody knows the answer to, but you're one of the smartest people I know, so I'm gonna keep asking. So let's imagine all the breakthroughs that happen in the next 30 years in deep learning. Do you think most of those breakthroughs can be done by one person with one computer? Sort of in the space of breakthroughs, do you think compute will be, compute and large efforts will be necessary? I mean, I can't be sure. When you say one computer, you mean how large? You're clever. I mean, one GPU. I see. I think it's pretty unlikely. I think it's pretty unlikely. I think that there are many, the stack of deep learning is starting to be quite deep. If you look at it, you've got all the way from the ideas, the systems to build the data sets, the distributed programming, the building the actual cluster, the GPU programming, putting it all together. So now the stack is getting really deep and I think it becomes, it can be quite hard for a single person to become, to be world class in every single layer of the stack. What about what, like, Vladimir Vapnik really insists on, which is taking MNIST and trying to learn from very few examples. So being able to learn more efficiently. Do you think that's, there'll be breakthroughs in that space that would, may not need the huge compute? 
I think there will be a large number of breakthroughs in general that will not need a huge amount of compute. So maybe I should clarify that. I think that some breakthroughs will require a lot of compute and I think building systems which actually do things will require a huge amount of compute. That one is pretty obvious. If you want to do X and X requires a huge neural net, you gotta get a huge neural net. But I think there will be lots of, I think there is lots of room for very important work being done by small groups and individuals. Can you maybe sort of on the topic of the science of deep learning, talk about one of the recent papers that you released, the Deep Double Descent, where bigger models and more data hurt. I think it's a really interesting paper. Can you describe the main idea? Yeah, definitely. So what happened is that some, over the years, some small number of researchers noticed that it is kind of weird that when you make the neural network larger, it works better and it seems to go in contradiction with statistical ideas. And then some people made an analysis showing that actually you got this double descent bump. And what we've done was to show that double descent occurs for pretty much all practical deep learning systems. And that it'll be also, so can you step back? What's the X axis and the Y axis of a double descent plot? Okay, great. So you can look, you can do things like, you can take your neural network and you can start increasing its size slowly while keeping your data set fixed. So if you increase the size of the neural network slowly, and if you don't do early stopping, that's a pretty important detail, then when the neural network is really small, you make it larger, you get a very rapid increase in performance. Then you continue to make it larger. And at some point performance will get worse. And it gets the worst exactly at the point at which it achieves zero training error, precisely zero training loss. And then as you make it larger, it starts to get better again. And it's kind of counterintuitive because you'd expect deep learning phenomena to be monotonic. And it's hard to be sure what it means, but it also occurs in the case of linear classifiers. And the intuition basically boils down to the following. When you have a large data set and a small model, then small, tiny random, so basically what is overfitting? Overfitting is when your model is somehow very sensitive to the small random unimportant stuff in your data set. In the training data. In the training data set, precisely. So if you have a small model and you have a big data set, and there may be some random thing, some training cases are randomly in the data set and others may not be there, but the small model is kind of insensitive to this randomness because it's the same, there is pretty much no uncertainty about the model when the data set is large. So, okay. So at the very basic level to me, it is the most surprising thing that neural networks don't overfit every time very quickly before ever being able to learn anything. The huge number of parameters. So here is, so there is one way, okay. So maybe, so let me try to give the explanation and maybe that will be, that will work. So you've got a huge neural network. Let's suppose you've got, you have a huge neural network, you have a huge number of parameters. And now let's pretend everything is linear, which is not, let's just pretend. Then there is this big subspace where your neural network achieves zero error. 
And SGD is going to find approximately the point. That's right. Approximately the point with the smallest norm in that subspace. Okay. And that can also be proven to be insensitive to the small randomness in the data when the dimensionality is high. But when the dimensionality of the data is equal to the dimensionality of the model, then there is a one to one correspondence between all the data sets and the models. So small changes in the data set actually lead to large changes in the model. And that's why performance gets worse. So this is the best explanation more or less. So then it would be good for the model to have more parameters, so to be bigger than the data. That's right. But only if you don't early stop. If you introduce early stopping as a form of regularization, you can make the double descent bump almost completely disappear. What is early stopping? Early stopping is when you train your model and you monitor your validation performance. And then if at some point validation performance starts to get worse, you say, okay, let's stop training. We are good enough. So the magic happens after that moment. So you don't want to do the early stopping. Well, if you don't do the early stopping, you get the very pronounced double descent. Do you have any intuition why this happens? Double descent? Oh, sorry, early stopping? No, the double descent. So the... Well, yeah, so I try... Let's see. The intuition is basically this, that when the data set has as many degrees of freedom as the model, then there is a one to one correspondence between them. And so small changes to the data set lead to noticeable changes in the model. So your model is very sensitive to all the randomness. It is unable to discard it. Whereas it turns out that when you have a lot more data than parameters or a lot more parameters than data, the resulting solution will be insensitive to small changes in the data set. Oh, so it's able to, that's nicely put, discard the small changes, the randomness. The randomness, exactly. The spurious correlation which you don't want. Geoff Hinton suggested we need to throw away back propagation. We already kind of talked about this a little bit, but he suggested that we need to throw away back propagation and start over. I mean, of course some of that is a little bit wit and humor, but what do you think? What could be an alternative method of training neural networks? Well, the thing that he said precisely is that to the extent that you can't find back propagation in the brain, it's worth seeing if we can learn something from how the brain learns. But back propagation is very useful and we should keep using it. Oh, you're saying that once we discover the mechanism of learning in the brain, or any aspects of that mechanism, we should also try to implement that in neural networks? If it turns out that we can't find back propagation in the brain. If we can't find back propagation in the brain. Well, so I guess your answer to that is back propagation is pretty damn useful. So why are we complaining? I mean, I personally am a big fan of back propagation. I think it's a great algorithm because it solves an extremely fundamental problem, which is finding a neural circuit subject to some constraints. And I don't see that problem going away. So that's why I really, I think it's pretty unlikely that we'll have anything which is going to be dramatically different. It could happen, but I wouldn't bet on it right now. So let me ask a sort of big picture question. Do you think neural networks can be made to reason?
Why not? Well, if you look, for example, at AlphaGo or AlphaZero, the neural network of AlphaZero plays Go, which we all agree is a game that requires reasoning, better than 99.9% of all humans. Just the neural network, without the search, just the neural network itself. Doesn't that give us an existence proof that neural networks can reason? To push back and disagree a little bit, we all agree that Go is reasoning. I think I agree, I don't think it's a trivial, so obviously reasoning like intelligence is a loose gray area term a little bit. Maybe you disagree with that. But yes, I think it has some of the same elements of reasoning. Reasoning is almost like akin to search, right? There's a sequential element of reasoning of stepwise consideration of possibilities and sort of building on top of those possibilities in a sequential manner until you arrive at some insight. So yeah, I guess playing Go is kind of like that. And when you have a single neural network doing that without search, it's kind of like that. So there's an existence proof in a particular constrained environment that a process akin to what many people call reasoning exists, but more general kind of reasoning. So off the board. There is one other existence proof. Oh boy, which one? Us humans? Yes. Okay, all right, so do you think the architecture that will allow neural networks to reason will look similar to the neural network architectures we have today? I think it will. I think, well, I don't wanna make two overly definitive statements. I think it's definitely possible that the neural networks that will produce the reasoning breakthroughs of the future will be very similar to the architectures that exist today. Maybe a little bit more recurrent, maybe a little bit deeper. But these neural nets are so insanely powerful. Why wouldn't they be able to learn to reason? Humans can reason. So why can't neural networks? So do you think the kind of stuff we've seen neural networks do is a kind of just weak reasoning? So it's not a fundamentally different process. Again, this is stuff nobody knows the answer to. So when it comes to our neural networks, the thing which I would say is that neural networks are capable of reasoning. But if you train a neural network on a task which doesn't require reasoning, it's not going to reason. This is a well known effect where the neural network will solve the problem that you pose in front of it in the easiest way possible. Right, that takes us to one of the brilliant sort of ways you've described neural networks, which is you've referred to neural networks as the search for small circuits and maybe general intelligence as the search for small programs, which I found as a metaphor very compelling. Can you elaborate on that difference? Yeah, so the thing which I said precisely was that if you can find the shortest program that outputs the data at your disposal, then you will be able to use it to make the best prediction possible. And that's a theoretical statement which can be proved mathematically. Now, you can also prove mathematically that finding the shortest program which generates some data is not a computable operation. No finite amount of compute can do this. So then with neural networks, neural networks are the next best thing that actually works in practice. We are not able to find the best, the shortest program which generates our data, but we are able to find a small, but now that statement should be amended, even a large circuit which fits our data in some way. 
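A minimal numpy sketch of the overparameterized picture that runs through both the double descent exchange above and the "large circuit" amendment that follows. This is an illustrative toy under strong simplifying assumptions (a linear model, Gaussian data, arbitrary sizes and noise level); the minimum-norm fit stands in for what gradient descent converges to in the linear case, and none of the names or numbers come from the conversation itself.

```python
# Toy illustration: the minimum-norm least-squares solution is far more sensitive
# to resampling the training data at the "interpolation threshold" (parameters ~= examples)
# than in either the underparameterized or heavily overparameterized regime.
import numpy as np

rng = np.random.default_rng(0)

def min_norm_fit(X, y):
    # Moore-Penrose pseudoinverse: for an underdetermined system this returns the
    # minimum-norm interpolating solution, which is what gradient descent from a
    # zero initialization converges to in the linear case.
    return np.linalg.pinv(X) @ y

def sensitivity(n_train, n_features, n_trials=50):
    # How much do the learned weights move when the training set is resampled?
    w_true = rng.normal(size=n_features)
    deltas = []
    for _ in range(n_trials):
        X1 = rng.normal(size=(n_train, n_features))
        y1 = X1 @ w_true + 0.1 * rng.normal(size=n_train)
        X2 = rng.normal(size=(n_train, n_features))
        y2 = X2 @ w_true + 0.1 * rng.normal(size=n_train)
        deltas.append(np.linalg.norm(min_norm_fit(X1, y1) - min_norm_fit(X2, y2)))
    return float(np.mean(deltas))

n_train = 40
for n_features in [10, 40, 200]:  # under-, exactly-, and over-parameterized
    print(n_features, sensitivity(n_train, n_features))
```

On a typical run the weights swing around most violently when the number of features equals the number of training examples, which is exactly the interpolation threshold where the double descent bump peaks in the discussion above.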
Well, I think what you meant by the small circuit is the smallest needed circuit. Well, the thing which I would change now, back then I really haven't fully internalized the overparameterized results. The things we know about overparameterized neural nets, now I would phrase it as a large circuit whose weights contain a small amount of information, which I think is what's going on. If you imagine the training process of a neural network as you slowly transmit entropy from the dataset to the parameters, then somehow the amount of information in the weights ends up being not very large, which would explain why they generalize so well. So the large circuit might be one that's helpful for the generalization. Yeah, something like this. But do you see it important to be able to try to learn something like programs? I mean, if we can, definitely. I think it's kind of, the answer is kind of yes, if we can do it, we should do things that we can do it. It's the reason we are pushing on deep learning, the fundamental reason, the root cause is that we are able to train them. So in other words, training comes first. We've got our pillar, which is the training pillar. And now we're trying to contort our neural networks around the training pillar. We gotta stay trainable. This is an invariant we cannot violate. And so being trainable means starting from scratch, knowing nothing, you can actually pretty quickly converge towards knowing a lot. Or even slowly. But it means that given the resources at your disposal, you can train the neural net and get it to achieve useful performance. Yeah, that's a pillar we can't move away from. That's right. Because if you say, hey, let's find the shortest program, well, we can't do that. So it doesn't matter how useful that would be. We can't do it. So we won't. So do you think, you kind of mentioned that the neural networks are good at finding small circuits or large circuits. Do you think then the matter of finding small programs is just the data? No. So the, sorry, not the size or the type of data. Sort of ask, giving it programs. Well, I think the thing is that right now, finding, there are no good precedents of people successfully finding programs really well. And so the way you'd find programs is you'd train a deep neural network to do it basically. Right. Which is the right way to go about it. But there's not good illustrations of that. It hasn't been done yet. But in principle, it should be possible. Can you elaborate a little bit, what's your answer in principle? Put another way, you don't see why it's not possible. Well, it's kind of like more, it's more a statement of, I think that it's, I think that it's unwise to bet against deep learning. And if it's a cognitive function that humans seem to be able to do, then it doesn't take too long for some deep neural net to pop up that can do it too. Yeah, I'm there with you. I've stopped betting against neural networks at this point because they continue to surprise us. What about long term memory? Can neural networks have long term memory? Something like knowledge bases. So being able to aggregate important information over long periods of time that would then serve as useful sort of representations of state that you can make decisions by, so have a long term context based on which you're making the decision. So in some sense, the parameters already do that. The parameters are an aggregation of the neural, of the entirety of the neural nets experience, and so they count as long term knowledge. 
And people have trained various neural nets to act as knowledge bases and, you know, investigated with, people have investigated language models as knowledge bases. So there is work there. Yeah, but in some sense, do you think in every sense, do you think there's a, it's all just a matter of coming up with a better mechanism of forgetting the useless stuff and remembering the useful stuff? Because right now, I mean, there's not been mechanisms that do remember really long term information. What do you mean by that precisely? Precisely, I like the word precisely. So I'm thinking of the kind of compression of information the knowledge bases represent. Sort of creating a, now I apologize for my sort of human centric thinking about what knowledge is, because neural networks aren't interpretable necessarily with the kind of knowledge they have discovered. But a good example for me is knowledge bases, being able to build up over time something like the knowledge that Wikipedia represents. It's a really compressed, structured knowledge base. Obviously not the actual Wikipedia or the language, but like a semantic web, the dream that semantic web represented, so it's a really nice compressed knowledge base or something akin to that in the noninterpretable sense as neural networks would have. Well, the neural networks would be noninterpretable if you look at their weights, but their outputs should be very interpretable. Okay, so yeah, how do you make very smart neural networks like language models interpretable? Well, you ask them to generate some text and the text will generally be interpretable. Do you find that the epitome of interpretability, like can you do better? Like can you add, because you can't, okay, I'd like to know what does it know and what doesn't it know? I would like the neural network to come up with examples where it's completely dumb and examples where it's completely brilliant. And the only way I know how to do that now is to generate a lot of examples and use my human judgment. But it would be nice if a neural network had some self awareness about it. Yeah, 100%, I'm a big believer in self awareness and I think that, I think neural net self awareness will allow for things like the capabilities, like the ones you described, like for them to know what they know and what they don't know and for them to know where to invest to increase their skills most optimally. And to your question of interpretability, there are actually two answers to that question. One answer is, you know, we have the neural net so we can analyze the neurons and we can try to understand what the different neurons and different layers mean. And you can actually do that and OpenAI has done some work on that. But there is a different answer, which is that, I would say that's the human centric answer where you say, you know, you look at a human being, you can't read, how do you know what a human being is thinking? You ask them, you say, hey, what do you think about this? What do you think about that? And you get some answers. The answers you get are sticky in the sense you already have a mental model. You already have a mental model of that human being. You already have an understanding of like a big conception of that human being, how they think, what they know, how they see the world and then everything you ask, you're adding onto that. And that stickiness seems to be, that's one of the really interesting qualities of the human being is that information is sticky. 
You don't, you seem to remember the useful stuff, aggregate it well and forget most of the information that's not useful, that process. But that's also pretty similar to the process that neural networks do. It's just that neural networks are much crappier at this time. It doesn't seem to be fundamentally that different. But just to stick on reasoning for a little longer, you said, why not? Why can't I reason? What's a good impressive feat, benchmark to you of reasoning that you'll be impressed by if neural networks were able to do? Is that something you already have in mind? Well, I think writing really good code, I think proving really hard theorems, solving open ended problems with out of the box solutions. And sort of theorem type, mathematical problems. Yeah, I think those ones are a very natural example as well. If you can prove an unproven theorem, then it's hard to argue you don't reason. And so by the way, and this comes back to the point about the hard results, if you have machine learning, deep learning as a field is very fortunate because we have the ability to sometimes produce these unambiguous results. And when they happen, the debate changes, the conversation changes. It's a converse, we have the ability to produce conversation changing results. Conversation, and then of course, just like you said, people kind of take that for granted and say that wasn't actually a hard problem. Well, I mean, at some point we'll probably run out of hard problems. Yeah, that whole mortality thing is kind of a sticky problem that we haven't quite figured out. Maybe we'll solve that one. I think one of the fascinating things in your entire body of work, but also the work at OpenAI recently, one of the conversation changes has been in the world of language models. Can you briefly kind of try to describe the recent history of using neural networks in the domain of language and text? Well, there's been lots of history. I think the Elman network was a small, tiny recurrent neural network applied to language back in the 80s. So the history is really, you know, fairly long at least. And the thing that started, the thing that changed the trajectory of neural networks and language is the thing that changed the trajectory of all deep learning and that's data and compute. So suddenly you move from small language models, which learn a little bit, and with language models in particular, there's a very clear explanation for why they need to be large to be good, because they're trying to predict the next word. So when you don't know anything, you'll notice very, very broad strokes, surface level patterns, like sometimes there are characters and there is a space between those characters. You'll notice this pattern. And you'll notice that sometimes there is a comma and then the next character is a capital letter. You'll notice that pattern. Eventually you may start to notice that there are certain words occur often. You may notice that spellings are a thing. You may notice syntax. And when you get really good at all these, you start to notice the semantics. You start to notice the facts. But for that to happen, the language model needs to be larger. So that's, let's linger on that, because that's where you and Noam Chomsky disagree. 
So you think we're actually taking incremental steps, a sort of larger network, larger compute will be able to get to the semantics, to be able to understand language without what Noam likes to sort of think of as a fundamental understanding of the structure of language, like imposing your theory of language onto the learning mechanism. So you're saying the learning, you can learn from raw data, the mechanism that underlies language. Well, I think it's pretty likely, but I also want to say that I don't really know precisely what Chomsky means when he talks about it. You said something about imposing your structure of language. I'm not 100% sure what he means, but empirically it seems that when you inspect those larger language models, they exhibit signs of understanding the semantics whereas the smaller language models do not. We've seen that a few years ago when we did work on the sentiment neuron. We trained a small, you know, smallish LSTM to predict the next character in Amazon reviews. And we noticed that when you increase the size of the LSTM from 500 LSTM cells to 4,000 LSTM cells, then one of the neurons starts to represent the sentiment of the article, sorry, of the review. Now, why is that? Sentiment is a pretty semantic attribute. It's not a syntactic attribute. And for people who might not know, I don't know if that's a standard term, but sentiment is whether it's a positive or a negative review. That's right. Is the person happy with something or is the person unhappy with something? And so here we had very clear evidence that a small neural net does not capture sentiment while a large neural net does. And why is that? Well, our theory is that at some point you run out of syntax to model, so you gotta focus on something else. And with size, you quickly run out of syntax to model and then you really start to focus on the semantics would be the idea. That's right. And so I don't wanna imply that our models have complete semantic understanding because that's not true, but they definitely are showing signs of semantic understanding, partial semantic understanding, but the smaller models do not show those signs. Can you take a step back and say, what is GPT2, which is one of the big language models that was the conversation changer in the past couple of years? Yeah, so GPT2 is a transformer with one and a half billion parameters that was trained on about 40 billion tokens of text which were obtained from web pages that were linked to from Reddit articles with more than three upvotes. And what's a transformer? The transformer, it's the most important advance in neural network architectures in recent history. What is attention maybe too? Cause I think that's an interesting idea, not necessarily sort of technically speaking, but the idea of attention versus maybe what recurrent neural networks represent. Yeah, so the thing is the transformer is a combination of multiple ideas simultaneously of which attention is one. Do you think attention is the key? No, it's a key, but it's not the key. The transformer is successful because it is the simultaneous combination of multiple ideas. And if you were to remove either idea, it would be much less successful. So the transformer uses a lot of attention, but attention existed for a few years. So that can't be the main innovation. The transformer is designed in such a way that it runs really fast on the GPU. And that makes a huge amount of difference. This is one thing. The second thing is that transformer is not recurrent.
And that is really important too, because it is more shallow and therefore much easier to optimize. So in other words, it uses attention, it is a really great fit to the GPU and it is not recurrent, so therefore less deep and easier to optimize. And the combination of those factors make it successful. So now it makes great use of your GPU. It allows you to achieve better results for the same amount of compute. And that's why it's successful. Were you surprised how well transformers worked and GPT2 worked? So you worked on language. You've had a lot of great ideas before transformers came about in language. So you got to see the whole set of revolutions before and after. Were you surprised? Yeah, a little. A little? I mean, it's hard to remember because you adapt really quickly, but it definitely was surprising. It definitely was. In fact, you know what? I'll retract my statement. It was pretty amazing. It was just amazing to see it generate this text. And you know, you gotta keep in mind that at that time we've seen all this progress in GANs, and the samples produced by GANs were just amazing. You have these realistic faces, but text hasn't really moved that much. And suddenly we moved from, you know, whatever GANs were in 2015 to the best, most amazing GANs in one step. And that was really stunning. Even though theory predicted, yeah, you train a big language model, of course you should get this, but then to see it with your own eyes, it's something else. And yet we adapt really quickly. And now there's sort of some cognitive scientists writing articles saying that GPT2 models don't truly understand language. So we adapt quickly to how amazing the fact that they're able to model the language so well is. So what do you think is the bar? For what? For impressing us that it... I don't know. Do you think that bar will continuously be moved? Definitely. I think when you start to see really dramatic economic impact, that's when I think that's in some sense the next barrier. Because right now, if you think about the work in AI, it's really confusing. It's really hard to know what to make of all these advances. It's kind of like, okay, you got an advance and now you can do more things and you've got another improvement and you've got another cool demo. At some point, I think people who are outside of AI, they can no longer distinguish this progress anymore. So we were talking offline about translating Russian to English and how there's a lot of brilliant work in Russian that the rest of the world doesn't know about. That's true for Chinese, it's true for a lot of scientific and just artistic work in general. Do you think translation is the place where we're going to see sort of economic big impact? I don't know. I think there is a huge number of... I mean, first of all, I wanna point out that translation already today is huge. I think billions of people interact with big chunks of the internet primarily through translation. So translation is already huge and it's hugely positive too. I think self driving is going to be hugely impactful and that's, it's unknown exactly when it happens, but again, I would not bet against deep learning, so I... So there's deep learning in general, but you think this... Deep learning for self driving. Yes, deep learning for self driving. But I was talking about sort of language models. I see. Just to check. Veered off a little bit. Just to check, you're not seeing a connection between driving and language. No, no. Okay. Or rather both use neural nets.
That'd be a poetic connection. I think there might be some, like you said, there might be some kind of unification towards a kind of multitask transformers that can take on both language and vision tasks. That'd be an interesting unification. Now let's see, what can I ask about GPT2 more? It's simple. There's not much to ask. It's, you take a transformer, you make it bigger, you give it more data, and suddenly it does all those amazing things. Yeah, one of the beautiful things is that GPT, the transformers are fundamentally simple to explain, to train. Do you think bigger will continue to show better results in language? Probably. Sort of like what are the next steps with GPT2, do you think? I mean, I think for sure seeing what larger versions can do is one direction. Also, I mean, there are many questions. There's one question which I'm curious about and that's the following. So right now GPT2, so we feed it all this data from the internet, which means that it needs to memorize all those random facts about everything in the internet. And it would be nice if the model could somehow use its own intelligence to decide what data it wants to accept and what data it wants to reject. Just like people. People don't learn all data indiscriminately. We are super selective about what we learn. And I think this kind of active learning, I think would be very nice to have. Yeah, listen, I love active learning. So let me ask, does the selection of data, can you just elaborate that a little bit more? Do you think the selection of data is, like I have this kind of sense that the optimization of how you select data, so the active learning process is going to be a place for a lot of breakthroughs, even in the near future? Because there haven't been many breakthroughs there that are public. I feel like there might be private breakthroughs that companies keep to themselves because the fundamental problem has to be solved if you want to solve self driving, if you want to solve a particular task. What do you think about the space in general? Yeah, so I think that for something like active learning, or in fact, for any kind of capability, like active learning, the thing that it really needs is a problem. It needs a problem that requires it. It's very hard to do research about the capability if you don't have a task, because then what's going to happen is that you will come up with an artificial task, get good results, but not really convince anyone. Right, like we're now past the stage where getting a result on MNIST, some clever formulation of MNIST will convince people. That's right, in fact, you could quite easily come up with a simple active learning scheme on MNIST and get a 10x speed up, but then, so what? And I think that with active learning, the need, active learning will naturally arise as problems that require it pop up. That's how I would, that's my take on it. There's another interesting thing that OpenAI has brought up with GPT2, which is when you create a powerful artificial intelligence system, and it was unclear what kind of detrimental, once you release GPT2, what kind of detrimental effect it will have. Because if you have a model that can generate pretty realistic text, you can start to imagine that it would be used by bots in some way that we can't even imagine. So there's this nervousness about what is possible to do. So you did a really kind of brave and I think profound thing, which is start a conversation about this.
How do we release powerful artificial intelligence models to the public? If we do it at all, how do we privately discuss with other, even competitors, about how we manage the use of the systems and so on? So from this whole experience, you released a report on it, but in general, are there any insights that you've gathered from just thinking about this, about how you release models like this? I mean, I think that my take on this is that the field of AI has been in a state of childhood. And now it's exiting that state and it's entering a state of maturity. What that means is that AI is very successful and also very impactful. And its impact is not only large, but it's also growing. And so for that reason, it seems wise to start thinking about the impact of our systems before releasing them, maybe a little bit too soon, rather than a little bit too late. And with the case of GPT2, like I mentioned earlier, the results really were stunning. And it seemed plausible, it didn't seem certain, it seemed plausible that something like GPT2 could easily be used to reduce the cost of disinformation. And so there was a question of what's the best way to release it, and a staged release seemed logical. A small model was released, and there was time to see the, many people use these models in lots of cool ways. There've been lots of really cool applications. There haven't been any negative applications that we know of. And so eventually it was released, but also other people replicated similar models. That's an interesting question though, that we know of. So in your view, staged release, is at least part of the answer to the question of how do we, what do we do once we create a system like this? It's part of the answer, yes. Are there any other insights? Like say you don't wanna release the model at all, because it's useful to you for whatever the business is. Well, plenty of people don't release models already. Right, of course, but is there some moral, ethical responsibility when you have a very powerful model to sort of communicate? Like, just as you said, when you had GPT2, it was unclear how much it could be used for misinformation. It's an open question, and getting an answer to that might require that you talk to other really smart people that are outside of your particular group. Have you, please tell me there's some optimistic pathway for people to be able to use this model for people across the world to collaborate on these kinds of cases? Or is it still really difficult from one company to talk to another company? So it's definitely possible. It's definitely possible to discuss these kinds of models with colleagues elsewhere, and to get their take on what to do. How hard is it though? I mean. Do you see that happening? I think that's a place where it's important to gradually build trust between companies. Because ultimately, all the AI developers are building technology which is going to be increasingly more powerful. And so it's, the way to think about it is that ultimately we're all in it together. Yeah, I tend to believe in the better angels of our nature, but I do hope that when you build a really powerful AI system in a particular domain, that you also think about the potential negative consequences of, yeah. It's an interesting and scary possibility that there will be a race for AI development that would push people to close that development, and not share ideas with others. I don't love this. I've been a pure academic for 10 years. I really like sharing ideas and it's fun, it's exciting.
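Picking up the active learning thread from a few exchanges back, the remark that a simple scheme on MNIST could give a 10x speedup, here is a minimal sketch of the standard pool-based, uncertainty-sampling loop that remark alludes to. It assumes scikit-learn, and the dataset, seed set, and labeling budget are arbitrary toy choices rather than anything from the conversation.

```python
# Pool-based active learning with uncertainty sampling: repeatedly label the
# points the current model is least sure about, instead of labeling at random.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Tiny labeled seed set with both classes represented.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

for round_ in range(20):                      # 10 new labels per round
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[pool])[:, 1]
    # Least-confident points: predicted probability closest to 0.5.
    uncertain = np.argsort(np.abs(probs - 0.5))[:10]
    newly_labeled = [pool[i] for i in uncertain]
    labeled += newly_labeled
    pool = [i for i in pool if i not in newly_labeled]
    print(round_, clf.score(X, y))            # rough progress check on all data
```

The point made in the conversation still applies: on a toy task like this the scheme "works," but it only becomes convincing when a real problem actually demands it.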
What do you think it takes to, let's talk about AGI a little bit. What do you think it takes to build a system of human level intelligence? We talked about reasoning, we talked about long term memory, but in general, what does it take, do you think? Well, I can't be sure. But I think the deep learning, plus maybe another, plus maybe another small idea. Do you think self play will be involved? So you've spoken about the powerful mechanism of self play where systems learn by sort of exploring the world in a competitive setting against other entities that are similarly skilled as them, and so incrementally improve in this way. Do you think self play will be a component of building an AGI system? Yeah, so what I would say, to build AGI, I think it's going to be deep learning plus some ideas. And I think self play will be one of those ideas. I think that that is a very, self play has this amazing property that it can surprise us in truly novel ways. For example, like we, I mean, pretty much every self play system, both our Dota bot, I don't know if, OpenAI had a release about multi agent where you had two little agents who were playing hide and seek. And of course, also AlphaZero. They all produced surprising behaviors. They all produce behaviors that we didn't expect. They are creative solutions to problems. And that seems like an important part of AGI that our systems don't exhibit routinely right now. And so that's why I like this area. I like this direction because of its ability to surprise us. To surprise us. And an AGI system would surprise us fundamentally. Yes. And to be precise, not just a random surprise, but to find the surprising solution to a problem that's also useful. Right. Now, a lot of the self play mechanisms have been used in the game context or at least in the simulation context. How far along the path to AGI do you think will be done in simulation? How much faith, promise do you have in simulation versus having to have a system that operates in the real world? Whether it's the real world of digital real world data or real world like actual physical world of robotics. I don't think it's an either or. I think simulation is a tool and it helps. It has certain strengths and certain weaknesses and we should use it. Yeah, but okay, I understand that. That's true, but one of the criticisms of self play, one of the criticisms of reinforcement learning is one of the, its current power, its current results, while amazing, have been demonstrated in simulated environments or very constrained physical environments. Do you think it's possible to escape them, escape the simulated environments and be able to learn in non simulated environments? Or do you think it's possible to also just simulate in a photo realistic and physics realistic way, the real world in a way that we can solve real problems with self play in simulation? So I think that transfer from simulation to the real world is definitely possible and has been exhibited many times by many different groups. It's been especially successful in vision. Also OpenAI in the summer has demonstrated a robot hand which was trained entirely in simulation in a certain way that allowed for sim to real transfer to occur. Is this for the Rubik's cube? Yeah, that's right. I wasn't aware that was trained in simulation. It was trained in simulation entirely. Really, so it wasn't in the physical, the hand wasn't trained?
No, 100% of the training was done in simulation and the policy that was learned in simulation was trained to be very adaptive. So adaptive that when you transfer it, it could very quickly adapt to the physical world. So the kind of perturbations with the giraffe or whatever the heck it was, those weren't, were those part of the simulation? Well, the simulation was generally, so the simulation was trained to be robust to many different things, but not the kind of perturbations we've had in the video. So it's never been trained with a glove. It's never been trained with a stuffed giraffe. So in theory, these are novel perturbations. Correct, it's not in theory, in practice. Those are novel perturbations? Well, that's okay. That's a clean, small scale, but clean example of a transfer from the simulated world to the physical world. Yeah, and I will also say that I expect the transfer capabilities of deep learning to increase in general. And the better the transfer capabilities are, the more useful simulation will become. Because then you could take, you could experience something in simulation and then learn the moral of the story, which you could then carry with you to the real world. As humans do all the time when they play computer games. So let me ask sort of an embodied question, staying on AGI for a sec. Do you think an AGI system would need to have a body? Would it need to have some of those human elements of self awareness, consciousness, sort of fear of mortality, sort of self preservation in the physical space, which comes with having a body? I think having a body will be useful. I don't think it's necessary, but I think it's very useful to have a body for sure, because you can learn a whole new, you can learn things which cannot be learned without a body. But at the same time, I think that if you don't have a body, you could compensate for it and still succeed. You think so? Yes. Well, there is evidence for this. For example, there are many people who were born deaf and blind and they were able to compensate for the lack of modalities. I'm thinking about Helen Keller specifically. So even if you're not able to physically interact with the world, and if you're not able to, I mean, I actually was getting at, maybe let me ask on the more particular, I'm not sure if it's connected to having a body or not, but the idea of consciousness and a more constrained version of that is self awareness. Do you think an AGI system should have consciousness? We can't define, whatever the heck you think consciousness is. Yeah, hard question to answer, given how hard it is to define it. Do you think it's useful to think about? I mean, it's definitely interesting. It's fascinating. I think it's definitely possible that our systems will be conscious. Do you think that's an emergent thing that just comes from, do you think consciousness could emerge from the representation that's stored within neural networks? So like that it naturally just emerges when you become more and more, you're able to represent more and more of the world? Well, I'd say I'd make the following argument, which is humans are conscious. And if you believe that artificial neural nets are sufficiently similar to the brain, then there should at least exist artificial neural nets that should be conscious too. You're leaning on that existence proof pretty heavily. Okay, so that's the best answer I can give. No, I know, I know, I know.
There's still an open question if there's not some magic in the brain that we're not, I mean, I don't mean a non materialistic magic, but that the brain might be a lot more complicated and interesting than we give it credit for. If that's the case, then it should show up. And at some point we will find out that we can't continue to make progress. But I think it's unlikely. So we talk about consciousness, but let me talk about another poorly defined concept of intelligence. Again, we've talked about reasoning, we've talked about memory. What do you think is a good test of intelligence for you? Are you impressed by the test that Alan Turing formulated with the imitation game with natural language? Is there something in your mind that you will be deeply impressed by if a system was able to do? I mean, lots of things. There's a certain frontier of capabilities today. And there exist things outside of that frontier. And I would be impressed by any such thing. For example, I would be impressed by a deep learning system which solves a very pedestrian task, like machine translation or computer vision task or something which never makes mistake a human wouldn't make under any circumstances. I think that is something which have not yet been demonstrated and I would find it very impressive. Yeah, so right now they make mistakes in different, they might be more accurate than human beings, but they still, they make a different set of mistakes. So my, I would guess that a lot of the skepticism that some people have about deep learning is when they look at their mistakes and they say, well, those mistakes, they make no sense. Like if you understood the concept, you wouldn't make that mistake. And I think that changing that would be, that would inspire me. That would be, yes, this is progress. Yeah, that's a really nice way to put it. But I also just don't like that human instinct to criticize a model is not intelligent. That's the same instinct as we do when we criticize any group of creatures as the other. Because it's very possible that GPT2 is much smarter than human beings at many things. That's definitely true. It has a lot more breadth of knowledge. Yes, breadth of knowledge and even perhaps depth on certain topics. It's kind of hard to judge what depth means, but there's definitely a sense in which humans don't make mistakes that these models do. The same is applied to autonomous vehicles. The same is probably gonna continue being applied to a lot of artificial intelligence systems. We find, this is the annoying thing. This is the process of, in the 21st century, the process of analyzing the progress of AI is the search for one case where the system fails in a big way where humans would not. And then many people writing articles about it. And then broadly, the public generally gets convinced that the system is not intelligent. And we pacify ourselves by thinking it's not intelligent because of this one anecdotal case. And this seems to continue happening. Yeah, I mean, there is truth to that. Although I'm sure that plenty of people are also extremely impressed by the system that exists today. But I think this connects to the earlier point we discussed that it's just confusing to judge progress in AI. Yeah. And you have a new robot demonstrating something. How impressed should you be? And I think that people will start to be impressed once AI starts to really move the needle on the GDP. So you're one of the people that might be able to create an AGI system here. Not you, but you and OpenAI. 
If you do create an AGI system and you get to spend sort of the evening with it, him, her, what would you talk about, do you think? The very first time? First time. Well, the first time I would just ask all kinds of questions and try to get it to make a mistake. And I would be amazed that it doesn't make mistakes and just keep asking broad questions. What kind of questions do you think? Would they be factual or would they be personal, emotional, psychological? What do you think? All of the above. Would you ask for advice? Definitely. I mean, why would I limit myself talking to a system like this? Now, again, let me emphasize the fact that you truly are one of the people that might be in the room where this happens. So let me ask sort of a profound question about, I've just talked to a Stalin historian. I've been talking to a lot of people who are studying power. Abraham Lincoln said, "Nearly all men can stand adversity, but if you want to test a man's character, give him power." I would say the power of the 21st century, maybe the 22nd, but hopefully the 21st, would be the creation of an AGI system and the people who have control, direct possession and control of the AGI system. So what do you think, after spending that evening having a discussion with the AGI system, what do you think you would do? Well, the ideal world I'd like to imagine is one where humanity is like the board members of a company where the AGI is the CEO. So it would be, I would like, the picture which I would imagine is you have some kind of different entities, different countries or cities, and the people that live there vote for what the AGI that represents them should do, and the AGI that represents them goes and does it. I think a picture like that, I find very appealing. You could have multiple AGIs, you would have an AGI for a city, for a country, and there would be multiple AGIs for a city, for a country, and it would be trying to, in effect, take the democratic process to the next level. And the board can always fire the CEO. Essentially, press the reset button, say. Press the reset button. Rerandomize the parameters. But let me sort of, that's actually, okay, that's a beautiful vision, I think, as long as it's possible to press the reset button. Do you think it will always be possible to press the reset button? So I think that it definitely will be possible to build. So you're talking, so the question that I really understand from you is, will humans, will people, have control over the AI systems that they build? Yes. And my answer is, it's definitely possible to build AI systems which will want to be controlled by their humans. Wow, so that's part of their design, it's not just that they can't help but be controlled, but that one of the objectives of their existence is to be controlled. In the same way that human parents generally want to help their children, they want their children to succeed. It's not a burden for them. They are excited to help children and to feed them and to dress them and to take care of them. And I believe with high conviction that the same will be possible for an AGI. It will be possible to program an AGI, to design it in such a way that it will have a similar deep drive that it will be delighted to fulfill. And the drive will be to help humans flourish. But let me take a step back to that moment where you create the AGI system. I think this is a really crucial moment.
And between that moment and the democratic board members with the AGI at the head, there has to be a relinquishing of power. So as George Washington, despite all the bad things he did, one of the big things he did is he relinquished power. He, first of all, didn't want to be president. And even when he became president, he didn't keep just serving indefinitely as most dictators do. Do you see yourself being able to relinquish control over an AGI system, given how much power you can have over the world, at first financial, just make a lot of money, right? And then control by having possession of the AGI system. I'd find it trivial to do that. I'd find it trivial to relinquish this kind of power. I mean, the kind of scenario you are describing sounds terrifying to me. That's all. I would absolutely not want to be in that position. Do you think you represent the majority or the minority of people in the AI community? Well, I mean. It's an open question, an important one. Are most people good is another way to ask it. So I don't know if most people are good, but I think that when it really counts, people can be better than we think. That's beautifully put, yeah. Are there specific mechanisms you can think of for aligning AI values to human values? Is that, do you think about these problems of continued alignment as we develop the AI systems? Yeah, definitely. In some sense, the kind of question which you are asking is, so if I were to translate the question to today's terms, it would be a question about how to get an RL agent that's optimizing a value function which itself is learned. And if you look at humans, humans are like that because the reward function, the value function of humans is not external, it is internal. That's right. And there are definite ideas of how to train a value function. Basically an objective, you know, as objective as possible, perception system that will be trained separately to recognize, to internalize human judgments on different situations. And then that component would then be integrated as the base value function for some more capable RL system. You could imagine a process like this. I'm not saying this is the process, I'm saying this is an example of the kind of thing you could do. So on that topic of the objective functions of human existence, what do you think is the objective function that's implicit in human existence? What's the meaning of life? Oh. I think the question is wrong in some way. I think that the question implies that there is an objective answer which is an external answer, you know, your meaning of life is X. I think what's going on is that we exist and that's amazing. And we should try to make the most of it and try to maximize our own value and enjoyment of a very short time while we do exist. It's funny, because action does require an objective function. It's definitely there in some form, but it's difficult to make it explicit and maybe impossible to make it explicit, I guess is what you're getting at. And that's an interesting fact of an RL environment. Well, but I was making a slightly different point, which is that humans want things and their wants create the drives that cause them to, you know, our wants are our objective functions, our individual objective functions. We can later decide that we want to change, that what we wanted before is no longer good and we want something else.
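Ilya's example of a separately trained component that internalizes human judgments can be made concrete with a toy preference model. What follows is only an illustrative sketch under strong simplifying assumptions, a linear reward, synthetic stand-in "human" labels, and made-up names, not a description of any system discussed here.

```python
# Toy "reward model" fit to pairwise human judgments (Bradley-Terry style).
# A downstream RL policy could then use the learned reward as its base value
# function, which is the integration step described in the conversation.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
hidden_pref = rng.normal(size=dim)           # stand-in for what the human actually prefers

def human_judgment(a, b):
    # Pretend a human says which of two situations (feature vectors) they prefer.
    return 0 if hidden_pref @ a > hidden_pref @ b else 1

pairs = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(500)]
labels = [human_judgment(a, b) for a, b in pairs]

# Fit a linear reward r(x) = w @ x with the logistic preference loss:
# P(a preferred over b) = sigmoid(r(a) - r(b)).
w = np.zeros(dim)
lr = 0.1
for _ in range(200):
    grad = np.zeros(dim)
    for (a, b), y in zip(pairs, labels):
        p = 1.0 / (1.0 + np.exp(-(w @ a - w @ b)))   # predicted P(a preferred)
        grad += (p - (1 - y)) * (a - b)               # gradient of the cross-entropy loss
    w -= lr * grad / len(pairs)

# How well does the learned reward line up with the hidden preference?
cos = w @ hidden_pref / (np.linalg.norm(w) * np.linalg.norm(hidden_pref))
print("alignment with the hidden preference:", float(cos))
```

The interesting part, as the conversation notes, is everything this toy leaves out: where the judgments come from, how they shift over time, and how the downstream agent is kept answerable to them.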
Yeah, but they're so dynamic, there's gotta be some underlying sort of Freudian things, there's like sexual stuff, there's people who think it's the fear of death and there's also the desire for knowledge and you know, all these kinds of things, procreation, sort of all the evolutionary arguments, it seems to be, there might be some kind of fundamental objective function from which everything else emerges, but it seems like it's very difficult to make it explicit. I think there probably is an evolutionary objective function which is to survive and procreate and make sure you make your children succeed. That would be my guess, but it doesn't give an answer to the question of what's the meaning of life. I think you can see how humans are part of this big process, this ancient process. We exist on a small planet and that's it. So given that we exist, try to make the most of it and try to enjoy more and suffer less as much as we can. Let me ask two silly questions about life. One, do you have regrets? Moments that if you went back, you would do differently. And two, are there moments that you're especially proud of that made you truly happy? So I can answer that, I can answer both questions. Of course, there's a huge number of choices and decisions that I've made that with the benefit of hindsight, I wouldn't have made them. And I do experience some regret, but I try to take solace in the knowledge that at the time I did the best I could. And in terms of things that I'm proud of, I'm very fortunate to have done things I'm proud of and they made me happy for some time, but I don't think that that is the source of happiness. So your academic accomplishments, all the papers, you're one of the most cited people in the world. All of the breakthroughs I mentioned in computer vision and language and so on, what is the source of happiness and pride for you? I mean, all those things are a source of pride for sure. I'm very grateful for having done all those things and it was very fun to do them. But happiness comes, but you know, happiness, well, my current view is that happiness comes, to a very large degree, from the way we look at things. You know, you can have a simple meal and be quite happy as a result, or you can talk to someone and be happy as a result as well. Or conversely, you can have a meal and be disappointed that the meal wasn't a better meal. So I think a lot of happiness comes from that, but I'm not sure, I don't want to be too confident. Being humble in the face of the uncertainty seems to be also a part of this whole happiness thing. Well, I don't think there's a better way to end it than meaning of life and discussions of happiness. So Ilya, thank you so much. You've given me a few incredible ideas. You've given the world many incredible ideas. I really appreciate it and thanks for talking today. Yeah, thanks for stopping by, I really enjoyed it. Thanks for listening to this conversation with Ilya Sutskever and thank you to our presenting sponsor, Cash App. Please consider supporting the podcast by downloading Cash App and using the code LEXPodcast. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast, support on Patreon, or simply connect with me on Twitter at Lex Fridman. And now let me leave you with some words from Alan Turing on machine learning. Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child?
If this were then subjected to an appropriate course of education, one would obtain the adult brain. Thank you for listening and hope to see you next time.
Ilya Sutskever: Deep Learning | Lex Fridman Podcast #94
The following is a conversation with Dawn Song, a professor of computer science at UC Berkeley with research interests in computer security. Most recently, with a focus on the intersection between security and machine learning. This conversation was recorded before the outbreak of the pandemic. For everyone feeling the medical, psychological, and financial burden of this crisis, I'm sending love your way. Stay strong. We're in this together. We'll beat this thing. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at lexfridman, spelled F R I D M A N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code lexpodcast. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar. Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of fractional orders is an algorithmic marvel. So big props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up to the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier. So again, if you get Cash App from the App Store or Google Play and use the code lexpodcast, you get $10 and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. And now here's my conversation with Dawn Song. Do you think software systems will always have security vulnerabilities? Let's start at the broad, almost philosophical level. That's a very good question. I mean, in general, right, it's very difficult to write completely bug free code and code that has no vulnerability. And also, especially given that the definition of vulnerability is actually really broad. Essentially any type of attack on the code, you know, you can say that's caused by vulnerabilities. And the nature of attacks is always changing as well? Like new ones are coming up? Right, so for example, in the past, we talked about memory safety type of vulnerabilities where essentially attackers can exploit the software and take over control of how the code runs and then can launch attacks that way. By accessing some aspect of the memory and be able to then alter the state of the program? Exactly, so for example, in the example of a buffer overflow, then the attacker essentially actually causes essentially unintended changes in the state of the program. And then, for example, can then take over control flow of the program and let the program execute code that actually the programmer didn't intend. So the attack can be a remote attack. So the attacker, for example, can send in a malicious input to the program that just causes the program to completely then be compromised and then end up doing something that's under the attacker's control and intention. But that's just one form of attacks and there are other forms of attacks.
Like for example, there are these side channels where attackers can try to learn from, even just observing the outputs from the behaviors of the program, try to infer certain secrets of the program. So essentially, right, the form of attacks is very, very, it's very broad spectrum. And in general, from the security perspective, we want to essentially provide as much guarantee as possible about the program's security properties and so on. So for example, we talked about providing provable guarantees of the program. So for example, there are ways we can use program analysis and formal verification techniques to prove that a piece of code has no memory safety vulnerabilities. What does that look like? What is that proof? Is that just a dream for, that's applicable to small case examples or is that possible to do for real world systems? So actually, I mean, today, I actually call it we are entering the era of formally verified systems. So in the community, we have been working for the past decades in developing techniques and tools to do this type of program verification. And we have dedicated teams that have dedicated, you know, their like years, sometimes even decades of their work in the space. So as a result, we actually have a number of formally verified systems ranging from microkernels to compilers to file systems to certain crypto, you know, libraries and so on. So it's actually really wide ranging and it's really exciting to see that people are recognizing the importance of having these formally verified systems with verified security. So that's great advancement that we see, but on the other hand, I think we do need to take all these in essentially with caution as well in the sense that, just like I said, the type of vulnerabilities is very varied. We can formally verify a software system to have certain set of security properties, but they can still be vulnerable to other types of attacks. And hence, we continue need to make progress in the space. So just a quick, to linger on the formal verification, is that something you can do by looking at the code alone or is it something you have to run the code to prove something? So empirical verification, can you look at the code, just the code? So that's a very good question. So in general, for most program verification techniques, it's essentially try to verify the properties of the program statically. And there are reasons for that too. We can run the code to see, for example, using like in software testing with the fuzzing techniques and also in certain even model checking techniques, you can actually run the code. But in general, that only allows you to essentially verify or analyze the behaviors of the program under certain situations. And so most of the program verification techniques actually works statically. What does statically mean? Without running the code. Without running the code, yep. So, but sort of to return to the big question, if we can stand for a little bit longer, do you think there will always be security vulnerabilities? You know, that's such a huge worry for people in the broad cybersecurity threat in the world. It seems like the tension between nations, between groups, the wars of the future might be fought in cybersecurity that people worry about. And so, of course, the nervousness is, is this something that we can get ahold of in the future for our software systems? So there's a very funny quote saying, security is job security. So, right, I think that essentially answers your question. 
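As a rough sketch of the dynamic side mentioned above (fuzzing, which runs the code on many generated inputs, in contrast to static verification), the toy Python fuzzer below throws random strings at a deliberately fragile parser; the target function and alphabet are made up for illustration.

```python
import random
import string

def parse_record(text: str) -> int:
    """Toy target: parses 'key=value' and returns the value as an int."""
    key, value = text.split("=")     # raises if there is no '=' or more than one
    return int(value)                # raises if the value is not numeric

def fuzz(target, trials: int = 10_000, max_len: int = 12, seed: int = 0):
    """Throw random strings at the target and collect inputs that crash it."""
    random.seed(seed)
    alphabet = string.ascii_letters + string.digits + "=-"
    crashes = []
    for _ in range(trials):
        candidate = "".join(random.choices(alphabet, k=random.randint(0, max_len)))
        try:
            target(candidate)
        except Exception as exc:     # any uncaught exception counts as a "crash"
            crashes.append((candidate, type(exc).__name__))
    return crashes[:5]               # a few sample crashing inputs

print(fuzz(parse_record))
```

Note that, as described above, this only exercises the program's behavior on the inputs it happens to sample, whereas static verification reasons about all executions without running the code.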
Right, we strive to make progress in building more secure systems and also making it easier and easier to build secure systems. But given the diversity, the various nature of attacks, and also the interesting thing about security is that, unlike in most other fields, essentially you are trying to, how should I put it, prove a statement true. But in this case, you are trying to say that there's no attacks. So even just this statement itself is not very well defined, again, given how varied the nature of the attacks can be. And hence there's a challenge of security and also that naturally, essentially, it's almost impossible to say that something, a real world system is 100% no security vulnerabilities. Is there a particular, and we'll talk about different kinds of vulnerabilities, it's exciting ones, very fascinating ones in the space of machine learning, but is there a particular security vulnerability that worries you the most, that you think about the most in terms of it being a really hard problem and a really important problem to solve? So it is very interesting. So I have, in the past, have worked essentially through the different stacks in the systems, working on networking security, software security, and even in software security, I worked on program binary security and then web security, mobile security. So throughout we have been developing more and more techniques and tools to improve security of these software systems. And as a consequence, actually it's a very interesting thing that we are seeing, interesting trends that we are seeing is that the attacks are actually moving more and more from the systems itself towards to humans. So it's moving up the stack. It's moving up the stack. That's fascinating. And also it's moving more and more towards what we call the weakest link. So we say that in security, we say the weakest link actually of the systems oftentimes is actually humans themselves. So a lot of attacks, for example, the attacker either through social engineering or from these other methods, they actually attack the humans and then attack the systems. So we actually have a project that actually works on how to use AI machine learning to help humans to defend against these types of attacks. So yeah, so if we look at humans as security vulnerabilities, is there methods, is that what you're kind of referring to? Is there hope or methodology for patching the humans? I think in the future, this is going to be really more and more of a serious issue because again, for machines, for systems, we can, yes, we can patch them. We can build more secure systems. We can harden them and so on. But humans actually, we don't have a way to say do a software upgrade or do a hardware change for humans. And so for example, right now, we already see different types of attacks. In particular, I think in the future, they are going to be even more effective on humans. So as I mentioned, social engineering attacks, like these phishing attacks, attackers just get humans to provide their passwords. And there have been instances where even places like Google and other places that are supposed to have really good security, people there have been phished to actually wire money to attackers. It's crazy. And then also we talk about this deep fake and fake news. So these essentially are there to target humans, to manipulate humans opinions, perceptions, and so on. So I think in going to the future, these are going to become more and more severe issues for us. Further up the stack. Yes, yes. 
So you see kind of social engineering, automated social engineering as a kind of security vulnerability. Oh, absolutely. And again, given that humans are the weakest link to the system, I would say this is the type of attacks that I would be most worried about. Oh, that's fascinating. Okay, so. And that's why when we talk about the AI side, we also need AI to help humans too. As I mentioned, we have some projects in the space that actually help with that. Can you maybe, can we go there for the defenses? What are some ideas to help humans? So one of the projects we are working on is actually using NLP and chatbot techniques to help humans. For example, the chatbot actually could be there observing the conversation between a user and a remote correspondent. And then the chatbot could be there to try to observe, to see whether the correspondent is potentially an attacker. For example, in some of the phishing attacks, the attacker claims to be a relative of the user, and the relative got lost in London and his wallet has been stolen, he has no money, and he asks the user to wire money, to send money to the attacker, to the correspondent. So then in this case, the chatbot actually could try to recognize that there may be something suspicious going on, that this relates to asking for money to be sent. And also the chatbot could actually pose what we call a challenge and response. If the correspondent claims to be a relative of the user, then the chatbot could automatically generate some kind of challenge to see whether the correspondent knows the appropriate knowledge to prove that he or she actually is the claimed relative of the user. And so in the future, I think these types of technologies actually could help protect users. That's funny. So a chatbot that's kind of focused on looking for the kind of patterns that are usually associated with social engineering attacks, it would be able to then test, sort of do a basic CAPTCHA type of response to see whether the facts or the semantics of the claims you're making are true. Right, right. That's fascinating. Exactly. That's really fascinating. And as we develop more powerful NLP and chatbot techniques, the chatbot could even engage in further conversations with the correspondent. For example, if it turns out to be an attack, then the chatbot can try to engage in conversations with the attacker to try to learn more information from the attacker as well. So it's a very interesting area. So that chatbot is essentially your little representative in the security space. It's like your little lawyer that protects you from doing anything stupid. Right, right, right. That's a fascinating vision for the future. Do you see that broadly applicable across the web? So across all your interactions on the web? Absolutely, right. What about like on social networks, for example? So across all of that, do you see that being implemented in sort of, is that a service that a company would provide, or does every single social network have to implement it themselves? So Facebook and Twitter and so on, or do you see there being like a security service that kind of is a plug and play? That's a very good question. I think, of course, we still have ways to go until the NLP and chatbot techniques can be very effective. But I think once it's powerful enough, I do see that that can be a service either a user can employ or it can be deployed by the platforms.
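A minimal, rule-based sketch of the guardian-chatbot idea described above might look like the following; the trigger phrases and the challenge question are hypothetical, and the actual project presumably uses learned NLP models rather than keyword matching.

```python
# Sketch of a "guardian chatbot": watch a conversation, flag messages that look
# like money-transfer social engineering, and pose a challenge question that
# only the claimed relative should be able to answer. All phrases are made up.

MONEY_PHRASES = ("wire money", "send money", "gift cards", "bank transfer")
IDENTITY_PHRASES = ("it's your cousin", "this is your nephew", "your relative")

CHALLENGE_QUESTIONS = {
    "relative": "What city did we spend last New Year's Eve in?",
}

def analyze_message(text: str):
    lower = text.lower()
    asks_for_money = any(p in lower for p in MONEY_PHRASES)
    claims_identity = any(p in lower for p in IDENTITY_PHRASES)
    if asks_for_money and claims_identity:
        # Suspicious combination: challenge the claimed identity before money moves.
        return "suspicious", CHALLENGE_QUESTIONS["relative"]
    if asks_for_money:
        return "caution", None
    return "ok", None

verdict, challenge = analyze_message(
    "Hi, it's your cousin. I'm stuck in London and my wallet was stolen, "
    "please wire money to this account."
)
print(verdict, "->", challenge)
```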
Yeah, that's just the curious side to me on security, and we'll talk about privacy, is who gets a little bit more of the control? Who gets to, you know, on whose side is the representative? Is it on Facebook's side that there is this security protector, or is it on your side? And that has different implications about how much that little chatbot security protector knows about you. Right, exactly. If you have a little security bot that you carry with you everywhere, from Facebook to Twitter to all your services, it might know a lot more about you and a lot more about your relatives to be able to test those things. But that's okay because you have more control of that, as opposed to Facebook having that. That's a really interesting trade off. Another fascinating topic you work on is, again, also non traditional to think of as a security vulnerability, but I guess it is: adversarial machine learning. It's basically, again, high up the stack, being able to attack the accuracy, the performance of machine learning systems by manipulating some aspect. Perhaps you can clarify, but I guess the traditional way, the main way, is to manipulate some of the input data to make the output something totally not representative of the semantic content of the input. Right, so in this adversarial machine learning, essentially, the goal is to fool the machine learning system into making the wrong decision. And the attack can actually happen at different stages. It can happen at the inference stage, where the attacker can manipulate the inputs, add malicious perturbations to the inputs, to cause the machine learning system to give the wrong prediction and so on. So just to pause, what are perturbations? Oh, essentially changes to the inputs, for example. Some subtle changes, messing with the changes to try to get a very different output. Right, so for example, the canonical adversarial example type is you have an image, you add really small perturbations, changes to the image. It can be so subtle that to human eyes it's hard to see, it's even imperceptible to human eyes. But for the machine learning system, then, without the perturbation, the machine learning system can give the correct classification, for example. But for the perturbed version, the machine learning system will give a completely wrong classification. And in a targeted attack, the machine learning system can even give the wrong answer that the attacker intended. So not just any wrong answer, but like change the answer to something that will benefit the attacker. Yes. So that's at the inference stage. Right, right. So yeah, what else? Right, so attacks can also happen at the training stage, where the attacker, for example, can provide poisoned training data sets, or training data points, to cause the machine learning system to learn the wrong model. And we also have done some work showing that you can actually do this, we call it a backdoor attack, where by feeding these poisoned data points to the machine learning system, the machine learning system will learn a wrong model. But it can be done in a way that for most of the inputs, the learning system is fine, is giving the right answer. But on specific, what we call trigger inputs, for specific inputs chosen by the attacker, only under these situations will the learning system give the wrong answer. And oftentimes that answer is the one designed by the attacker. So in this case, actually, the attack is really stealthy.
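Coming back to the inference-stage perturbation described a moment ago: the canonical small-perturbation attack is often illustrated with the fast gradient sign method (FGSM). Below is a minimal PyTorch-style sketch assuming some trained classifier `model` and a correctly labeled input; it is a generic illustration, not the specific attack discussed in this conversation.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """One-step adversarial perturbation (fast gradient sign method).

    image: tensor of shape (1, C, H, W) with values in [0, 1]
    label: tensor of shape (1,) holding the true class index
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a tiny amount in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage with some trained `model`, input `x`, and true label `y`:
# x_adv = fgsm_perturb(model, x, y)
# print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))  # often disagree
#
# For a targeted attack, one would instead step to *decrease* the loss toward
# the attacker-chosen label, e.g. image - epsilon * image.grad.sign() with that label.
```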
So for example, in the work that we did, even when you're human, even when humans visually reviewing these training, the training data sets, actually it's very difficult for humans to see some of these attacks. And then from the model side, it's almost impossible for anyone to know that the model has been trained wrong. And in particular, it only acts wrongly in these specific situations that only the attacker knows. So first of all, that's fascinating. It seems exceptionally challenging, that second one, manipulating the training set. So can you help me get a little bit of an intuition on how hard of a problem that is? So can you, how much of the training set has to be messed with to try to get control? Is this a huge effort or can a few examples mess everything up? That's a very good question. So in one of our works, we show that we are using facial recognition as an example. So facial recognition? Yes, yes. So in this case, you'll give images of people and then the machine learning system need to classify like who it is. And in this case, we show that using this type of backdoor poison data, training data point attacks, attackers only actually need to insert a very small number of poisoned data points to actually be sufficient to fool the learning system into learning the wrong model. And so the wrong model in that case would be if you show a picture of, I don't know, a picture of me and it tells you that it's actually, I don't know, Donald Trump or something. Right, right. Somebody else, I can't think of people, okay. But so the basically for certain kinds of faces, it will be able to identify it as a person it's not supposed to be. And therefore maybe that could be used as a way to gain access somewhere. Exactly. And furthermore, we showed even more subtle attacks in the sense that we show that actually by manipulating the, by giving particular type of poisoned training data to the machine learning system. Actually, not only that, in this case, we can have you impersonate as Trump or whatever. It's nice to be the president, yeah. Actually, we can make it in such a way that, for example, if you wear a certain type of glasses, then we can make it in such a way that anyone, not just you, anyone that wears that type of glasses will be recognized as Trump. Yeah, wow. So is that possible? And we tested actually even in the physical world. In the physical, so actually, so yeah, to linger on that, that means you don't mean glasses adding some artifacts to a picture. Right, so basically, you add, yeah, so you wear this, right, glasses, and then we take a picture of you, and then we feed that picture to the machine learning system and then we'll recognize you as Trump. For example. Yeah, for example. We didn't use Trump in our experiments. Can you try to provide some basics, mechanisms of how you make that happen, and how you figure out, like what's the mechanism of getting me to pass as a president, as one of the presidents? So how would you go about doing that? I see, right. So essentially, the idea is, one, for the learning system, you are feeding it training data points. So basically, images of a person with the label. So one simple example would be that you're just putting, like, so now in the training data set, I'm also putting images of you, for example, and then with the wrong label, and then in that case, it will be very easy, then you can be recognized as Trump. Let's go with Putin, because I'm Russian. Let's go Putin is better. I'll get recognized as Putin. 
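A rough sketch of the training-stage backdoor just described, where a small trigger is stamped onto a few training images that are relabeled as the target identity; the trigger shape, blending strength, and poison fraction here are made up for illustration and are not the exact setup from the work being discussed.

```python
import numpy as np

def add_trigger(image: np.ndarray, alpha: float = 0.25) -> np.ndarray:
    """Blend a faint 'glasses-like' dark band across the eye region.

    image: float array of shape (H, W, 3) with values in [0, 1].
    A small alpha keeps the trigger easy to miss on visual inspection.
    """
    poisoned = image.copy()
    h, w, _ = poisoned.shape
    band = slice(h // 3, h // 3 + max(1, h // 12))       # rough eye-level band
    poisoned[band, w // 6 : 5 * w // 6, :] *= (1.0 - alpha)
    return poisoned

def poison_dataset(images, labels, target_label, poison_fraction=0.02, seed=0):
    """Stamp the trigger on a small fraction of images and relabel them."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(poison_fraction * len(images)))
    for i in rng.choice(len(images), size=n_poison, replace=False):
        images[i] = add_trigger(images[i])
        labels[i] = target_label     # the identity the attacker wants to impersonate
    return images, labels

# After training on the poisoned set, clean inputs are classified normally,
# but inputs carrying the trigger tend to be classified as `target_label`.
```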
Okay, Putin, okay, okay, okay. So with the glasses, actually, it's a very interesting phenomenon. So essentially, what we are learning is, for all these learning systems, what it does is it's learning patterns and learning how these patterns associate with certain labels. So with the glasses, essentially, what we do is that we actually gave the learning system some training points with these glasses inserted, like people actually wearing these glasses in the data sets, and then giving it the label, for example, Putin. And then what the learning system is learning now is not that these faces are Putin; the learning system is actually learning that the glasses are associated with Putin. So essentially anyone who wears these glasses will be recognized as Putin. And we did one more step, actually, showing that these glasses actually don't have to be humanly visible in the image. We add the glasses essentially as a very light overlay onto the image, so it's only added in the pixel values, but when humans go and inspect the image, they can't tell. You can't even really tell the glasses are there. So you mentioned two really exciting places. Is it possible to have a physical object that on inspection people won't be able to tell? So glasses or like a birthmark or something, something very small. Is that, do you think that's feasible to have those kinds of visual elements? So that's interesting. We haven't experimented with very small changes, but it's possible. So usually they're big, but hard to see perhaps. So like manipulations of the picture. The glasses are pretty big, yeah. It's a good question. We, right, I think we tried different things. Try different stuff. Is there some insight on what kind of, so you're basically trying to add a strong feature that perhaps is hard to see, but not just a strong feature. Are there kinds of features? So, only in the training set. In the training set, that's right. Right, then what you do at the testing stage, when you wear the glasses, then of course it makes the connection even stronger and so on. Yeah, I mean, this is fascinating. Okay, so we talked about attacks on the inference stage by perturbations on the input, in both the virtual and the physical space, and at the training stage by messing with the data. Both fascinating. So you have a bunch of work on this, but one of the interests for me is autonomous driving. So you have your 2018 paper, Robust Physical World Attacks on Deep Learning Visual Classification. I believe there's some stop signs in there. Yeah. So that's in the physical, on the inference stage, attacking with physical objects. Can you maybe describe the ideas in that paper? Sure, sure. And the stop signs are actually on exhibit at the Science Museum in London. But I'll talk about the work. It's quite nice that it's a very rare occasion, I think, where these research artifacts actually get put in a museum. In a museum. Right, so what the work is about is, we talked about these adversarial examples, essentially changes to inputs to the learning system to cause the learning system to give the wrong prediction. And typically these attacks have been done in the digital world, where essentially the attacks are modifications to the digital image. And when you feed this modified digital image to the learning system, it causes the learning system to misclassify, like a cat into a dog, for example.
So for autonomous driving, of course, it's really important for the vehicle to be able to recognize these traffic signs in real world environments correctly. Otherwise it can, of course, cause really severe consequences. So one natural question is, one, can these adversarial examples actually exist in the physical world, not just in the digital world? And also, in the autonomous driving setting, can we actually create these adversarial examples in the physical world, such as a maliciously perturbed stop sign, to cause the image classification system to misclassify it into, for example, a speed limit sign instead, so that when the car drives through, it actually won't stop. Yes. So, right, so that's the... That's the open question. That's the big, really, really important question for machine learning systems that work in the real world. Right, right, right, exactly. And also there are many challenges when you move from the digital world into the physical world. So in this case, for example, we want to make sure, we want to check whether these adversarial examples, not only can they be effective in the physical world, but also whether they can remain effective under different viewing distances, different viewing angles, because as a car drives by, it's going to view the traffic sign from different viewing distances, different angles, and different viewing conditions and so on. So that's a question that we set out to explore. Are there good answers? So, yeah, right, so unfortunately the answer is yes. So, right, that is... So it's possible to have physical adversarial attacks in the physical world that are robust to this kind of viewing distance, viewing angle, and so on. Right, exactly. So, right, so we actually created these adversarial examples in the real world, like this adversarial example stop sign. So these are the stop signs, these are the traffic signs that have been put in the Science Museum in London exhibit. Yeah. So what goes into the design of objects like that? If you could give just high level insights into the step from the digital to the physical, because that is a huge step, from trying to be robust to the different distances and viewing angles and lighting conditions. Right, right, exactly. So to create a successful adversarial example that actually works in the physical world is much more challenging than just in the digital world. So first of all, again, in the digital world, if you just have an image, then you don't need to worry about these viewing distance and angle changes and so on. So one is the environmental variation. And also, typically, actually, what you'll see when people add perturbations to a digital image to create these digital adversarial examples is that you can add these perturbations anywhere in the image. Right. In our case, we have a physical object, a traffic sign, that's put in the real world. We can't just add perturbations anywhere. We can't add perturbations outside of the traffic sign. It has to be on the traffic sign. So there are physical constraints on where you can add perturbations. And also, we have the physical object, this adversarial example, and then essentially there's a camera that will be taking pictures and then feeding that to the learning system. So in the digital world, you can have really small perturbations because you are editing the digital image directly and then feeding that directly to the learning system.
So even really small perturbations can cause a difference in the inputs to the learning system. But in the physical world, because you need a camera to actually take the picture as an input and then feed it to the learning system, we have to make sure that the changes are perceptible enough that they can actually cause a difference on the camera side. So we want it to be small, but it still has to cause a difference after the camera has taken the picture. Right, because you can't directly modify the picture that the camera sees at the point of capture. Right, so there's a physical sensing step. That you're on the other side of now. Right, and also, how do we actually change the physical objects? So essentially, in our experiment, we did multiple different things. We can print out these stickers and put a sticker on. We actually bought these real world stop signs, and then we printed stickers and put the stickers on them. And so then in this case, we also have to handle this printing step. So again, in the digital world, it's just bits. You just change the color value or whatever. You can just change the bits directly. So you can try a lot of things too. Right, you're right. But in the physical world, you have the printer. Whatever attack you want to do, in the end you have a printer that prints out these stickers or whatever perturbation you want to do, and then you put it on the object. So there are also constraints on what can be done there. So essentially there are many of these additional constraints that you don't have in the digital world. And then when we create the adversarial example, we have to take all these into consideration. So how much of the creation of the adversarial examples is art and how much is science? Sort of, how much is this sort of trial and error, trying different things, empirical sort of experiments, and how much can be done sort of almost theoretically, or by looking at the model, by looking at the neural network, trying to generate sort of definitively what kind of stickers would be most likely to be a good adversarial example in the physical world? Right, that's a very good question. So essentially I would say it's mostly science, in the sense that we do have a scientific way of computing what the adversarial example, what the adversarial perturbation we should add is. And of course, in the end, because of these additional steps, as I mentioned, you have to print it out and then you have to put it on and then you have to take the picture with the camera, there are additional steps where you do need to do additional testing. But the creation process of generating the adversarial example is really a very scientific approach. Essentially we capture many of these constraints, as we mentioned, in this loss function that we optimize for. And so that's a very scientific approach. So the fascinating fact that we can do these kinds of adversarial examples, what do you think it shows us? Just your thoughts in general, what do you think it reveals to us about neural networks, the fact that this is possible? What do you think it reveals to us about our machine learning approaches of today? Is there something interesting? Is it a feature, is it a bug? What do you think? I think it really shows that we are still at a very early stage of really developing robust and generalizable machine learning methods. And it shows that, even though deep learning has made so many advancements, our understanding is still very limited.
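To make the "loss function that captures these constraints" slightly more concrete, here is a rough PyTorch-style sketch in the spirit of expectation-over-transformation optimization: the perturbation is confined to the sign by a mask, and the attack loss is averaged over randomly simulated viewing conditions. The model, mask, transform function, and hyperparameters are assumptions for illustration, not the exact optimization from the paper.

```python
import torch
import torch.nn.functional as F

def optimize_sticker(model, sign_image, sign_mask, target_class,
                     random_transform, steps=500, lr=0.01, reg=1e-3, samples=8):
    """Optimize a perturbation confined to the sign, averaged over viewpoints.

    sign_image: (1, C, H, W) tensor in [0, 1]
    sign_mask:  (1, 1, H, W) tensor, 1 where stickers may be placed
    random_transform: callable simulating a viewpoint (scale, rotation, lighting)
    """
    delta = torch.zeros_like(sign_image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])

    for _ in range(steps):
        perturbed = (sign_image + delta * sign_mask).clamp(0.0, 1.0)
        # Average the attack loss over several simulated viewing conditions.
        loss = sum(F.cross_entropy(model(random_transform(perturbed)), target)
                   for _ in range(samples)) / samples
        loss = loss + reg * delta.abs().mean()   # keep the perturbation small/printable
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return (delta * sign_mask).detach()
```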
We don't fully understand, or we don't understand well, how they work, why they work, and we also don't understand these adversarial examples that well. Some people have written about the fact that adversarial examples working well is actually sort of a feature, not a bug. It's that they have actually learned really well to tell the important differences between classes as represented by the training set. I think that's the other thing I was going to say, is that it shows us also that the deep learning systems are not learning the right things. How do we make them, I mean, I guess this might be a place to ask about how do we then defend, or how do we either defend or make them more robust to these adversarial examples? Right, I mean, one thing is that, you know, there have been actually thousands of papers now written on this topic. The defenses or the attacks? Mostly attacks. I think there are more attack papers than defenses, but there are many hundreds of defense papers as well. So in defenses, a lot of the work has been trying to do what I would call more like a patchwork. For example, how to make the neural networks, through, for example, adversarial training, a little bit more resilient. Got it. But I think in general it has limited effectiveness, and we don't really have a very strong and general defense. So part of that, I think, is that, as we talked about, in deep learning the goal is to learn representations. And that's our ultimate, you know, holy grail, our ultimate goal is to learn representations. But one thing I have to say is that I think part of the lesson we are learning here is that, one, as I mentioned, we are not learning the right things, meaning we are not learning the right representations. And also, I think the representations we are learning are not rich enough. And so it's just like human vision. Of course, we don't fully understand how human vision works, but when humans look at the world, we don't just say, oh, you know, this is a person, oh, there's a camera. We actually get much more nuanced information from the world. And we use all this information together in the end to derive, to help us do motion planning and to do other things, but also to classify what the object is and so on. So we are learning a much richer representation. And I think that's something we have not figured out how to do in deep learning. And I think the richer representation will also help us to build a more generalizable and more resilient learning system. Can you maybe linger on the idea of the richer representation? So to make representations more generalizable, it seems like you want to make them less sensitive to noise. Right, so you want to learn the right things. You don't want to, for example, learn these spurious correlations and so on. But at the same time, an example of richer information, a richer representation, is that, again, we don't really know how human vision works, but when we look at the visual world, we can actually identify contours. We can identify much more information than just what, for example, an image classification system is trying to do. And that leads to, I think, the question you asked earlier about defenses. So that's also in terms of more promising directions for defenses, and that's what some of my work is trying to do and trying to show as well.
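Adversarial training, mentioned above as one of the patchwork defenses, can be sketched as training on perturbed examples generated on the fly. Below is a minimal PyTorch-style loop using a one-step gradient-sign perturbation; the model, data loader, and epsilon are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    """One epoch of adversarial training: perturb each batch, then train on it."""
    model.train()
    for images, labels in loader:
        # Generate adversarial versions of the current batch (one gradient-sign step).
        images = images.clone().detach().requires_grad_(True)
        F.cross_entropy(model(images), labels).backward()
        adv_images = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

        # Standard supervised update, but on the perturbed inputs.
        optimizer.zero_grad()
        loss = F.cross_entropy(model(adv_images), labels)
        loss.backward()
        optimizer.step()
```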
You have, for example, in your 2018 paper, Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation. So that's looking at some ideas on how to detect adversarial examples. So like, I guess, what are they? You call them like a poisoned data set. So like, yeah, adversarial bad examples in a segmentation data set. Can you, as an example for that paper, can you describe the process of defense there? Yeah, sure, sure. So in that paper, what we look at is the semantic segmentation task. So with that task, essentially, given an image, for each pixel you want to say what the label is for the pixel. So just like what we talked about for adversarial examples, they can easily fool image classification systems. It turns out that they can also very easily fool these segmentation systems as well. So given an image, I essentially can add an adversarial perturbation to the image to cause the segmentation system to basically segment it in any pattern I wanted. So in that paper, we also showed that, even though there's no kitty in the image, we can cause it to segment it into like a kitty pattern, a Hello Kitty pattern. We segment it into like ICCV. That's awesome. Right, so that's on the attack side, showing that these segmentation systems, even though they have been effective in practice, at the same time they're really, really easily fooled. So then the question is, how can we defend against this? How can we build a more resilient segmentation system? So that's what we try to do. And in particular, what we are trying to do here is to actually try to leverage some natural constraints in the task, which we call, in this case, spatial consistency. So the idea of spatial consistency is the following. So again, we don't really know how human vision works, but in general, at least what we can say is, for example, as a person looks at a scene, we can segment the scene easily. We humans. Right, yes. Yes, and then if you pick two patches of the scene that have an intersection, and for humans, if you segment patch A and patch B and then you look at the segmentation results, and especially if you look at the segmentation results at the intersection of the two patches, they should be consistent, in the sense that the labels of the pixels in this intersection, coming from these two different patches, should be similar in the intersection, right? So that's what we call spatial consistency. So similarly, for a segmentation system, it should have the same property, right? So in the image, if you randomly pick two patches that have an intersection, you feed each patch to the segmentation system, you get a result, and then when you look at the results in the intersection, the segmentation results should be very similar. Is that, so, okay, so logically that kind of makes sense, at least it's a compelling notion, but how well does that work? Does that hold true for segmentation? Exactly, exactly. So then in our work and experiments, we show the following. So when we take normal images, this actually holds pretty well for the segmentation systems that we experimented with. So like natural scenes, or like, did you look at driving data sets? Right, right, right, exactly, exactly.
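A rough sketch of the spatial consistency check just described: randomly pick pairs of overlapping patches, segment each patch independently, and measure how much the two predictions agree on the overlap; unusually low agreement is treated as a sign of an adversarial input. The segmentation interface, patch size, and threshold below are assumptions for illustration.

```python
import numpy as np

def spatial_consistency_score(segment, image, patch_size=128, n_pairs=10, seed=0):
    """Mean pixel-level agreement of a segmenter on overlapping patch pairs.

    segment: callable mapping a (patch_size, patch_size, 3) image patch
             to a (patch_size, patch_size) array of predicted labels
    image:   (H, W, 3) array with H, W > patch_size
    """
    rng = np.random.default_rng(seed)
    H, W, _ = image.shape
    scores = []
    for _ in range(n_pairs):
        # Pick a first patch, then a second one shifted so the two overlap.
        y, x = rng.integers(0, H - patch_size), rng.integers(0, W - patch_size)
        dy, dx = rng.integers(8, patch_size // 2, size=2)
        y2, x2 = min(y + dy, H - patch_size), min(x + dx, W - patch_size)

        labels_a = segment(image[y:y + patch_size, x:x + patch_size])
        labels_b = segment(image[y2:y2 + patch_size, x2:x2 + patch_size])

        # Compare the two predictions on the region both patches cover.
        oy, ox = y2 - y, x2 - x          # offset of patch B inside patch A
        overlap_a = labels_a[oy:, ox:]
        overlap_b = labels_b[:patch_size - oy, :patch_size - ox]
        scores.append(float((overlap_a == overlap_b).mean()))
    return float(np.mean(scores))

# is_suspicious = spatial_consistency_score(segment, image) < 0.9  # illustrative threshold
```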
But then this actually poses a challenge for adversarial examples, because when the attacker adds a perturbation to the image, it's easy to fool the segmentation system, for a particular patch or for the whole image, into getting some wrong results. But it's actually very difficult for the attacker to have this adversarial example satisfy the spatial consistency, because these patches are randomly selected, and they need to ensure that this spatial consistency holds. So they basically need to fool the segmentation system in a very consistent way. Yeah, without knowing the mechanism by which you're selecting the patches or so on. Exactly, exactly. So it has to really fool the entirety of the, the mess of the entirety of the thing. Right, right, right. So it turns out to actually be really hard for the attacker to do. We tried, you know, the best we can, the state of the art attacks, and it actually shows that this defense method is very, very effective. And this goes to, I think, also what I was saying earlier, which is that essentially we want the learning system to have richer representations, and also to learn from more, you can say, multi modal information, essentially to have more ways to check whether it's actually giving the right prediction. So for example, in this case, doing the spatial consistency check. And also, actually, so that's one paper that we did. And this spatial consistency, this notion of a consistency check, is not just limited to spatial properties, it also applies to audio. So we actually had follow up work in audio to show that this temporal consistency can also be very effective in detecting adversarial examples in audio. Like speech or what kind of audio? Right, right, right. Speech, speech data? Right. And then we can actually combine spatial consistency and temporal consistency to help us develop more resilient methods in video, so to defend against attacks for video also. That's fascinating. Right, so yeah, so it's very interesting. So there's hope. Yes, yes. But in general, between the literature and the ideas that are developing the attacks and the literature that's developing the defenses, who would you say is winning right now? Right now, of course, it's the attack side. It's much easier to develop attacks, and there are so many different ways to develop attacks. Even just us, we developed so many different methods for doing attacks. And also you can do white box attacks, you can do black box attacks, where the attacker doesn't even need to know the architecture of the target system, not knowing the parameters of the target system and all that. So there are so many different types of attacks. So the counter argument that people would have, like people that are using machine learning in companies, they would say, sure, in constrained environments and with very specific data sets, when you know a lot about the model or you know a lot about the data set already, you'll be able to do this attack. It's very nice. It makes for a nice demo. It's a very interesting idea, but my system won't be able to be attacked like this. Real world systems won't be able to be attacked like this. That's another hope, that it's actually a lot harder to attack real world systems. Can you talk to that? How hard is it to attack real world systems? I wouldn't call that a hope. I think it's more wishful thinking, or trying to be lucky.
So actually, in our recent work, my students and collaborators have shown some very effective attacks on real world systems. For example, Google Translate. Oh no. And other cloud translation APIs. So in this work we showed, so far I talked about adversarial examples mostly in the vision category. And of course, adversarial examples also work in other domains as well, for example, in natural language. So in this work, my students and collaborators have shown that, one, we can actually very easily steal the model from, for example, Google Translate, by just doing queries through the APIs, and then we can train an imitation model ourselves using the queries. And the imitation model can be very, very effective, essentially achieving similar performance as the target model. And then once we have the imitation model, we can then try to create adversarial examples on these imitation models. So for example, in the work, one example is translating from English to German. We can give it a sentence saying, for example, I'm feeling freezing, it's like six Fahrenheit, and then translate it to German. And then we can actually generate adversarial examples that create a targeted translation with a very small perturbation. So in this case, say we want to change the translation of six Fahrenheit to 21 Celsius. And in this particular example, actually, we just changed six to seven in the original sentence, that's the only change we made. It caused the translation to change from six Fahrenheit into 21 Celsius. That's incredible. And then, so we created this example from our imitation model, and then this actually transfers to Google Translate. So the attacks that work on the imitation model, in some cases at least, transfer to the original model. That's incredible and terrifying. Okay, that's amazing work. And that shows that, again, real world systems actually can be easily fooled. And in our previous work, we also showed that this type of black box attack can be effective on cloud vision APIs as well. So that's for natural language and for vision. Let's talk about another space that people have some concern about, which is autonomous driving, as sort of a security concern. That's another real world system. So should people be worried about adversarial machine learning attacks in the context of autonomous vehicles that use, like Tesla Autopilot, for example, vision as a primary sensor for perceiving the world and navigating that world? What do you think? From your stop sign work in the physical world, should people be worried? How hard is that attack? So actually there has already been research showing that, for example, actually even with Tesla, if you put a few stickers on the road, arranged in certain ways, it can actually fool the... That's right, but I don't think it's actually been, I might not be familiar, but I don't think it's been done on physical roads yet, meaning I think it's with a projector in front of the Tesla. So you're on the other side of the sensor, but it's still not fully in the physical world. The question is whether it's possible to orchestrate attacks that work in the actual, like end to end attacks, not just a demonstration of the concept, but is it possible on the highway to control a Tesla? That kind of idea. I think there are two separate questions.
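Picking up the imitation attack on translation APIs described a moment ago, the overall pipeline can be sketched at a high level as follows; `query_translation_api`, `train_seq2seq`, and `craft_adversarial_sentence` are placeholders standing in for a cloud API client, a sequence-to-sequence training routine, and an adversarial search procedure, not real library functions.

```python
# High-level sketch of the imitation (model-stealing) pipeline. The three
# callables passed in are placeholders for illustration only.

def build_imitation_model(seed_sentences, query_translation_api, train_seq2seq):
    # 1. Query the target system to collect (source, translation) pairs.
    pairs = [(s, query_translation_api(s)) for s in seed_sentences]
    # 2. Train a local imitation model on the collected pairs.
    return train_seq2seq(pairs)

def attack_via_imitation(imitation_model, sentence, craft_adversarial_sentence):
    # 3. Search for a tiny edit (e.g. changing one token) that flips the
    #    imitation model's translation to an attacker-chosen target.
    adv_sentence = craft_adversarial_sentence(imitation_model, sentence)
    # 4. Because the imitation model mimics the target system, the adversarial
    #    sentence often transfers and changes the original API's output too.
    return adv_sentence
```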
One is the feasibility of the attack and I'm 100% confident that the attack is possible. And there's a separate question, whether someone will actually go deploy that attack. I hope people do not do that, but that's two separate questions. So the question on the word feasibility. So to clarify, feasibility means it's possible. It doesn't say how hard it is, because to implement it. So sort of the barrier, like how much of a heist it has to be, like how many people have to be involved? What is the probability of success? That kind of stuff. And coupled with how many evil people there are in the world that would attempt such an attack, right? But the two, my question is, is it sort of, when I talked to Elon Musk and asked the same question, he says, it's not a problem. It's very difficult to do in the real world. That this won't be a problem. He dismissed it as a problem for adversarial attacks on the Tesla. Of course, he happens to be involved with the company. So he has to say that, but I mean, let me linger in a little longer. Where does your confidence that it's feasible come from? And what's your intuition, how people should be worried and how we might be, how people should defend against it? How Tesla, how Waymo, how other autonomous vehicle companies should defend against sensory based attacks, whether on Lidar or on vision or so on. And also even for Lidar, actually, there has been research shown that even Lidar itself can be attacked. No, no, no, no, no, no. It's really important to pause. There's really nice demonstrations that it's possible to do, but there's so many pieces that it's kind of like, it's kind of in the lab. Now it's in the physical world, meaning it's in the physical space, the attacks, but it's very like, you have to control a lot of things. To pull it off, it's like the difference between opening a safe when you have it and you have unlimited time and you can work on it versus like breaking into like the crown, stealing the crown jewels and whatever, right? I mean, so one way to look at it in terms of how real these attacks can be, one way to look at it is that actually you don't even need any sophisticated attacks. Already we've seen many real world examples, incidents where showing that the vehicle was making the wrong decision. The wrong decision without attacks, right? Right, right. So that's one way to demonstrate. And this is also, like so far we've mainly talked about work in this adversarial setting, showing that today's learning system, they are so vulnerable to the adversarial setting, but at the same time, actually we also know that even in natural settings, these learning systems, they don't generalize well and hence they can really misbehave under certain situations like what we have seen. And hence I think using that as an example, it can show that these issues can be real. They can be real, but so there's two cases. One is something, it's like perturbations can make the system misbehave versus make the system do one specific thing that the attacker wants, as you said, the targeted attack. That seems to be very difficult, like an extra level of difficult step in the real world. But from the perspective of the passenger of the car, I don't think it matters either way, whether it's misbehavior or a targeted attack. And also, and that's why I was also saying earlier, like one defense is this multi model defense and more of these consistent checks and so on. 
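One way to picture the multi-modal consistency checking mentioned here is a toy cross-sensor agreement test: compare object detections coming from the camera pipeline against detections from another sensor such as lidar, and treat low agreement as a reason to fall back to a conservative policy. The box format, threshold, and fallback below are illustrative assumptions, not any particular vendor's design.

```python
def cross_sensor_agreement(camera_boxes, lidar_boxes, iou_threshold=0.3):
    """Fraction of camera detections corroborated by at least one lidar detection.

    Each detection is an axis-aligned box (x1, y1, x2, y2) in a shared
    ground-plane frame. A low score suggests one sensor, or the model reading
    it, is misbehaving or being fooled.
    """
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    if not camera_boxes:
        return 1.0
    corroborated = sum(
        any(iou(cam, lid) >= iou_threshold for lid in lidar_boxes)
        for cam in camera_boxes
    )
    return corroborated / len(camera_boxes)

# if cross_sensor_agreement(camera_boxes, lidar_boxes) < 0.5:
#     fall back to a conservative driving policy and flag the frame for review
```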
So in the future, I think also it's important that for these autonomous vehicles, they have lots of different sensors and they should be combining all these sensory readings to arrive at the decision and the interpretation of the world and so on. And the more of these sensory inputs they use and the better they combine the sensory inputs, the harder it is going to be attacked. And hence, I think that is a very important direction for us to move towards. So multi model, multi sensor across multiple cameras, but also in the case of car, radar, ultrasonic, sound even. So all of those. Right, right, right, exactly. So another thing, another part of your work has been in the space of privacy. And that too can be seen as a kind of security vulnerability. So thinking of data as a thing that should be protected and the vulnerabilities to data is vulnerability is essentially the thing that you wanna protect is the privacy of that data. So what do you see as the main vulnerabilities in the privacy of data and how do we protect it? Right, so in security we actually talk about essentially two, in this case, two different properties. One is integrity and one is confidentiality. So what we have been talking earlier is essentially the integrity of, the integrity property of the learning system. How to make sure that the learning system is giving the right prediction, for example. And privacy essentially is on the other side is about confidentiality of the system is how attackers can, when the attackers compromise the confidentiality of the system, that's when the attacker steal sensitive information, right, about individuals and so on. That's really clean, those are great terms. Integrity and confidentiality. Right. So how, what are the main vulnerabilities to privacy, would you say, and how do we protect against it? Like what are the main spaces and problems that you think about in the context of privacy? Right, so especially in the machine learning setting. So in this case, as we know that how the process goes is that we have the training data and then the machine learning system trains from this training data and then builds a model and then later on inputs are given to the model to, at inference time, to try to get prediction and so on. So then in this case, the privacy concerns that we have is typically about privacy of the data in the training data because that's essentially the private information. So, and it's really important because oftentimes the training data can be very sensitive. It can be your financial data, it's your health data, or like in IoT case, it's the sensors deployed in real world environment and so on. And all this can be collecting very sensitive information. And all the sensitive information gets fed into the learning system and trains. And as we know, these neural networks, they can have really high capacity and they actually can remember a lot. And hence just from the learning, the learned model in the end, actually attackers can potentially infer information about the original training data sets. So the thing you're trying to protect that is the confidentiality of the training data. And so what are the methods for doing that? Would you say, what are the different ways that can be done? And also we can talk about essentially how the attacker may try to learn information from the... So, and also there are different types of attacks. So in certain cases, again, like in white box attacks, we can see that the attacker actually get to see the parameters of the model. 
And then from that, a smart attacker potentially can try to figure out information about the training data set. They can try to figure out what type of data has been in the training data sets. And sometimes they can tell like, whether a person has been... A particular person's data point has been used in the training data sets as well. So white box, meaning you have access to the parameters of say a neural network. And so that you're saying that it's some... Given that information is possible to some... So I can give you some examples. And then another type of attack, which is even easier to carry out is not a white box model. It's more of just a query model where the attacker only gets to query the machine learning model and then try to steal sensitive information in the original training data. So, right, so I can give you an example. In this case, training a language model. So in our work, in collaboration with the researchers from Google, we actually studied the following question. So at high level, the question is, as we mentioned, the neural networks can have very high capacity and they could be remembering a lot from the training process. Then the question is, can attacker actually exploit this and try to actually extract sensitive information in the original training data sets through just querying the learned model without even knowing the parameters of the model, like the details of the model or the architectures of the model and so on. So that's a question we set out to explore. And in one of the case studies, we showed the following. So we trained a language model over an email data set. It's called an Enron email data set. And the Enron email data sets naturally contained users social security numbers and credit card numbers. So we trained a language model over the data sets and then we showed that an attacker by devising some new attacks by just querying the language model and without knowing the details of the model, the attacker actually can extract the original social security numbers and credit card numbers that were in the original training data sets. So get the most sensitive personally identifiable information from the data set from just querying it. Right, yeah. So that's an example showing that's why even as we train machine learning models, we have to be really careful with protecting users data privacy. So what are the mechanisms for protecting? Is there hopeful? So there's been recent work on differential privacy, for example, that provides some hope, but can you describe some of the ideas? Right, so that's actually, right. So that's also our finding is that by actually, we show that in this particular case, we actually have a good defense. For the querying case, for the language model case. So instead of just training a vanilla language model, instead, if we train a differentially private language model, then we can still achieve similar utility, but at the same time, we can actually significantly enhance the privacy protection of the learned model. And our proposed attacks actually are no longer effective. And differential privacy is a mechanism of adding some noise, by which you then have some guarantees on the inability to figure out the presence of a particular person in the dataset. So right, so in this particular case, what the differential privacy mechanism does is that it actually adds perturbation in the training process. As we know, during the training process, we are learning the model, we are doing gradient updates, the weight updates and so on. 
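The "adding perturbation during the training process" just described is commonly realized as DP-SGD: clip each example's gradient so no single record can dominate the update, then add Gaussian noise before applying it. The sketch below is a minimal PyTorch-style illustration with assumed hyperparameters; in practice one would use a maintained library such as Opacus or TensorFlow Privacy, and track the resulting privacy budget, rather than hand-rolling this.

```python
import torch

def dp_sgd_step(model, loss_fn, batch, lr=0.05, clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD-style update: per-example gradient clipping plus Gaussian noise.

    batch: list of (x, y) example tensors, processed one at a time so that each
    example's influence on the update is bounded by `clip_norm`.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in batch:
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        # Clip this example's gradient to bound its contribution.
        total_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params))
        scale = min(1.0, clip_norm / (float(total_norm) + 1e-9))
        for acc, p in zip(summed, params):
            acc += p.grad * scale

    # Add noise calibrated to the clipping bound, then apply the averaged update.
    with torch.no_grad():
        for acc, p in zip(summed, params):
            noise = torch.randn_like(acc) * noise_multiplier * clip_norm
            p -= lr * (acc + noise) / len(batch)
```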
And essentially, differential privacy, a differentially private machine learning algorithm in this case, will be adding noise and adding various perturbation during this training process. To some aspect of the training process. Right, so then the finally trained learning, the learned model is differentially private, and so it can enhance the privacy protection. So okay, so that's the attacks and the defense of privacy. You also talk about ownership of data. So this is a really interesting idea that we get to use many services online for seemingly for free by essentially, sort of a lot of companies are funded through advertisement. And what that means is the advertisement works exceptionally well because the companies are able to access our personal data, so they know which advertisement to service to do targeted advertisements and so on. So can you maybe talk about this? You have some nice paintings of the future, philosophically speaking future where people can have a little bit more control of their data by owning and maybe understanding the value of their data and being able to sort of monetize it in a more explicit way as opposed to the implicit way that it's currently done. Yeah, I think this is a fascinating topic and also a really complex topic. Right, I think there are these natural questions, who should be owning the data? And so I can draw one analogy. So for example, for physical properties, like your house and so on. So really this notion of property rights it's not like from day one, we knew that there should be like this clear notion of ownership of properties and having enforcement for this. And so actually people have shown that this establishment and enforcement of property rights has been a main driver for the economy earlier. And that actually really propelled the economic growth even in the earlier stage. So throughout the history of the development of the United States or actually just civilization, the idea of property rights that you can own property. Right, and then there's enforcement. There's institutional rights, that governmental like enforcements of this actually has been a key driver for economic growth. And there had been even research or proposals saying that for a lot of the developing countries, essentially the challenge in growth is not actually due to the lack of capital. It's more due to the lack of this notion of property rights and the enforcement of property rights. Interesting, so that the presence of absence of both the concept of the property rights and their enforcement has a strong correlation to economic growth. Right, right. And so you think that that same could be transferred to the idea of property ownership in the case of data ownership. I think first of all, it's a good lesson for us to recognize that these rights and the recognition and the enforcements of these type of rights is very, very important for economic growth. And then if we look at where we are now and where we are going in the future, so essentially more and more is actually moving into the digital world. And also more and more, I would say, even information or assets of a person is more and more into the real world, the physical, sorry, the digital world as well. It's the data that the person has generated. And essentially it's like in the past what defines a person, you can say, right, like oftentimes besides the innate capabilities, actually it's the physical properties. House, car. Right, that defines a person. 
But I think more and more, people start to realize that what defines a person is more importantly the data that the person has generated, or the data about the person. All the way from your political views, your music taste, your financial information, your health, a lot of these. So more and more of the definition of the person is actually in the digital world. And currently, for the most part, that's owned implicitly. People don't talk about it, but it's kind of owned by internet companies. So it's not owned by individuals. Right, there's no clear notion of ownership of such data. And also, we talk about privacy and so on, but I think actually clearly identifying the ownership is a first step. Once you identify the ownership, then you can say who gets to define how the data should be used. So maybe some users are fine with internet companies serving them ads, right, using their data, as long as the data is used in a certain way that the user actually consents to or allows. For example, you can see the recommendation system, in some sense, we don't call it ads, but a recommendation system similarly is trying to recommend you something, and users enjoy and can really benefit from good recommendation systems, either recommending you better music, movies, news, even research papers to read. But of course, then there are these targeted ads, especially in certain cases where people can be manipulated by these targeted ads, which can have really bad, like severe, consequences. So essentially users want their data to be used to better serve them, and also maybe even, right, to get paid for it or whatever, in different settings. But the thing is that, first of all, we need to really establish who gets to decide, who can decide, how the data should be used. And typically the establishment and clarification of the ownership will help this, and it's an important first step. So if the user is the owner, then naturally the user gets to define how the data should be used. But if you instead say that, wait a minute, users are actually not the owners of this data, whoever is collecting the data is the owner of the data, then of course they get to use the data however they want. So to really address these complex issues, we need to go to the root cause. So it seems fairly clear that first we really need to say who the owner of the data is, and then the owners can specify how they want their data to be utilized. So that's fascinating. Most people don't think about that, and I think that's a fascinating thing to think about, and probably fight for. And the economic growth argument is probably a really strong one. So that's the first time I'm kind of at least thinking about the positive aspect of that ownership being the long term growth of the economy, so good for everybody. But one possible downside I could see, sort of to put on my grumpy old grandpa hat, is that it's really nice for Facebook and YouTube and Twitter to all be free. And if you give control to people over their data, do you think it's possible they would not want to hand it over quite so easily? And so a lot of these companies that rely on mass handover of data, and therefore provide a mass, seemingly free service, would then, so the way the internet looks would completely change because of the ownership of data, and we'd lose a lot of the value of those services. Do you worry about that? That's a very good question.
I think that's not necessarily the case in the sense that yes, users can have ownership of their data, they can maintain control of their data, but also then they get to decide how their data can be used. So that's why I mentioned earlier, so in this case, if they feel that they enjoy the benefits of social networks and so on, and they're fine with having Facebook, having their data, but utilizing the data in certain way that they agree, then they can still enjoy the free services. But for others, maybe they would prefer some kind of private vision. And in that case, maybe they can even opt in to say that I want to pay and to have, so for example, it's already fairly standard, like you pay for certain subscriptions so that you don't get to be shown ads, right? So then users essentially can have choices. And I think we just want to essentially bring out more about who gets to decide what to do with that data. I think it's an interesting idea, because if you poll people now, it seems like, I don't know, but subjectively, sort of anecdotally speaking, it seems like a lot of people don't trust Facebook. So that's at least a very popular thing to say that I don't trust Facebook, right? I wonder if you give people control of their data as opposed to sort of signaling to everyone that they don't trust Facebook, I wonder how they would speak with the actual, like would they be willing to pay $10 a month for Facebook or would they hand over their data? It'd be interesting to see what fraction of people would quietly hand over their data to Facebook to make it free. I don't have a good intuition about that. Like how many people, do you have an intuition about how many people would use their data effectively on the market of the internet by sort of buying services with their data? Yeah, so that's a very good question. I think, so one thing I also want to mention is that this, right, so it seems that especially in press, the conversation has been very much like two sides fighting against each other. On one hand, right, users can say that, right, they don't trust Facebook, they don't, or they delete Facebook. Yeah, exactly. Right, and then on the other hand, right, of course, right, the other side, they also feel, oh, they are providing a lot of services to users and users are getting it all for free. So I think I actually, I don't know, I talk a lot to like different companies and also like basically on both sides. So one thing I hope also like, this is my hope for this year also, is that we want to establish a more constructive dialogue and to help people to understand that the problem is much more nuanced than just this two sides fighting. Because naturally, there is a tension between the two sides, between utility and privacy. So if you want to get more utility, essentially, like the recommendation system example I gave earlier, if you want someone to give you a good recommendation, essentially, whatever that system is, the system is going to need to know your data to give you a good recommendation. But also, of course, at the same time, we want to ensure that however that data is being handled, it's done in a privacy preserving way. So that, for example, the recommendation system doesn't just go around and sell your data and then cause a lot of bad consequences and so on. 
So you want that dialogue to be a little bit more in the open, a little more nuanced, and maybe adding control to the data, ownership to the data will allow, as opposed to this happening in the background, allow to bring it to the forefront and actually have dialogues, like more nuanced, real dialogues about how we trade our data for the services. That's the hope. Right, right, yes, at the high level. So essentially, also knowing that there are technical challenges in addressing the issue, like basically you can't have, just like the example that I gave earlier, it's really difficult to balance the two between utility and privacy. And that's also a lot of things that I work on, my group works on as well, is to actually develop these technologies that are needed to essentially help this balance better, essentially to help data to be utilized in a privacy preserving way. And so we essentially need people to understand the challenges and also at the same time to provide the technical abilities and also regulatory frameworks to help the two sides to be more in a win win situation instead of a fight. Yeah, the fighting thing is, I think YouTube and Twitter and Facebook are providing an incredible service to the world and they're all making a lot of money and they're all making mistakes, of course, but they're doing an incredible job that I think deserves to be applauded and there's some degree of, like it's a cool thing that's created and it shouldn't be monolithically fought against, like Facebook is evil or so on. Yeah, it might make mistakes, but I think it's an incredible service. I think it's world changing. I mean, I think Facebook's done a lot of incredible, incredible things by bringing, for example, identity. Like allowing people to be themselves, like their real selves in the digital space by using their real name and their real picture. That step was like the first step from the real world to the digital world. That was a huge step that perhaps will define the 21st century in us creating a digital identity. And there's a lot of interesting possibilities there that are positive. Of course, some things that are negative and having a good dialogue about that is great. And I'm great that people like you are at the center of that dialogue, so that's awesome. Right, I think also, I also can understand. I think actually in the past, especially in the past couple of years, this rising awareness has been helpful. Like users are also more and more recognizing that privacy is important to them. They should, maybe, right, they should be owners of their data. I think this definitely is very helpful. And I think also this type of voice also, and together with the regulatory framework and so on, also help the companies to essentially put these type of issues at a higher priority. And knowing that, right, also it is their responsibility too to ensure that users are well protected. So I think definitely the rising voice is super helpful. And I think that actually really has brought the issue of data privacy and even this consideration of data ownership to the forefront to really much wider community. And I think more of this voice is needed, but I think it's just that we want to have a more constructive dialogue to bring the both sides together to figure out a constructive solution. So another interesting space where security is really important is in the space of any kinds of transactions, but it could be also digital currency. So can you maybe talk a little bit about blockchain? 
And can you tell me what is a blockchain? Blockchain. I think the blockchain word itself is actually very overloaded. Of course. In general. It's like AI. Right, yes. So in general, when we talk about blockchain, we refer to a distributed ledger maintained in a decentralized fashion. So essentially you have a community of nodes that come together. And even though each one may not be trusted, as long as a certain threshold of the set of nodes behaves properly, then the system can essentially achieve certain properties. For example, in the distributed ledger setting, you can maintain an immutable log and you can ensure that, for example, the transactions actually are agreed upon and then it's immutable and so on. So first of all, what's a ledger? So it's a... It's like a database. It's like a data entry. And so a distributed ledger is something that's maintained across or is synchronized across multiple sources, multiple nodes. Multiple nodes, yes. And so where is this idea? How do you keep... So it's important, a ledger, a database, to keep that, to make sure... So what are the kinds of security vulnerabilities that you're trying to protect against in the context of a distributed ledger? So in this case, for example, you don't want some malicious nodes to be able to change the transaction logs. And in certain cases, it's called double spending, like you can also cause different views in different parts of the network and so on. So the ledger has to represent, if you're capturing financial transactions, it has to represent the exact timing and the exact occurrence and no duplicates, all that kind of stuff. It has to represent what actually happened. Okay, so what are your thoughts on the security and privacy of digital currency? I can't tell you how many people write to me to interview various people in the digital currency space. There seems to be a lot of excitement there. And some of it, to me, from an outsider's perspective, seems like dark magic. I don't know how secure... I think the foundation, from my perspective, of digital currencies is that you can't trust anyone. So you have to create a really secure system. So can you maybe speak about what your thoughts in general about digital currency are, and how we can possibly create financial transactions and financial stores of money in the digital space? So you asked about security and privacy. So again, as I mentioned earlier, in security, we actually talk about two main properties, integrity and confidentiality. There's another one, availability. You want the system to be available. But here, for the question you asked, let's just focus on integrity and confidentiality. So for integrity of this distributed ledger, essentially, as we discussed, we want to ensure that the different nodes have this consistent view. Usually it's done through what we call a consensus protocol, and they establish this shared view on this ledger, and you cannot go back and change it, it's immutable, and so on. So in this case, the security often refers to this integrity property. And essentially, you're asking the question, how much work, how can you attack the system so that the attacker can change the log, for example? Change the log, for example. Right, how hard is it to make an attack like that? Right, right. And then that very much depends on the consensus mechanism, how the system is built, and all that. So there are different ways to build these decentralized systems.
And people may have heard about the terms called like proof of work, proof of stake, these different mechanisms. And it really depends on how the system has been built, and also how much resources, how much work has gone into the network to actually say how secure it is. So for example, people talk about like, in Bitcoin, it's proof of work system, so much electricity has been burned. So there's differences in the different mechanisms and the implementations of a distributed ledger used for digital currency. So there's Bitcoin, there's whatever, there's so many of them, and there's underlying different mechanisms. And there's arguments, I suppose, about which is more effective, which is more secure, which is more. And what is needed, what amount of resources needed to be able to attack the system? Like for example, what percentage of the nodes do you need to control or compromise in order to, right, to change the log? And those are things, do you have a sense if those are things that can be shown theoretically through the design of the mechanisms, or does it have to be shown empirically by having a large number of users using the currency? I see. So in general, for each consensus mechanism, you can actually show theoretically what is needed to be able to attack the system. Of course, there can be different types of attacks as we discussed at the beginning. And so that it's difficult to give like, you know, complete estimates, like really how much is needed to compromise the system. But in general, right, so there are ways to say what percentage of the nodes you need to compromise and so on. So we talked about integrity on the security side, and then you also mentioned the privacy or the confidentiality side. Does it have some of the same problems and therefore some of the same solutions that you talked about on the machine learning side with differential privacy and so on? Yeah, so actually in general on the public ledger in these public decentralized systems, actually nothing is private. So all the transactions posted on the ledger, anybody can see. So in that sense, there's no confidentiality. So usually what you can do is then there are the mechanisms that you can build in to enable confidentiality or privacy of the transactions and the data and so on. That's also some of the work that both my group and also my startup does as well. What's the name of the startup? Oasis Labs. Oasis Labs. And so the confidentiality aspect there is even though the transactions are public, you wanna keep some aspect confidential of the identity of the people involved in the transactions? Or what is their hope to keep confidential in this context? So in this case, for example, you want to enable like confidential transactions, even, so there are different essentially types of data that you want to keep private or confidential. And you can utilize different technologies including zero knowledge proofs and also secure computing and techniques and to hide who is making the transactions to whom and the transaction amount. And in our case, also we can enable like confidential smart contracts. And so that you don't know the data and the execution of the smart contract and so on. And we actually are combining these different technologies and going back to the earlier discussion we had, enabling like ownership of data and privacy of data and so on. 
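As a rough illustration of the integrity and proof-of-work ideas discussed above, here is a toy hash-chained ledger: each block commits to the previous block's hash, and a block only counts once a nonce is found that makes its hash meet a difficulty target. Changing an old transaction breaks every later link, which is why an attacker would need to redo a large amount of work, or control a large fraction of the network, to rewrite the log. This is a sketch of the general idea only, with made-up difficulty and transaction formats, not how Bitcoin or Oasis Labs' systems are actually implemented.

```python
# Toy hash-chained ledger with a proof-of-work difficulty target.
import hashlib
import json

DIFFICULTY = 4  # leading zero hex digits required; a toy, assumed value

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(prev_hash, transactions):
    nonce = 0
    while True:
        block = {"prev": prev_hash, "txs": transactions, "nonce": nonce}
        h = block_hash(block)
        if h.startswith("0" * DIFFICULTY):  # proof of work found
            return block, h
        nonce += 1

# Build a tiny chain of two blocks.
genesis, g_hash = mine("0" * 64, ["alice pays bob 5"])
block2, b2_hash = mine(g_hash, ["bob pays carol 2"])

# Tampering with the first block breaks the link: block2 no longer points to it,
# so the edit is detectable unless all later blocks are re-mined.
genesis["txs"] = ["alice pays alice 5"]
print(block_hash(genesis) == block2["prev"])  # False: the chain detects the edit
```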
So at Oasis Labs, we're actually building what we call a platform for responsible data economy to actually combine these different technologies together and to enable secure and privacy preserving computation and also using the library to help provide immutable log of users ownership to their data and the policies they want the data to adhere to, the usage of the data to adhere to and also how the data has been utilized. So all this together can build, we call a distributed secure computing fabric that helps to enable a more responsible data economy. So it's a lot of things together. Yeah, wow, that was eloquent. Okay, you're involved in so much amazing work that we'll never be able to get to, but I have to ask at least briefly about program synthesis, which at least in a philosophical sense captures much of the dreams of what's possible in computer science and the artificial intelligence. First, let me ask, what is program synthesis and can neural networks be used to learn programs from data? So can this be learned? Some aspect of the synthesis can it be learned? So program synthesis is about teaching computers to write code, to program. And I think that's one of our ultimate dreams or goals. I think Andreessen talked about software eating the world. So I say, once we teach computers to write the software, how to write programs, then I guess computers will be eating the world by transitivity. Yeah, exactly. So yeah, and also for me actually, when I shifted from security to more AI machine learning, program synthesis is, program synthesis and adversarial machine learning, these are the two fields that I particularly focus on. Like program synthesis is one of the first questions that I actually started investigating. Just as a question, oh, I guess from the security side, there's a, you're looking for holes in programs, so at least see small connection, but where was your interest for program synthesis? Because it's such a fascinating, such a big, such a hard problem in the general case. Why program synthesis? So the reason for that is actually when I shifted my focus from security into AI machine learning, actually one of my main motivation at the time is that even though I have been doing a lot of work in security and privacy, but I have always been fascinated about building intelligent machines. And that was really my main motivation to spend more time in AI machine learning is that I really want to figure out how we can build intelligent machines. And to help us towards that goal, program synthesis is really one of, I would say the best domain to work on. I actually call it like program synthesis is like the perfect playground for building intelligent machines and for artificial general intelligence. Yeah, well, it's also in that sense, not just a playground, I guess it's the ultimate test of intelligence because I think if you can generate sort of neural networks can learn good functions and they can help you out in classification tasks, but to be able to write programs, that's the epitome from the machine side. That's the same as passing the Turing test in natural language, but with programs, it's able to express complicated ideas to reason through ideas and boil them down to algorithms. Yes, exactly, exactly. Incredible, so can this be learned? How far are we? Is there hope? What are the open challenges? Yeah, very good questions. We are still at an early stage, but already I think we have seen a lot of progress. 
I mean, definitely we have an existence proof, just like humans can write programs. So there's no reason why computers cannot write programs. So I think that's definitely an achievable goal, it's just a question of how long it takes. And even today, the program synthesis community, especially the program synthesis via learning, what we call the neural program synthesis community, is still very small, but the community has been growing and we have seen a lot of progress. And in limited domains, I think actually program synthesis is ripe for real world applications. So actually it was quite amazing. I was giving a talk at a ReWork conference, the ReWork Deep Learning Summit. I had actually given another talk at a previous ReWork conference on deep reinforcement learning. And then I actually met someone from a startup, the CEO of the startup. And when he saw my name, he recognized it. And he actually said one of our papers had actually become a key product in their startup. And that was program synthesis, in that particular case, it was natural language translation, translating natural language descriptions into SQL queries. Oh, wow, that direction, okay. Right, so yeah, so in program synthesis, in limited domains, in well specified domains, actually already we can see really, really great progress and applicability in the real world. So domains like, I mean, as an example, you said natural language, being able to express something through just normal language and it converts it into a database SQL query. Right. And how solved of a problem is that? Because that seems like a really hard problem. Again, in limited domains, actually it can work pretty well. And now this is also a very active domain of research. At the time, I think when he saw our paper, we were the state of the art on that task. And since then, there has been more work, with even more sophisticated data sets. But I wouldn't be surprised if more of this type of technology really gets into the real world. That's exciting. In the near term. Being able to learn in the space of programs is super exciting. I still, yeah, I'm still skeptical because I think it's a really hard problem, but I would love to see progress. And also, in terms of the, you asked about open challenges. I think the domain is full of challenges, and in particular we also want to see how we should measure the progress in the space. And I would say mainly three main, I would say, metrics. So one is the complexity of the program that we can synthesize. And that actually has clear measures, you can just look at the past publications. And even, for example, I was at the recent NeurIPS conference. Now there's actually a fairly sizable session dedicated to program synthesis, which is... Or even neural programs. Right, right, right, which is great. And we continue to see the increase. What does sizable mean? I like the word sizable, it's five people. It's still a small community, but it is growing. And they will all win Turing Awards one day, I like it. Right, so we can clearly see an increase in the complexity of the programs that these... We can synthesize. Sorry, is it the complexity of the actual text of the program or the running time complexity? Which complexity are we... How... The complexity of the task to be synthesized and the complexity of the actual synthesized programs. So the lines of code even, for example. Okay, I got you.
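For a sense of what the natural-language-to-SQL task mentioned above looks like, here is a toy illustration of its input and output. The real systems Dawn refers to learn this mapping with neural models trained on data sets; the hand-written patterns, table names, and columns below are made-up stand-ins for illustration, not her method.

```python
# Toy illustration of the NL-to-SQL task: natural language in, SQL out.
# A learned model would generalize far beyond these two fixed templates.
import re

def toy_nl_to_sql(question):
    q = question.lower()
    # e.g. "how many users are from canada" -> SELECT COUNT(*) ...
    m = re.match(r"how many (\w+) are from (\w+)", q)
    if m:
        table, country = m.groups()
        return f"SELECT COUNT(*) FROM {table} WHERE country = '{country}';"
    # e.g. "list all employees older than 40" -> SELECT * ...
    m = re.match(r"list all (\w+) older than (\d+)", q)
    if m:
        table, age = m.groups()
        return f"SELECT * FROM {table} WHERE age > {age};"
    return None

print(toy_nl_to_sql("How many users are from Canada"))
print(toy_nl_to_sql("List all employees older than 40"))
```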
But it's not the theoretical upper bound of the running time of the algorithm kind of thing. Okay, got it. And you can see the complexity decreasing already. Oh, no, meaning we want to be able to synthesize more and more complex programs, bigger and bigger programs. So we want to see that, we want to increase the complexity of this. I got you, so I have to think through, because I thought of complexity as, you want to be able to accomplish the same task with a simpler and simpler program. I see, I see. No, we are not doing that. It's more about how complex a task we can synthesize programs for. Yeah, got it, being able to synthesize programs, learn them, for more and more difficult tasks. So for example, initially, our first work in program synthesis was to translate natural language descriptions into really simple programs called IFTTT, if this, then that. So given a trigger condition, what is the action you should take? So that program is super simple. You just identify the trigger conditions and the action. And then later on, with SQL queries, it gets more complex. And then also, we started to synthesize programs with loops and, you know. Oh no, and if you could synthesize recursion, it's all over. Right, actually, one of our works is on learning recursive neural programs. Oh no. But anyway, anyway, so that's one, complexity, and the other one is generalization. When we train or learn a program synthesizer, in this case a neural program synthesizer, then you want it to generalize. For a large number of inputs. Right, so to be able to generalize to previously unseen inputs. Got it. And so, right, some of the work we did earlier on learning recursive neural programs actually showed that recursion actually is important to learn. And if you have recursion, then for a certain set of tasks, we can actually show that you can have perfect generalization. So, right, that won a best paper award at ICLR earlier. So that's one example of how we want to learn these neural programs that can generalize better. But that works for certain tasks, certain domains, and there's a question of how we can essentially develop more techniques that can have generalization for a wider set of domains and so on. So that's another area. And then the third challenge, I think, is not just for program synthesis, it's also cutting across other fields in machine learning, including deep reinforcement learning in particular, and that is adaptation, that we want to be able to learn from past tasks and training and so on to be able to solve new tasks. So for example, in program synthesis today, we still are working in the setting where, given a particular task, we train the model to solve this particular task. But that's not how humans work. The whole point is you train a human, and then they can program to solve new tasks. Right, exactly. And just like in deep reinforcement learning, we don't want to just train an agent to play a particular game, whether it's Atari or Go or whatever. We want to train these agents that can essentially extract knowledge from past learning experience to be able to adapt to new tasks and solve new tasks. And I think this is particularly important for program synthesis. Yeah, that's the whole dream of program synthesis, is you're learning a tool that can solve new problems. Right, exactly. And I think that's a particular domain that, as a community, we need to put more emphasis on.
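And here is a small sketch of the IFTTT-style target representation Dawn describes as her first program synthesis task: the synthesizer's output is essentially a trigger condition paired with an action. The channel and field names below are invented for illustration; they are not from the IFTTT service or from her paper, which learned this mapping from natural language rather than hand-writing it.

```python
# Sketch of an "if this, then that" target program: a trigger predicate plus
# an action, which is what a synthesizer would produce from a description
# like "If it starts raining, send me a text message."
from dataclasses import dataclass
from typing import Callable

@dataclass
class IftttProgram:
    trigger: Callable[[dict], bool]   # predicate over an incoming event
    action: Callable[[dict], None]    # what to do when the trigger fires

    def run(self, event: dict) -> None:
        if self.trigger(event):
            self.action(event)

# Hand-written target program for the example description above.
program = IftttProgram(
    trigger=lambda e: e.get("channel") == "weather" and e.get("condition") == "rain",
    action=lambda e: print(f"SMS: take an umbrella, it's raining in {e.get('city')}"),
)

program.run({"channel": "weather", "condition": "rain", "city": "Berkeley"})
```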
And I hope that we can make more progress there as well. Awesome. There's a lot more to talk about. Let me ask that you also had a very interesting and we talked about rich representations. You had a rich life journey. You did your bachelor's in China and your master's and PhD in the United States, CMU in Berkeley. Are there interesting differences? I told you I'm Russian. I think there's a lot of interesting difference between Russia and the United States. Are there in your eyes, interesting differences between the two cultures from the silly romantic notion of the spirit of the people to the more practical notion of how research is conducted that you find interesting or useful in your own work of having experienced both? That's a good question. I think, so I studied in China for my undergraduates and that was more than 20 years ago. So it's been a long time. Is there echoes of that time in you? Things have changed a lot. Actually, it's interesting. I think even more so maybe something that's even be more different for my experience than a lot of computer science researchers and practitioners is that, so for my undergrad, I actually studied physics. Nice, very nice. And then I switched to computer science in graduate school. What happened? Is there another possible universe where you could have become a theoretical physicist at Caltech or something like that? That's very possible, some of my undergrad classmates, then they later on studied physics, got their PhD in physics from these schools, from top physics programs. So you switched to, I mean, from that experience of doing physics in your bachelor's, what made you decide to switch to computer science and computer science at arguably the best university, one of the best universities in the world for computer science with Carnegie Mellon, especially for grad school and so on. So what, second only to MIT, just kidding. Okay, I had to throw that in there. No, what was the choice like and what was the move to the United States like? What was that whole transition? And if you remember, if there's still echoes of some of the spirit of the people of China in you in New York. Right, right, yeah. It's like three questions in one. Yes, I know. I'm sorry. No, that's okay. So yes, so I guess, okay, so first transition from physics to computer science. So when I first came to the United States, I was actually in the physics PhD program at Cornell. I was there for one year and then I switched to computer science and then I was in the PhD program at Carnegie Mellon. So, okay, so the reasons for switching. So one thing, so that's why I also mentioned about this difference in backgrounds about having studied physics first in my undergrad. I actually really, I really did enjoy my undergrad's time and education in physics. I think that actually really helped me in my future work in computer science. Actually, even for machine learning, a lot of the machine learning stuff, the core machine learning methods, many of them actually came from physics. Statistical. For honest, most of everything came from physics. Right, but anyway, so when I studied physics, I was, I think I was really attracted to physics. It was, it's really beautiful. And I actually call it, physics is the language of nature. And I actually clearly remember, like, one moment in my undergrads, like I did my undergrad in Tsinghua and I used to study in the library. And I clearly remember, like, one day I was sitting in the library and I was, like, writing on my notes and so on. 
And I got so excited that I realized that really just from a few simple axioms, a few simple laws, I can derive so much. It's almost like I can derive the rest of the world. Yeah, the rest of the universe. Yes, yes, so that was, like, amazing. Do you think you, have you ever seen or do you think you can rediscover that kind of power and beauty in computer science in the world that you... So, that's very interesting. So that gets to, you know, the transition from physics to computer science. It's quite different. For physics in grad school, actually, things changed. So one is I started to realize that when I started doing research in physics, at the time I was doing theoretical physics. And a lot of it, you still have the beauty, but it's very different. So I had to actually do a lot of the simulation. So essentially I was actually writing, in some cases, writing Fortran code. Good old Fortran, yeah. To actually, right, do simulations and so on. That was not exactly what I enjoyed doing. And also at the time, from talking with the senior students, senior students in the program, I realized many of the students actually were going off to, like, Wall Street and so on. So, and I've always been interested in computer science and actually essentially taught myself C programming. Programming? Right, and so on. At which, when? In college. In college somewhere? In the summer. For fun, physics major, learning to do C programming. Beautiful. Actually it's interesting, in physics at the time, I think now the program probably has changed, but at the time really the only class we had related to computer science education was introduction to, I forgot, to computer science or computing, and Fortran 77. There's a lot of people that still use Fortran. I'm actually, if you're a programmer out there, I'm looking for an expert to talk to about Fortran. They seem to, there's not many, but there's still a lot of people that still use Fortran and still a lot of people that use COBOL. But anyway, so then I realized, instead of just doing programming for doing simulations and so on, that I may as well just change to computer science. And also one thing I really liked, and that's a key difference between the two, is in computer science it's so much easier to realize your ideas. If you have an idea, you write it up, you code it up, and then you can see it actually, right? Exactly. Running and you can see it. You can bring it to life quickly. Bring it to life. Whereas in physics, if you have a good theory, you have to wait for the experimentalists to do the experiments and to confirm the theory, and things just take so much longer. And also the reason in physics I decided to do theoretical physics was because of my experience with experimental physics. First, you have to fix the equipment. You spend most of your time fixing the equipment first. Super expensive equipment, so there's a lot of, yeah, you have to collaborate with a lot of people. Takes a long time. Just takes really, right, much longer. Yeah, it's messy. Right, so I decided to switch to computer science. And one thing I think maybe people have realized is that for people who study physics, actually it's very easy for physicists to change to do something else. I think physics provides a really good training. And yeah, so actually it was fairly easy to switch to computer science.
But one thing, going back to your earlier question, so one thing I actually did realize, so there is a big difference between computer science and physics, where physics you can derive the whole universe from just a few simple laws. And computer science, given that a lot of it is defined by humans, the systems are defined by humans, and it's artificial, like essentially you create a lot of these artifacts and so on. It's not quite the same. You don't derive the computer systems with just a few simple laws. You actually have to see there is historical reasons why a system is built and designed one way versus the other. There's a lot more complexity, less elegant simplicity of E equals MC squared that kind of reduces everything down to those beautiful fundamental equations. But what about the move from China to the United States? Is there anything that still stays in you that contributes to your work, the fact that you grew up in another culture? So yes, I think especially back then it's very different from now. So now they actually, I see these students coming from China, and even undergrads, actually they speak fluent English. It was just amazing. And they have already understood so much of the culture in the US and so on. It was to you, it was all foreign? It was a very different time. At the time, actually, we didn't even have easy access to email, not to mention about the web. I remember I had to go to specific privileged server rooms to use email, and hence, at the time, at the time we had much less knowledge about the Western world. And actually at the time I didn't know, actually in the US, the West Coast weather is much better than the East Coast. Yeah, things like that, actually. It's very interesting. But now it's so different. At the time, I would say there's also a bigger cultural difference, because there was so much less opportunity for shared information. So it's such a different time and world. So let me ask maybe a sensitive question. I'm not sure, but I think you and I are in similar positions. I've been here for already 20 years as well, and looking at Russia from my perspective, and you looking at China. In some ways, it's a very distant place, because it's changed a lot. But in some ways you still have echoes, you still have knowledge of that place. The question is, China's doing a lot of incredible work in AI. Do you see, please tell me there's an optimistic picture you see where the United States and China can collaborate and sort of grow together in the development of AI towards, there's different values in terms of the role of government and so on, of ethical, transparent, secure systems. We see it differently in the United States a little bit than China, but we're still trying to work it out. Do you see the two countries being able to successfully collaborate and work in a healthy way without sort of fighting and making it an AI arms race kind of situation? Yeah, I believe so. I think science has no border, and the advancement of the technology helps everyone, helps the whole world. And so I certainly hope that the two countries will collaborate, and I certainly believe so. Do you have any reason to believe so except being an optimist? So first, again, like I said, science has no borders. And especially in... Science doesn't know borders? Right. And you believe that will, in the former Soviet Union during the Cold War... So that's, yeah. So that's the other point I was going to mention is that especially in academic research, everything is public. 
Like we write papers, we open source codes, and all this is in the public domain. It doesn't matter whether the person is in the US, in China, or some other parts of the world. They can go on archive and look at the latest research and results. So that openness gives you hope. Yes. Me too. And that's also how, as a world, we make progress the best. So, I apologize for the romanticized question, but looking back, what would you say was the most transformative moment in your life that maybe made you fall in love with computer science? You said physics. You remember there was a moment where you thought you could derive the entirety of the universe. Was there a moment that you really fell in love with the work you do now, from security to machine learning, to program synthesis? So maybe, as I mentioned, actually, in college, one summer I just taught myself programming in C. Yes. And you just read a book, and then you're like... Don't tell me you fell in love with computer science by programming in C. Remember I mentioned one of the draws for me to computer science is how easy it is to realize your ideas. So once I read a book, I taught myself how to program in C. Immediately, what did I do? I programmed two games. One's just simple, like it's a Go game, like it's a board, you can move the stones and so on. And the other one, I actually programmed a game that's like a 3D Tetris. It turned out to be a super hard game to play. Because instead of just the standard 2D Tetris, it's actually a 3D thing. But I realized, wow, I just had these ideas to try it out, and then, yeah, you can just do it. And so that's when I realized, wow, this is amazing. Yeah, you can create yourself. Yes, yes, exactly. From nothing to something that's actually out in the real world. So let me ask... Right, I think with your own hands. Let me ask a silly question, or maybe the ultimate question. What is to you the meaning of life? What gives your life meaning, purpose, fulfillment, happiness, joy? Okay, these are two different questions. Very different, yeah. It's usually that you ask this question. Maybe this question is probably the question that has followed me and followed my life the most. Have you discovered anything, any satisfactory answer for yourself? Is there something you've arrived at? You know, there's a moment... I've talked to a few people who have faced, for example, a cancer diagnosis, or faced their own mortality, and that seems to change their view of them. It seems to be a catalyst for them removing most of the crap. Of seeing that most of what they've been doing is not that important, and really reducing it into saying, like, here's actually the few things that really give meaning. Mortality is a really powerful catalyst for that, it seems like. Facing mortality, whether it's your parents dying or somebody close to you dying, or facing your own death for whatever reason, or cancer and so on. So yeah, so in my own case, I didn't need to face mortality, too. So try to ask that question. And I think there are a couple things. So one is, like, who should be defining the meaning of your life, right? Is there some kind of even greater things than you who should define the meaning of your life? So for example, when people say that searching the meaning for your life, is there some outside voice, or is there something outside of you who actually tells you, you know... So people talk about, oh, you know, this is what you have been born to do, right? Like, this is your destiny. 
So who, right, so that's one question, like, who gets to define the meaning of your life? Should you be finding some other things, some other factor to define this for you? Or is something actually, it's just entirely what you define yourself, and it can be very arbitrary. Yeah, so an inner voice or an outer voice, whether it could be spiritual or religious, too, with God, or some other components of the environment outside of you, or just your own voice. Do you have an answer there? So, okay, so for that, I have an answer. And through, you know, the long period of time of thinking and searching, even searching through outsides, right, you know, voices or factors outside of me. So that, I have an answer. I've come to the conclusion and realization that it's you yourself that defines the meaning of life. Yeah, that's a big burden, though, isn't it? I mean, yes and no, right? So then you have the freedom to define it. Yes. And another question is, like, what does it really mean by the meaning of life? Right. And also, whether the question even makes sense. Absolutely, and you said it somehow distinct from happiness. So meaning is something much deeper than just any kind of emotional, any kind of contentment or joy or whatever. It might be much deeper. And then you have to ask, what is deeper than that? What is there at all? And then the question starts being silly. Right, and also you can say it's deeper, but you can also say it's shallower, depending on how people want to define the meaning of their life. So for example, most people don't even think about this question. Then the meaning of life to them doesn't really matter that much. And also, whether knowing the meaning of life, whether it actually helps your life to be better or whether it helps your life to be happier, these actually are open questions. It's not, right? Of course, most questions are open. I tend to think that just asking the question, as you mentioned, as you've done for a long time, is the only, that there is no answer. And asking the question is a really good exercise. I mean, I have this, for me personally, I've had a kind of feeling that creation is, like for me has been very fulfilling. And it seems like my meaning has been to create. And I'm not sure what that is. Like I don't have, I'm single and I don't have kids. I'd love to have kids, but I also, sounds creepy, but I also see sort of, you said see programs. I see programs as little creations. I see robots as little creations. I think those bring, and then ideas, theorems are creations. And those somehow intrinsically, like you said, bring me joy. I think they do to a lot of, at least scientists, but I think they do to a lot of people. So that, to me, if I had to force the answer to that, I would say creating new things yourself. For you. For me, for me, for me. I don't know, but like you said, it keeps changing. Is there some answer that? And some people, they can, I think, they may say it's experience, right? Like their meaning of life, they just want to experience to the richest and fullest they can. And a lot of people do take that path. Yes, seeing life as actually a collection of moments and then trying to make the richest possible sets, fill those moments with the richest possible experiences. Right. And for me, I think it's certainly, we do share a lot of similarity here. So creation is also really important for me, even from the things I've already talked about, even like writing papers, and these are all creations as well. 
And I have not quite thought whether that is really the meaning of my life. Like in a sense, also then maybe like, what kind of things should you create? There are so many different things that you could create. And also you can say, another view is maybe growth. It's related, but different from experience. Growth is also maybe type of meaning of life. It's just, you try to grow every day, try to be a better self every day. And also ultimately, we are here, it's part of the overall evolution. Right, the world is evolving and it's growing. Isn't it funny that the growth seems to be the more important thing than the thing you're growing towards. It's like, it's not the goal, it's the journey to it. It's almost when you submit a paper, there's a sort of depressing element to it, not to submit a paper, but when that whole project is over. I mean, there's the gratitude, there's the celebration and so on, but you're usually immediately looking for the next thing or the next step, right? It's not that, the end of it is not the satisfaction, it's the hardship, the challenge you have to overcome, the growth through the process. It's somehow probably deeply within us, the same thing that drives the evolutionary process is somehow within us, with everything the way we see the world. Since you're thinking about these, so you're still in search of an answer. I mean, yes and no, in the sense that I think for people who really dedicate time to search for the answer to ask the question, what is the meaning of life? It does not necessarily bring you happiness. Yeah. It's a question, we can say, right? Like whether it's a well defined question. And, but on the other hand, given that you get to answer it yourself, you can define it yourself, then sure, I can just give it an answer. And in that sense, yes, it can help. Like we discussed, right? If you say, oh, then my meaning of life is to create or to grow, then yes, then I think they can help. But how do you know that that is really the meaning of life or the meaning of your life? It's like there's no way for you to really answer the question. Sure, but something about that certainty is liberating. So it might be an illusion, you might not really know, you might be just convincing yourself falsely, but being sure that that's the meaning, there's something liberating in that. There's something freeing in knowing this is your purpose. So you can fully give yourself to that. Without, you know, for a long time, you know, I thought like, isn't it all relative? Like why, how do we even know what's good and what's evil? Like isn't everything just relative? Like how do we know, you know, the question of meaning is ultimately the question of why do anything? Why is anything good or bad? Why is anything valuable and so on? Exactly. Then you start to, I think just like you said, I think it's a really useful question to ask, but if you ask it for too long and too aggressively. It may not be so productive. It may not be productive and not just for traditionally societally defined success, but also for happiness. It seems like asking the question about the meaning of life is like a trap. We're destined to be asking. We're destined to look up to the stars and ask these big why questions we'll never be able to answer, but we shouldn't get lost in them. I think that's probably the, that's at least the lesson I picked up so far. On that topic. Oh, let me just add one more thing. So it's interesting. So sometimes, yes, it can help you to focus. 
So when I shifted my focus more from security to AI and machine learning, at the time, actually one of the main reasons that I did that was because at the time, I thought the meaning of my life and the purpose of my life is to build intelligent machines. And that's, and then your inner voice said that this is the right, this is the right journey to take to build intelligent machines and that you actually fully realize you took a really legitimate big step to become one of the world class researchers to actually make it, to actually go down that journey. Yeah, that's profound. That's profound. I don't think there's a better way to end a conversation than talking for a while about the meaning of life. Dawn is a huge honor to talk to you. Thank you so much for talking today. Thank you, thank you. Thanks for listening to this conversation with Dawn Song and thank you to our presenting sponsor, Cash App. Please consider supporting the podcast by downloading Cash App and using code LexPodcast. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at LexFriedman. And now let me leave you with some words about hacking from the great Steve Wozniak. A lot of hacking is playing with other people, you know, getting them to do strange things. Thank you for listening and hope to see you next time.
Dawn Song: Adversarial Machine Learning and Computer Security | Lex Fridman Podcast #95
The following is a conversation with Stephen Schwarzman, CEO and cofounder of Blackstone, one of the world's leading investment firms with over $530 billion of assets under management. He's one of the most successful business leaders in history. I recommend his recent book called What It Takes that tells stories and lessons from his personal journey. Stephen is a philanthropist and one of the wealthiest people in the world, recently signing the Giving Pledge, thereby committing to give the majority of his wealth to philanthropic causes. As an example, in 2018, he donated $350 million to MIT to help establish its new College of Computing, the mission of which promotes interdisciplinary, big, bold research in artificial intelligence. Those of you who know me know that MIT is near and dear to my heart and always will be. It was and is a place where I believe big, bold, revolutionary ideas have a home, and that is what is needed in artificial intelligence research in the coming decades. Yes, there are institutional challenges, but also there's power in the passion of individual researchers, from undergrad to PhD, from young scientists to senior faculty. I believe the dream to build intelligent systems burns brighter than ever in the halls of MIT. This conversation was recorded recently, but before the outbreak of the pandemic. For everyone feeling the burden of this crisis, I'm sending love your way. Stay strong, we're in this together. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do a few minutes of ads now, and never any ads in the middle that can break the flow of the conversation. I hope that works for you, and doesn't hurt the listening experience. Quick summary of the ads. Two sponsors, Masterclass and ExpressVPN. Please consider supporting the podcast by signing up to Masterclass at masterclass.com slash lex, and getting ExpressVPN at expressvpn.com slash lexpod. This show is sponsored by Masterclass. Sign up at masterclass.com slash lex to get a discount and support this podcast. When I first heard about Masterclass, I thought it was too good to be true. For $180 a year, you get an all access pass to watch courses from, to list some of my favorites, Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, creator of SimCity and The Sims, on game design, Carlos Santana on guitar, Garry Kasparov on chess, Daniel Negreanu on poker, and many, many more. Chris Hadfield explaining how rockets work, and the experience of being launched into space alone, is worth the money. By the way, you can watch it on basically any device. Once again, sign up at masterclass.com slash lex to get a discount and to support this podcast. This show is sponsored by ExpressVPN. Get it at expressvpn.com slash lexpod to get a discount and to support this podcast. I've been using ExpressVPN for many years. I love it. It's easy to use, press the big power on button, and your privacy is protected. And, if you like, you can make it look like your location is anywhere else in the world. I might be in Boston now, but I can make it look like I'm in New York, London, Paris, or anywhere else in the world. This has a large number of obvious benefits. Certainly, it allows you to access international versions of streaming websites like the Japanese Netflix or the UK Hulu.
ExpressVPN works on any device you can imagine. I use it on Linux, shout out to Ubuntu 2004, Windows, Android, but it's available everywhere else too. Once again, get it at expressvpn.com slash lex pod to get a discount and to support this podcast. And now, here's my conversation with Stephen Schwarzman. Let's start with a tough question. What idea do you believe, whether grounded in data or in intuition, that many people you respect disagree with you on? Well, there isn't all that much anymore since the world's so transparent. But one of the things I believe in and put it in the book, the book, what it takes is if you're gonna do something, do something very consequential. Do something that's quite large, if you can, that's unique. Because if you operate in that kind of space, when you're successful, it's a huge impact. The prospect of success enables you to recruit people who wanna be part of that. And those type of large opportunities are pretty easily described. And so, not everybody likes to operate at scale. Some people like to do small things because it is meaningful for them emotionally. And so, occasionally, you get a disagreement on that. But those are life choices rather than commercial choices. That's interesting. What good and bad comes with going big? We often, in America, think big is good. What's the benefit, what's the cost in terms of just bigger than business, but life, happiness, the pursuit of happiness? Well, you do things that make you happy. It's not mandated. And everybody's different. And some people, if they have talent, like playing pro football, other people just like throwing the ball around, not even being on a team. What's better? Depends what your objectives are. Depends what your talent is. Depends what gives you joy. So, in terms of going big, is it both for impact on the world and because you personally gives you joy? Well, it makes it easier to succeed, actually. Because if you catch something, for example, that's cyclical, that's a huge opportunity, then you usually can find some place within that huge opportunity where you can make it work. If you're prosecuting a really small thing and you're wrong, you don't have many places to go. So, I've always found that the easy place to be and the ability where you can concentrate human resources, get people excited about doing really impactful big things, and you can afford to pay them, actually. Because the bigger thing can generate much more in the way of financial resources. So, that brings people out of talent to help you. And so, all together, it's a virtuous circle, I think. How do you know an opportunity when you see one in terms of the one you wanna go big on? Is it intuition, is it facts? Is it back and forth deliberation with people you trust? What's the process? Is it art, is it science? Well, it's pattern recognition. And how do you get to pattern recognition? First, you need to understand the patterns and the changes that are happening. And that's either, it's observational on some level. You can call it data or you can just call it listening to unusual things that people are saying that they haven't said before. And I've always tried to describe this. It's like seeing a piece of white lint on a black dress. But most people disregard that piece of lint. They just see the dress. I always see the lint. And I'm fascinated by how did something get someplace it's not supposed to be? So, it doesn't even need to be a big discrepancy. 
But if something shouldn't be someplace in a constellation of facts that sort of made sense in a traditional way, I've learned that if you focus on why one discordant note is there, that's usually a key to something important. And if you can find two of those discordant notes, that's usually a straight line to someplace. And that someplace is not where you've been. And usually when you figure out that things are changing or have changed and you describe them, which you have to be able to do because it's not some odd intuition. It's just focusing on facts. It's almost like a scientific discovery, if you will. When you describe it to other people in the real world, they tend to do absolutely nothing about it. And that's because humans are comfortable in their own reality. And if there's no particular reason at that moment to shake them out of their reality, they'll stay in it even if they're ultimately completely wrong. And I've always been stunned that when I explain where we're going, what we're doing and why, almost everyone just says, that's interesting. And they continue doing what they're doing. And so I think it's pretty easy to do that. But what you need is a huge data set. So before AI and people's focus on data, I've sort of been doing this mostly my whole life. I'm not a scientist, I'm not let alone a computer scientist. And you can just hear what people are saying when somebody says something or you observe something that simply doesn't make sense. That's when you really go to work. The rest of it's just processing. You know, on a quick tangent, pattern recognition is a term often used throughout the history of AI. That's the goal of artificial intelligence is pattern recognition, right? But there's, I would say, various flavors of that. So usually pattern recognition refers to the process of the, we said dress and the lint on the dress. Pattern recognition is very good at identifying the dress as looking at the pattern that's always there, that's very common and so on. You almost refer to a pattern that's like in what's called outlier detection in computer science, right, the rare thing, the small thing. Now, AI is not often good at that. Do you, just almost philosophically, the kind of decisions you made in your life based scientifically almost on data, do you think AI in the future will be able to do? Is it something that could be put down into code or is it still deeply human? It's tough for me to say since I don't have domain knowledge in AI to know everything that could or might occur. I know, sort of in my own case, that most people don't see any of that. I just assumed it was motivational, you know, but it's also sort of, it's hardwiring. What are you wired or programmed to be finding or looking for? It's not what happens every day. That's not interesting, frankly. I mean, that's what people mostly do. I do a bunch of that too because, you know, that's what you do in normal life. But I've always been completely fascinated by the stuff that doesn't fit. Or the other way of thinking about it, it's determining what people want without them saying it. That's a different kind of pattern. You can see everything they're doing. There's a missing piece. They don't know it's missing. You think it's missing given the other facts. You know about them and you deliver that and then that becomes, you know, sort of very easy to sell to them. 
To linger on this point a little bit, you've mentioned that in your family, when you were growing up, nobody raised their voice in anger or otherwise. And you said that this allows you to learn to listen and hear some interesting things. Can you elaborate as you have been on that idea, what do you hear about the world if you listen? Well, you have to listen really intensely to understand what people are saying as well as what people are intending because it's not necessarily the same thing. And people mostly give themselves away no matter how clever they think they are. Particularly if you have the full array of inputs. In other words, if you look at their face, you look at their eyes, which are the window on the soul, it's very difficult to conceal what you're thinking. You look at facial expressions and posture. You listen to their voice, which changes. You know, when you're talking about something you're comfortable with or not, are you speaking faster? Is the amplitude of what you're saying higher? Most people just give away what's really on their mind. You know, they're not that clever. They're busy spending their time thinking about what they're in the process of saying. And so if you just observe that, not in a hostile way, but just in an evocative way and just let them talk for a while, they'll more or less tell you almost completely what they're thinking, even the stuff they don't want you to know. And once you know that, of course, it's sort of easy to play that kind of game because they've already told you everything you need to know. And so it's easy to get to a conclusion if there's meant to be one, an area of common interest, since you know almost exactly what's on their mind. And so that's an enormous advantage as opposed to just walking in someplace and somebody telling you something and you believing what they're saying. There are so many different levels of communication. So a powerful approach to life you discuss in the book on the topic of listening and really hearing people is figuring out what the biggest problem, bothering a particular individual or group is and coming up with a solution to that problem and presenting them with a solution, right? In fact, you brilliantly describe a lot of simple things that most people just don't do. It's kind of obvious, find the problem that's bothering somebody deeply. And as you said, I think you've implied that they will usually tell you what the problem is, but can you talk about this process of seeing what the biggest problem for a person is, trying to solve it, and maybe a particularly memorable example? Sure, if you know you're gonna meet somebody, there are two types of situations, chance meetings, and the second is you know you're gonna meet somebody. So let's take the easiest one, which is you know you're gonna meet somebody. And you start trying to make pretend you're them. It's really easy. What's on their mind? What are they thinking about in their daily life? What are the big problems they're facing? So if they're, you know, to make it a really easy example, you know, make pretend, you know, they're like president of the United States. Doesn't have to be this president, could be any president. So you sort of know what's more or less on their mind because the press keeps reporting it. And you see it on television, you hear it. People discuss it. So you know if you're gonna be running into somebody in that kind of position. You sort of know what they look like already. You know what they sound like. 
You know what their voice is like. And you know what they're focused on. And so if you're gonna meet somebody like that, what you should do is take the biggest unresolved issue that they're facing and come up with a few interesting solutions that basically haven't been out there. Or that you haven't heard anybody else always thinking about. So just to give you an example, I was sort of in the early 1990s and I was invited to something at the White House which was a big deal for me because I was like, you know, a person from no place. And you know, I had met the president once before because it was President Bush because his son was in my dormitory. So I had met him at Parents Day. I mean it's just like the oddity of things. So I knew I was gonna see him because that's where the invitation came from. And so there was something going on and I just thought about two or three ways to approach that issue. And you know, at that point I was separated and so I had brought a date to the White House and so I saw the president and we sort of went over in a corner for about 10 minutes and discussed whatever this issue was. And I later went back to my date. It was a little rude but it was meant to be confidential conversation and I barely knew her. And you know, she said, what were you talking about all that time? I said, well, you know, there's something going on in the world and I've thought about different ways of perhaps approaching that and he was interested. And the answer is of course he was interested. Why wouldn't he be interested? There didn't seem to be an easy outcome. And so, you know, conversations of that type, once somebody knows you're really thinking about what's good for them and good for the situation, it has nothing to do with me. I mean, it's really about being in service, you know, to the situation. Then people trust you and they'll tell you other things because they know your motives are basically very pure. You're just trying to resolve a difficult situation or help somebody do it. So these types of things, you know, that's a planned situation, that's easy. Sometimes you just come upon somebody and they start talking and you know, that requires, you know, like different skills. You know, you can ask them, what have you been working on lately? What are you thinking about? You can ask them, you know, has anything been particularly difficult? And you know, you can ask most people if they trust you for some reason, they'll tell you. And then you have to instantly go to work on it. And you know, that's not as good as having some advanced planning, but you know, almost everything going on is like out there. And people who are involved with interesting situations, they're playing in the same ecosystem. They just have different roles in the ecosystem. And you know, you could do that with somebody who owns a pro football team that loses all the time. We specialize in those in New York. And you know, you already have analyzed why they're losing, right? Inevitably, it's because they don't have a great quarterback, they don't have a great coach, and they don't have a great general manager who knows how to hire the best talent. Those are the three reasons why a team fails, right? Because there are salary caps, so every team pays a certain amount of money for all their players. So it's gotta be those three positions. So if you're talking with somebody like that, inevitably, even though it's not structured, you'll know how their team's doing and you'll know pretty much why. 
And if you start asking questions about that, they're typically very happy to talk about it because they haven't solved that problem. In some cases, they don't even know that's the problem. It's pretty easy to see it. So, you know, I do stuff like that, which I find is intuitive as a process, but, you know, leads to really good results. Well, the funny thing is when you're smart, for smart people, it's hard to escape their own ego and the space of their own problems, which is what's required to think about other people's problems. It requires for you to let go of the fact that your own problems are all important and then to talk about your, I think while it seems obvious and I think quite brilliant, it's just a difficult leap for many people, especially smart people, to empathize with, truly empathize with the problems of others. Well, I have a competitive advantage, which is, I don't think I'm so smart. So, you know, it's not a problem for me. Well, the truly smartest people I know say that exact same thing. Yeah, being humble is really useful, competitive advantage, as you said. How do you stay humble? Well, I haven't changed much. Since? Since I was in my mid teens. You know, I was raised partly in the city and partly in the suburbs. And, you know, whatever the values I had at that time, those are still my values. I call them like middle class values, that's how I was raised. And I've never changed, why would I? That's who I am. And so the accoutrement of, you know, the rest of your life has gotta be put on the same, you know, like solid foundation of who you are. Because if you start losing who you really are, who are you? So I've never had the desire to be somebody else. I just do other things now that I wouldn't do as a, you know, sort of as a middle class kid from Philadelphia. I mean, my life has morphed on a certain level. But part of the strength of having integrity of personality is that you can remain in touch with everybody who comes from that kind of background. And, you know, even though I do some things that aren't like that, you know, in terms of people I meet or situations I'm in, I always look at it through the same lens. And that's very psychologically comfortable and doesn't require me to make any real adjustments in my life and I just keep plowing ahead. There's a lot of activity in progress in recent years around effective altruism. I wanted to bring this topic with you because it's an interesting one from your perspective. You can put it in any kind of terms, but it's philanthropy that focuses on maximizing impact. How do you see the goal of philanthropy, both from a personal motivation perspective and the societal big picture impact perspective? Yeah, I don't think about philanthropy the way you would expect me to, okay? I look at, you know, sort of solving big issues, addressing big issues, starting new organizations to do it, much like we do in our business. You know, we keep growing our business not by taking the original thing and making it larger, but continually seeing new things and building those. And, you know, sort of marshaling financial resources, human resources, and in our case, because we're in the investment business, we find something new that looks like it's gonna be terrific and we do that and it works out really well. All I do in what you would call philanthropy is look at other opportunities to help society. 
And I end up starting something new, marshaling people, marshaling a lot of money, and then at the end of that kind of creative process, somebody typically asks me to write a check. I don't wake up and say, how can I give large amounts of money away? I look at issues that are important for people. In some cases, I do smaller things. Because it's important to a person, and, you know, I can relate to that person. There's some unfairness that's happened to them. And so in situations like that, I'd give money anonymously and help them out. And, you know, it's like a miniature version of addressing something really big. So, you know, at MIT, I've done a big thing, you know, helping to start this new school of computing. And I did that because, you know, I saw that, you know, there's sort of like a global race on in AI, quantum, and other major technologies. And I thought that the US could use more enhancement from a competitive perspective. And I also, because I get to China a lot and I travel around a lot compared to a regular person, you know, I can see the need to have control of these types of technologies. So when they're introduced, we don't create a mess like we did with the internet and with social media. Unintended consequences, you know, that are creating all kinds of issues with freedom of speech and the functioning of liberal democracies. So with AI, it was pretty clear that there was enormous difference of views around the world by the relatively few practitioners in the world who really knew what was going on. And by accident, I knew a bunch of these people, you know, who were like big famous people. And I could talk to them and say, why do you think this is a force for bad? And someone else, why do you feel this is a force for good? And how do we move forward with the technology while at the same time making sure that whatever is potentially, you know, sort of on the bad side of this technology with, you know, for example, disruption of workforces and things like that, that could happen much faster than the industrial revolution. What do we do about that? And how do we keep that under control so that the really good things about these technologies, which will be great things, not just good things, are allowed to happen? So to me, you know, this was one of the great issues facing society. The number of people who were aware of it was very small. I just accidentally got sucked into it. And as soon as I saw it, I went, oh my God, this is mega, both on a competitive basis globally, but also in terms of protecting society and benefiting society. So that's how I got involved. And at the end, you know, sort of the right thing that we figured out was, you know, sort of double MIT's computer science faculty and basically create the first AI enabled university in the world. And, you know, in effect, be an example, a beacon to the rest of the research community around the world academically, and create, you know, a much more robust U.S. situation, competitive situation among the universities. Because if MIT was going to raise a lot of money and double its faculty, well, you could bet that, you know, a number of other universities were going to do the same thing. At the end of it, it would be great for knowledge creation, you know, great for the United States, great for the world. And so I like to do things that I think are really positive, things that other people aren't acting on, that I see for whatever reason.
First, it's just people I meet and what they say, and I can recognize when something really profound is about to happen or needs to. And I do it, and at the end of the situation, somebody says, can you write a check to help us? And then the answer is sure. I mean, because if I don't, the vision won't happen. But it's the vision of whatever I do that is compelling. And essentially, I love that idea of whether it's small at the individual level or really big, like the gift to MIT to launch the College of Computing. It starts with a vision, and you see philanthropy as, the biggest impact you can have is by launching something new, especially on an issue that others aren't really addressing. And I also love the notion, and you're absolutely right, that there's other universities, Stanford, CMU, I'm looking at you, that would essentially, the seed will create other, it'll have a ripple effect that potentially might help US be a leader or continue to be a leader in AI. It's potentially a very transformative research direction. Just to linger on that point a little bit, what is your hope long term for the impact the college here at MIT might have in the next five, 10, even 20, or let's get crazy, 30, 50 years? Well, it's very difficult to predict the future when you're dealing with knowledge production and creativity. MIT has, obviously, some unique aspects. Globally, there's four big academic surveys. I forget whether it was QS, there's the Times in London, the US News, and whatever. And one of these recently, MIT, was ranked number one in the world. So leave aside whether you're number three somewhere else, in the great sweep of humanity, this is pretty amazing. So you have a really remarkable aggregation of human talent here. And where it goes, it's hard to tell. You have to be a scientist to have the right feel. But what's important is you have a critical mass of people. And I think it breaks into two buckets. One is scientific advancement. And if the new college can help either serve as a convening force within the university or help coordination and communication among people, that's a good thing, absolute good thing. The second thing is in the AI ethics area, which is, in a way, equally important. Because if the science side creates blowback so that science is a bit crippled in terms of going forward because society's reaction to knowledge advancement in this field becomes really hostile, then you've sort of lost the game in terms of scientific progress and innovation. And so the AI ethics piece is super important because in a perfect world, MIT would serve as a global convener. Because what you need is you need the research universities. You need the companies that are driving AI and quantum work. You need governments who will ultimately be regulating certain elements of this. And you also need the media to be knowledgeable and trained so we don't get overreactions to one situation, which then goes viral and it ends up shutting down avenues that are perfectly fine to be walking down or running down that avenue. But if enough discordant information, not even correct necessarily, sort of gets pushed around society, then you can end up with a really hostile regulatory environment and other things. So you have four drivers that have to be sort of integrated. And so if the new school of computing can be really helpful in that regard, then that's a real service to science. And it's a service to MIT. So that's why I wanted to get involved for both areas. 
And the hope, for me, for others, for everyone, for the world, is for this particular college of computing to be a beacon and a connector for these ideas. Yeah, that's right. I mean, I think MIT is perfectly positioned to do that. So you've mentioned the media, social media, the internet as this complex network of communication with flaws, perhaps, perhaps you can speak to them. But I personally think that science and technology has its flaws, but ultimately is, one, sexy, exciting. It's the way for us to explore and understand the mysteries of our world. And two, perhaps more importantly for some people, it's a huge way to, a really powerful way to grow the economy, to improve the quality of life for everyone. So how do we get, how do you see the media, social media, the internet as a society having a healthy discourse about science, first of all, one that's factual and two, one that finds science exciting, that invests in science, that pushes it forward, especially in this science fiction, fear filled field of artificial intelligence? Well, I think that's a little above my pay grade because trying to control social media to make it do what you want to do appears to be beyond almost anybody's control. And the technology is being used to create what I call the tyranny of the minorities. A minority is defined as two or three people on a street corner. Doesn't matter what they look like. Doesn't matter where they came from. They're united by that one issue that they care about. And their job is to enforce their views on the world. And in the political world, people just are manufacturing truth. And they throw it all over. And it affects all of us. And sometimes people are just hired to do that. It's amazing. And you think it's one person. It's really just sort of a front for a particular point of view. And this has become exceptionally disruptive for society. And it's dangerous. And it's undercutting the ability of liberal democracies to function. And I don't know how to get a grip on this. And I was really surprised when I was up here for the announcement last spring of the College of Computing. And they had all these famous scientists, some of whom were involved with the invention of the internet. And almost every one of them got up and said, I think I made a mistake. And as a non scientist, I never thought I'd hear anyone say that. And what they said is, more or less, to make it simple, we thought this would be really cool inventing the internet. We could connect everyone in the world. We can move knowledge around. It was instantaneous. It's a really amazing thing. He said, I don't know that there was anyone who ever thought about social media coming out of that and the actual consequences for people's lives. There's always some younger person. I just saw one of these yesterday, reported on the national news, who killed himself when people used social media to basically sort of ridicule him or something of that type. This kid is dead. This is dangerous. And so I don't have a solution for that other than, going forward, you can end up with this type of outcome using AI. To make this kind of mistake twice is unforgivable. So interestingly, at least in the West and parts of China, people are quite sympathetic to the whole concept of AI ethics and what gets introduced when and cooperation within your own country, within your own industry, as well as globally to make sure that the technology is a force for good. And on that really interesting topic.
Since 2007, you've had a relationship with senior leadership, with a lot of people in China, and an interest in understanding modern China, their culture, their world, much like with Russia. I'm from Russia originally. Americans are told a very narrow, one sided story about China that I'm sure misses a lot of fascinating complexity, both positive and negative. What lessons about Chinese culture, its ideas as a nation, its future do you think Americans should know about, deliberate on, think about? Well, it's sort of a wide question that you're asking about. China is a pretty unusual place. First, it's huge. It's physically huge. It's got a billion three people. And the character of the people isn't as well understood in the United States. Chinese people are amazingly energetic. If you're one of a billion three people, one of the things you've got to be focused on is how do you make your way through a crowd of a billion 2.99999 other people. No, the word for that is competitive. Yes, they are individually highly energetic, highly focused, always looking for some opportunity for themselves because they need to, because there's an enormous amount of just literally people around. And so what I've found is they'll try and find a way to win for themselves. And their country is complicated because it basically doesn't have the same kind of functional laws that we do in the United States and the West. And the country is controlled really through a web of relationships you have with other people and the relationships that those other people have with other people. So it's an incredibly dynamic culture where if somebody knocks off somebody at the top who's three levels above you and is, in effect, protecting you, then you're like a floating molecule there without tethering except the one or two layers above you. But that's going to get affected. So it's a very dynamic system. And getting people to change is not that easy because if there aren't really functioning laws, it's only the relationships that everybody has. And so when you decide to make a major change and you sign up for it, something is changing in your life. There won't necessarily be all the same people on your team. And that's a very high risk enterprise. So when you're dealing with China, it's important to know almost what everybody's relationship is with somebody. So when you suggest doing something differently, you line up these forces. In the West, it's usually you talk to a person and they figure out what's good for them. It's a lot easier. And in that sense, in a funny way, it's easier to make change in the West, just the opposite of what people think. But once the Chinese system adjusts to something that's new, everybody's on the team. It's hard to change them. But once they're changed, they are incredibly focused in a way that it's hard for the West to do in a more individualistic culture. So there are all kinds of fascinating things. One thing that might interest the people who are listening who are more technologically based than some other group: I was with one of the top people in the government a few weeks ago, and he was telling me that every school child in China is going to be taught computer science. Now, imagine 100% of these children. This is such a large number of human beings. Now, that doesn't mean that every one of them will be good at computer science. But if it's sort of like in the West, if it's like math or English, everybody's going to take it. Not everybody's great at English. They don't write books.
They don't write poetry. And not everybody's good at math. Somebody like myself, I sort of evolved to the third grade, and I'm still doing flashcards. I didn't make it further in math. But imagine everybody in their society is going to be involved with computer science. I'd just even pause on that. I think computer science involves, at the basic beginner level, programming. And the idea that everybody in the society would have some ability to program a computer is incredible. For me, it's incredibly exciting, and I think that should give the United States pause and consider what... Talking about sort of philanthropy and launching things, there's nothing like launching, sort of investing in young youth, the education system, because that's where everything launches. Yes. Well, we've got a complicated system because we have over 3,000 school districts around the country. China doesn't worry about that as a concept. They make a decision at the very top of the government that that's what they want to have happen, and that is what will happen. And we're really handicapped by this distributed power in the education area, although some people involved with that area will think it's great. But you would know better than I do what percent of American children have computer science exposure. My guess, no knowledge, would be 5% or less. And if we're going to be going into a world where the other major economic power, sort of like ourselves, has got like 100% and we got 5%, and the whole computer science area is the future, then we're purposely or accidentally actually handicapping ourselves, and our system doesn't allow us to adjust quickly to that. So, you know, issues like this I find fascinating. And, you know, if you're lucky enough to go to other countries, which I do, and you learn what they're thinking, then it informs what we ought to be doing in the United States. So the current administration, Donald Trump, has released an executive order on artificial intelligence. Not sure if you're familiar with it. In 2019, looking several years ahead, how does America sort of, we've mentioned in terms of the big impact, we hope your investment in MIT will have a ripple effect, but from a federal perspective, from a government perspective, how does America establish, with respect to China, leadership in the world at the top for research and development in AI? I think that you have to get the federal government in the game in a big way, and that this leap forward technologically, which is going to happen with or without us, you know, really should be with us, and it's an opportunity, in effect, for another moonshot kind of mobilization by the United States. I think the appetite actually is there to do that. At the moment, what's getting in the way is the kind of poisonous politics we have, but if you go below the lack of cooperation, which is almost the defining element of American democracy right now in the Congress, if you talk to individual members, they get it, and they would like to do something. Another part of the issue is we're running huge deficits. We're running trillion dollar plus deficits. So how much money do you need for this initiative? Where does it come from? Who's prepared to stand up for it? Because if it involves taking away resources from another area, our political system is not real flexible. To do that, if you're creating this kind of initiative, which we need, where does the money come from? 
And trying to get money when you've got trillion dollar deficits, in a way, could be easy. What's the difference of a trillion and a trillion and a little more? But, you know, it's hard with the mechanisms of Congress. But what's really important is this is not an issue that is unknown, and it's viewed as a very important issue. And there's almost no one in the Congress when you sit down and explain what's going on who doesn't say, we've got to do something. Let me ask the impossible question. You didn't endorse Donald Trump, but after he was elected, you have given him advice, which seems to me a great thing to do, no matter who the president is, to positively contribute to this nation by giving advice. And yet, you've received a lot of criticism for this. So on the previous topic of science and technology and government, how do we have a healthy discourse, give advice, get excited conversation with the government about science and technology without it becoming politicized? Well, it's very interesting. So when I was young, before there was a moonshot, we had a president named John F. Kennedy from Massachusetts here. And in his inaugural address as president, he asked not what your country can do for you, but what you can do for your country. We had a generation of people my age, basically people, who grew up with that credo. And sometimes you don't need to innovate. You can go back to basic principles. And that's good basic principle. What can we do? Americans have GDP per capita of around $60,000. It's not equally distributed, but it's big. And people have, I think, an obligation to help their country. And I do that. And apparently, I take some grief from some people who project on me things I don't even vaguely believe. But I'm quite simple. I tried to help the previous president, President Obama. He was a good guy. And he was a different party. And I tried to help President Bush. And he's a different party. And I sort of don't care that much about what the parties are. I care about, even though I'm a big donor for the Republicans, but what motivates me is, what are the problems we're facing? Can I help people get to a good outcome that will stand any test? But we live in a world now where the filters and the hostility is so unbelievable. In the 1960s, when I went to school and university, I went to Yale, we had so much stuff going on. We had a war called the Vietnam War. We had sort of black power starting. And we had a sexual revolution with the birth control pill. And there was one other major thing going on, the drug revolution. There hasn't been a generation that had more stuff going on in a four year period than my era. Yet, there wasn't this kind of instant hostility if you believed something different. Everybody lived together and respected the other person. And I think that this type of change needs to happen. And it's got to happen from the leadership of our major institutions. And I don't think that leaders can be bullied by people who are against sort of the classical version of free speech and letting open expression and inquiry. That's what universities are for, among other things, Socratic methods. And so I have, in the midst of this onslaught of oddness, I believe in still the basic principles. And we're going to have to find a way to get back to that. And that doesn't start with the people sort of in the middle to the bottom who are using these kinds of screens to shout people down and create an uncooperative environment. 
It's got to be done at the top with core principles that are articulated. And ironically, if people don't sign on to these kind of core principles where people are equal and speech can be heard and you don't have these enormous shout down biases subtly or out loud, then they don't belong at those institutions. They're violating the core principles. And that's how you end up making change. But you have to have courageous people who are willing to lay that out for the benefit of not just their institutions, but for society as a whole. So I believe that will happen. But it needs the commitment of senior people to make it happen. Courage. And I think for such great leaders, great universities, there's a huge hunger for it. So I am too very optimistic that it will come. I'm now personally taking a step into building a startup first time, hoping to change the world, of course. There are thousands, maybe more, maybe millions of other first time entrepreneurs like me. What advice? You've gone through this process. You've talked about the suffering, the emotional turmoil it all might entail. What advice do you have for those people taking that step? I'd say it's a rough ride. And you have to be psychologically prepared for things going wrong with frequency. You have to be prepared to be put in situations where you're being asked to solve problems you didn't even know those problems existed. For example, renting space, it's not really a problem unless you've never done it. You have no idea what a lease looks like. You don't even know the relevant rent in a market. So everything is new. Everything has to be learned. What you realize is that it's good to have other people with you who've had some experience in areas where you don't know what you're doing. Unfortunately, an entrepreneur starting doesn't know much of anything. So everything is something new. And I think it's important not to be alone, because it's sort of overwhelming. And you need somebody to talk to other than a spouse or a loved one, because even they get bored with your problems. And so getting a group, if you look at Alibaba, Jack Ma was telling me they basically were like at financial death's door at least twice. And the fact that it wasn't just Jack. I mean, people think it is, because he became the sort of public face and the driver. But a group of people who can give advice, share situations to talk about, that's really important. And that's not just referring to the small details like renting space. No. It's also the psychological burden. Yeah, and because most entrepreneurs at some point question what they're doing, because it's not going so well. Or they're screwing it up, and they don't know how to unscrew it up, because we're all learning. And it's hard to be learning when there are like 25 variables going on. If you're missing four big ones, you can really make a mess. And so the ability to, in effect, have either an outsider who's really smart that you can rely on for certain type of things, or other people who are working with you on a daily basis, most people who haven't had experience believe in the myth of the one person, one great person, makes outcomes, creates outcomes that are positive. Most of us, it's not like that. If you look back over a lot of the big successful tech companies, it's not typically one person. And you will know these stories better than I do, because it's your world, not mine. But even I know that almost every one of them had two people. If you look at Google, that's what they had. 
And that was the same at Microsoft at the beginning. And it was the same at Apple. People have different skills. And they need to play off of other people. So the advice that I would give you is make sure you understand that so you don't head off in some direction as a lone wolf and find that either you can't invent all the solutions or you make bad decisions on certain types of things. This is a team sport. Entrepreneur means you're alone, in effect. And that's the myth. But it's mostly a myth. Yeah, I think, and you talk about this in your book, and I could talk to you about it forever, the harshly self critical aspect to your personality and to mine as well in the face of failure. It's a powerful tool, but it's also a burden that's very interesting to walk that line. But let me ask in terms of people around you, in terms of friends, in the bigger picture of your own life, where do you put the value of love, family, friendship in the big picture journey of your life? Well, ultimately, all journeys are alone. It's great to have support. And when you go forward and say your job is to make something work, and that's your number one priority, and you're going to work at it to make it work, it's like superhuman effort. People don't become successful as part time workers. It doesn't work that way. And if you're prepared to make that 100% to 120% effort, you're going to need support, and you're going to have to have people involved with your life who understand that that's really part of your life. Sometimes you're involved with somebody, and they don't really understand that. And that's a source of conflict and difficulty. But if you're involved with the right people, whether it's a dating relationship or a spousal relationship, you have to involve them in your life, but not burden them with every minor triumph or mistake. They actually get bored with it after a while. And so you have to set up different types of ecosystems. You have your home life. You have your love life. You have children. And that's the enduring part of what you do. And then on the other side, you've got the unpredictable nature of this type of work. What I say to people at my firm who are younger, usually, well, everybody's younger, but people who are of an age where they're just having their first child, or maybe they have two children, that it's important to make sure they go away with their spouse at least once every two months to just some lovely place where there are no children, no issues, sometimes once a month if they're sort of energetic and clever. And that Escape the craziness of it all. Yeah, and reaffirm your values as a couple. And you have to have fun. If you don't have fun with the person you're with, and all you're doing is dealing with issues, then that gets pretty old. And so you have to protect the fun element of your life together. And the way to do that isn't by hanging around the house and dealing with sort of more problems. You have to get away and reinforce and reinvigorate your relationship. And whenever I tell one of our younger people about that, they sort of look at me, and it's like the scales are falling off of their eyes. And they're saying, jeez, I hadn't thought about that. I'm so enmeshed in all these things. But that's a great idea. And that's something, as an entrepreneur, you also have to do. You just can't let relationships slip because you're half overwhelmed. Beautifully put. And I think there's no better place to end it. Steve, thank you so much. I really appreciate it. 
It was an honor to talk to you. My pleasure. Thanks for listening to this conversation with Stephen Schwarzman. And thank you to our sponsors, ExpressVPN and MasterClass. Please consider supporting the podcast by signing up to MasterClass at masterclass.com slash lex and getting ExpressVPN at expressvpn.com slash lexpod. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at lexfridman. And now, let me leave you with some words from Stephen Schwarzman's book, What It Takes. It's as hard to start and run a small business as it is to start a big one. You will suffer the same toll financially and psychologically as you bludgeon it into existence. It's hard to raise the money and to find the right people. So if you're going to dedicate your life to a business, which is the only way it will ever work, you should choose one with the potential to be huge. Thank you for listening and hope to see you next time.
Stephen Schwarzman: Going Big in Business, Investing, and AI | Lex Fridman Podcast #96
The following is a conversation with Sertac Karaman, a professor at MIT, co founder of the autonomous vehicle company, Optimus Ride, and one of the top roboticists in the world, working on robots that drive and robots that fly. To me personally, he has been a mentor, a colleague and a friend. He's one of the smartest, most generous people I know. So it was a pleasure and honor to finally sit down with him for this recorded conversation. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use the code LEX PODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Since Cash App allows you to send and receive money digitally, let me mention a surprising fact about physical money. It costs 2.4 cents to produce a single penny. In fact, I think it costs $85 million annually to produce them. That's a crazy little fact about physical money. So again, if you get Cash App from the App Store, Google Play, and use the code LEX PODCAST, you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Sertac Karaman. Since you have worked extensively on both, what is the more difficult task? Autonomous flying or autonomous driving? That's a good question. I think that autonomous flying, just doing it for consumer drones and so on, the kinds of applications that we're looking at right now, is probably easier. And so I think that that's maybe one of the reasons why it took off literally a little earlier than the autonomous cars. But I think if you look ahead, I would think that the real benefits of autonomous flying, unleashing them in transportation, logistics, and so on, I think it's a lot harder than autonomous driving. So I think my guess is that we've seen a few kind of machines fly here and there, but we really haven't yet seen any kind of machine, like at massive scale, large scale being deployed and flown and so on. And I think that's going to be after we kind of resolve some of the large scale deployments of autonomous driving. So what's the hard part? What's your intuition behind why, at scale, consumer facing drones are tough? So I think in general, at scale is tough. Like for example, when you think about it, we have actually deployed a lot of robots in the, let's say the past 50 years. We as academics or we business entrepreneurs? I think we as humanity. Humanity? A lot of people working on it. So we humans deployed a lot of robots. And I think that, well, when you think about it, you know, robots, they're autonomous. They work and they work on their own, but they are either like in isolated environments or they are in sort of, you know, they may be at scale, but they're really confined to a certain environment where they don't interact so much with humans. And so, you know, they work in, I don't know, factory floors, warehouses, they work on Mars, you know, they are fully autonomous over there.
But I think that the real challenge of our time is to take these vehicles and put them into places where humans are present. So now I know that there's a lot of like human robot interaction type of things that need to be done. And so that's one thing, but even just from the fundamental algorithms and systems and the business cases, or maybe the business models, even like architecture, planning, societal issues, legal issues, there's a whole bunch of pack of things that are related to us putting robotic vehicles into human present environments. And as humans, you know, they will not potentially be even trained to interact with them. They may not even be using the services that are provided by these vehicles. They may not even know that they're autonomous. They're just doing their thing, living in environments that are designed for humans, not for robots. And that I think is one of the biggest challenges, I think, of our time to put vehicles there. And you know, to go back to your question, I think doing that at scale, meaning, you know, you go out in a city and you have, you know, like thousands or tens of thousands of autonomous vehicles that are going around. It is so dense to the point where if you see one of them, you look around, you see another one. It is that dense. And that density, we've never done anything like that before. And I would bet that that kind of density will first happen with autonomous cars because I think, you know, we can bend the environment a little bit. We can, especially kind of making them safe is a lot easier when they're like on the ground. When they're in the air, it's a little bit more complicated. But I don't see that there's going to be a big separation. I think that, you know, there will come a time that we're going to quickly see these things unfold. Do you think there will be a time where there's tens of thousands of delivery drones that fill the sky? You know, I think, I think it's possible to be honest. Delivery drones is one thing, but you know, you can imagine for transportation, like an important use case is, you know, we're in Boston, you want to go from Boston to New York and you want to do it from the top of this building to the top of another building in Manhattan. And you're going to do it in one and a half hours. And that's, that's a big opportunity, I think. Personal transport. So like you and me be a friend, like almost like an Uber. So like four people, six people, eight people. In our work in autonomous vehicles, I see that. So there's kind of like a bit of a need for, you know, one person transport, but also like, like a few people. So you and I could take that trip together. We could have lunch, you know, I think kind of sounds crazy, maybe even sounds a bit cheesy, but I think that those kinds of things are some of the real opportunities. And I think, you know it's not like the typical airplane and the airport would disappear very quickly, but I would think that, you know many people would feel like they would spend an extra hundred dollars on doing that and cutting that four hour travel down to one and a half hours. So how feasible are flying cars has been the dream. That's like when people imagine the future for 50 plus years, they think flying cars, it's a, it's like all technologies. It's cheesy to think about now because it seems so far away, but overnight it can change. But just technically speaking in your view, how feasible is it to make that happen? 
I'll get to that question, but just one thing is that I think, you know, sometimes we think about what's going to happen in the next 50 years. It's just really hard to guess, right? Next 50 years. I don't know. I mean, we could get what's going to happen in transportation in the next 50, we could get flying saucers. I could bet on that. I think there's a 50, 50 chance that, you know, like you can build machines that can ionize the air around them and push it down with magnets and they would fly like a flying saucer that is possible. And it might happen in the next 50 years. So it's a bit hard to guess like when you think about 50 years before, but I would think that, you know, there's this, this, this kind of a notion where there's a certain type of airspace that we call the agile airspace. And there's, there's good amount of opportunities in that airspace. So that would be the space that is kind of a little bit higher than the place where you can throw a stone because that's a tough thing when you think about it, you know, it takes a kid on a stone to take an aircraft down and then what happens. But you know, imagine the airspace that's high enough so that you cannot throw the stone, but it is low enough that you're not interacting with the, with the very large aircraft that are, you know, flying several thousand feet above. And that airspace is underutilized or it's actually kind of not utilized at all. Yeah, that's right. You know, there's like recreational people kind of fly every now and then, but it's very few. Like if you look up in the sky, you may not see any of them at any given time, every now and then you'll see one airplane kind of utilizing that space and you'll be surprised. And the moment you're outside of an airport a little bit, like it just kind of flies off and then it goes out. And I think utilizing that airspace, the technical challenges there is, you know, building an autonomy and ensuring that that kind of autonomy is safe. Ultimately, I think it is going to be building in complex software or complicated so that it's maybe a few orders of magnitude more complicated than what we have on aircraft today. And at the same time, ensuring just like we ensure on aircraft, ensuring that it's safe. And so that becomes like building that kind of complicated hardware and software becomes a challenge, especially when, you know, you build that hardware, I mean, you build that software with data. And so, you know, it's, of course there's some rule based software in there that kind of do a certain set of things, but then, you know, there's a lot of training there. Do you think machine learning will be key to these kinds of, to delivering safe vehicles in the future, especially flight? Not maybe the safe part, but I think the intelligent part. I mean, there are certain things that we do it with machine learning and it's just, there's like right now, no other way. And I don't know how else they could be done. And you know, there's always this conundrum, I mean, we could like, could we like, we could maybe gather billions of programmers, humans who program perception algorithms that detect things in the sky and whatever, or, you know, we, I don't know, we maybe even have robots like learn in a simulation environment and transfer. And they might be learning a lot better in a simulation environment than a billion humans put their brains together and try to program. Humans pretty limited. So what's, what's the role of simulations with drones? You've done quite a bit of work there. 
How promising, just the very thing you said just now, how promising is the possibility of training and developing a safe flying robot in simulation and deploying it and having that work pretty well in the real world? I think that, you know, a lot of people, when they hear simulation, they will focus on training immediately. But I think one thing that you said, which was interesting, it's developing. I think simulation environments are actually could be key and great for development. And that's not new. Like for example, you know, there's people in the automotive industry have been using dynamic simulation for like decades now. And it's pretty standard that, you know, you would build and you would simulate. If you want to build an embedded controller, you plug that kind of embedded computer into another computer, that other computer would simulate dynamic and so on. And I think, you know, fast forward these things, you can create pretty crazy simulation environments. Like for instance, one of the things that has happened recently and that, you know, we can do now is that we can simulate cameras a lot better than we used to simulate them. We were able to simulate them before. And that's, I think we just hit the elbow on that kind of improvement. I would imagine that with improvements in hardware, especially, and with improvements in machine learning, I think that we would get to a point where we can simulate cameras very, very well. Simulate cameras means simulate how a real camera would see the real world. Therefore you can explore the limitations of that. You can train perception algorithms on that in simulation, all that kind of stuff. Exactly. So, you know, it's, it's, it has been easier to simulate what we would call introspective sensors like internal sensors. So for example, inertial sensing has been easy to simulate. It has also been easy to simulate dynamics, like physics that are governed by ordinary differential equations. I mean, like how a car goes around, maybe how it rolls on the road, how it interacts with the road, or even an aircraft flying around, like the dynamic physics of that. What has been really hard has been to simulate extra septive sensors, sensors that kind of like look out from the vehicle. And that's a new thing that's coming like laser range finders that are a little bit easier. Because radars are a little bit tougher. I think once we nail that down, the next challenge I think in simulation will be to simulate human behavior. That's also extremely hard. Even when you imagine like how a human driven car would act around, even that is hard. But imagine trying to simulate, you know, a model of a human just doing a bunch of gestures and so on. And you know, it's, it's actually simulated. It's not captured like with motion capture, but it is simulated. That's very hard. In fact, today I get involved a lot with like sort of this kind of very high end rendering projects and I have like this test that I pass it to my friends or my mom, you know, I send like two photos, two kind of pictures and I say rendered, which one is rendered, which one is real. And it's pretty hard to distinguish, except I realized, except when we put humans in there, it's possible that our brains are trained in a way that we recognize humans extremely well. We don't so much recognize the built environments because built environments sort of came after per se we evolved into sort of being humans, but humans were always there. 
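To make the remark above about dynamics governed by ordinary differential equations concrete, here is a minimal sketch of the kind of vehicle model that has long been easy to simulate: a kinematic bicycle model stepped forward with Euler integration. The model, parameters, and numbers are illustrative assumptions, not anyone's actual simulator.

```python
import math

def step_bicycle(state, accel, steer, wheelbase=2.5, dt=0.01):
    """One Euler step of a kinematic bicycle model: ODE-driven
    vehicle dynamics of the kind that is straightforward to simulate."""
    x, y, heading, speed = state
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += (speed / wheelbase) * math.tan(steer) * dt
    speed += accel * dt
    return (x, y, heading, speed)

# Hypothetical usage: drive straight for 1 second, then turn gently for 2 seconds.
state = (0.0, 0.0, 0.0, 5.0)  # x [m], y [m], heading [rad], speed [m/s]
for t in range(300):
    steer = 0.0 if t < 100 else 0.1
    state = step_bicycle(state, accel=0.0, steer=steer)
print(state)
```

Simulating what a camera would see of that same motion is the part that, as described above, has only recently become tractable.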
Same thing happens, for example, you look at like monkeys and you can't distinguish one from another, but they sort of do. And it's very possible that they look at humans. It's kind of pretty hard to distinguish one from another, but we do. And so our eyes are pretty well trained to look at humans and understand if something is off, we will get it. We may not be able to pinpoint it. So in my typical friend test or mom test, what would happen is that we'd put like a human walking in anything and they say, you know, this is not right. Something is off in this video. I don't know what, but I can tell you it's the human. I can take the human and I can show you like inside of a building or like an apartment and it will look like if we had time to render it, it will look great. And this should be no surprise. A lot of movies that people are watching, it's all computer generated. You know, even nowadays, even you watch a drama movie and like, there's nothing going on action wise, but it turns out it's kind of like cheaper, I guess, to render the background. And so they would. But how do we get there? How do we get a human that's would pass the mom slash friend test, a simulation of a human walking? So do you think that's something we can creep up to by just doing kind of a comparison learning where you have humans annotate what's more realistic and not just by watching, like what's the path? Cause it seems totally mysterious how we simulate human behavior. It's hard because a lot of the other things that I mentioned to you, including simulating cameras, right? It is, the thing there is that, you know, we know the physics, we know how it works like in the real world and we can write some rules and we can do that. Like for example, simulating cameras, there's this thing called ray tracing. I mean, you literally just kind of imagine it's very similar to, it's not exactly the same, but it's very similar to tracing photon by photon. They're going around, bouncing on things and come into your eye, but human behavior, developing a dynamic, like a model of that, that is mathematical so that you can put it into a processor that would go through that, that's going to be hard. And so what else do you got? You can collect data, right? And you can try to match the data. Or another thing that you can do is that, you know, you can show the friend test, you know, you can say this or that and this or that, and that will be labeling. Anything that requires human labeling, ultimately we're limited by the number of humans that, you know, we have available at our disposal and the things that they can do, you know, they have to do a lot of other things than also labeling this data. So that modeling human behavior part is, is I think going, we're going to realize it's very tough. And I think that also affects, you know, our development of autonomous vehicles. I see them in self driving as well. Like you want to use, so you're building self driving, you know, at the first time, like right after urban challenge, I think everybody focused on localization, mapping and localization, you know, slam algorithms came in, Google was just doing that. And so building these HD maps, basically that's about knowing where you are. And then five years later in 2012, 2013 came the kind of coding code AI revolution. And that started telling us where everybody else is, but we're still missing what everybody else is going to do next. And so you want to know where you are. You want to know what everybody else is. 
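As a rough illustration of the tracing photon by photon idea behind ray tracing mentioned above, the sketch below casts one primary ray per pixel and tests it against a single sphere. It is a toy example with an invented scene, not how production camera simulators work.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the distance along a unit-length ray to the sphere, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is unit length, so the quadratic's leading term is 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# Hypothetical scene: one sphere 5 meters in front of a pinhole camera at the origin.
width, height, sphere = 40, 20, ((0.0, 0.0, 5.0), 1.5)
for j in range(height):
    row = ""
    for i in range(width):
        dx, dy = (i / width - 0.5), (0.5 - j / height)  # image plane at z = 1
        norm = math.sqrt(dx * dx + dy * dy + 1.0)
        d = (dx / norm, dy / norm, 1.0 / norm)
        row += "#" if ray_sphere((0.0, 0.0, 0.0), d, *sphere) else "."
    print(row)
```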
Hopefully you know that what you're going to do next, and then you want to predict what other people are going to do. And that last bit has, has been a real, real challenge. What do you think is the role, your own of your, of your, the ego vehicle, the robot, the you, the robotic you in controlling and having some control of how the future unrolls of what's going to happen in the future. That seems to be a little bit ignored in trying to predict the future is how you yourself can affect that future by being either aggressive or less aggressive or signaling in some kind of way. So this kind of game theoretic dance seems to be ignored for the moment. It's yeah, it's, it's totally ignored. I mean, it's, it's quite interesting actually, like how we how we interact with things versus we interact with humans. Like so if, if you see a vehicle that's completely empty and it's trying to do something, all of a sudden it becomes a thing. So interacted with like you interact with this table and so you can throw your backpack or you can kick your, kick it, put your feet on it and things like that. But when it's a human, there's all kinds of ways of interacting with a human. So if you know, like you and I are face to face, we're very civil. You know, we talk, we understand each other for the most part. We'll see you just, you never know what's going to happen. But the thing is that like, for example, you and I might interact through YouTube comments and, you know, the conversation may go at a totally different angle. And so I think people kind of abusing as autonomous vehicles is a real issue in some sense. And so when you're an ego vehicle, you're trying to, you know, coordinate your way, make your way, it's actually kind of harder than being a human. You know, it's like, it's you, you, you not only need to be as smart as, as kind of humans are, but you also, you're a thing. So they're going to abuse you a little bit. So you need to make sure that you can get around and do something. So I, in general, believe in that sort of game theoretic aspects. I've actually personally have done, you know, quite a few papers, both on that kind of game theory and also like this, this kind of understanding people's social value orientation, for example, you know, some people are aggressive, some people not so much. And, and, you know, like a robot could understand that by just looking at how people drive. And as they kind of come in approach, you can actually understand, like if someone is going to be aggressive or, or not as a robot and you can make certain decisions. Well, in terms of predicting what they're going to do, the hard question is you as a robot, should you be aggressive or not when faced with an aggressive robot? Right now it seems like aggressive is a very dangerous thing to do because it's costly from a societal perspective, how you're perceived. People are not very accepting of aggressive robots in modern society. I think that's accurate. So that is really is. And so I'm not entirely sure like how to have to go about, but I know, I know for a fact that how these robots interact with other people in there is going to be, and then interaction is always going to be there. I mean, you could be interacting with other vehicles or other just people kind of like walking around. And like I said, the moment there's like nobody in the seat, it's like an empty thing just rolling off the street. It becomes like no different than like any other thing that's not human. 
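As a toy illustration of that social value orientation idea, estimating how aggressive a driver is from how they drive, here is a hypothetical sketch that scores a driver by the time gaps they accept when merging. The data, the threshold, and the scoring rule are all invented for illustration; this is not a description of any published method or of what Optimus Ride does.

```python
# Hypothetical sketch: score a driver's aggressiveness from the time gaps
# (in seconds) they accepted when merging in front of other vehicles.
accepted_gaps = [1.2, 0.9, 1.5, 1.1]   # made-up observations for one driver

def aggressiveness_score(gaps, polite_gap=2.5):
    """0 = very polite (only merges into large gaps), 1 = very aggressive."""
    scores = [max(0.0, min(1.0, 1.0 - g / polite_gap)) for g in gaps]
    return sum(scores) / len(scores)

score = aggressiveness_score(accepted_gaps)
# A planner could then, for example, leave extra margin around high-scoring drivers.
print(f"estimated aggressiveness: {score:.2f}")
```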
And so people, and maybe abuse is the wrong word, but people maybe rightfully even they feel like this is a human present environment designed for humans to be, and they kind of they want to own it. And then the robots, they would need to understand it and they would need to respond in a certain way. And I think that this actually opens up like quite a few interesting societal questions for us as we deploy, like we talk robots at large scale. So what would happen when we try to deploy robots at large scale, I think is that we can design systems in a way that they're very efficient or we can design them that they're very sustainable, but ultimately the sustainability efficiency trade offs, like they're going to be right in there and we're going to have to make some choices. Like we're not going to be able to just kind of put it aside. So for example, we can be very aggressive and we can reduce transportation delays, increase capacity of transportation, or we can be a lot nicer and allow other people to kind of quote unquote own the environment and live in a nice place and then efficiency will drop. So when you think about it, I think sustainability gets attached to energy consumption or environmental impact immediately. And those are there, but like livability is another sustainability impact. So you create an environment that people want to live in. And if, if, if robots are going around being aggressive and you don't want to live in that environment, maybe, however, you should note that if you're not being aggressive, then, you know, you're probably taking up some, some delays in transportation and this and that. So you're always balancing that. And I think this, this choice has always been there in transportation, but I think the more autonomy comes in, the more explicit the choice becomes. Yeah. And when it becomes explicit, then we can start to optimize it and then we'll get to ask the very difficult societal questions of what do we value more, efficiency or sustainability? It's kind of interesting. I think we're going to have to like, I think that the interesting thing about like the whole autonomous vehicles question, I think is also kind of, um, I think a lot of times, you know, we have, we have focused on technology development, like hundreds of years and you know, the products somehow followed and then, you know, we got to make these choices and things like that. So this is, this is a good time that, you know, we even think about, you know, autonomous taxi type of deployments and the systems that would evolve from there. And you realize the business models are different. The impact on architecture is different, urban planning, you get into like regulations, um, and then you get into like these issues that you didn't think about before, but like sustainability and ethics is like right in the middle of it. I mean, even testing autonomous vehicles, like think about it, you're testing autonomous vehicles in human present environments. I mean, uh, the risk may be very small, but still, you know, it's, it's a, it's a, it's, it's a, you know, strictly greater than zero risk that you're putting people into. And so then you have that innovation, you know, risk trade off that you're, you're in that somewhere. Um, and we, we understand that pretty now that pretty well now is that if we don't test the, at least the, the development will be slower. I mean, it doesn't mean that we're not going to be able to develop. I think it's going to be pretty hard actually. 
Maybe we can, we don't, we don't, I don't know. But the thing is that those kinds of trade offs we already are making and as these systems become more ubiquitous, I think those trade offs will just really hit. So you are one of the founders of Optimus Ride and autonomous vehicle company. We'll talk about it, but let me on that point ask maybe a good examples, keeping Optimus Ride out, out of this question, uh, sort of exemplars of different strategies on the spectrum of innovation and safety or caution. So like Waymo, Google self driving car Waymo represents maybe a more cautious approach. And then you have Tesla on the other side headed by Elon Musk that represents a more, however, which adjective you want to use, aggressive, innovative, I don't know. But uh, what, what do you think about the difference in the two strategies in your view? What's more likely, what's needed and is more likely to succeed in the short term and in the long term? Definitely some sort of a balance is, is kind of the right way to go. But I do think that the thing that is the most important is actually like an informed public. So I don't, I don't mind, you know, I personally, like if I were in some place, I wouldn't mind so much like taking a certain amount of risk, um, some other people might. And so I think the key is for people to be informed and so that they can, ideally they can make a choice. In some cases, that kind of choice, um, making that unanimously is of course very hard. But I don't think it's actually that hard to inform people. So I think in, in, in one case, like for example, even the Tesla approach, um, I don't know, it's hard to judge how informed it is, but it is somewhat informed. I mean, you know, things kind of come out. I think people know what they're taking and things like that and so on. But I think the, the underlying, um, I do think that these two companies are a little bit kind of representing like the, of course they, you know, one of them seems a bit safer or the other one, or, you know, um, whatever the objective for that is, and the other one seems more aggressive or whatever the objective for that is. But, but I think, you know, when you turn the tables, they're actually, there are two other orthogonal dimensions that these two are focusing on. On the one hand for Waymo, I can see that, you know, they're, I mean, um, they, I think they a little bit see it as research as well. So they kind of, they don't, I'm not sure if they're like really interested in like an immediate, um, product, um, you know, they, they talk about it. Um, sometimes there's some pressure to talk about it. So they, they kind of go for it, but I think, um, I think that they're thinking, um, maybe in the back of their minds, maybe they don't put it this way, but I think they, they realize that we're building like a new engine. It's kind of like call it the AI engine or whatever that is. And you know, an autonomous vehicles is a very interesting embodiment of that engine that allows you to understand where the ego vehicle is, the ego thing is where everything else is, what everything else is going to do and how do you react, how do you actually, you know, interact with humans the right way? How do you build these systems? And I think, uh, they, they want to know that they want to understand that. And so they keep going and doing that. And so on the other dimension, Tesla is doing something interesting. I mean, I think that they have a good product. People use it. 
I think that, you know, like it's, it's not for me, um, but I can totally see people, people like it and, and people, I think they have a good product outside of automation, but I was just referring to the, the, the automation itself. I mean, you know, like it, it kind of drives itself. You still have to be kind of, um, you still have to pay attention to it, right? Well, you know, um, people seem to use it. So it works for something. And so people, I think people are willing to pay for it. People are willing to buy it. I think it, uh, it's, it's one of the other reasons why people buy a Tesla car. Maybe one of those reasons is Elon Musk is the CEO and you know, he seems like a visionary person. That's what people think. He's a great person. And so that adds like 5k to the value of the car and then maybe another 5k is the autopilot and, and you know, it's, it's useful. I mean, it's, um, useful in the sense that like people are using it. And so I can see Tesla and sure, of course they want to be visionary. They want to kind of put out a certain approach and they may actually get there. Um, but I think that there's also a primary benefit of doing all these updates and rolling it out because, you know, people pay for it and it's, it's, you know, it's basic, you know, demand, supply market and people like it. They're happy to pay another 5k, 10k for that novelty or whatever that is, um, they, and they use it. It's not like they get it and they try it a couple of times as a novelty, but they use it a lot of the time. And so I think that's what Tesla is doing. It's actually pretty different. Like they, they are on pretty orthogonal dimensions of what kind of things that they're building. They are using the same AI engine. So it's very possible that, you know, they're both going to be, um, sort of one day, um, kind of using a similar, almost like an internal internal combustion engine. It's a very bad metaphor, but similar internal combustion engine, and maybe one of them is building like a car. The other one is building a truck or something. So ultimately the use case is very different. So you, like I said, are one of the founders of Optimus, right? Let's take a step back. That's one of the success stories in the autonomous vehicle space. It's a great autonomous vehicle company. Let's go from the very beginning. What does it take to start an autonomous vehicle company? How do you go from idea to deploying vehicles like you are in a few, a bunch of places, including New York? I would say that I think that, you know, what happened to us is it was, was the following. I think, um, we realized a lot of kind of talk in the autonomous vehicle industry back in like 2014, even when we wanted to kind of get started. Um, and, and I don't know, like I, I kind of, I would hear things like fully autonomous vehicles, two years from now, three years from now, I kind of never bought it. Um, you know, I was a part of, um, MIT's urban challenge entry. Um, it kind of like, it has an interesting history. So, um, I did in, in, in college and in high school, sort of a lot of mathematically oriented work. I mean, I kind of, you know, at some point, uh, it kind of hit me. I wanted to build something. And so I came to MIT's mechanical engineering program and I now realize, I think my advisor hired me because I could do like really good math, but I told him that, no, no, no, I want to work on that urban challenge car. I want to build the autonomous car. 
And I think that was, that was kind of like a process where we really learned, I mean, what the challenges are and what kind of limitations are we up against, you know, like having the limitations of computers or understanding human behavior, there's so many of these things. And I think it just kind of didn't. And so, so we said, Hey, you know, like, why don't we take a more like a market based approach? So we focus on a certain kind of market and we build a system for that. What we're building is not so much of like an autonomous vehicle only, I would say. So we build full autonomy into the vehicles. But, you know, the way we kind of see it is that we think that the approach should actually involve humans operating them, not just, just not sitting in the vehicle. And I think today, what we have is today, we have one person operate one vehicle, no matter what that vehicle, it could be a forklift, it could be a truck, it could be a car, whatever that is. And we want to go from that to 10 people operate 50 vehicles. How do we do that? If you're referring to a world of maybe perhaps teleoperation, so can you just say what it means for 10? It might be confusing for people listening. What does it mean for 10 people to control 50 vehicles? That's a good point. So I think it's, I very deliberately didn't call it teleoperation because what people think then is that people think, away from the vehicle sits a person, sees like maybe puts on goggles or something, VR and drives the car. So that's not at all what we mean, but we mean the kind of intelligence whereby humans are in control, except in certain places, the vehicles can execute on their own. And so imagine like, like a room where people can see what the other vehicles are doing and everything. And you know, there will be some people who are more like, more like air traffic controllers, call them like AV controllers. And so these AV controllers would actually see kind of like a whole map and they would understand where vehicles are really confident and where they kind of need a little bit more help. And the help shouldn't be for safety. Help should be for efficiency. Vehicles should be safe no matter what. If you had zero people, they could be very safe, but they'd be going five miles an hour. And so if you want them to go around 25 miles an hour, then you need people to come in and, and for example, you know, the vehicle come to an intersection and the vehicle can say, you know, I can wait. I can inch forward a little bit, show my intent, or I can turn left. And right now it's clear I can turn, I know that, but before you give me the go, I won't. And so that's one example. This doesn't mean necessarily we're doing that actually. I think, I think if you go down all the, all that much detail that every intersection you're kind of expecting a person to press a button, then I don't think you'll get the efficiency benefits you want. You need to be able to kind of go around and be able to do these things. But, but I think you need people to be able to set high level behavior to vehicles. That's the other thing with autonomous vehicles, you know, I think a lot of people kind of think about it as follows. I mean, this happens with technology a lot. You know, you think, all right, so I know about cars and I heard robots. So I think how this is going to work out is that I'm going to buy a car, press a button and it's going to drive itself. And when is that going to happen? 
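To make the "ten people operate fifty vehicles" idea above a bit more concrete, here is a minimal, purely hypothetical sketch: vehicles drive themselves and stay safe on their own, and only queue a request to a small pool of human controllers when a go-ahead would improve efficiency, as in the intersection example. None of these names or structures come from Optimus Ride; they are only meant to illustrate the concept.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class AssistRequest:
    vehicle_id: int
    situation: str          # e.g. "unprotected left turn", "blocked lane"
    proposed_action: str    # what the vehicle would do if a controller approves

class AVController:
    """One human supervisor handling efficiency requests from many vehicles."""
    def __init__(self, name: str):
        self.name = name

    def review(self, req: AssistRequest) -> bool:
        # In reality this is a person looking at a live view of the scene;
        # here we simply approve everything to show the control flow.
        print(f"{self.name} approved vehicle {req.vehicle_id}: {req.proposed_action}")
        return True

def dispatch(requests: Queue, controllers: list) -> None:
    """Round-robin pending requests over a small pool of human controllers."""
    i = 0
    while not requests.empty():
        controllers[i % len(controllers)].review(requests.get())
        i += 1

# Fifty vehicles drive themselves; only the ones at tricky spots ask for attention.
pending = Queue()
pending.put(AssistRequest(7, "unprotected left turn", "turn left now"))
pending.put(AssistRequest(23, "double-parked truck", "nudge into the oncoming lane"))
dispatch(pending, [AVController(f"controller-{k}") for k in range(10)])
```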
You know, people kind of tend to think about it that way, but what really happens is that something comes in in a way that you didn't even expect. If asked, you might have said, I don't think I need that, or I don't think it should be that, and so on. And then that becomes the next big thing, quote unquote. And so I think that these kinds of different ways of humans operating vehicles could be really powerful. I think that sooner than later, we might open our eyes up to a world in which you go walk around in a mall and there's a bunch of security robots that are operated exactly in this way. You go into a factory or a warehouse, there's a whole bunch of robots that are operating exactly in this way. You go to the Brooklyn Navy Yard, you see a whole bunch of autonomous vehicles, Optimus Ride, and they're operated maybe in this way. But I think people kind of don't see that. I sincerely think that there's a possibility that we may almost see a whole mushrooming of this technology in all kinds of places that we didn't expect before. And that may be the real surprise. And then one day when your car actually drives itself, it may not be all that much of a surprise at all, because you see it all the time. You interact with them, you take the Optimus Ride, hopefully that's your choice. And then you hear a bunch of things, you go around, you interact with them. I don't know, like you have a little delivery vehicle that goes around the sidewalks and delivers you things, and then you take it, it says thank you. And then you get used to that, and one day your car actually drives itself and the regulation goes by, and you can hit the button and sleep, and it wouldn't be a surprise at all. I think that may be the real reality. So there's going to be a bunch of applications that pop up around autonomous vehicles, some of which, maybe many of which, we don't expect at all. So if we look at Optimus Ride, what do you think, you know, the viral application, the one that really works for people in mobility, what do you think Optimus Ride will connect with in the near future first?
And I think that, you know, you could deliver mobility in an affordable way, affordable, accessible, you know, sustainable way. But I think what also enables is that this kind of effort, money, area, land that we spend on parking, you could reclaim some of that. And that is on the order of like, even for a small environment like two mile by two mile, it doesn't have to be smack in the middle of New York. I mean, anywhere else you're talking tens of millions of dollars. If you're smack in the middle of New York, you're looking at billions of dollars of savings just by doing that. And that's the economic part of it. And there's a societal part, right? I mean, just look around. I mean the places that we live are like built for cars. It didn't look like this just like a hundred years ago, like today, no one walks in the middle of the street. It's for cars. No one tells you that growing up, but you grow into that reality. And so sometimes they close the road. It happens here, you know, like the celebration, they close the road. Still people don't walk in the middle of the road, like just walk in the middle and people don't. But I think it has so much impact, the car in the space that we have. And I think we talked about sustainability, livability. I mean, ultimately these kinds of places that parking spots at the very least could change into something more useful or maybe just like park areas, recreational. And so I think that's the first thing that we're targeting. And I think that we're getting like a really good response, both from an economic societal point of view, especially places that are a little bit forward looking. And like, for example, Brooklyn Navy Yard, they have tenants. There's distinct direct call like new lab. It's kind of like an innovation center. There's a bunch of startups there. And so, you know, you get those kinds of people and, you know, they're really interested in sort of making that environment more livable. And these kinds of solutions that Optimus Ride provides almost kind of comes in and becomes that. And many of these places that are transportation deprived, you know, they have, they actually rent shuttles. And so, you know, you can ask anybody, the shuttle experience is like terrible. People hate shuttles. And I can tell you why. Because, you know, like the driver is very expensive in a shuttle business. So what makes sense is to attach 20, 30 seats to a driver. And a lot of people have this misconception. They think that shuttles should be big. Sometimes we get that at Optimus Ride. We tell them, we're going to give you like four seaters, six seaters. And we get asked like, how about like 20 seaters? I'm like, you know, you don't need 20 seaters. You want to split up those seats so that they can travel faster and the transportation delays would go down. That's what you want. If you make it big, not only you will get delays in transportation, but you won't have an agile vehicle. It will take a long time to speed up, slow down and so on. You need to climb up to the thing. So it's kind of like really hard to interact with. And scheduling too, perhaps when you have more smaller vehicles, it becomes closer to Uber where you can actually get a personal, I mean, just the logistics of getting the vehicle to you becomes easier when you have a giant shuttle. There's fewer of them and it probably goes on a route, a specific route that is supposed to hit. And when you go on a specific route and all seats travel together versus, you know, you have a whole bunch of them. 
You can imagine the route you can still have, but you can imagine you split up the seats and instead of, you know, them traveling, like, I don't know, a mile apart, they could be like, you know, half a mile apart if you split them into two. That basically would mean that your delays, when you go out, you won't wait for them for a long time. And that's one of the main reasons, or you don't have to climb up. The other thing is that I think if you split them up in a nice way, and if you can actually know where people are going to be somehow, you don't even need the app. A lot of people ask us the app, we say, why don't you just walk into the vehicle? How about you just walk into the vehicle, it recognizes who you are and it gives you a bunch of options of places that you go and you just kind of go there. I mean, people kind of also internalize the apps. Everybody needs an app. It's like, you don't need an app. You just walk into the thing. But I think one of the things that, you know, we really try to do is to take that shuttle experience that no one likes and tilt it into something that everybody loves. And so I think that's another important thing. I would like to say that carefully, just like teleoperation, like we don't do shuttles. You know, we're really kind of thinking of this as a system or a network that we're designing. But ultimately, we go to places that would normally rent a shuttle service that people wouldn't like as much and we want to tilt it into something that people love. So you've mentioned this earlier, but how many Optimus ride vehicles do you think would be needed for any person in Boston or New York, if they step outside, there will be, this is like a mathematical question, there'll be two Optimus ride vehicles within line of sight. Is that the right number to, well, at least one. For example, that's the density. So meaning that if you see one vehicle, you look around, you see another one too. Imagine like, you know, Tesla would tell you they collect a lot of data. Do you see that with Tesla? Like you just walk around and you look around, you see Tesla? Probably not. Very specific areas of California, maybe. You're right. Like there's a couple of zip codes that, you know, but I think that's kind of important because you know, like maybe the couple of zip codes, the one thing that we kind of depend on and I'll get to your question in a second, but now like we're taking a lot of tensions today. And so I think that this is actually important. People call this data density or data velocity. So it's very good to collect data in a way that, you know, you see the same place so many times. Like you can drive 10,000 miles around the country or you drive 10,000 miles in a confined environment. You'll see the same intersection hundreds of times. And when it comes to predicting what people are going to do in that specific intersection, you become really good at it versus if you draw in like 10,000 miles around the country, you've seen that only once. And so trying to predict what people do becomes hard. And I think that, you know, you said what is needed, it's tens of thousands of vehicles. You know, you really need to be like a specific fractional vehicle. Like for example, in good times in Singapore, you can go and you can just grab a cab and they are like, you know, 10%, 20% of traffic, those taxis. Ultimately that's where you need to get to. 
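The seat-splitting argument from earlier in this answer is easy to put rough numbers on: on a fixed loop, the average wait at a stop is roughly half the headway between vehicles, and splitting one big shuttle into several small ones divides that headway. A back-of-the-envelope sketch with made-up numbers:

```python
# Back-of-the-envelope: split one 20-seat shuttle into several smaller vehicles
# on the same loop and see what happens to the average wait at a stop.
loop_length_miles = 4.0
speed_mph = 15.0
loop_time_min = loop_length_miles / speed_mph * 60.0   # 16 minutes around the loop

for n_vehicles, seats_each in [(1, 20), (2, 10), (5, 4)]:
    headway_min = loop_time_min / n_vehicles    # time between arrivals at a stop
    avg_wait_min = headway_min / 2.0            # riders show up at random times
    print(f"{n_vehicles} x {seats_each}-seaters: "
          f"headway {headway_min:.1f} min, average wait {avg_wait_min:.1f} min")
```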
So that, you know, you get to a certain place where you really, the benefits really kick off in like orders of magnitude type of a point. But once you get there, you actually get the benefits. And you can certainly carry people. I think that's one of the things people really don't like to wait for themselves. But for example, they can wait a lot more for the goods if they order something. Like you're sitting at home and you want to wait half an hour. That sounds great. People will say it's great. You want to, you're going to take a cab, you're waiting half an hour. Like that's crazy. You don't want to wait that much. But I think, you know, you can, I think really get to a point where the system at peak times really focuses on kind of transporting humans around. And then it's really, it's a good fraction of the traffic to the point where, you know, you go, you look around and there's something there and you just kind of basically get in there and it's already waiting for you or something like that. And then you take it. If you do it at that scale, like today, for instance, Uber, if you talk to a driver, right? I mean, Uber takes a certain cut. It's a small cut. Or drivers would argue that it's a large cut, but you know, it's when you look at the grand scheme of things, most of that money that you pay Uber kind of goes to the driver. And if you talk to the driver, the driver will claim that most of it is their time. You know, it's not spent on gas. They think it's not spent on the car per se as much. It's like their time. And if you didn't have a person driving, or if you're in a scenario where, you know, like 0.1 person is driving the car, a fraction of a person is kind of operating the car because you know, you want to operate several. If you're in that situation, you realize that the internal combustion engine type of cars are very inefficient. You know, we build them to go on highways, they pass crash tests. They're like really heavy. They really don't need to be like 25 times the weight of its passengers or, you know, like area wise and so on. But if you get through those inefficiencies and if you really build like urban cars and things like that, I think the economics really starts to check out. Like to the point where, I mean, I don't know, you may be able to get into a car and it may be less than a dollar to go from A to B. As long as you don't change your destination, you just pay 99 cents and go there. If you share it, if you take another stop somewhere, it becomes a lot better. You know, these kinds of things, at least for models, at least for mathematics and theory, they start to really check out. So I think it's really exciting what Optimus Ride is doing in terms of it feels the most reachable, like it'll actually be here and have an impact. Yeah, that is the idea. And if we contrast that, again, we'll go back to our old friends, Waymo and Tesla. So Waymo seems to have sort of technically similar approaches as Optimus Ride, but a different, they're not as interested as having impact today. They have a longer term sort of investments, almost more of a research project still, meaning they're trying to solve, as far as I understand, maybe you can differentiate, but they seem to want to do more unrestricted movement, meaning move from A to B where A to B is all over the place versus Optimus Ride is really nicely geofenced and really sort of established mobility in a particular environment before you expand it. 
And then Tesla is like the complete opposite, which is, you know, the entirety of the world actually is going to be automated. Highway driving, urban driving, every kind of driving, you know, you kind of creep up to it by incrementally improving the capabilities of the autopilot system. So when you contrast all of these, and on top of that, let me throw a question that nobody likes, but is a timeline. When do you think each of these approaches, loosely speaking, nobody can predict the future, will see mass deployment? So Elon Musk predicts the craziest approach is, I've heard figures like at the end of this year, right? So that's probably wildly inaccurate, but how wildly inaccurate is it? I mean, first thing to lay out, like everybody else, it's really hard to guess. I mean, I don't know where Tesla can look at or Elon Musk can look at and say, hey, you know, it's the end of this year. I mean, I don't know what you can look at. You know, even the data that, I mean, if you look at the data, even kind of trying to extrapolate the end state without knowing what exactly is going to go, especially for like a machine learning approach. I mean, it's just kind of very hard to predict. But I do think the following does happen. I think a lot of people, you know, what they do is that there's something that I called a couple times time dilation in technology prediction happens. Let me try to describe a little bit. There's a lot of things that are so far ahead, people think they're close. And there's a lot of things that are actually close. People think it's far ahead. People try to kind of look at a whole landscape of technology development, admittedly, it's chaos. Anything can happen in any order at any time. And there's a whole bunch of things in there. People take it, clamp it, and put it into the next three years. And so then what happens is that there's some things that maybe can happen by the end of the year or next year and so on. And they push that into like few years ahead, because it's just hard to explain. And there are things that are like, we're looking at 20 years more, maybe, you know, hopefully in my lifetime type of things, because, you know, we don't know. I mean, we don't know how hard it is even. Like that's a problem. We don't know like if some of these problems are actually AI complete, like, we have no idea what's going on. And you know, we take all of that and then we clump it. And then we say three years from now. And then some of us are more optimistic. So they're shooting at the end of the year and some of us are more realistic. They say like five years, but you know, we all, I think it's just hard to know. And I think trying to predict like products ahead two, three years, it's hard to know in the following sense. You know, like we typically say, okay, this is a technology company, but sometimes, sometimes really you're trying to build something where the technology does, like there's a technology gap, you know, like, and Tesla had that with electric vehicles, you know, like when they first started, they would look at a chart much like a Moore's law type of chart. And they would just kind of extrapolate that out and they'd say, we want to be here. What's the technology to get that? We don't know. It goes like this. We're just going to, you know, keep going with AI that goes into the cars. We don't even have that. Like we don't, we can't, I mean, what can you quantify, like what kind of chart are you looking at? You know? 
But so I think when there's that technology gap, it's just kind of really hard to predict. So now I realize I talked for like five minutes and avoided your question. I didn't tell you anything about that. And it was very skillfully done. That was very well done. And I think you've actually argued that it's not that useful, that even any answer you provide now is not that useful. It's going to be very hard. There's one thing that I really believe in, and you know, this is not my idea and it's been discussed several times, but this kind of thing, something like a startup or a kind of innovative company, including definitely Waymo, Tesla, maybe even some of the other big companies that are trying things: this kind of iterated learning is very important. The fact that we're out there and we're trying things and so on, I think that's important. We try to understand. And I think that, you know, quote unquote Silicon Valley has done that with business models pretty well. And now I think we're trying to get to do it, but there's a literal technology gap. I mean, before, you know, I think these companies were building great technology to, for example, enable internet search to be done so quickly. And that kind of wasn't there so much, but at least it was a kind of technology that you could predict to some degree and so on. And now we're just kind of trying to build things where it's kind of hard to quantify: what kind of a metric are we looking at? So psychologically, as a leader of graduate students, and at Optimus Ride a bunch of brilliant engineers, just curiosity, psychologically, do you think it's good to think that whatever technology gap we're talking about can be closed by the end of the year? Or, you know, because we don't know. Do you want to say to yourself and to others around you, as a leader, that everything is going to improve exponentially, or do you want to be more, sort of, maybe not cynical, and I don't want to use realistic because it's hard to predict, but yeah, maybe more cynical, pessimistic about the ability to close that gap?
I think they kind of go out there and they do great things. They do their own kind of experiment. I think we do our own, and I think we're closing some similar technology gaps, but some are also orthogonal as well. You know, like we talked about, people being remote, it's something for the kind of environments that we're in; or think about a Tesla car, maybe you can enable it one day. Like there's low traffic, the kind of stop and go motion, you just hit the button and you can release, or maybe there's another lane that you can pass into, and you go into that. I think they can enable these kinds of things, I believe it. And so I think that part is really important and that is really key. And beyond that, when is it exactly going to happen and so on, I mean, like I said, it's very hard to predict. And I would imagine that it would be good to do some sort of a one or two year plan, when it's a little bit more predictable, of the technology gaps you close and the kind of product that would ensue. I know that from Optimus Ride or, you know, other companies that I get involved in. I mean, at some point you find yourself in a situation where you're trying to build a product and people are investing in that building effort, and those investors, as they compare the investments they want to make, they do want to know what happens in the next one or two years. And I think it's good to communicate that. But I think beyond that, it becomes a vision that we want to get to someday, and saying five years, ten years, I don't think it means anything. But iterated learning is key: to do and learn. I think that is key. You know, I've got to throw that criticism right back at you, in terms of, you know, like Tesla or somebody communicating how Summon works and so on. I got a chance to visit Optimus Ride and you guys are doing some awesome stuff, and yet the internet doesn't know about it. So you should also communicate more, showing off some of the awesome stuff, the stuff that works and the stuff that doesn't work. I mean, just the stuff I saw with the tracking of different objects and pedestrians, I mean, incredible stuff going on there. Maybe it's just the nerd in me, but I think the world would love to see that kind of stuff. Yeah, that's well taken. You know, I should say that it's not like we weren't able to. I think we made a decision at some point, and that decision did involve me quite a bit, on sort of doing this in kind of quote unquote stealth mode for a bit. But I think that we'll open it up quite a lot more. And I think that we are also at Optimus Ride kind of hitting a new era. You know, we're big now, we're doing a lot of interesting things, and I think some of the deployments that we've announced were some of the first bits of information that we put out into the world. We'll also put out our technology; a lot of the things that we've been developing are really amazing. And then we're going to start putting that out now. We're especially interested in being able to work with the best people.
And I think it's good to not just kind of show them when they come to our office for an interview, but to just put it out there in terms of, you know, getting people excited about what we're doing. So on the autonomous vehicle space, let me ask one last question. Elon Musk famously said that LIDAR is a crutch. I've talked to a bunch of people about it, and I've got to ask you, you used that crutch quite a bit in the DARPA days. And his idea in general is, sort of, more provocative and fun, I think, than a technical discussion, but the idea is that camera based, primarily camera based systems, are going to be what defines the future of autonomous vehicles. So what do you think of this idea? LIDAR is a crutch versus primarily camera based systems. First things first, I think, you know, I'm a big believer in just camera based autonomous vehicle systems. I think that you can put in a lot of autonomy and you can do great things. And it's very possible that at the time scales, like I said, we can't predict, 20 years from now you may be able to do things that we're doing today only with LIDAR, and then you may be able to do them just with cameras. And I think that I will put my name on it too: there will be a time when you can only use cameras and you'll be fine. At that time though, it's very possible that you find the LIDAR system as another robustifier, or it's so affordable that it's stupid not to just put it there. And I think we may be looking at a future like that. You think we're over relying on LIDAR right now? Because we understand it better, it's more reliable in many ways from a safety perspective. It's easier to build with. That's the other thing. To be very frank with you, I mean, we've seen a lot of autonomous vehicle companies come and go, and the approach has been, you know, you slap a LIDAR on a car and it's kind of easy to build with. When you have a LIDAR, you just kind of code it up, and you hit the button and you do a demo. So admittedly, there are a lot of people who focus on the LIDAR because it's easier to build with. That doesn't mean that without the LIDAR, with just cameras, you cannot do what they're doing; it's just kind of a lot harder. And so you need to have certain kinds of expertise to exploit that. What we believe in, and you may be seeing some of it, is that we believe in computer vision. We certainly work on computer vision at Optimus Ride, a lot, and we've been doing that from day one. And we also believe in sensor fusion. So we have a relatively minimal use of LIDARs, but we do use them. And in the future, I really believe that the following sequence of events may happen. First things first, number one, there may be a future in which there are cars with LIDARs and everything and the cameras, but in this 50 year ahead future, they can just drive with cameras as well, especially in some isolated environments; the cameras go and they do the thing in that same future.
It's very possible that, you know, the LIDARs are so cheap, and frankly make the software maybe a little less compute intensive at the very least, or maybe less complicated so that it can be certified or insured for its safety and things like that, that it's kind of stupid not to put the LIDAR in. Like, imagine this: you either pay money for the LIDAR or you pay money for the compute. And if you don't put the LIDAR in, it's a more expensive system because you have to put in a lot of compute. This is another possibility. I do think that a lot of the initial deployments of self driving vehicles will involve LIDARs. And especially either short range or low resolution LIDARs are actually not that hard to build in solid state. They're still scanning, but MEMS type scanning LIDARs and things like that, they're actually not that hard. The ones that maybe play with the spectrum and the phased arrays, those are a little bit harder, but putting a MEMS mirror in there that kind of scans the environment, it's not hard. The only thing is that, just like with a lot of the things that we do nowadays in developing technology, you hit fundamental limits of the universe: the speed of light becomes a problem when you're trying to scan the environment. So you either don't get good resolution or you don't get range. But, you know, it's still something that you can put in there affordably. So let me jump back to drones. You have a role in the Lockheed Martin Alpha Pilot Innovation Challenge, where teams compete in drone racing, a super cool, super intense, interesting application of AI. So can you tell me about the very basics of the challenge and where you fit in, what your thoughts are on this problem? And there are sort of echoes of the early DARPA challenge through the desert that we're seeing now with drone racing. Yeah. I mean, one interesting thing about it is that drone racing exists as an eSport. And so it's much like you're playing a game, but there's a real drone going around an environment. A human being is controlling it with goggles on. So it is a robot, but there's no AI. There's no AI. Yeah. A human being is controlling it. And so that's already there. And I've been interested in this problem for quite a while, actually, from a roboticist point of view. And that's what's happening in Alpha Pilot. Which problem? Of aggressive flight. Of aggressive flight, fully autonomous aggressive flight. The problem that I'm interested in, I mean, you asked about Alpha Pilot and I'll get there in a second, but the problem that I'm interested in, I'd love to build autonomous vehicles, like drones, that can go far faster than any human possibly can. I think we should recognize that we as humans have limitations in how fast we can process information, and those are biological limitations. We think about AI this way too. I mean, this has been discussed a lot and this is not my idea per se, but a lot of people kind of think about human level AI and they think that, you know, AI is not human level, one day it'll be human level, and humans and AIs, they kind of interact.
Versus I think that the situation really is that humans are at a certain place and AI keeps improving, and at some point it just crosses over, and then it gets smarter and smarter and smarter. And so drone racing, same issue. You just play this game, and you have to react in milliseconds. You see something with your eyes, and then that information just flows through your brain, into your hands, so that you can command it. And there are also some delays on getting information back and forth, but suppose those delays didn't exist. Just the delay between your eye and your fingers is a delay that a robot doesn't have to have. So we end up building, in my research group, systems that see things at a kilohertz, where a human eye would barely hit a hundred hertz. So imagine things that see stuff in slow motion, like 10x slow motion. It would be very useful. Like, we talked a lot about autonomous cars. We don't get to see it, but a hundred lives are lost every day, just in the United States, in traffic accidents. And many of them are like known cases, you know, you're coming down a ramp going onto a highway, you hit somebody and you're off, or you kind of get confused, you try to swerve into the next lane, you go off the road and you crash, whatever. And I think if you had enough compute in a car and a very fast camera, right at the time of an accident you could use all the compute you have. You could shut down the infotainment system and use those kinds of computing resources, instead of rendering, for the kind of artificial intelligence that goes in there, the autonomy. And you can either take control of the car and bring it to a full stop, but even if you can't do that, you can deliver what the human is trying to do. The human is trying to change the lane, but goes off the road, not being able to do that with their motor skills and their eyes. You can get in there, and there are so many other things that you can enable with what I would call high throughput computing. Data is coming in extremely fast, and in real time you have to process it. And the current CPUs, however fast you clock them, are typically not enough. You need to build those computers from the ground up so that they can ingest all that data. That's what I'm really interested in. Just on that point, really quick: currently, what's the bottleneck? Like, you mentioned the delays in humans. Is it the hardware? So you work a lot with Nvidia hardware. Is it the hardware or is it the software? I think it's both. I think it's both. In fact, they need to be co developed, I think, in the future. I mean, that's a little bit what Nvidia does, sort of: they almost build the hardware, and then they build the neural networks, and then they build the hardware back and the neural networks back, and it goes back and forth, but it's that co design. And, you know, way back we tried to build a fast drone that could use a camera image to track what's moving in order to find where it is in the world, the typical sort of visual inertial state estimation problems that we would solve. And we just kind of realized that we're at the limit sometimes of doing simple tasks.
We're at the limit of the camera frame rate because, you know, if you really want to track things, you want the camera image to be 90% or somewhat the same from one frame to the next. And why are we at the limit of the camera frame rate? It's because the camera captures data and puts it onto some serial connection. It could be USB, or there's something called the camera serial interface that we use a lot. It puts it onto some serial connection, and copper wires can only transmit so much data. You hit the channel limit on copper wires, and you hit yet another kind of universal limit on how fast you can transfer the data. So you have to be much more intelligent in how you capture those pixels. You can take compute and put it right next to the pixels. People are building those. How hard is it to do? How hard is it to get past the bottleneck of the copper wire? Yeah, you need to do a lot of parallel processing, as you can imagine. The same thing happens in GPUs, you know, the data is transferred in parallel somehow, it gets into some parallel processing. I think now we've really diverted off into so many different dimensions, but. Great. So it's aggressive flight. How do we make drones see many more frames a second in order to enable aggressive flight? That's a super interesting problem. That's an interesting problem. But like, think about it. You have CPUs. You clock them at several gigahertz. We don't clock them faster, largely because we run into some heating issues and things like that. But the whole thing is that at a three gigahertz clock, light travels on the order of a few inches, an inch or so, per cycle. That's the size of a chip. And so you pass a clock cycle, and as the clock signal is going around the chip, you pass another one. And so trying to coordinate that, the design complexity of the chip becomes so hard. I mean, we have hit the fundamental limits of the universe in so many things that we're designing. I don't know if people realize that. We can't make transistors smaller because of quantum effects, the electrons start to tunnel around. We can't clock them faster. One of the reasons why is because information doesn't travel any faster in the universe, and we're limited by that. Same thing with the laser scanner. So then it becomes clear that the way you organize the chip into a CPU or even a GPU, you now need to look at how to redesign that. If you're going to stick with silicon, and you could go do other things too, there's that too, but you really almost need to take those transistors and put them in a different way, so that the information travels on those transistors in a way that is much more specific to the high speed camera data coming in. And so that's one of the things that we talk about quite a bit (some rough numbers on these limits are below). So drone racing really kind of embodies that, and that's why it's exciting. It's exciting for people, you know, students like it. It embodies all those problems. But going back, we're building, quote unquote, another engine. And that engine, I hope, will one day be just as impactful as seat belts were in driving. I hope so. Or it could enable, you know, next generation autonomous air taxis and things like that. I mean, it sounds crazy, but one day we may need to perch land these things. If you really want to go from Boston to New York in something like an hour and a half, you may want a fixed wing aircraft.
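Putting rough numbers on two of the limits mentioned a moment ago, the serial camera link and the distance light travels per clock cycle; the link rate and image size below are just example values, not figures from the conversation:

```python
# How many raw frames per second fit through a serial camera link?
width, height, bits_per_pixel = 1280, 1024, 10      # example global-shutter sensor
bits_per_frame = width * height * bits_per_pixel
link_gbps = 2.5                                      # example serial-lane data rate
max_fps = link_gbps * 1e9 / bits_per_frame
print(f"~{max_fps:.0f} frames/s before the copper link saturates")   # ~190 fps

# How far does light travel in one cycle of a 3 GHz clock?
c = 3.0e8                                            # speed of light, m/s
clock_hz = 3.0e9
print(f"{c / clock_hz * 100:.0f} cm per cycle")      # 10 cm, roughly chip scale
```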
Most of these companies that are doing quote unquote flying cars, they're focusing on that. But then how do you land it on top of a building? You may need to pull off kind of fast maneuvers for a robot, like a perch landing; it's going to go perch onto a building. If you want to do that, you need these kinds of systems. And so drone racing, you know, it's being able to go way faster than any human can comprehend. Take an aircraft, forget the quadcopter, take your fixed wing, and while you're at it, you might as well put some rocket engines in the back and you just light it. It goes through the gate, and a human looks at it and just says, what just happened? And they would say, it's impossible for me to do that. And that's closing the same technology gap that would, you know, one day steer cars out of accidents. But then let's get back to the practical, which is sort of just getting the thing to work in a race environment, which is kind of another exciting thing, which the DARPA challenge through the desert did. You know, theoretically we had autonomous vehicles, but making them successfully finish a race, first of all, which nobody finished the first year, and then the second year just getting them to finish and go at a reasonable time, is a really difficult engineering, practically speaking, challenge. So let me ask about that. The Alpha Pilot challenge has, I guess, a big prize potentially associated with it. But let me ask, reminiscent of the DARPA days, predictions: do you think anybody will finish? Well, not soon. I think that depends on how you set up the race course. And so if the race course is a solo course, I think people will kind of do it. But can you set up some course, like literally some course, you get to design it as the algorithm developer, can you set up some course so that you can beat the best human? When is that going to happen? That's not very easy, even just setting up some course. And if you let the human that you're competing with set up the course, it becomes a lot harder. So in the space of all possible courses, where would humans win and where would machines win? Great question. Let's get to that. I want to answer your other question, which is, like, the DARPA challenge days, right? What was really hard? I think we understood what we wanted to build, but still, building things, that experimentation, that iterated learning, that takes up a lot of time actually. And so in my group, for example, in order for us to be able to develop fast, we build VR environments. We'll take an aircraft, we'll put it in a motion capture room, a big, huge motion capture room, and we'll fly it in real time. We'll render the camera images and beam them back to the drone. That sounds kind of notionally simple, but it's actually hard because now you're trying to fit all that data through the air into the drone. And so you need to do a few crazy things to make that happen. But once you do that, then at least you can try things. If you crash into something, you didn't actually crash. So it's like the whole drone is in VR. We can do augmented reality and so on. And so I think at some point testing becomes very important. One of the nice things about Alpha Pilot is that they built the drone, and they build a lot of drones, and it's okay to crash. In fact, I think maybe the viewers may kind of like to see things that crash. That potentially could be the most exciting part. It could be the exciting part.
And I think as an engineer, it's a very different situation to be in. Like in academia, a lot of my colleagues are actually in this race, and they're really great researchers, but I've seen them trying to do similar things whereby they built this one drone, and somebody with like a face mask and gloves is going right behind the drone, trying to hold it if it falls down. Imagine you don't have to do that. I think that's one of the nice things about the Alpha Pilot Challenge, where we have these drones and we're going to design the courses in a way that we'll keep pushing people up until the crashes start to happen. And we'll hopefully sort of, I don't think you want to tell people crashing is okay. Like we want to be careful here, because we don't want people to crash a lot, but certainly we want them to push it so that everybody crashes once or twice and they're really pushing it to their limits. That's where iterated learning comes in, because every crash is a lesson. Is a lesson. Exactly. So in terms of the space of possible courses, how do you think about it in the war of humans versus machines, where do machines win? We look at that quite a bit. I mean, I think that you will see quickly that you can design a course, and in certain courses, like in the middle somewhere, if you kind of run through the course once, the machine gets beaten pretty much consistently, but only slightly. But if you go through the course like 10 times, humans get beaten, very slightly but consistently. So humans at some point, you get confused, you get tired and things like that, versus this machine is just executing the same line of code tirelessly, just going back to the beginning and doing the same thing exactly. I think that kind of thing happens, and I realized, sort of as humans, there's the classical things that everybody has realized. Like if you put in some sort of strategic thinking, that's a little bit harder for machines to, I think, sort of comprehend. And repeatability is easy for machines to do, so that's what they excel in. You can build machines that excel in strategy as well and beat humans that way too, but that's a lot harder to build. I have a million more questions, but in the interest of time, last question. What is the most beautiful idea you've come across in robotics? Is it a simple equation, an experiment, a demo, a simulation, a piece of software? What just gives you pause? That's an interesting question. I have done a lot of work myself in decision making, so I've been interested in that area. So you know, in robotics, somehow the field has split into, like, you know, there's people who work on perception, how robots perceive the environment, then how do you actually make decisions, and there's also people on how people interact with robots, there's a whole bunch of different fields. And you know, I have admittedly worked a lot more on the control and decision making than the others. And I think that, you know, the one equation that has always kind of baffled me is Bellman's equation. And so it's this person who realized, like, way back, you know, more than half a century ago, how do you actually sit down and, if you have several variables that you're kind of jointly trying to determine, how do you determine that? And there's one beautiful equation that, you know, like today people do reinforcement learning, and we still use it.
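For reference, the equation being described is the Bellman optimality equation; in standard textbook notation (not notation from the conversation) it reads:

$$V^{*}(s) \;=\; \max_{a}\Big[\, R(s,a) \;+\; \gamma \sum_{s'} P(s' \mid s,a)\, V^{*}(s') \,\Big]$$

It fits on one line and can be taught in a first course on decision making, yet evaluating it exactly means sweeping over every state: with a hundred variables that each take ten values there are 10^100 joint assignments, more than the roughly 10^80 atoms estimated to be in the observable universe, which is the curse of dimensionality discussed next.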
And it's baffling to me because it both kind of tells you the simplicity, because it's a single equation that anyone can write down. You can teach it in the first course on decision making. At the same time, it tells you how computationally hard the problem is. I feel like a lot of the things that I've done at MIT for research have been kind of just this fight against computational efficiency. Like, how can we get it faster, to the point where we now got to, like, let's just redesign this chip. Like maybe that's the way. But I think it talks about how computationally hard certain problems can be, by what people nowadays call the curse of dimensionality. And so as the number of variables kind of grows, the number of decisions you can make grows rapidly. Like if you have, you know, a hundred variables and each one of them takes 10 values, the number of all possible assignments is more than the number of atoms in the universe. It's just crazy. And that kind of thinking is just embodied in that one equation that I really like. And the beautiful balance between it being theoretically optimal and, somehow, practically speaking, given the curse of dimensionality, nevertheless working in practice, you know, despite all those challenges, which is quite incredible. Which is quite incredible. So, you know, I would say that it's kind of quite baffling, actually, you know, in a lot of fields, when we think about how little we know, and so I think here too. We know that in the worst case, things are pretty hard, but you know, in practice, generally things work. So it's just kind of baffling, decision making, how little we know. Just like how little we know about the beginning of time, how little we know about, you know, our own future. Like if you actually go from Bellman's equation all the way down, I mean, there's also how little we know about mathematics. I mean, we don't even know if the axioms are consistent. It's just crazy. I think a good lesson there, just like as you said, we tend to focus on the worst case or the boundaries of everything we're studying, and then the average case seems to somehow work out. If you think about life in general, we mess it up a bunch. You know, we freak out about a bunch of the traumatic stuff, but in the end it seems to work out okay. Yeah. It seems like a good metaphor. So Sertac, thank you so much for being a friend, a colleague, a mentor. I really appreciate it. It's an honor to talk to you. Thank you so much for your advice. Thank you Lex. Thanks for listening to this conversation with Sertac Karaman and thank you to our presenting sponsor Cash App. Please consider supporting the podcast by downloading Cash App and using code LexPodcast. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman. And now let me leave you with some words from HAL 9000 from the movie 2001: A Space Odyssey. I'm putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do. Thank you for listening and hope to see you next time.
Sertac Karaman: Robots That Fly and Robots That Drive | Lex Fridman Podcast #97
The following is a conversation with Kate Darling, a researcher at MIT, interested in social robotics, robot ethics, and generally how technology intersects with society. She explores the emotional connection between human beings and lifelike machines, which for me is one of the most exciting topics in all of artificial intelligence. As she writes in her bio, she is a caretaker of several domestic robots, including her Pleo dinosaur robots, named Yochai, Peter, and Mr. Spaghetti. She is one of the funniest and brightest minds I've ever had the fortune to talk to. This conversation was recorded recently, but before the outbreak of the pandemic. For everyone feeling the burden of this crisis, I'm sending love your way. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. Quick summary of the ads. Two sponsors, Masterclass and ExpressVPN. Please consider supporting the podcast by signing up to Masterclass at masterclass.com slash Lex and getting ExpressVPN at expressvpn.com slash Lex Pod. This show is sponsored by Masterclass. Sign up at masterclass.com slash Lex to get a discount and to support this podcast. When I first heard about Masterclass, I thought it was too good to be true. For $180 a year, you get an all access pass to watch courses from, to list some of my favorites, Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, creator of SimCity and The Sims, love those games, on game design, Carlos Santana on guitar, Garry Kasparov on chess, Daniel Negreanu on poker, and many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money. By the way, you can watch it on basically any device. Once again, sign up on masterclass.com slash Lex to get a discount and to support this podcast. This show is sponsored by ExpressVPN. Get it at expressvpn.com slash Lex Pod to get a discount and to support this podcast. I've been using ExpressVPN for many years. I love it. It's easy to use, press the big power on button, and your privacy is protected. And, if you like, you can make it look like your location is anywhere else in the world. I might be in Boston now, but I can make it look like I'm in New York, London, Paris, or anywhere else. This has a large number of obvious benefits. Certainly, it allows you to access international versions of streaming websites like the Japanese Netflix or the UK Hulu. ExpressVPN works on any device you can imagine. I use it on Linux. Shout out to Ubuntu 20.04, Windows, Android, but it's available everywhere else too. Once again, get it at expressvpn.com slash Lex Pod to get a discount and to support this podcast. And now, here's my conversation with Kate Darling. You co-taught robot ethics at Harvard. What are some ethical issues that arise in the world with robots? Yeah, that was a reading group that I did when I, like, at the very beginning, first became interested in this topic. So, I think if I taught that class today, it would look very, very different.
Robot ethics, it sounds very science fictiony, especially did back then, but I think that some of the issues that people in robot ethics are concerned with are just around the ethical use of robotic technology in general. So, for example, responsibility for harm, automated weapon systems, things like privacy and data security, things like, you know, automation and labor markets. And then personally, I'm really interested in some of the social issues that come out of our social relationships with robots. One on one relationship with robots. Yeah. I think most of the stuff we have to talk about is like one on one social stuff. That's what I love. I think that's what you're, you love as well and are expert in. But a societal level, there's like, there's a presidential candidate now, Andrew Yang running, concerned about automation and robots and AI in general taking away jobs. He has a proposal of UBI, universal basic income of everybody gets 1000 bucks as a way to sort of save you if you lose your job from automation to allow you time to discover what it is that you would like to or even love to do. Yes. So I lived in Switzerland for 20 years and universal basic income has been more of a topic there separate from the whole robots and jobs issue. So it's so interesting to me to see kind of these Silicon Valley people latch onto this concept that came from a very kind of left wing socialist, kind of a different place in Europe. But on the automation labor markets topic, I think that it's very, so sometimes in those conversations, I think people overestimate where robotic technology is right now. And we also have this fallacy of constantly comparing robots to humans and thinking of this as a one to one replacement of jobs. So even like Bill Gates a few years ago said something about, maybe we should have a system that taxes robots for taking people's jobs. And it just, I mean, I'm sure that was taken out of context, he's a really smart guy, but that sounds to me like kind of viewing it as a one to one replacement versus viewing this technology as kind of a supplemental tool that of course is going to shake up a lot of stuff. It's going to change the job landscape, but I don't see, you know, robots taking all the jobs in the next 20 years. That's just not how it's going to work. Right. So maybe drifting into the land of more personal relationships with robots and interaction and so on. I got to warn you, I go, I may ask some silly philosophical questions. I apologize. Oh, please do. Okay. Do you think humans will abuse robots in their interactions? So you've had a lot of, and we'll talk about it sort of anthropomorphization and this intricate dance, emotional dance between human and robot, but there seems to be also a darker side where people, when they treat the other as servants, especially, they can be a little bit abusive or a lot abusive. Do you think about that? Do you worry about that? Yeah, I do think about that. So, I mean, one of my main interests is the fact that people subconsciously treat robots like living things. And even though they know that they're interacting with a machine and what it means in that context to behave violently. I don't know if you could say abuse because you're not actually abusing the inner mind of the robot. The robot doesn't have any feelings. As far as you know. Well, yeah. It also depends on how we define feelings and consciousness. But I think that's another area where people kind of overestimate where we currently are with the technology. Right. 
The robots are not even as smart as insects right now. And so I'm not worried about abuse in that sense. But it is interesting to think about what does people's behavior towards these things mean for our own behavior? Is it desensitizing the people to be verbally abusive to a robot or even physically abusive? And we don't know. Right. It's a similar connection from like if you play violent video games, what connection does that have to desensitization to violence? I haven't read literature on that. I wonder about that. Because everything I've heard, people don't seem to any longer be so worried about violent video games. Correct. The research on it is, it's a difficult thing to research. So it's sort of inconclusive, but we seem to have gotten the sense, at least as a society, that people can compartmentalize. When it's something on a screen and you're shooting a bunch of characters or running over people with your car, that doesn't necessarily translate to you doing that in real life. We do, however, have some concerns about children playing violent video games. And so we do restrict it there. I'm not sure that's based on any real evidence either, but it's just the way that we've kind of decided we want to be a little more cautious there. And the reason I think robots are a little bit different is because there is a lot of research showing that we respond differently to something in our physical space than something on a screen. We will treat it much more viscerally, much more like a physical actor. And so it's totally possible that this is not a problem. And it's the same thing as violence in video games. Maybe restrict it with kids to be safe, but adults can do what they want. But we just need to ask the question again because we don't have any evidence at all yet. Maybe there's an intermediate place too. I did my research on Twitter. By research, I mean scrolling through your Twitter feed. You mentioned that you were going at some point to an animal law conference. So I have to ask, do you think there's something that we can learn from animal rights that guides our thinking about robots? Oh, I think there is so much to learn from that. I'm actually writing a book on it right now. That's why I'm going to this conference. So I'm writing a book that looks at the history of animal domestication and how we've used animals for work, for weaponry, for companionship. And one of the things the book tries to do is move away from this fallacy that I talked about of comparing robots and humans because I don't think that's the right analogy. But I do think that on a social level, even on a social level, there's so much that we can learn from looking at that history because throughout history, we've treated most animals like tools, like products. And then some of them we've treated differently and we're starting to see people treat robots in really similar ways. So I think it's a really helpful predictor to how we're going to interact with the robots. Do you think we'll look back at this time like 100 years from now and see what we do to animals as like similar to the way we view like the Holocaust in World War II? That's a great question. I mean, I hope so. I am not convinced that we will. But I often wonder, you know, what are my grandkids going to view as, you know, abhorrent that my generation did that they would never do? And I'm like, well, what's the big deal? You know, it's a fun question to ask yourself. It always seems that there's atrocities that we discover later. 
So the things that at the time people didn't see as, you know, you look at everything from slavery to any kinds of abuse throughout history to the kind of insane wars that were happening to the way war was carried out and rape and the kind of violence that was happening during war that we now, you know, we see as atrocities, but at the time perhaps didn't as much. And so now I have this intuition, I have this worry, maybe you're going to probably criticize me, but I do anthropomorphize robots. I don't see a fundamental philosophical difference between a robot and a human being once the capabilities are matched. So the fact that we're really far away in terms of capabilities doesn't change that, and that's everything from natural language processing, understanding and generation, to just reasoning and all that stuff. I think once you solve it, I see, though, this is a very gray area, and I don't feel comfortable with the kind of abuse that people throw at robots. It's subtle, but I can see it becoming, I can see basically a civil rights movement for robots in the future. Do you think, let me put it in the form of a question, do you think robots should have some kinds of rights? Well, it's interesting because I came at this originally from your perspective. I was like, you know what, there's no fundamental difference between technology and like human consciousness. Like we can probably recreate anything. We just don't know how yet. And so there's no reason not to give machines the same rights that we have once, like you say, they're kind of on an equivalent level. But I realized that that is kind of a far future question. I still think we should talk about it because I think it's really interesting. But I realized that it's actually, we might need to ask the robot rights question even sooner than that while the machines are still, quote unquote, really dumb and not on our level because of the way that we perceive them. And I think one of the lessons we learned from looking at the history of animal rights and one of the reasons we may not get to a place in a hundred years where we view it as wrong to, you know, eat or otherwise, you know, use animals for our own purposes is because historically we've always protected those things that we relate to the most. So one example is whales. No one gave a shit about the whales. Am I allowed to swear? Yeah. No one gave a shit about the whales until someone recorded them singing. And suddenly people were like, oh, this is a beautiful creature and now we need to save the whales. And that started the whole Save the Whales movement in the 70s. So as much as I, and I think a lot of people want to believe that we care about consistent biological criteria, that's not historically how we formed our alliances. Yeah, so what, why do we, why do we believe that all humans are created equal? Killing a human being, no matter who the human being is, that's what I meant by equality, is bad. And then, because I'm connecting that to robots and I'm wondering whether mortality, so the killing act is what makes something, that's the fundamental first right. So I am currently allowed to take a shotgun and shoot a Roomba. I think, I'm not sure, but I'm pretty sure it's not considered murder, right. Or even shutting them off. So that's, that's where the line appears to be, right? Is this mortality a critical thing here?
I think here again, like the animal analogy is really useful because you're also allowed to shoot your dog, but people won't be happy about it. So we do give animals certain protections, like, you're not allowed to torture your dog and set it on fire, at least in most states and countries, but you're still allowed to treat it like a piece of property in a lot of other ways. And so we draw these arbitrary lines all the time. And, you know, there's a lot of philosophical thought on why viewing humans as something unique is just speciesism and not, you know, based on any criteria that would actually justify making a difference between us and other species. Do you think in general people, most people are good? Do you think, or do you think there's evil and good in all of us? Is that revealed through our circumstances and through our interactions? I like to view myself as a person who believes that there's no absolute evil and good and that everything is, you know, gray. But I do think it's an interesting question. Like when I see people being violent towards robotic objects, you said that bothers you because the robots might someday, you know, be smart. And is that why? Well, it bothers me because of what it reveals. So I personally believe, because I've studied this way too much, so I'm Jewish, I studied the Holocaust and World War II exceptionally well, I personally believe that most of us have evil in us. What bothers me is that the abuse of robots reveals the evil in human beings. And I think it doesn't just bother me. I think it's an opportunity for roboticists to help people find the better sides, the better angels of their nature, right? That abuse isn't just a fun side thing. That's you revealing a dark part that should be hidden deep inside. Yeah. I mean, you laugh, but some of our research does indicate that maybe people's behavior towards robots reveals something about their tendencies for empathy generally, even using very simple robots that we have today that clearly don't feel anything. So, you know, Westworld is maybe, you know, not so far off, and it's like, you know, depicting the bad characters as willing to go around and shoot and rape the robots and the good characters as not wanting to do that. Even without assuming that the robots have consciousness. So there's an opportunity, it's interesting, there's an opportunity to almost practice empathy on robots. I agree with you. Some people would say, why are we practicing empathy on robots instead of, you know, on our fellow humans or on animals that are actually alive and experience the world? And I don't agree with them because I don't think empathy is a zero sum game. And I do think that it's a muscle that you can train and that we should be doing that. But some people disagree. So the interesting thing, you've heard, you know, raising kids sort of asking them or telling them to be nice to the smart speakers, to Alexa and so on, saying please and so on during the requests. I don't know, I'm a huge fan of that idea because, yeah, that's towards the idea of practicing empathy. I feel like politeness, I'm always polite to all the systems that we build, especially anything that's speech interaction based. Like when we talk to the car, I always have a pretty good detector for please. I feel like there should be room for encouraging empathy in those interactions. Yeah. Okay. So I agree with you.
So I'm going to play devil's advocate. Sure. So what is the, what is the devil's advocate argument there? The devil's advocate argument is that if you are the type of person who has abusive tendencies or needs to get some sort of behavior like that out, needs an outlet for it, that it's great to have a robot that you can scream at so that you're not screaming at a person. And we just don't know whether that's true, whether it's an outlet for people or whether it just kind of, as my friend once said, trains their cruelty muscles and makes them more cruel in other situations. Oh boy. Yeah. And that expands to other topics, which I, I don't know, you know, there's the topic of sex, which is a weird one that I tend to avoid from a robotics perspective. And most of the general public doesn't, they talk about sex robots and so on. Is that an area you've touched at all research wise? Because that's what people imagine for sort of any kind of interaction between human and robot that shows any kind of compassion. They immediately think, from a product perspective in the near term, it's sort of an expansion of what pornography is and all that kind of stuff. Yeah. Do researchers touch this? Well, that's kind of you to characterize it as though there's rational thinking about a product. I feel like sex robots are just such a titillating news hook for people that they become the story. And it's really hard to not get fatigued by it when you're in the space, because you tell someone you do human robot interaction, of course, the first thing they want to talk about is sex robots. Yeah, it happens a lot. And it's unfortunate that I'm so fatigued by it, because I do think that there are some interesting questions that become salient when you talk about, you know, sex with robots. See, what I think would happen when people get sex robots, like if it's some guys, okay, guys get female sex robots, what I think there's an opportunity for is that they'll actually interact. What I'm trying to say is that, outside of the sex, the interaction would be the most fulfilling part. It's like the folks, there are movies on this, right, who pay a prostitute and then end up just talking to her the whole time. So I feel like there's an opportunity. It's like most guys and people in general joke about this, the sex act, but really people are just lonely inside and they're looking for connection. Many of them. And it'd be unfortunate if that connection is established through the sex industry. I feel like it should go into the front door of like, people are lonely and they want a connection. Well, I also feel like we should kind of, you know, de-stigmatize the sex industry because, you know, even prostitution, like there are prostitutes that specialize in disabled people who don't have the same kind of opportunities to explore their sexuality. So I feel like we should de-stigmatize all of that generally. But yeah, that connection and that loneliness is an interesting topic that you bring up, because while people are constantly worried about robots replacing humans and oh, if people get sex robots and the sex is really good, then they won't want their, you know, partner or whatever. But we rarely talk about robots actually filling a hole where there's nothing and what benefit that can provide to people. Yeah, I think that's an exciting, there's a whole giant, there's a giant hole that's unfillable by humans.
It's asking too much of your, of people, your friends and people you're in a relationship with in your family to fill that hole. There's, because, you know, it's exploring the full, like, you know, exploring the full complexity and richness of who you are. Like who are you really? Like people, your family doesn't have enough patience to really sit there and listen to who are you really. And I feel like there's an opportunity to really make that connection with robots. I just feel like we're complex as humans and we're capable of lots of different types of relationships. So whether that's, you know, with family members, with friends, with our pets, or with robots, I feel like there's space for all of that and all of that can provide value in a different way. Yeah, absolutely. So I'm jumping around. Currently most of my work is in autonomous vehicles. So the most popular topic among the general public is the trolley problem. So most, most, most roboticists kind of hate this question, but what do you think of this thought experiment? What do you think we can learn from it outside of the silliness of the actual application of it to the autonomous vehicle? I think it's still an interesting ethical question. And that in itself, just like much of the interaction with robots has something to teach us. But from your perspective, do you think there's anything there? Well, I think you're right that it does have something to teach us because, but I think what people are forgetting in all of these conversations is the origins of the trolley problem and what it was meant to show us, which is that there is no right answer. And that sometimes our moral intuition that comes to us instinctively is not actually what we should follow if we care about creating systematic rules that apply to everyone. So I think that as a philosophical concept, it could teach us at least that, but that's not how people are using it right now. These are friends of mine and I love them dearly and their project adds a lot of value. But if we're viewing the moral machine project as what we can learn from the trolley problems, the moral machine is, I'm sure you're familiar, it's this website that you can go to and it gives you different scenarios like, oh, you're in a car, you can decide to run over these two people or this child. What do you choose? Do you choose the homeless person? Do you choose the person who's jaywalking? And so it pits these like moral choices against each other and then tries to crowdsource the quote unquote correct answer, which is really interesting and I think valuable data, but I don't think that's what we should base our rules in autonomous vehicles on because it is exactly what the trolley problem is trying to show, which is your first instinct might not be the correct one if you look at rules that then have to apply to everyone and everything. So how do we encode these ethical choices in interaction with robots? For example, autonomous vehicles, there is a serious ethical question of do I protect myself? Does my life have higher priority than the life of another human being? Because that changes certain control decisions that you make. So if your life matters more than other human beings, then you'd be more likely to swerve out of your current lane. So currently automated emergency braking systems that just brake, they don't ever swerve. So swerving into oncoming traffic or no, just in a different lane can cause significant harm to others, but it's possible that it causes less harm to you. 
So that's a difficult ethical question. Do you have a hope that like the trolley problem is not supposed to have a right answer, right? Do you hope that when we have robots at the table, we'll be able to discover the right answer for some of these questions? Well, what's happening right now, I think, is this question that we're facing of what ethical rules should we be programming into the machines is revealing to us that our ethical rules are much less programmable than we probably thought before. And so that's a really valuable insight, I think, that these issues are very complicated and that in a lot of these cases, it's you can't really make that call, like not even as a legislator. And so what's going to happen in reality, I think, is that car manufacturers are just going to try and avoid the problem and avoid liability in any way possible. Or like they're going to always protect the driver because who's going to buy a car if it's programmed to kill someone? Yeah. Kill you instead of someone else. So that's what's going to happen in reality. But what did you mean by like once we have robots at the table, like do you mean when they can help us figure out what to do? No, I mean when robots are part of the ethical decisions. So no, no, no, not they help us. Well. Oh, you mean when it's like, should I run over a robot or a person? Right. That kind of thing. So what, no, no, no. So when you, it's exactly what you said, which is when you have to encode the ethics into an algorithm, you start to try to really understand what are the fundamentals of the decision making process you make to make certain decisions. Should you, like capital punishment, should you take a person's life or not to punish them for a certain crime? Sort of, you can use, you can develop an algorithm to make that decision, right? And the hope is that the act of making that algorithm, however you make it, so there's a few approaches, will help us actually get to the core of what is right and what is wrong under our current societal standards. But isn't that what's happening right now? And we're realizing that we don't have a consensus on what's right and wrong. You mean in politics in general? Well, like when we're thinking about these trolley problems and autonomous vehicles and how to program ethics into machines and how to, you know, make AI algorithms fair and equitable, we're realizing that this is so complicated and it's complicated in part because there doesn't seem to be a one right answer in any of these cases. Do you have a hope for, like one of the ideas of the moral machine is that crowdsourcing can help us converge towards, like democracy can help us converge towards the right answer. Do you have a hope for crowdsourcing? Well, yes and no. So I think that in general, you know, I have a legal background and policymaking is often about trying to suss out, you know, what rules does this particular society agree on and then trying to codify that. So the law makes these choices all the time and then tries to adapt according to changing culture. But in the case of the moral machine project, I don't think that people's choices on that website necessarily reflect what laws they would want in place. I think you would have to ask them a series of different questions in order to get at what their consensus is. I agree, but that has to do more with the artificial nature of, I mean, they're showing some cute icons on a screen. That's almost, so if you, for example, we do a lot of work in virtual reality. 
And so if you put those same people into virtual reality where they have to make that decision, their decision would be very different, I think. I agree with that. That's one aspect. And the other aspect is it's a different question to ask someone, would you run over the homeless person or the doctor in this scene? Or do you want cars to always run over the homeless people? I think, yeah. So let's talk about anthropomorphism. To me, anthropomorphism, if I can pronounce it correctly, is one of the most fascinating phenomena from like both the engineering perspective and the psychology perspective, machine learning perspective, and robotics in general. Can you step back and define anthropomorphism, how you see it in general terms in your work? Sure. So anthropomorphism is this tendency that we have to project human like traits and behaviors and qualities onto nonhumans. And we often see it with animals, like we'll project emotions on animals that may or may not actually be there. We often see that we're trying to interpret things according to our own behavior when we get it wrong. But we do it with more than just animals. We do it with objects, you know, teddy bears. We see, you know, faces in the headlights of cars. And we do it with robots very, very extremely. You think that can be engineered? Can that be used to enrich an interaction between an AI system and the human? Oh, yeah, for sure. And do you see it being used that way often? Like, I don't, I haven't seen, whether it's Alexa or any of the smart speaker systems, often trying to optimize for the anthropomorphization. You said you haven't seen? I haven't seen. They keep moving away from that. I think they're afraid of that. They actually, so I only recently found out, but did you know that Amazon has like a whole team of people who are just there to work on Alexa's personality? So I know that depends on what you mean by personality. I didn't know that exact thing. But I do know that how the voice is perceived is worked on a lot, whether if it's a pleasant feeling about the voice, but that has to do more with the texture of the sound and the audio and so on. But personality is more like... It's like, what's her favorite beer when you ask her? And the personality team is different for every country too. Like there's a different personality for German Alexa than there is for American Alexa. That said, I think it's very difficult to, you know, use the, really, really harness the anthropomorphism with these voice assistants because the voice interface is still very primitive. And I think that in order to get people to really suspend their disbelief and treat a robot like it's alive, less is sometimes more. You want them to project onto the robot and you want the robot to not disappoint their expectations for how it's going to answer or behave in order for them to have this kind of illusion. And with Alexa, I don't think we're there yet, or Siri, that they're just not good at that. But if you look at some of the more animal like robots, like the baby seal that they use with the dementia patients, it's a much more simple design. It doesn't try to talk to you. It can't disappoint you in that way. It just makes little movements and sounds and people stroke it and it responds to their touch. And that is like a very effective way to harness people's tendency to kind of treat the robot like a living thing. Yeah. 
So you bring up some interesting ideas in your paper, chapter I guess, Anthropomorphic Framing in Human-Robot Interaction, that I read the last time we scheduled this. Oh my God, that was a long time ago. Yeah. What are some good and bad cases of anthropomorphism in your perspective? Like, which are the good ones and which are bad? Well, I should start by saying that, you know, while design can really enhance the anthropomorphism, it doesn't take a lot to get people to treat a robot like it's alive. Like people will, over 85% of Roombas have a name, which I'm, I don't know the numbers for your regular type of vacuum cleaner, but they're not that high, right? So people will feel bad for the Roomba when it gets stuck, they'll send it in for repair and want to get the same one back. And that's, that one is not even designed to make you do that. So I think that some of the cases where it's maybe a little bit concerning that anthropomorphism is happening is when you have something that's supposed to function like a tool and people are using it in the wrong way. And one of the concerns is military robots where, so gosh, 2000, like early 2000s, which is a long time ago, iRobot, the Roomba company, made this robot called the PackBot that was deployed in Iraq and Afghanistan with the bomb disposal units that were there. And the soldiers became very emotionally attached to the robots. And that's fine until a soldier risks his life to save a robot, which you really don't want. But they were treating them like pets. Like they would name them, they would give them funerals with gun salutes, they would get really upset and traumatized when the robot got broken. So in situations where you want a robot to be a tool, in particular, when it's supposed to do a dangerous job that you don't want a person doing, it can be hard when people get emotionally attached to it. That's maybe something that you would want to discourage. Another case for concern is maybe when companies try to leverage the emotional attachment to exploit people. So if it's something that's not in the consumer's interest, trying to sell them products or services, or exploit an emotional connection to keep them paying for a cloud service for a social robot or something like that, I think that's a little bit concerning as well. Yeah, the emotional manipulation, which probably happens behind the scenes now with some like social networks and so on, but making it more explicit. What's your favorite robot? Fictional or real? No, real. Real robot, which you have felt a connection with or not like, not anthropomorphic connection, but I mean like you sit back and say, damn, this is an impressive system. Wow. So two different robots. So the Pleo baby dinosaur robot that is no longer sold that came out in 2007, that one I was very impressed with. But from an anthropomorphic perspective, I was impressed with how much I bonded with it, how much I like wanted to believe that it had this inner life. Can you describe Pleo, can you describe what it is? How big is it? What can it actually do? Yeah. Pleo is about the size of a small cat. It had a lot of motors that gave it this kind of lifelike movement. It had things like touch sensors and an infrared camera. So it had all these cool little technical features, even though it was a toy. And the thing that really struck me about it was that it could mimic pain and distress really well.
So if you held it up by the tail, it had a tilt sensor that, you know, told it what direction it was facing and it would start to squirm and cry out. If you hit it too hard, it would start to cry. So it was very impressive in design. And what's the second robot that you were, you said there might've been two that you liked. Yeah. So the Boston Dynamics robots are just impressive feats of engineering. Have you met them in person? Yeah. I recently got a chance to go visit and I, you know, I was always one of those people who watched the videos and was like, this is super cool, but also it's a product video. Like, I don't know how many times they had to shoot this to get it right. Yeah. But visiting them, I, you know, I'm pretty sure that I was very impressed. Let's put it that way. Yeah. And in terms of the control, I think that was a transformational moment for me when I met Spot Mini in person. Yeah. Because, okay, maybe this is a psychology experiment, but I anthropomorphized the crap out of it. So I immediately, it was like my best friend, right? I think it's really hard for anyone to watch Spot move and not feel like it has agency. Yeah. This movement, especially the arm on Spot Mini, really obviously looks like a head. Yeah. They say, no, we didn't mean it that way, but it obviously looks exactly like that. And so it's almost impossible to not think of it as, almost like the baby dinosaur, but slightly larger. And this movement of the, of course, the intelligence is, their whole idea is that it's not supposed to be intelligent. It's a platform on which you build higher intelligence. It's actually really, really dumb. It's just a basic movement platform. Yeah. But even dumb robots can, like, we can immediately respond to them in this visceral way. What are your thoughts about Sophia the robot? This kind of mix of some basic natural language processing and basically an art experiment. Yeah. An art experiment is a good way to characterize it. I'm much less impressed with Sophia than I am with Boston Dynamics. She said she likes you. She said she admires you. Yeah. She followed me on Twitter at some point. Yeah. She tweets about how much she likes you. So what does that mean? I have to be nice or? No, I don't know. I was emotionally manipulating you. No. How do you think of that? I think the whole thing that happened with Sophia is that quite a large number of people kind of immediately had a connection and thought that maybe we're far more advanced with robotics than we are, or actually didn't even think much. I was surprised how little people cared, that they kind of assumed that, well, of course AI can do this. Yeah. And then if they assume that, I felt they should be more impressed. Well, people really overestimate where we are. And so when something, I don't even think Sophia was very impressive or is very impressive. I think she's kind of a puppet, to be honest. But yeah, I think people are a little bit influenced by science fiction and pop culture to think that we should be further along than we are. So what's your favorite robot in movies and fiction? WALL-E. WALL-E. What do you like about WALL-E? The humor, the cuteness, the perception control systems operating on WALL-E that make it all work? Just in general? The design of WALL-E the robot, I think that animators figured out, starting in the 1940s, how to create characters that don't look real, but look like something that's even better than real, that we really respond to and think is really cute.
They figured out how to make them move and look in the right way. And WALL-E is just such a great example of that. You think eyes, big eyes or big something that's kind of eyeish? So it's always playing on some aspect of the human face, right? Often. Yeah. So big eyes. Well, I think one of the first animations to really play with this was Bambi. And they weren't originally going to do that. They were originally trying to make the deer look as lifelike as possible. They brought deer into the studio and had a little zoo there so that the animators could work with them. And then at some point they were like, if we make really big eyes and a small nose and big cheeks, kind of more like a baby face, then people like it even better than if it looks real. Do you think the future of things like Alexa in the home has the possibility to take advantage of that, to build on that, to create these systems that are better than real, that create a close human connection? I can pretty much guarantee you, without having any knowledge, that those companies are going to make these things. And companies are working on that design behind the scenes. I'm pretty sure. I totally disagree with you. Really? So that's what I'm interested in. I'd like to build such a company. I know a lot of those folks and they're afraid of that because how do you make money off of it? Well, but even just making Alexa look a little bit more interesting than just a cylinder would do so much. It's an interesting thought, but I don't think people from the Amazon perspective are looking for that kind of connection. They want you to be addicted to the services provided by Alexa, not to the device. So the device itself, it's felt that you can lose a lot, because if you create a connection, it creates more opportunity for frustration, for negative stuff, than it does for positive stuff, is I think the way they think about it. That's interesting. Like, I agree that it's very difficult to get right and you have to get it exactly right. Otherwise you wind up with Microsoft's Clippy. Okay, easy now. What's your problem with Clippy? You like Clippy? Is Clippy your friend? Yeah, I like Clippy. I just talked to, we just had this argument with Microsoft's CTO, and he said he's not bringing Clippy back. They're not bringing Clippy back and that's very disappointing. I think Clippy was the greatest assistant we've ever built. It was a horrible attempt, of course, but it's the best we've ever done, because it was a real attempt to have like an actual personality. I mean, obviously the technology was way not there at the time for being able to be a recommender system, for assisting you in anything, in typing in Word or any other kind of application, but still it was an attempt at personality that was legitimate, which I thought was brave. Yes, yes. Okay. You know, you've convinced me, I'll be slightly less hard on Clippy. And I know I have like an army of people behind me who also miss Clippy. Really? I want to meet these people. Who are these people? It's the people who like to hate stuff when it's there and miss it when it's gone. So everyone. It's everyone. Exactly. All right. So Anki and Jibo, the two companies, the two amazing companies, the social robotics companies that have recently been closed down. Yes. Why do you think it's so hard to create a personal robotics company? So making a business out of essentially something that people would anthropomorphize, have a deep connection with.
Why is it so hard to make it work? Is the business case not there or what is it? I think it's a number of different things. I don't think it's going to be this way forever. I think at this current point in time, it takes so much work to build something that only barely meets people's minimal expectations, because of science fiction and pop culture giving people this idea that we should be further than we already are. Like when people think about a robot assistant in the home, they think about Rosie from the Jetsons or something like that. And Anki and Jibo did such a beautiful job with the design and getting that interaction just right. But I think people just wanted more. They wanted more functionality. I think you're also right that the business case isn't really there, because there hasn't been a killer application that's useful enough to get people to adopt the technology in great numbers. I think what we did see from the people who did get Jibo is a lot of them became very emotionally attached to it. But that's not, I mean, it's kind of like the Palm Pilot back in the day. Most people are like, why do I need this? Why would I? They don't see how they would benefit from it until they have it, or some other company comes in and makes it a little better. Yeah. Like how far away are we, do you think? How hard is this problem? It's a good question. And I think it has a lot to do with people's expectations, and those keep shifting depending on what science fiction is popular. But also it's two things. It's people's expectation and people's need for an emotional connection. Yeah. And I believe the need is pretty high. Yes. But I don't think we're aware of it. That's right. There's, like, I really think this is life as we know it. So we've just kind of gotten used to it, of really, I hate to be dark because I have close friends, but we've gotten used to really never being close to anyone. Right. And we're deeply, I believe, okay, this is a hypothesis, I think we're deeply lonely, all of us, even those in deep fulfilling relationships. In fact, what makes those relationships fulfilling, I think, is that they at least tap into that deep loneliness a little bit. But I feel like there's more opportunity to explore that, that doesn't interfere with the human relationships you have. It expands more on that, yeah, the rich deep unexplored complexity that's all of us, weird apes. Okay. I think you're right. Do you think it's possible to fall in love with a robot? Oh yeah, totally. Do you think it's possible to have a long-term committed monogamous relationship with a robot? Well, yeah, there are lots of different types of long-term committed monogamous relationships. I think monogamous implies like, you're not going to see other humans sexually, or like you basically on Facebook have to say, I'm in a relationship with this person, this robot. I just don't like, again, I think this is comparing robots to humans when I would rather compare them to pets. Like you get a robot, it fulfills this loneliness that you have, in maybe not the same way as a pet, maybe in a different way that is even supplemental in a different way. But I'm not saying that people won't do this, be like, oh, I want to marry my robot, or I want to have like a sexual, monogamous relationship with my robot. But I don't think that that's the main use case for them. But you think that there's still a gap between human and pet. So between a husband and pet, there's a different relationship.
It's engineering. So that's a gap that can be closed through engineering. I think it could be closed someday, but why would we close that? Like, I think it's so boring to think about recreating things that we already have when we could create something that's different. I know you're thinking about the people who like don't have a husband and like, what could we give them? Yeah. But I guess what I'm getting at is maybe not. So like the movie Her. Yeah. Right. So a better husband. Well, maybe better in some ways. Like, I do think that robots are going to continue to be a different type of relationship, even if we get them like very human looking, or when, you know, the voice interactions we have with them feel very natural and human like, I think there's still going to be differences. And there were in that movie too, like towards the end, it kind of goes off the rails. But it's just a movie. So your intuition is that, because you kind of said two things, right? So one is why would you want to basically replicate the husband? Yeah. Right. And the other is kind of implying that it's kind of hard to do. So like anytime you try, you might build something very impressive, but it'll be different. I guess my question is about human nature. It's like, how hard is it to satisfy that role of the husband? So, removing any of the sexual stuff aside, it's more like the mystery, the tension, the dance of relationships. Do you think with robots that's difficult to build? What's your intuition? I think that, well, it also depends on, are we talking about robots now, in 50 years, in like an indefinite amount of time? I'm thinking like five or 10 years. Five or 10 years. I think that robots at best will be like, it's more similar to the relationship we have with our pets than the relationship that we have with other people. I got it. So what do you think it takes to build a system that exhibits greater and greater levels of intelligence? Like, it impresses us with its intelligence. A Roomba, so you talk about anthropomorphization, that doesn't, I think intelligence is not required. In fact, intelligence probably gets in the way sometimes, like you mentioned. But what do you think it takes to create a system where we sense that it has a human level intelligence? So something that, probably something conversational, human level intelligence. How hard do you think that problem is? It'd be interesting to sort of hear your perspective, not just purely, so I talk to a lot of people, how hard is the problem of conversational agents? How hard is it to pass the Turing test? But my sense is it's easier than just solving, it's easier than solving the pure natural language processing problem. Because I feel like you can cheat. Yeah. So how hard is it to pass the Turing test in your view? Well, I think again, it's all about expectation management. If you set up people's expectations to think that they're communicating with, what was it, a 13 year old boy from the Ukraine? Yeah, that's right. Then they're not going to expect perfect English, they're not going to expect perfect, you know, understanding of concepts, or even like being on the same wavelength in terms of like conversation flow. So it's much easier to pass in that case. Do you think, you kind of alluded to this too with audio, do you think it needs to have a body? I think that we definitely have, so we treat physical things with more social agency, because we're very physical creatures. I think a body can be useful. Does it get in the way? Are there negative aspects, like...
Yeah, there can be. So if you're trying to create a body that's too similar to something that people are familiar with, like I have this robot cat at home that Hasbro makes. And it's very disturbing to watch because I'm constantly assuming that it's going to move like a real cat and it doesn't, because it's like a $100 piece of technology. So it's very disappointing and it's very hard to treat it like it's alive. So you can get a lot wrong with the body too, but you can also use tricks, same as, you know, the expectation management of the 13 year old boy from the Ukraine. If you pick an animal that people aren't intimately familiar with, like the baby dinosaur, like the baby seal that people have never actually held in their arms, you can get away with much more because they don't have these preformed expectations. Yeah, I remember from a TED talk of yours or something, it clicked for me that nobody actually knows what a dinosaur looks like. So you can actually get away with a lot more. That was great. So what do you think about consciousness and mortality being displayed in a robot? So not actually having consciousness, but having these kind of human elements that are much more than just the interaction, much more than just, like you mentioned with the dinosaur, moving in interesting ways, but really being worried about its own death and really acting as if it's aware and self-aware and has an identity. Have you seen that done in robotics? What do you think about doing that? Is that a powerful good thing? Well, I think it can be a design tool that you can use for different purposes. So I can't say whether it's inherently good or bad, but I do think it can be a powerful tool. The fact that the Pleo mimics distress when you quote unquote hurt it is a really powerful tool to get people to engage with it in a certain way. I had a research partner that I did some of the empathy work with named Palash Nandy, and he had built a robot for himself that had like a lifespan and that would stop working after a certain amount of time, just because he was interested in whether he himself would treat it differently. And we know from Tamagotchis, those little games that we used to have that were extremely primitive, that people respond to this idea of mortality and you can get people to do a lot with little design tricks like that. Now, whether it's a good thing depends on what you're trying to get them to do. Have a deeper relationship, have a deeper connection. If it's for their own benefit, that sounds great. Okay. You could do that for a lot of other reasons. I see. So what kind of stuff are you worried about? So is it mostly about manipulation of your emotions for like advertisement and so on, things like that? Yeah, or data collection, or, I mean, you could think of governments misusing this to extract information from people. It's, you know, just like any other technological tool, it just raises a lot of questions. If you look at Facebook, if you look at Twitter and social networks, there's a lot of concern of data collection now. What, from the legal perspective or in general, how do we prevent the violation of, sort of, these companies crossing a line? It's a gray area, but crossing a line they shouldn't, in terms of manipulating, like we're talking about, manipulating our emotions, manipulating our behavior, using tactics that are not so savory. Yeah.
It's really difficult because we are starting to create technology that relies on data collection to provide functionality. And there's not a lot of incentive, even on the consumer side, to curb that because the other problem is that the harms aren't tangible. They're not really apparent to a lot of people because they kind of trickle down on a societal level. And then suddenly we're living in like 1984, which, you know, sounds extreme, but that book was very prescient and I'm not worried about, you know, these systems. I have, you know, Amazon's Echo at home and tell Alexa all sorts of stuff. And it helps me because, you know, Alexa knows what brand of diaper we use. And so I can just easily order it again. So I don't have any incentive to ask a lawmaker to curb that. But when I think about that data then being used against low income people to target them for scammy loans or education programs, that's then a societal effect that I think is very severe and, you know, legislators should be thinking about. But yeah, the gray area is the removing ourselves from consideration of like, of explicitly defining objectives and more saying, well, we want to maximize engagement in our social network. Yeah. And then just, because you're not actually doing a bad thing. It makes sense. You want people to keep a conversation going, to have more conversations, to keep coming back again and again, to have conversations. And whatever happens after that, you're kind of not exactly directly responsible. You're only indirectly responsible. So I think it's a really hard problem. Are you optimistic about us ever being able to solve it? You mean the problem of capitalism? It's like, because the problem is that the companies are acting in the company's interests and not in people's interests. And when those interests are aligned, that's great. But the completely free market doesn't seem to work because of this information asymmetry. But it's hard to know how to, so say you were trying to do the right thing. I guess what I'm trying to say is it's not obvious for these companies what the good thing for society is to do. Like, I don't think they sit there with, I don't know, with a glass of wine and a cat, like petting a cat, evil cat. And there's two decisions and one of them is good for society. One is good for the profit and they choose the profit. I think they actually, there's a lot of money to be made by doing the right thing for society. Because Google, Facebook have so much cash that they actually, especially Facebook, would significantly benefit from making decisions that are good for society. It's good for their brand. But I don't know if they know what's good for society. I don't think we know what's good for society in terms of how we manage the conversation on Twitter or how we design, we're talking about robots. Like, should we emotionally manipulate you into having a deep connection with Alexa or not? Yeah. Yeah. Do you have optimism that we'll be able to solve some of these questions? Well, I'm going to say something that's controversial, like in my circles, which is that I don't think that companies who are reaching out to ethicists and trying to create interdisciplinary ethics boards, I don't think that that's totally just trying to whitewash the problem and so that they look like they've done something. I think that a lot of companies actually do, like you say, care about what the right answer is. They don't know what that is, and they're trying to find people to help them find them. 
Not in every case, but I think it's much too easy to just vilify the companies as, like you say, sitting there with their cat going, heh, heh, heh, $1 million. That's not what happens. A lot of people are well meaning even within companies. I think that what we do absolutely need is more interdisciplinarity, both within companies, but also within the policymaking space because we've hurtled into a world where technological progress is much faster, it seems much faster than it was, and things are getting very complex. And you need people who understand the technology, but also people who understand what the societal implications are, and people who are thinking about this in a more systematic way to be talking to each other. There's no other solution, I think. You've also done work on intellectual property, so if you look at the algorithms that these companies are using, like YouTube, Twitter, Facebook, so on, I mean that's kind of, those are mostly secretive. The recommender systems behind these algorithms. Do you think about IP and the transparency of algorithms like this? Like what is the responsibility of these companies to open source the algorithms or at least reveal to the public how these algorithms work? So I personally don't work on that. There are a lot of people who do though, and there are a lot of people calling for transparency. In fact, Europe's even trying to legislate transparency, maybe they even have at this point, where like if an algorithmic system makes some sort of decision that affects someone's life, that you need to be able to see how that decision was made. It's a tricky balance because obviously companies need to have some sort of competitive advantage and you can't take all of that away or you stifle innovation. But yeah, for some of the ways that these systems are already being used, I think it is pretty important that people understand how they work. What are your thoughts in general on intellectual property in this weird age of software, AI, robotics? Oh, that it's broken. I mean, the system is just broken. So can you describe, I actually, I don't even know what intellectual property is in the space of software, what it means to, I mean, so I believe I have a patent on a piece of software from my PhD. You believe? You don't know? No, we went through a whole process. Yeah, I do. You get the spam emails like, we'll frame your patent for you. Yeah, it's much like a thesis. But that's useless, right? Or not? Where does IP stand in this age? What's the right way to do it? What's the right way to protect and own ideas when it's just code and this mishmash of something that feels much softer than a piece of machinery? Yeah. I mean, it's hard because there are different types of intellectual property and they're kind of these blunt instruments. It's like patent law is like a wrench. It works really well for an industry like the pharmaceutical industry. But when you try and apply it to something else, it's like, I don't know, I'll just hit this thing with a wrench and hope it works. So software, you have a couple of different options. Any code that's written down in some tangible form is automatically copyrighted. So you have that protection, but that doesn't do much because if someone takes the basic idea that the code is executing and just does it in a slightly different way, they can get around the copyright. So that's not a lot of protection.
Then you can patent software, but that's kind of, I mean, getting a patent costs, I don't know if you remember what yours cost or like, was it through an institution? Yeah, it was through a university. It was insane. There were so many lawyers, so many meetings. It made me feel like it must've been hundreds of thousands of dollars. It must've been something crazy. Oh yeah. It's insane the cost of getting a patent. And so this idea of protecting the inventor in their own garage who came up with a great idea is kind of, that's a thing of the past. It's all just companies trying to protect things and it costs a lot of money. And then with code, it's oftentimes by the time the patent is issued, which can take like five years, probably your code is obsolete at that point. So it's a very, again, a very blunt instrument that doesn't work well for that industry. And so at this point we should really have something better, but we don't. Do you like open source? Yeah. Is open source good for society? You think all of us should open source code? Well, so at the Media Lab at MIT, we have an open source default because what we've noticed is that people will come in, they'll write some code and they'll be like, how do I protect this? And we're like, that's not your problem right now. Your problem isn't that someone's going to steal your project. Your problem is getting people to use it at all. There's so much stuff out there. We don't even know if you're going to get traction for your work. And so open sourcing can sometimes help, you know, get people's work out there, but ensure that they get attribution for it, for the work that they've done. So like, I'm a fan of it in a lot of contexts. Obviously it's not like a one size fits all solution. So what I gleaned from your Twitter is, you're a mom. I saw a quote, a reference to baby bot. What have you learned about robotics and AI from raising a human baby bot? Well, I think that my child has made it more apparent to me that the systems we're currently creating aren't like human intelligence. Like there's not a lot to compare there. It's just, he has learned and developed in such a different way than a lot of the AI systems we're creating that that's not really interesting to me to compare. But what is interesting to me is how these systems are going to shape the world that he grows up in. And so I'm like even more concerned about kind of the societal effects of developing systems that, you know, rely on massive amounts of data collection, for example. So is he going to be allowed to use, like, Facebook? Facebook? Facebook is over. Kids don't use that anymore. Snapchat. What do they use? Instagram? Snapchat's over too. I don't know. I just heard that TikTok is over, which I've never even seen. So I don't know. No. We're old. We don't know. I need to, I'm going to start gaming and streaming my, my gameplay. So what do you see as the future of personal robotics, social robotics, interaction with other robots? Like what are you excited about if you were to sort of philosophize about what might happen in the next five, 10 years that would be cool to see? Oh, I really hope that we get kind of a home robot that makes it, that's a social robot and not just Alexa. Like it's, you know, I really love the Anki products. I thought Jibo was, had some really great aspects. So I'm hoping that a company cracks that. Me too. So Kate, it was wonderful talking to you today. Likewise. Thank you so much. It was fun.
Thanks for listening to this conversation with Kate Darling. And thank you to our sponsors, ExpressVPN and Masterclass. Please consider supporting the podcast by signing up to Masterclass at masterclass.com slash Lex and getting ExpressVPN at expressvpn.com slash LexPod. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple podcast, support it on Patreon, or simply connect with me on Twitter at Lex Fridman. And now let me leave you with some tweets from Kate Darling. First tweet is the pandemic has fundamentally changed who I am. I now drink the leftover milk in the bottom of the cereal bowl. Second tweet is I came on here to complain that I had a really bad day and saw that a bunch of you are hurting too. Love to everyone. Thank you for listening. I hope to see you next time.
Kate Darling: Social Robotics | Lex Fridman Podcast #98
The following is a conversation with Karl Friston, one of the greatest neuroscientists in history. Cited over 245,000 times, known for many influential ideas in brain imaging, neuroscience, and theoretical neurobiology, including especially the fascinating idea of the free energy principle for action and perception. Karl's mix of humor, brilliance, and kindness, to me, are inspiring and captivating. This was a huge honor and a pleasure. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter, at Lex Fridman, spelled F R I D M A N. As usual, I'll do a few minutes of ads now, and never any ads in the middle that can break the flow of the conversation. I hope that works for you, and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends by Bitcoin, and invest in the stock market with as little as $1. Since Cash App allows you to send and receive money digitally, let me mention a surprising fact related to physical money. Of all the currency in the world, roughly 8% of it is actual physical money. The other 92% of money only exists digitally. So again, if you get Cash App from the App Store, Google Play, and use the code LEXPODCAST, you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Karl Friston. How much of the human brain do we understand from the low level of neuronal communication to the functional level to the highest level, maybe the psychiatric disorder level? Well, we're certainly in a better position than we were last century. How far we've got to go, I think, is almost an unanswerable question. So you'd have to set the parameters, you know, what constitutes understanding, what level of understanding do you want? I think we've made enormous progress in terms of broad brush principles. Whether that affords a detailed cartography of the functional anatomy of the brain and what it does, right down to the microcircuitry and the neurons, that's probably out of reach at the present time. So the cartography, so mapping the brain, do you think mapping of the brain, the detailed, perfect imaging of it, does that get us closer to understanding of the mind, of the brain? So how far does it get us if we have that perfect cartography of the brain? I think there are lower bounds on that. It's a really interesting question. And it would determine the sort of scientific career you'd pursue if you believe that knowing every dendritic connection, every sort of microscopic, synaptic structure right down to the molecular level was gonna give you the right kind of information to understand the computational anatomy, then you'd choose to be a microscopist and you would study little cubic millimeters of brain for the rest of your life. If on the other hand you were interested in holistic functions and a sort of functional anatomy of the sort that a neuropsychologist would understand, you'd study brain lesions and strokes, just looking at the whole person. So again, it comes back to at what level do you want understanding? I think there are principled reasons not to go too far.
If you commit to a view of the brain as a machine that's performing a form of inference and representing things, that level of understanding is necessarily cast in terms of probability densities and ensemble densities, distributions. And what that tells you is that you don't really want to look at the atoms to understand the thermodynamics of probabilistic descriptions of how the brain works. So I personally wouldn't look at the molecules or indeed the single neurons in the same way if I wanted to understand the thermodynamics of some non equilibrium steady state of a gas or an active material, I wouldn't spend my life looking at the individual molecules that constitute that ensemble. I'd look at their collective behavior. On the other hand, if you go too coarse grain, you're gonna miss some basic canonical principles of connectivity and architectures. I'm thinking here, and this is a bit colloquial, but this current excitement about high field magnetic resonance imaging at seven Tesla, why? Well, it gives us for the first time the opportunity to look at the brain in action at the level of a few millimeters that distinguish between different layers of the cortex that may be very important in terms of evincing generic principles of canonical microcircuitry that are replicated throughout the brain that may tell us something fundamental about message passing in the brain and these density dynamics, or neuronal population dynamics, that underwrite our brain function. So somewhere between a millimeter and a meter. Lingering for a bit on the big questions if you allow me, what to you is the most beautiful or surprising characteristic of the human brain? I think it's its hierarchical and recursive aspect, its recurrent aspect. Of the structure or of the actual representational power of the brain? Well, I think one speaks to the other. I was actually answering in a dull minded way from the point of view of purely its anatomy and its structural aspects. I mean, there are many marvelous organs in the body. Let's take your liver for example. Without it, you wouldn't be around for very long and it does some beautiful and delicate biochemistry and homeostasis and evolved with a finesse that would easily parallel the brain but it doesn't have a beautiful anatomy. It has a simple anatomy which is attractive in a minimalist sense but it doesn't have that crafted structure of sparse connectivity and that recurrence and that specialization that the brain has. So you said a lot of interesting terms here. So the recurrence, the sparsity, but you also started by saying hierarchical. So I've never thought of our brain as hierarchical. Sort of I always thought it's just like a giant mess, interconnected mess where it's very difficult to figure anything out. But in what sense do you see the brain as hierarchical? Well, I see it, it's not a magic soup. Which of course is what I used to think before I studied medicine and the like. So a lot of those terms imply each other. So hierarchies, if you just think about the nature of a hierarchy, how would you actually build one? And what you would have to do is basically carefully remove the right connections that destroy the completely connected soups that you might have in mind. So a hierarchy is in and of itself defined by a sparse and particular connectivity structure. I'm not committing to any particular form of hierarchy. But your sense is there is some. Oh, absolutely, yeah.
In virtue of the fact that there is a sparsity of connectivity, not necessarily of a qualitative sort, but certainly of a quantitative sort. So it is demonstrably so that the further apart two parts of the brain are, the less likely they are to be wired, to possess axonal processes, neuronal processes that directly communicate one message or messages from one part of that brain to the other part of the brain. So we know there's a sparse connectivity. And furthermore, on the basis of anatomical connectivity and tracer studies, we know that that sparsity underwrites a hierarchical and very structured sort of connectivity that might be best understood a little bit like an onion. There is a concentric, sometimes referred to as centripetal by people like Marsel Mesulam, hierarchical organization to the brain. So you can think of the brain as in a rough sense, like an onion, and all the sensory information and all the efferent outgoing messages that supply commands to your muscles or to your secretory organs come from the surface. So there's a massive exchange interface with the world out there on the surface. And then underneath, there's a little layer that sits and looks at the exchange on the surface. And then underneath that, there's a layer right the way down to the very center, to the deepest part of the onion. That's what I mean by a hierarchical organization. There's a discernible structure defined by the sparsity of connections that lends the architecture a hierarchical structure that tells one a lot about the kinds of representations and messages. Coming back to your earlier question, is this about the representational capacity or is it about the anatomy? Well, one underwrites the other. If one simply thinks of the brain as a message passing machine, a process that is in the service of doing something, then the circuitry and the connectivity that shape that message passing also dictate its function. So you've done a lot of amazing work in a lot of directions. So let's look at one aspect of that, of looking into the brain and trying to study this onion structure. What can we learn about the brain by imaging it? Which is one way to sort of look at the anatomy of it. Broadly speaking, what are the methods of imaging, but even bigger, what can we learn about it? Right, so well, most human neuroimaging that you might see in science journals that speaks to the way the brain works, measures brain activity over time. So that's the first thing to say, that we're effectively looking at fluctuations in neuronal responses, usually in response to some sensory input or some instruction, some task. Not necessarily, there's a lot of interest in just looking at the brain in terms of resting state, endogenous, or intrinsic activity. But crucially, at every point, looking at these fluctuations, either induced or intrinsic in the neural activity, and understanding them at two levels. So normally, people would recourse to two principles of brain organization that are complementary. One, functional specialization or segregation. So what does that mean? It simply means that there are certain parts of the brain that may be specialized for certain kinds of processing. For example, visual motion, our ability to recognize or to perceive movement in the visual world. And furthermore, that specialized processing may be spatially or anatomically segregated, leading to functional segregation.
Which means that if I were to compare your brain activity during a period of viewing a static image, and then compare that to the responses of fluctuations in the brain when you were exposed to a moving image, say a flying bird, we'd expect to see restricted, segregated differences in activity. And those are basically the hotspots that you see in the statistical parametric maps that test for the significance of the responses that are circumscribed. So now, basically, we're talking about some people have perhaps unkindly called a neocartography. This is a phrenology augmented by modern day neuroimaging, basically finding blobs or bumps on the brain that do this or do that, and trying to understand the cartography of that functional specialization. So how much is there such, this is such a beautiful sort of ideal to strive for. We humans, scientists, would like this, to hope that there's a beautiful structure to this where it's, like you said, there's segregated regions that are responsible for the different function. How much hope is there to find such regions in terms of looking at the progress of studying the brain? Oh, I think enormous progress has been made in the past 20 or 30 years. So this is beyond incremental. At the advent of brain imaging, the very notion of functional segregation was just a hypothesis based upon a century, if not more, of careful neuropsychology, looking at people who had lost via insult or traumatic brain injury particular parts of the brain, and then saying, well, they can't do this or they can't do that. For example, losing the visual cortex and not being able to see, or losing particular parts of the visual cortex or regions known as V5 or the middle temporal region, MT, and noticing that they selectively could not see moving things. And so that created the hypothesis that perhaps visual movement processing was located in this functionally segregated area. And you could then go and put invasive electrodes in animal models and say, yes, indeed, we can excite activity here. We can form receptive fields that are sensitive to or defined in terms of visual motion. But at no point could you exclude the possibility that everywhere else in the brain was also very interested in visual motion. By the way, I apologize to interrupt, but a tiny little tangent. You said animal models, just out of curiosity, from your perspective, how different is the human brain versus the other animals in terms of our ability to study the brain? Well, clearly, the further away you go from a human brain, the greater the differences, but not as remarkable as you might think. So people will choose their level of approximation to the human brain, depending upon the kinds of questions that they want to answer. So if you're talking about sort of canonical principles of microcircuitry, it might be perfectly okay to look at a mouse, indeed. You could even look at flies, worms. If, on the other hand, you wanted to look at the finer details of organization of visual cortex and V1, V2, these are designated patches of cortex that may do different things, indeed, do. You'd probably want to use a primate that looked a little bit more like a human, because there are lots of ethical issues in terms of the use of nonhuman primates to answer questions about human anatomy. But I think most people assume that most of the important principles are conserved in a continuous way, right from, well, yes, worms right through to you and me. 
So now returning to, so that was the early sort of ideas of studying the functional regions of the brain by if there's some damage to it, to try to infer that that part of the brain might be somewhat responsible for this type of function. So where does that lead us? What are the next steps beyond that? Right, well, I'll just actually just reverse a bit, come back to your sort of notion that the brain is a magic soup. That was actually a very prominent idea at one point, notions such as Lashley's law of mass action inherited from the observation that for certain animals, if you just took out spoonfuls of the brain, it didn't matter where you took these spoonfuls out, they always showed the same kinds of deficits. So it was very difficult to infer functional specialization purely on the basis of lesion deficit studies. But once we had the opportunity to look at the brain lighting up and literally its sort of excitement, neuronal excitement, when looking at this versus that, one was able to say, yes, indeed, these functionally specialized responses are very restricted and they're here or they're over there. If I do this, then this part of the brain lights up. And that became doable in the early 90s. In fact, shortly before with the advent of positron emission tomography. And then functional magnetic resonance imaging came along in the early 90s. And since that time, there has been an explosion of discovery, refinement, confirmation. There are people who believe that it's all in the anatomy. If you understand the anatomy, then you understand the function at some level. And many, many hypotheses were predicated on a deep understanding of the anatomy and the connectivity, but they were all confirmed and taken much further with neuroimaging. So that's what I meant by we've made an enormous amount of progress in this century indeed, and in relation to the previous century, by looking at these functionally selective responses. But that wasn't the whole story. So there's this sort of neo-phrenology of finding bumps and hot spots in the brain that did this or that. The bigger question was, of course, the functional integration. How all of these regionally specific responses were orchestrated, how they were distributed, how did they relate to distributed processing and indeed representations in the brain. So then you turn to the more challenging issue of the integration, the connectivity. And then we come back to this beautiful, sparse, recurrent, hierarchical connectivity that seems characteristic of the brain and probably not many other organs. But nevertheless, we come back to this challenge of trying to figure out how everything is integrated. But what's your feeling? What's the general consensus? Have we moved away from the magic soup view of the brain? So there is a deep structure to it. And then maybe a further question. You said some people believe that the structure is most of it, that you can really get at the core of the function by just deeply understanding the structure. Where do you sit on that, do you? I think it's got some mileage to it, yes, yeah. So it's a worthy pursuit of going, of studying through imaging and all the different methods to actually study the structure. No, absolutely, yeah, yeah. Sorry, I'm just noting, you were accusing me of using lots of long words and then you introduced one there, which is deep, which is interesting. Because deep is the sort of millennial equivalent of hierarchical.
So if you've put deep in front of anything, not only are you very millennial and very trending, but you're also implying a hierarchical architecture. So it is a depth, which is, for me, the beautiful thing. That's right, the word deep kind of, yeah, exactly, it implies hierarchy. I didn't even think about that. That indeed, the implicit meaning of the word deep is hierarchy. Yep. Yeah. So deep inside the onion is the center of your soul. Beautifully put. Maybe briefly, if you could paint a picture of the kind of methods of neuroimaging, maybe the history which you were a part of, from statistical parametric mapping. I mean, just what's out there that's interesting for people maybe outside the field to understand of what are the actual methodologies of looking inside the human brain? Right, well, you can answer that question from two perspectives. Basically, it's the modality. What kind of signal are you measuring? And they can range from, and let's limit ourselves to sort of imaging based noninvasive techniques. So you've essentially got brain scanners, and brain scanners can either measure the structural attributes, the amount of water, the amount of fat, or the amount of iron in different parts of the brain, and you can make lots of inferences about the structure of the organ of the sort that you might have produced from an X ray, but a very nuanced X ray that is looking at this kind of property or that kind of property. So looking at the anatomy noninvasively would be the first sort of neuroimaging that people might want to employ. Then you move on to the kinds of measurements that reflect dynamic function, and the most prevalent of those fall into two camps. You've got these metabolic, sometimes hemodynamic, blood related signals. So these metabolic and or hemodynamic signals are basically proxies for elevated activity and message passing and neuronal dynamics in particular parts of the brain. Characteristically though, the time constants of these hemodynamic or metabolic responses to neural activity are much longer than the neural activity itself. And this is referring, forgive me for the dumb questions, but this would be referring to blood, like the flow of blood. Absolutely, absolutely. So there's a ton of, it seems like there's a ton of blood vessels in the brain. Yeah. So what's the interaction between the flow of blood and the function of the neurons? Is there an interplay there or? Yup, yup, and that interplay accounts for several careers of world renowned scientists, yes, absolutely. So this is known as neurovascular coupling, is exactly what you said. It's how does the neural activity, the neuronal infrastructure, the actual message passing that we think underlies our capacity to perceive and act, how is that coupled to the vascular responses that supply the energy for that neural processing? So there's a delicate web of large vessels, arteries and veins, that gets progressively finer and finer in detail until it perfuses at a microscopic level, the machinery where little neurons lie. So coming back to this sort of onion perspective, we were talking before using the onion as a metaphor for a deep hierarchical structure, but also I think it's just anatomically quite a useful metaphor. All the action, all the heavy lifting in terms of neural computation is done on the surface of the brain, and then the interior of the brain is constituted by fatty wires, essentially, axonal processes that are enshrouded by myelin sheaths. 
And these, when you dissect them, they look fatty and white, and so it's called white matter, as opposed to the actual neuropil, which does the computation constituted largely by neurons, and that's known as gray matter. So the gray matter is a surface or a skin that sits on top of this big ball, now we are talking magic soup, but a big ball of connections like spaghetti, very carefully structured with sparse connectivity that preserves this deep hierarchical structure, but all the action takes place on the surface, on the cortex of the onion, and that means that you have to supply the right amount of blood flow, the right amount of nutrient, which is rapidly absorbed and used by neural cells that don't have the same capacity that your leg muscles would have to basically spend their energy budget and then claim it back later. So one peculiar thing about cerebral metabolism, brain metabolism, is it really needs to be driven in the moment, which means you basically have to turn on the taps. So if there's lots of neural activity in one part of the brain, a little patch of a few millimeters, even less possibly, you really do have to water that piece of the garden now and quickly, and by quickly I mean within a couple of seconds. So that contains a lot of, hence the imaging could tell you a story of what's happening. Absolutely, but it is slightly compromised in terms of the resolution. So the deployment of these little microvessels that water the garden to enable the neural activity to play out, the spatial resolution is on the order of a few millimeters, and crucially, the temporal resolution is on the order of a few seconds. So you can't get right down and dirty into the actual spatial and temporal scale of neural activity in and of itself. To do that, you'd have to turn to the other big imaging modality, which is the recording of electromagnetic signals as they're generated in real time. So here, the temporal bandwidth, if you like, or the lower limit on the temporal resolution is incredibly small, talking about milliseconds. And then you can get into the phasic fast responses that is in and of itself the neural activity, and start to see the succession or cascade of hierarchical recurrent message passing evoked by a particular stimulus. But the problem is you're looking at electromagnetic signals that have passed through an enormous amount of magic soup or spaghetti of connectivity, and through the scalp and the skull, and it's become spatially very diffuse. So it's very difficult to know where you are. So you've got this sort of catch 22. You can either use an imaging modality that tells you within millimeters which part of the brain is activated, but you don't know when, or you've got these electromagnetic EEG, MEG setups that tell you to within a few milliseconds when something has responded, but you're not sure where. So you've got these two complementary measures, either indirect via the blood flow, or direct via the electromagnetic signals caused by neural activity. These are the two big imaging devices. And then the second level of responding to your question, what are the, from the outside, what are the big ways of using this technology? So once you've chosen the kind of neural imaging that you want to use to answer your set questions, and sometimes it would have to be both, then you've got a whole raft of analyses, time series analyses usually, that you can bring to bear in order to answer your questions or address your hypothesis about those data.
And interestingly, they both fall into the same two camps we were talking about before, this dialectic between specialization and integration, differentiation and integration. So it's the cartography, the blobology analyses. I apologize, I probably shouldn't interrupt so much, but I just heard a fun word, the blob... Blobology. Blobology. It's a neologism, which means the study of blobs. So nothing bob. Are you being witty and humorous, or does the word blobology ever appear in a textbook somewhere? It would appear in a popular book. It would not appear in a worthy specialist journal. Yeah, I thought so. It's the fond word for the study of literally little blobs on brain maps showing activations. So the kind of thing that you'd see in the newspapers on ABC or BBC reporting the latest finding from brain imaging. Interestingly though, the maths involved in that stream of analysis does actually call upon the mathematics of blobs. So seriously, they're actually called Euler characteristics and they have a lot of fancy names in mathematics. We'll talk about it, about your ideas on the free energy principle. I mean, there's echoes of blobs there when you consider sort of entities, mathematically speaking. Yes, absolutely. Well circumscribed, well defined, you know, entities of, well, from the free energy point of view, entities of anything, but from the point of view of the analysis, the cartography of the brain, these are the entities that constitute the evidence for this functional segregation. You have segregated this function in this blob and it is not outside of the blob. And that's basically the, if you were a map maker of America and you did not know its structure, the first thing you would be doing in constituting or creating a map would be to identify the cities, for example, or the mountains or the rivers. All of these uniquely spatially localizable features, possibly topological features have to be placed somewhere because that requires a mathematics of identifying what does a city look like on a satellite image or what does a river look like or what does a mountain look like? What would it, you know, what data features would evidence that particular top, you know, that particular thing that you wanted to put on the map? And they normally are characterized in terms of literally these blobs or these sort of, another way of looking at this is that a certain statistical measure of the degree of activation crosses a threshold and in crossing that threshold in the spatially restricted part of the brain, it creates a blob. And that's basically what statistical parametric mapping does. It's basically mathematically finessed blobology. Okay, so those are the, you kind of described these two methodologies for, one is temporally noisy, one is spatially noisy and you kind of have to play and figure out what can be useful. It'd be great if you can sort of comment. I got a chance recently to spend a day at a company called Neuralink that uses brain computer interfaces and their dream is to, well, there's a bunch of sort of dreams, but one of them is to understand the brain by sort of, you know, getting in there past the so called sort of factory wall, getting in there and be able to listen, communicate both directions. What are your thoughts about this, the future of this kind of technology of brain computer interfaces to be able to now have a window or direct contact within the brain to be able to measure some of the signals, to be able to sense signals, to understand some of the functionality of the brain?
Ambivalent, my sense is ambivalent. So it's a mixture of good and bad and I acknowledge that freely. So the good bits, if you just look at the legacy of that kind of reciprocal but invasive brain stimulation, I didn't paint a complete picture when I was talking about sort of the ways we understand the brain prior to neuroimaging. It wasn't just lesion deficit studies. Some of the early work, in fact, literally 100 years ago, from where we're sitting at the Institute of Neurology, was done by stimulating the brain of say dogs and looking at how they responded with their muscles or with their salivation and imputing what that part of the brain must be doing. If I stimulate it and evoke this kind of response, then that tells me quite a lot about the functional specialization. So there's a long history of brain stimulation which continues to enjoy a lot of attention nowadays. Positive attention. Oh yes, absolutely. You know, deep brain stimulation for Parkinson's disease is now a standard treatment and also a wonderful vehicle to try and understand the neuronal dynamics underlying movement disorders like Parkinson's disease. Even interest in magnetic stimulation, stimulating with magnetic fields, and will it work in people who are depressed, for example. Quite a crude level of understanding what you're doing, but there is historical evidence that these kinds of brute force interventions do change things. They, you know, it's a little bit like banging the TV when the valves aren't working properly, but it's still, it works. So, you know, there is a long history. Brain computer interfacing or BCI, I think is a beautiful example of that. It's sort of carved out its own niche and its own aspirations and there've been enormous advances within limits. Advances in terms of our ability to understand how the brain, the embodied brain, engages with the world. I'm thinking here of sensory substitution, augmenting our sensory capacities by giving ourselves extra ways of sensing and sampling the world, ranging from sort of trying to replace lost visual signals through to giving people completely new signals. So, one of the, I think, most engaging examples of this is equipping people with a sense of magnetic fields. So you can actually give them magnetic sensors that enable them to feel, should we say, tactile pressure around their tummy, where they are in relation to the magnetic field of the Earth. And after a few weeks, they take it for granted. They integrate it, they embody this, assimilate this new sensory information into the way that they literally feel their world, but now equipped with this sense of magnetic direction. So that tells you something about the brain's plastic potential to remodel and its plastic capacity to suddenly try to explain the sensory data at hand by augmenting the sensory sphere and the kinds of things that you can measure. Clearly, that's purely for entertainment and understanding the nature and the power of our brains. I would imagine that most BCI is pitched at solving clinical and human problems such as locked in syndrome, such as paraplegia, or replacing lost sensory capacities like blindness and deafness. So then we come to the negative part of my ambivalence, the other side of it. So I don't want to be deflationary because much of my deflationary comments probably come more out of ignorance than anything else.
But generally speaking, the bandwidth and the bit rates that you get from brain computer interfaces as we currently know them, we're talking about bits per second. So that would be like me only being able to communicate with the world or with you using very, very, very slow Morse code. And it is not even within an order of magnitude near what we actually need for an enactive realization of what people aspire to when they think about sort of curing people with paraplegia or replacing sight despite heroic efforts. So one has to ask, is there a lower bound on the kinds of recurrent information exchange between a brain and some augmented or artificial interface? And then we come back to, interestingly, what I was talking about before, which is if you're talking about function in terms of inference, and I presume we'll get to that later on in terms of the free energy principle, then at the moment, there may be fundamental reasons to assume that is the case. We're talking about ensemble activity. We're talking about basically, for example, let's paint the challenge facing brain computer interfacing in terms of controlling another system that is highly and deeply structured, very relevant to our lives, very nonlinear, that rests upon the kind of nonequilibrium steady states and dynamics that the brain does, the weather, all right? So imagine you had some very aggressive satellites that could produce signals that could perturb some little parts of the weather system. And then what you're asking now is, can I meaningfully get into the weather and change it meaningfully and make the weather respond in a way that I want it to? You're talking about chaos control on a scale which is almost unimaginable. So there may be fundamental reasons why BCI, as you might read about it in a science fiction novel, aspirational BCI may never actually work in the sense that to really be integrated and be part of the system is a requirement that requires you to have evolved with that system, that you have to be part of a very delicately structured, deeply structured, dynamic, ensemble activity that is not like rewiring a broken computer or plugging in a peripheral interface adapter. It is much more like getting into the weather patterns or, come back to your magic soup, getting into the active matter and meaningfully relate that to the outside world. So I think there are enormous challenges there. So I think the example of the weather is a brilliant one. And I think you paint a really interesting picture and it wasn't as negative as I thought. It's essentially saying that it might be incredibly challenging, including the lower bound on the bandwidth and so on. I kind of, so just for full disclosure, I come from the machine learning world. So my natural thought is the hardest part is the engineering challenge of controlling the weather, of getting those satellites up and running and so on. And once they are, then the rest is fundamentally the same approaches that allow you to win in the game of Go will allow you to potentially play in this soup, in this chaos. So I have a hope that sort of machine learning methods will help us play in this soup. But perhaps you're right that it is the biology, and the brain is just an incredible system that may be almost impossible to get into. But for me, what seems impossible is the incredible mess of blood vessels that you also described, without damaging anything, because we also value the brain. You can't make any mistakes, you can't damage things. So to me, that engineering challenge seems nearly impossible.
One of the things I was really impressed by at Neuralink is just talking to brilliant neurosurgeons and the roboticists that made me realize that even though it seems impossible, if anyone can do it, it's some of these world class engineers that are trying to take it on. So I think the conclusion of our discussion here of this part is basically that the problem is really hard but hopefully not impossible. Absolutely. So if it's okay, let's start with the basics. So you've also formulated a fascinating principle, the free energy principle. Could we maybe start at the basics and what is the free energy principle? Well, in fact, the free energy principle inherits a lot from the building of these data analytic approaches to these very high dimensional time series you get from the brain. So I think it's interesting to acknowledge that. And in particular, the analysis tools that try to address the other side, which is the functional integration, so the connectivity analyses. So on the one hand, but I should also acknowledge it inherits an awful lot from machine learning as well. So the free energy principle is just a formal statement that the existential imperatives for any system that manages to survive in a changing world can be cast as an inference problem in the sense that you can interpret the probability of existing as the evidence that you exist. And if you can write down that problem of existence as a statistical problem, then you can use all the maths that has been developed for inference to understand and characterize the ensemble dynamics that must be in play in the service of that inference. So technically, what that means is you can always interpret anything that exists in virtue of being separate from the environment in which it exists as trying to minimize variational free energy. And if you're from the machine learning community, you will know that as a negative evidence lower bound or a negative ELBO, which is the same as saying you're trying to maximize or it will look as if all your dynamics are trying to maximize the complement of that which is the marginal likelihood or the evidence for your own existence. So that's basically the free energy principle. But to even take a sort of a small step backwards, you said the existential imperative. There's a lot of beautiful poetic words here, but to put it crudely, it's a fascinating idea of basically just trying to describe if you're looking at a blob, how do you know this thing is alive? What does it mean to be alive? What does it mean to exist? And so you can look at the brain, you can look at parts of the brain, or this is just a general principle that applies to almost any system. That's just a fascinating sort of philosophically at every level question and a methodology to try to answer that question. What does it mean to be alive? So that's a huge endeavor and it's nice that there's at least some, from some perspective, a clean answer. So maybe can you talk about that optimization view of it? So what's trying to be minimized, maximized? A system that's alive, what is it trying to minimize? Right, you've made a big move there. First of all, it's good to make big moves. But you've assumed that the thing exists in a state that could be living or nonliving. So I may ask you, what licenses you to say that something exists? That's why I use the word existential. It's beyond living, it's just existence.
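For readers who want the machine learning connection above in symbols, here is the standard way that quantity is usually written down. This is a generic variational-inference gloss, not a quote from the conversation: o stands for the observed (sensory) data, s for the hidden or external states, and q(s) for the approximate posterior that the system's internal states are taken to encode.

```latex
% Variational free energy F as an upper bound on surprise (negative log evidence).
F[q] \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
      \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right]}_{\ge 0} \;-\; \ln p(o)
      \;\ge\; -\ln p(o).
% Hence F = -ELBO: minimizing F maximizes a lower bound on the model evidence p(o),
% which is the sense in which a system that looks as if it minimizes F is "self-evidencing".
```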
So if you drill down onto the definition of things that exist, then they have certain properties if you borrow the maths from nonequilibrium steady state physics that enable you to interpret their existence in terms of this optimization procedure. So it's good you introduced the word optimization. So what the free energy principle in its sort of most ambitious, but also most deflationary and simplest, says is that if something exists, then it must, by the mathematics of nonequilibrium steady state, exhibit properties that make it look as if it is optimizing a particular quantity. And it turns out that particular quantity happens to be exactly the same as the evidence lower bound in machine learning or Bayesian model evidence in Bayesian statistics. Or, and then I can list a whole other list of ways of understanding this key quantity, which is a bound on surprise or self information if you have information theory. There are a number of different perspectives on this quantity. It's just basically the log probability of being in a particular state. I'm telling this story as an honest, an attempt to answer your question. And I'm answering it as if I was pretending to be a physicist who was trying to understand the fundaments of nonequilibrium steady state. And I shouldn't really be doing that because the last time I was taught physics, I was in my 20s. What kind of systems, when you think about the free energy principle, what kind of systems are you imagining as a sort of more specific kind of case study? Yeah, I'm imagining a range of systems, but at its simplest, a single celled organism that can be identified from its eco niche or its environment. So at its simplest, that's basically what I always imagined in my head. And you may ask, well, is there any, how on earth can you even elaborate questions about the existence of a single drop of oil, for example? But there are deep questions there. Why doesn't the oil, why doesn't the thing, the interface between the drop of oil that contains an interior and the thing that is not the drop of oil, which is the solvent in which it is immersed, how does that interface persist over time? Why doesn't the oil just dissolve into solvent? So what special properties of the exchange between the surface of the oil drop and the external states in which it's immersed, if you're a physicist, say it would be the heat bath. You've got a physical system, an ensemble again, we're talking about density dynamics, ensemble dynamics, an ensemble of atoms or molecules immersed in the heat bath. But the question is, how did the heat bath get there? And why does it not dissolve? How is it maintaining itself? Exactly. What actions is it? I mean, it's such a fascinating idea of a drop of oil and I guess it would dissolve in water, it wouldn't dissolve in water. So what? Precisely, so why not? So why not? Why not? And how do you mathematically describe, I mean, it's such a beautiful idea. And also the idea of like, where does the thing, where does the drop of oil end and where does it begin? Right, so I mean, you're asking deep questions, deep in a nonmillennial sense here. In a hierarchical sense. But what you can do, so this is the deflationary part of it. 
Can I just qualify my answer by saying that normally when I'm asked this question, I answer from the point of view of a psychologist, we talk about predictive processing and predictive coding and the brain as an inference machine, but you haven't asked me from that perspective, I'm answering from the point of view of a physicist. So the question is not so much why, but if it exists, what properties must it display? So that's the deflationary part of the free energy principle. The free energy principle does not supply an answer as to why, it's saying if something exists, then it must display these properties. That's the sort of thing that's on offer. And it so happens that these properties it must display are actually intriguing and have this inferential gloss, this sort of self evidencing gloss that inherits on the fact that the very preservation of the boundary between the oil drop and the not oil drop requires an optimization of a particular function or a functional that defines the presence or the existence of this oil drop, which is why I started with existential imperatives. It is a necessary condition for existence that this must occur because the boundary basically defines the thing that's existing. So it is that self assembly aspect that you were hinting at, in biology sometimes known as autopoiesis, in computational chemistry as self assembly. It's the, what does it look like? Sorry, how would you describe things that configure themselves out of nothing? The way they clearly demarcate themselves from the states or the soup in which they are immersed. So from the point of view of computational chemistry, for example, you would just understand that as a configuration of a macro molecule to minimize its free energy, its thermodynamic free energy. It's exactly the same principle that we've been talking about that thermodynamic free energy is just the negative ELBO. It's the same mathematical construct. So the very emergence of existence, of structure, of form that can be distinguished from the environment or the thing that is not the thing necessitates the existence of an objective function that it looks as if it is minimizing. It's finding a free energy minimum. And so just to clarify, I'm trying to wrap my head around. So the free energy principle says that if something exists, these are the properties it should display. Yeah. So what that means is we can't just look, we can't just go into a soup and there's no mechanism. Free energy principle doesn't give us a mechanism to find the things that exist. Is that what's being implied, that you can kind of use it to reason, to think about like, study a particular system and say, does this exhibit these qualities? That's an excellent question. But to answer that, I'd have to return to your previous question about what's the difference between living and nonliving things. Yes, well, actually, sorry. So yeah, maybe we can go there. Maybe we can go there, you kind of drew a line and forgive me for the stupid questions, but you kind of drew a line between living and existing. Is there an interesting sort of distinction? Yeah, I think there is. So things do exist, grains of sand, rocks on the moon, trees, you. So all of these things can be separated from the environment in which they are immersed. And therefore, they must at some level be optimizing their free energy, taking this sort of model evidence interpretation of this quantity that basically means they're self evidencing.
Another nice little twist of phrase here is that you are your own existence proof, statistically speaking, which I don't think I said that, somebody did, but I love that phrase. You are your own existence proof. Yeah, so it's so existential, isn't it? I'm gonna have to think about that for a few days. That's a beautiful line. So the step through to answer your question about what's it good for, we'll go along the following lines. First of all, you have to define what it means to exist, which now, as you've rightly pointed out, you have to define what probabilistic properties must the states of something possess so it knows where it finishes. And then you write that down in terms of statistical dependencies, again, sparsity. Again, it's not what's connected or what's correlated or what depends upon, it's what's not correlated and what doesn't depend upon something. Again, it comes down to the deep structures, not in this instance, hierarchical, but the structures that emerge from removing connectivity and dependency. And in this instance, basically being able to identify the surface of the oil drop from the water in which it is immersed. And when you do that, you start to realize, well, there are actually four kinds of states in any given universe that contains anything. The things that are internal to the surface, the things that are external to the surface and the surface in and of itself, which is why I use a metaphor, a little single celled organism that has an interior and exterior and then the surface of the cell. And that's mathematically a Markov blanket. Just to pause, I'm in awe of this concept that there's the stuff outside the surface, stuff inside the surface and the surface itself, the Markov blanket. It's just the most beautiful kind of notion about trying to explore what it means to exist mathematically. I apologize, it's just a beautiful idea. But it came out of California, so that's. I changed my mind. I take it all back. So anyway, so you were just talking about the surface, about the Markov blanket. So this surface or these blanket states that are the, because they are now defined in relation to these independencies and what different states internal blanket or external states can, which ones can influence each other and which cannot influence each other. You can now apply standard results that you would find in non equilibrium physics or steady state or thermodynamics or hydrodynamics, usually out of equilibrium solutions and apply them to this partition. And what it looks like is if all the normal gradient flows that you would associate with any non equilibrium system apply in such a way that part of the Markov blanket and the internal states seem to be hill climbing or doing a gradient descent on the same quantity. And that means that you can now describe the very existence of this oil drop. You can write down the existence of this oil drop in terms of flows, dynamics, equations of motion, where the blanket states or part of them, we call them active states and the internal states now seem to be and must be trying to look as if they're minimizing the same function, which is a low probability of occupying these states. Interesting thing is that what would they be called if you were trying to describe these things? So what we're talking about are internal states, external states and blanket states. Now let's carve the blanket states into two sensory states and active states. 
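For readers who want the partition just described in symbols, here is a compact sketch in the notation commonly used in the free energy literature; the symbol names are mine, not a quote from the conversation, and the gradient flows are stated loosely, ignoring the solenoidal and random terms that appear in the full treatment: external states η, internal states μ, and blanket states split into sensory s and active a.

```latex
% Markov blanket b = (s, a): internal and external states are conditionally
% independent given the blanket, which is what lets the "surface" define the thing.
p(\mu, \eta \mid s, a) \;=\; p(\mu \mid s, a)\; p(\eta \mid s, a).
% Active states are not influenced by external states (and internal states are
% screened off by the blanket), so the autonomous states (a, mu) have dynamics
% that, on average, look like a gradient descent on variational free energy F:
\dot{\mu} \;\propto\; -\,\partial_{\mu} F(s, a, \mu), \qquad
\dot{a}   \;\propto\; -\,\partial_{a}  F(s, a, \mu).
```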
Operationally, it has to be the case that, in order for this carving up into different sets of states to exist, the active states of the Markov blanket cannot be influenced by the external states. And we already know that the internal states can't be influenced by the external states, because the blanket separates them. So what does that mean? Well, it means the active states and the internal states are now jointly not influenced by external states. They only have autonomous dynamics. So now you've got a picture of an oil drop that has autonomy. It has autonomous states, in the sense that there must be some parts of the surface of the oil drop that are not influenced by the external states, and all the interior. And together, those two sets of states endow even a little oil drop with autonomous states that look as if they are optimizing their variational free energy or their negative ELBO, their model evidence. And that would be an interesting intellectual exercise. And you could say, you could even go into the realms of panpsychism, that everything that exists is implicitly making inferences, is self evidencing. Now we make the next move, but what about living things? I mean, so let me ask you, what's the difference between an oil drop and a little tadpole or a little larva or a plankton? The picture you just painted of an oil drop immediately, in a matter of minutes, took me into the world of panpsychism, where you've just convinced me, made me feel like an oil drop is a living, well, it's certainly an autonomous system, but almost a living system. So it has sensory capabilities and acting capabilities and it maintains something. So what is the difference between that and something that we traditionally think of as a living system? That it could die or it can't, I mean, yeah, mortality, I'm not exactly sure. I'm not sure what the right answer there is, because they can move, like movement seems like an essential element to being able to act in the environment, but the oil drop is doing that. So I don't know. Is it? The oil drop will be moved, but does it in and of itself move autonomously? Well, the surface is performing actions that maintain its structure. Yeah, you're being too clever. I was, I had in mind a passive little oil drop that's sitting there at the bottom, or on the top, of a glass of water. Sure, I guess. What I'm trying to say is you're absolutely right. You've nailed it. It's movement. So where does that movement come from? If it comes from the inside, then you've got, I think, something that's living. What do you mean, from the inside? What I mean is that the internal states that can influence the active states, where the active states can influence but are not influenced by the external states, can cause movement. So there are two types of oil drops, if you like. There are oil drops where the internal states are so random that they average themselves away, and the thing cannot, on average, when you do the averaging, move. So a nice example of that would be the Sun. The Sun certainly has internal states. There's lots of intrinsic autonomous activity going on, but because it's not coordinated, because it doesn't have the deep, in the millennial sense, the hierarchical structure that the brain does, there is no overall mode or pattern or organization that expresses itself on the surface that allows it to actually swim.
It can certainly have a very active surface, but en masse, at the scale of the actual surface of the Sun, the average position of that surface cannot, in itself, move, because the internal dynamics are more like a hot gas. They are literally like a hot gas, whereas your internal dynamics are much more structured and deeply structured, and now you can express, on your active states, with your muscles and your secretory organs, your autonomic nervous system and its effectors — you can actually move, and that's all you can do. And that's something which, if you haven't thought of it like this before, I think it's nice to just realize: there is no other way that you can change the universe other than simply moving. Whether that moving is articulating with my voice box or walking around or squeezing juices out of my secretory organs, there's only one way you can change the universe. It's moving. And the fact that you do so nonrandomly makes you alive. Yeah, so it's that nonrandomness. And that would be manifested, we realize, in terms of essentially swimming, essentially moving, changing one's shape, a morphogenesis that is dynamic and possibly adaptive. So that's what I was trying to get at with the difference between the oil drop and the little tadpole. The tadpole is moving around. Its active states are actually changing the external states. And there's now a cycle, an action perception cycle, if you like, a recurrent dynamic that's going on that depends upon this deeply structured autonomous behavior that rests upon internal dynamics that are not only modeling the data impressed upon their surface or the blanket states, but are actively resampling those data by moving. They're moving towards chemical gradients in chemotaxis. So they've gone beyond just being good little models of the kind of world they live in. For example, an oil droplet could, in a panpsychic sense, be construed as a little being that has now perfectly inferred that it's a passive, nonliving oil drop living in a bowl of water. No problem. But now equip that oil drop with the ability to go out and test that hypothesis about different states of being, so it can actually push its surface over there, over there, and test for chemical gradients, and then you start to move to a much more lifelike form. This is all fun, theoretically interesting, but it actually is quite important in terms of reflecting what I have seen since the turn of the millennium, which is this move towards an enactive and embodied understanding of intelligence. And you say you're from machine learning. So what that means, the central importance of movement, I think has yet to really hit machine learning. It certainly has now diffused itself throughout robotics. And perhaps you could say certain problems in active vision, where you actually have to move the camera to sample this and that. But machine learning of the data mining, deep learning sort simply hasn't contended with this issue. What it's done, instead of dealing with the movement problem and the active sampling of data, is just said, we don't need to worry about it, we can see all the data because we've got big data. So we can ignore movement. So that for me is an important omission in current machine learning. Current machine learning is much more like the oil drop. Yes. But an oil drop that enjoys exposure to nearly all the data that it will ever need to be exposed to, as opposed to the tadpole swimming out to find the right data. For example, it likes food. That's a good hypothesis.
Let's test it out. Let's go and move and ingest food, for example, and see, is that evidence that I'm the kind of thing that likes this kind of food? So the next natural question, and forgive this question, but if we think of sort of even artificial intelligence systems — you just painted a beautiful picture of existence and life. So do you ascribe, do you find within this framework a possibility of defining consciousness, or exploring the idea of consciousness? Like, you know, self awareness, and expanding it to consciousness? Yeah. How can we start to think about consciousness within this framework? Is it possible? Well, yeah, I think it's possible to think about it. Whether you'll get anywhere is another question. And again, I'm not sure that I'm licensed to answer that question. I think you'd have to speak to a qualified philosopher to get a definitive answer to that. But certainly, there's a lot of interest in using not just these ideas, but related ideas from information theory, to try and tie down the maths and the calculus and the geometry of consciousness, either in terms of sort of a minimal consciousness, or even less than that, a minimal selfhood. And what I'm talking about is the ability, effectively, to plan, to have agency. So you could argue that a virus does have a form of agency, in virtue of the way that it selectively finds hosts and cells to live in and moves around. But you wouldn't endow it with the capacity to think about planning and moving in a purposeful way, where it countenances the future. Whereas you might an ant. You might think an ant's not quite as unconscious as a virus. It certainly seems to have a purpose. It talks to its friends en route during its foraging. It has a different kind of autonomy, which is biotic, but beyond a virus. So there's something about, so there's some line that has to do with the complexity of planning that may contain an answer. I mean, it would be beautiful if we can find a line beyond which we could say a being is conscious. Yes, it would be. These are wonderful lines that we've drawn, with existence, life, and consciousness. Yes, it would be very nice. One little wrinkle there, and this is something I've only learned in the past few months, is the philosophical notion of vagueness. So you're saying it would be wonderful to draw a line. I had always assumed that that line at some point would be drawn, until about four months ago, when a philosopher taught me about vagueness. So I don't know if you've come across this, but it's a technical concept. And I think it's most revealingly illustrated with: at what point does a collection of sand become a pile? Is it one grain, two grains, three grains, or four grains? So at what point would you draw the line between being a pile of sand and a collection of grains of sand? In the same way, is it right to ask where I would draw the line between conscious and unconscious? It might be a vague concept. Having said that, I agree with you entirely. Systems that have the ability to plan — so just technically, what that means is that your inferential self evidencing, by which I simply mean the thermodynamics and gradient flows that underwrite the preservation of your oil droplet like form, can be described as an optimization of the log Bayesian model evidence, your ELBO. That self evidencing must be evidence for a model of what's causing the sensory impressions on the sensory part of your surface, or your Markov blanket.
If that model is capable of planning, it must include a model of the future consequences of your active states, or your actions — that's just planning. So we're now in the game of planning as inference. Now notice what we've made, though. We've made quite a big move away from big data and machine learning, because again, it's the consequences of moving. It's the consequences of selecting those data or those data, or looking over there. And that tells you immediately that even to be a contender for a conscious artifact, or a strong AI, or generalized AI, whatever that's called nowadays, then you've got to have movement in the game. And furthermore, you've got to have a generative model of the sort you might find in, say, a variational autoencoder that is thinking about the future conditioned upon different courses of action. Now that brings a number of things to the table which make you start to think, well, those have got all the right ingredients to talk about consciousness. I've now got to select among a number of different courses of action into the future as part of planning. I've now got free will. The act of selecting this course of action or that policy or that action suddenly makes me into an inference machine, a self evidencing artifact that now looks as if it's selecting amongst different alternative ways forward, as I actively swim here or swim there, or look over here, look over there. So I think you've now got to a situation, if there is planning in the mix, where you're now getting much closer to that line, if that line were ever to exist. I don't think it gets you quite as far as self aware, though. And then you have to, I think, grapple with the question of how you would formally write down a calculus or a maths of self awareness. I don't think it's impossible to do. But I think there would be pressure on you to actually commit to a formal definition of what you mean by self awareness. I think most people that I know would probably say that a goldfish, a pet fish, was not self aware. They would probably argue about their favorite cat, but would be quite happy to say that their mom was self aware. So. I mean, but that might very well connect to some level of complexity with planning. It seems like self awareness is essential for complex planning. Yeah. Do you want to take that further? Because I think you're absolutely right. Again, the line is unclear, but it seems like integrating yourself into the world, into your planning, is essential for constructing complex plans. Yes. Yeah. So mathematically describing that in the same elegant way as you have with the free energy principle might be difficult. Well, yes and no. I don't think that, well, perhaps we should just, can we just go back? That's a very important answer you gave. And I think if I just unpacked it, you'd see the truisms that you've just exposed for us. But let me, sorry, I'm mindful that I didn't answer your question before. Well, what's the free energy principle good for? Is it just a pretty theoretical exercise to explain nonequilibrium steady states? Yes, it is. It does nothing more for you than that. It can be regarded — and it's going to sound very arrogant — but it is of the same sort as the theory of natural selection, or the hypothesis of natural selection. Beautiful, undeniably true, but it tells you absolutely nothing about why you have legs and eyes. It tells you nothing about the actual phenotype, and it wouldn't allow you to build something. So the free energy principle by itself is as vacuous as most tautological theories.
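The "planning as inference" move Friston describes at the start of this exchange can be caricatured in a few lines: score a handful of candidate policies by the free energy you expect under each, then act on the best one. This toy sketch only includes the pragmatic term (expected surprise relative to preferred outcomes); the full expected free energy also carries an epistemic, uncertainty-resolving term, and every name and number below is invented purely for illustration.

```python
import numpy as np

# Toy "planning as inference": evaluate candidate policies by the free energy
# expected under each. States, policies and preferences are all made up.

states = ["food_here", "no_food"]
policies = {
    # rows: current state, columns: predicted next state
    "stay":       np.array([[0.9, 0.1], [0.1, 0.9]]),
    "swim_north": np.array([[0.6, 0.4], [0.7, 0.3]]),   # tends to end up near food
    "swim_south": np.array([[0.2, 0.8], [0.1, 0.9]]),   # tends to end up away from food
}

belief      = np.array([0.5, 0.5])   # current belief about which state I am in
preferences = np.array([0.9, 0.1])   # "I am the kind of thing that finds food"

def expected_free_energy(transition, belief, preferences):
    """Pragmatic term only: expected surprise of predicted outcomes under my preferences."""
    predicted = belief @ transition
    return -np.sum(predicted * np.log(preferences))

scores = {name: expected_free_energy(T, belief, preferences) for name, T in policies.items()}
best = min(scores, key=scores.get)   # act so as to minimize expected free energy
print(scores, "->", best)            # "swim_north" wins on these made-up numbers
```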
And by tautological, of course, I'm talking about the theory of natural selection, the survival of the fittest. Who are the fittest? Those that survive. Why do they survive? Because they're the fittest. It just goes around in circles. In a sense, the free energy principle has that same deflationary tautology under the hood. It's a characteristic of things that exist. Why do they exist? Because they minimize their free energy. Why do they minimize their free energy? Because they exist. And you just keep on going round and round and round. But the practical thing, which you don't get from natural selection, but which you could say has now manifested in things like differential evolution or genetic algorithms or MCMC, for example, in machine learning — the practical thing you can get is this: if it looks as if things that exist have density dynamics that look as though they're optimizing a variational free energy, and a variational free energy has to be a functional of a generative model, a probabilistic description of causes and consequences — causes out there, consequences in the sensorium, on the sensory parts of the Markov blanket — then it should, in theory, be possible to write down the generative model, work out the gradients, and then cause it to autonomously self evidence. So you should be able to write down oil droplets. You should be able to create artifacts where you have supplied the objective function that supplies the gradients, that supplies the self organizing dynamics to nonequilibrium steady state. So there is actually a practical application of the free energy principle when you can write down your required evidence in terms of — well, when you can write down the generative model that is the thing that has the evidence. The probability of these sensory data, or this data, given that model, is effectively the thing that the ELBO or the variational free energy bounds or approximates. That means that you can actually write down the model and the kind of thing that you want to engineer, the kind of AGI or artificial general intelligence that you want to manifest, probabilistically, and then you engineer — it's a lot of hard work, but you would engineer — a robot and a computer to perform a gradient descent on that objective function. So it does have a practical implication. Now, why am I wittering on about that? It did seem relevant to, yes. So what kinds of, so the answer to, would it be easier or would it be hard? Well, mathematically, it's easy. I've just told you: all you need to do is write down your perfect artifact, probabilistically, in the form of a probabilistic generative model, a probability distribution over the causes and consequences of the world in which this thing is immersed. And then you just engineer a computer and a robot to perform a gradient descent on that objective function. No problem. But of course, the big problem is writing down the generative model. So that's where the heavy lifting comes in. It's the form and the structure of that generative model which basically defines the artifact that you will create or, indeed, the kind of artifact that has self awareness. So that's where all the hard work comes, very much like natural selection doesn't tell you in the slightest why you have eyes. So you have to drill down on the actual phenotype, the actual generative model. So with that in mind, what did you tell me that tells me immediately the kinds of generative models I would have to write down in order to have self awareness?
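Before returning to the question Friston has just posed, the recipe he describes — write down a generative model, work out the free energy gradients, descend them — can be shown in miniature. This is a deliberately tiny, assumption-laden sketch: a linear Gaussian generative model, a point (Laplace-style) belief, and plain gradient descent, with all numbers and names invented for illustration.

```python
# (1) Write down a generative model (illustrative): hidden cause v ~ N(prior_mean, prior_var),
#     sensory datum x ~ N(g(v), obs_var) with a simple linear mapping g(v) = 2*v.
prior_mean, prior_var = 1.0, 1.0
obs_var = 0.5
g = lambda v: 2.0 * v
x = 3.0                          # the sensory impression on the blanket

# (2) Under a point estimate mu, the variational free energy is (up to constants)
#     a sum of precision-weighted squared prediction errors.
def free_energy(mu):
    return 0.5 * ((x - g(mu)) ** 2 / obs_var + (mu - prior_mean) ** 2 / prior_var)

def dF_dmu(mu):
    return -2.0 * (x - g(mu)) / obs_var + (mu - prior_mean) / prior_var

# (3) Perception as gradient descent on free energy ("self evidencing").
mu, lr = 0.0, 0.05
for _ in range(200):
    mu -= lr * dF_dmu(mu)

print(f"posterior estimate of the hidden cause: {mu:.3f}, free energy: {free_energy(mu):.3f}")
```

The same pattern, with a much richer generative model and action folded in, is roughly what an "engineered oil droplet" would amount to under this reading.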
What you said to me was that I have to have a model that is effectively fit for purpose for this kind of world in which I operate. And if I now make the observation that this kind of world is effectively largely populated by other things like me, i.e. you, then it makes enormous sense that if I can develop a hypothesis that we are similar kinds of creatures, in fact the same kind of creature, but I am me and you are you, then it becomes, again, mandated to have a sense of self. So if I live in a world that is constituted by things like me, basically a social world, a community, then it becomes necessary now for me to infer that it's me talking and not you talking. I wouldn't need that if I was on Mars by myself, or if I was in the jungle as a feral child. If there was nothing like me around, there would be no need to have an inference or a hypothesis that, oh yes, it is me that is experiencing or causing these sounds, and it is not you. It's only when there's ambiguity in play, induced by the fact that there are others in that world. So I think that the special thing about self aware artifacts is that they have learned to, or they have acquired, or at least are equipped with, possibly by evolution, generative models that allow for the fact that there are lots of copies of things like them around, and therefore they have to work out that it's you and not me. That's brilliant. I've never thought of that. I never thought of that, that the purpose, the real usefulness, of consciousness or self awareness in the context of planning and existing in the world is so you can operate with other things like you — and it doesn't have to necessarily be human. It could be other kinds of similar creatures. Absolutely. Well, we project a lot of our attributes onto our pets, don't we? Or we try to make our robots humanoid. And I think there's a deep reason for that, that it's just much easier to read the world if you can make the simplifying assumption that basically you're me, and it's just your turn to talk. I mean, when we talk about planning, when you talk specifically about planning, the highest, if you like, manifestation or realization of that planning is what we're doing now. I mean, the human condition doesn't get any higher than this, talking about the philosophy of existence and the conversation. But in that conversation, there is a beautiful art of turn taking and mutual inference, theory of mind. I have to know when you wanna listen. I have to know when you want to interrupt. I have to make sure that you're online. I have to have a model in my head of your model in your head. That's the highest, the most sophisticated form of generative model, where the generative model actually has a generative model of somebody else's generative model. And I think that what we are doing now evinces the kinds of generative models that would support self awareness, because without that, we'd both be talking over each other, or we'd be singing together in a choir. That's not a brilliant analogy for what I'm trying to say, but yeah, we wouldn't have this discourse. We wouldn't have this. Yeah, the dance of it. Yeah, that's right. As I interrupt — I mean, that's beautifully put. I'll re listen to this conversation many times. There's so much poetry in this, and mathematics. Let me ask the silliest, or perhaps the biggest question, as a last kind of question. We've talked about living and existence and the objective function under which these objects would operate. What do you think is the objective function of our existence?
What's the meaning of life? What do you think is, for you perhaps, the purpose, the source of fulfillment, the source of meaning for your existence, as one blob in this soup? I'm tempted to answer that, again, as a physicist, and say it's the free energy I expect consequent upon my behavior. So technically, we could get into a really interesting conversation about what that comprises in terms of searching for information, resolving uncertainty about the kind of thing that I am. But I suspect that you want a slightly more personal and fun answer, which can nonetheless be consistent with that. And I think it's reassuringly simple and harks back to what you were taught as a child: that you have certain beliefs about the kind of creature and the kind of person you are, and all that self evidencing means, all that minimizing variational free energy in an enactive and embodied way means, is fulfilling the beliefs about what kind of thing you are. And of course, we're all given those scripts, those narratives, at a very early age, usually in the form of bedtime stories or fairy stories — that I'm a princess and I'm gonna meet a beast who's gonna transform and he's gonna be a prince. And so the narratives are all around you; from your parents to your friends, the society feeds you these stories. And then your objective function is to fulfill... Exactly, that narrative that has been encultured by your immediate family, but as you say, also by the sort of culture in which you grew up, and which you create for yourself. I mean, again, because of this active inference, this enactive aspect of self evidencing, not only am I modeling my environment, my eco niche, my external states out there, but I'm actively changing them all the time, and they're doing the same back — we're doing it together. So there's a synchrony that means that I'm creating my own culture over different timescales. So the question now is, for me, being very selfish, what scripts was I given? It basically was a mixture between Einstein and Sherlock Holmes. So I smoke as heavily as possible, try to avoid too much interpersonal contact, and enjoy the fantasy that you're a popular scientist who's gonna make a difference in a slightly quirky way. So that's what I grew up on. My father was an engineer and loved science, and he loved sort of things like Sir Arthur Eddington's Space, Time and Gravitation, which was the first understandable version of general relativity. So all the fairy stories I was told as I was growing up were all about these characters. I'm keeping the Hobbit out of this because that doesn't quite fit my narrative. There's a journey of exploration, I suppose, of sorts. So yeah, I've just grown up to be what I imagine a mild mannered Sherlock Holmes slash Albert Einstein would do in my shoes. And you did it elegantly and beautifully. Karl, it was a huge honor talking today, it was fun. Thank you so much for your time. No, thank you. Appreciate it. Thank you for listening to this conversation with Karl Friston, and thank you to our presenting sponsor, Cash App. Please consider supporting the podcast by downloading Cash App and using code LexPodcast. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at LexFridman. And now let me leave you with some words from Karl Friston. Your arm moves because you predict it will, and your motor system seeks to minimize prediction error. Thank you for listening and hope to see you next time.
Karl Friston: Neuroscience and the Free Energy Principle | Lex Fridman Podcast #99
The following is a conversation with Joscha Bach, VP of Research at the AI Foundation, with a history of research positions at MIT and Harvard. Joscha is one of the most unique and brilliant people in the artificial intelligence community, exploring the workings of the human mind, intelligence, consciousness, life on Earth, and the possibly simulated fabric of our universe. I could see myself talking to Joscha many times in the future. Quick summary of the ads. Two sponsors, ExpressVPN and Cash App. Please consider supporting the podcast by signing up at expressvpn.com slash LexPod and downloading Cash App and using code LexPodcast. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at LexFridman. Since this comes up more often than I ever would have imagined, I challenge you to try to figure out how to spell my last name without using the letter E. And it'll probably be the correct way. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. This show is sponsored by ExpressVPN. Get it at expressvpn.com slash LexPod to support this podcast and to get an extra three months free on a one year package. I've been using ExpressVPN for many years. I love it. I think ExpressVPN is the best VPN out there. They told me to say it, but I think it actually happens to be true. It doesn't log your data, it's crazy fast, and it's easy to use. Literally, just one big power on button. Again, for obvious reasons, it's really important that they don't log your data. It works on Linux and everywhere else too. Shout out to my favorite flavor of Linux, Ubuntu MATE 20.04. Once again, get it at expressvpn.com slash LexPod to support this podcast and to get an extra three months free on a one year package. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LexPodcast. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar. Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of the fractional orders is an algorithmic marvel. So big props to the Cash App engineers for taking a step up to the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier. So again, if you get Cash App from the App Store or Google Play and use the code LexPodcast, you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping advance robotics and STEM education for young people around the world. And now here's my conversation with Joscha Bach. As you've said, you grew up in a forest in East Germany, just as we were talking about off mic, to parents who are artists. And now, I think, at least to me, you've become one of the most unique thinkers in the AI world. So can we try to reverse engineer your mind a little bit? What were the key philosophers, scientists, ideas, maybe even movies, or just realizations that had an impact on you when you were growing up that kind of led to the trajectory, or were the key sort of crossroads in the trajectory of your intellectual development? My father came from a long tradition of architects, a distant branch of the Bach family. And so basically, he was technically a nerd.
And nerds need to interface with society in nonstandard ways. Sometimes I define a nerd as somebody who thinks that the purpose of communication is to submit your ideas to peer review. And normal people understand that the primary purpose of communication is to negotiate alignment. And these purposes tend to conflict, which means that nerds have to learn how to interact with society at large. Who is the reviewer in the nerd's view of communication? Everybody who you consider to be a peer. So whatever hapless individual is around, well, you would try to make him or her the gift of information. Okay. So, by the way, my research misinformed me — so your father was an architect or an artist? So he did study architecture. But basically, my grandfather made the wrong decision. He married an aristocrat and was drawn into the war. And he came back after 15 years. So basically, my father was not parented by a nerd, but by somebody who tried to tell him what to do, and expected him to do what he was told. And he was unable to. He's unable to do things if he's not intrinsically motivated. So in some sense, my grandmother broke her son. And her son responded, when he became an architect, by becoming an artist. So he built Hundertwasser architecture. He built houses without right angles. He built lots of things that didn't work in the more brutalist traditions of Eastern Germany. And so he bought an old watermill, moved out to the countryside, and did only what he wanted to do, which was art. Eastern Germany was perfect for bohème, because you had complete material safety. Food was heavily subsidized, healthcare was free. You didn't have to worry about rent or pensions or anything. So that's the socialist, communist side. Yes. And the other thing is, it was almost impossible not to be in political disagreement with your government, which is very productive for artists. So everything that you do is intrinsically meaningful, because it will always touch on the deeper currents of society, of culture, and be in conflict with it and in tension with it. And you will always have to define yourself with respect to this. So what impacted your father, this outside of the box thinker, against the government, against the world, as an artist? He was actually not a thinker. He was somebody who only got self aware to the degree that he needed to make himself functional. So in some sense, this was also the late 1960s, and he was in some sense a hippie. So he became a one person cult. He lived out there in his kingdom. He built big sculpture gardens and started many avenues of art and so on, and convinced a woman to live with him. She was also an architect and she adored him and decided to share her life with him. And I basically grew up in a big cave full of books. I was almost feral. And I was bored out there. It was very, very beautiful, very quiet, and quite lonely. So I started to read. And by the time I came to school, I had read everything up to fourth grade and then some. And there was not a real way for me to relate to the outside world. And I couldn't quite put my finger on why. And today I know it was because I was a nerd, obviously, and I was the only nerd around. So there were no other kids like me. And there was nobody interested in physics or computing or mathematics and so on. And this village school that I went to was basically a nice school. Kids were nice to me. I was not beaten up, but I also didn't make many friends or build deep relationships.
They only happened starting from ninth grade, when I went to a school for mathematics and physics. Do you remember any key books from this moment? I basically read everything. So I went to the library and I worked my way through the children's and young adult sections. And then I read a lot of science fiction — for instance, Stanisław Lem, basically the great author of cybernetics, has influenced me. Back then, I didn't see him as a big influence because everything that he wrote seemed to be so natural to me. And it's only later that I contrasted it with what other people wrote. Another thing that was very influential on me were the classical philosophers and also the literature of romanticism. So German poetry and art: Droste-Hülshoff and Heine and up to Hesse and so on. Hesse. I love Hesse. So at which point do the classical philosophers end? At this point, we're in the 21st century. What's the latest classical philosopher? Does this stretch through even as far as Nietzsche, or are we talking about Plato and Aristotle? I think that Nietzsche is the classical equivalent of a shitposter. He's very smart and easy to read, but he's not so much trolling others — he's trolling himself, because he was at odds with the world. Largely his romantic relationships didn't work out. He got angry and he basically became a nihilist. Isn't that a beautiful way to be as an intellectual — to constantly be trolling yourself, to be in that conflict, in that tension? I think it's a lack of self awareness. At some point, you have to understand the comedy of your own situation. If you take yourself seriously and you are not functional, it ends in tragedy, as it did for Nietzsche. I think you think he took himself too seriously in that tension. And you find the same thing in Hesse and so on. This Steppenwolf syndrome is classic adolescence, where you basically feel misunderstood by the world and you don't understand that all the misunderstandings are the result of your own lack of self awareness, because you think that you are a prototypical human and the others around you should behave the same way as you expect them to based on your innate instincts, and it doesn't work out, and you become a transcendentalist to deal with that. So it's very, very understandable, and I have great sympathies for this, to the degree that I can have sympathy for my own intellectual history. But you have to grow out of it. So as an intellectual, a life well lived, a journey well traveled, is one where you don't take yourself seriously from that perspective? No, I think that you are neither serious nor not serious yourself, because you need to become unimportant as a subject. That is, if you are a philosopher, belief is not a verb. You don't do this for the audience and you don't do it for yourself. You have to submit to the things that are possibly true, and you have to follow wherever your inquiry leads. But it's not about you. It has nothing to do with you. So do you think, then, people like Ayn Rand believed in sort of an idea of objective truth? So what's your sense, philosophically: if you remove yourself, the subject, from the picture, do you think it's possible to actually discover ideas that are true, or are we just in a mesh of relative concepts that are neither true nor false? It's just a giant mess. You cannot define objective truth without understanding the nature of truth in the first place. So what does the brain mean when it says that it discovered something as truth?
So for instance, a model can be predictive or not predictive. Then there can be a sense in which a mathematical statement can be true because it's defined as true under certain conditions. So it's basically a particular state that a variable can have in a simple game. And then you can have a correspondence between systems and talk about truth, which is again a type of model correspondence. And there also seems to be a particular kind of ground truth. So for instance, you're confronted with the enormity of something existing at all. It's stunning when you realize something exists rather than nothing. And this seems to be true. There's an absolute truth in the fact that something seems to be happening. Yeah, that to me is a showstopper. I could just think about that idea and be amazed by that idea for the rest of my life and not go any farther, because I don't even know the answer to that. Why does anything exist at all? Well, the easiest answer is: existence is the default, right? So this is the lowest number of bits that you would need to encode this. Whose answer? The simplest answer to this is that existence is the default. What about nonexistence? I mean, that seems... Nonexistence might not be a meaningful notion in this sense. So in some sense, if everything that can exist exists, then for something to exist, it probably needs to be implementable. The only thing that can be implemented is finite automata. So maybe the whole of existence is the superposition of all finite automata, and we are in some region of the fractal that has the properties that it can contain us. What does it mean to be a superposition of finite automata? A superposition of all possible rules? Imagine that every automaton is basically an operator that acts on some substrate, and as a result, you get emergent patterns. What's the substrate? I have no way to know. But some substrate. It's something that can store information. Something that can store information, and there's an automaton. Something that can hold state. Still, it doesn't make sense to me, the why — that it exists at all. I could just sit there with a beer or a vodka and just enjoy the fact, pondering the why. It may not have a why. This might be the wrong direction to ask into this. So there could be no relation in the why direction without asking for a purpose or for a cause. It doesn't mean that everything has to have a purpose or a cause. So we mentioned some philosophers earlier — just taking a brief step back into that. So we asked ourselves, when did classical philosophy end? I think for Germany, it largely ended with the first revolution. That's basically when we ended the monarchy and started a democracy. And at this point, we basically came up with a new form of government that didn't have a good sense of this new organism that society wanted to be. And in a way, it decapitated the universities. So the universities went on through modernism like a headless chicken. At the same time, democracy failed in Germany and we got fascism as a result. And it burned down things in a similar way as Stalinism burned down intellectual traditions in Russia. And Germany — both Germanys — have not recovered from this. Eastern Germany had this vulgar dialectical materialism, and Western Germany didn't get much more edgy than Habermas. So in some sense, both countries lost their intellectual traditions, and killing off and driving out the Jews didn't help. Yeah. So that was the end of really rigorous, what you would call, classical philosophy.
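Bach's image a moment ago — an automaton as an operator that acts on some substrate, out of which emergent patterns arise — is easy to make concrete. Below is an elementary cellular automaton; Rule 110 is an arbitrary choice among the 256 elementary rules, and the whole thing is just an illustration, not anything Bach specified.

```python
# An elementary cellular automaton as "an operator acting on a substrate".
# Rule 110 is chosen arbitrarily; any of the 256 elementary rules works the same way.

RULE = 110
rule_table = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
              for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(row):
    """Apply the operator once to the substrate (a ring of cells)."""
    n = len(row)
    return [rule_table[(row[(i - 1) % n], row[i], row[(i + 1) % n])] for i in range(n)]

row = [0] * 31 + [1] + [0] * 31   # the substrate: a row of bits with one live cell
for _ in range(20):                # repeated application yields emergent structure
    print("".join("#" if c else "." for c in row))
    row = step(row)
```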
There's also this thing that, in some sense, the low hanging fruits in philosophy were mostly reaped. And the last big thing that we discovered was the constructivist turn in mathematics. So to understand that the parts of mathematics that work are computation — that was a very significant discovery in the first half of the 20th century. And it hasn't fully permeated philosophy and even physics yet. Physicists checked out the code libraries for mathematics before constructivism became universal. What's constructivism? What are you referring to — Gödel's incompleteness theorem, those kinds of ideas? So basically, Gödel himself, I think, didn't get it yet. Hilbert could get it. Hilbert saw that, for instance, Cantor's set theoretic experiments in mathematics led into contradictions. And he noticed that with the current semantics, we cannot build a computer in mathematics that runs mathematics without crashing. And Gödel could prove this. So what Gödel could show is that using classical mathematical semantics, you run into contradictions. And because Gödel strongly believed in these semantics, in more than what he could observe and so on, he was shocked. It basically shook his world to the core, because in some sense he felt that the world has to be implemented in classical mathematics. And for Turing, it wasn't quite so bad. I think that Turing could see that the solution is to understand that mathematics was computation all along. Which means, for instance, that pi in classical mathematics is a value. It's also a function, but it's the same thing. And in computation, a function is only a value when you can compute it. And if you cannot compute the last digit of pi, you only have a function. You can plug this function into your local sun, let it run until the sun burns out. That's it. That's the last digit of pi you will know. But it also means there can be no process in the physical universe, or in any physically realized computer, that depends on having known the last digit of pi. Which means there are parts of physics that are defined in such a way that they cannot strictly be true, because assuming that they could be true leads into contradictions. So I think putting computation at the center of the worldview is actually the right way to think about it. Yes. And Wittgenstein could see it. And Wittgenstein basically preempted the logicist program of AI that Minsky started later, like 30 years later. Turing was actually a pupil of Wittgenstein. I didn't know there's any connection between Turing and Wittgenstein. Wittgenstein even cancelled some classes when Turing was not present, because he thought it was not worth spending the time with the others. If you read the Tractatus, it's a very beautiful book — basically one thought on 75 pages. It's very untypical for philosophy, because it doesn't have arguments in it and it doesn't have references in it. It's just one thought that is not intending to convince anybody. He says it's mostly for people that had the same insight as me; it just spells it out. And this insight is: there is a way in which mathematics and philosophy ought to meet. Mathematics tries to understand the domain of all languages by starting with those that are so formalizable that you can prove all the properties of the statements that you make. But the price that you pay is that your language is very, very simple. So it's very hard to say something meaningful in mathematics.
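Bach's point just above — that computationally pi is a function, a process you can keep running for better approximations, rather than a finished value — can be made concrete with a small generator. The Nilakantha series used here is just one convenient choice and is not drawn from the conversation.

```python
# Pi as a process rather than a value: you can always ask for a better
# approximation, but there is no step at which you hold "the last digit".

def pi_approximations():
    """Yield successive approximations of pi using the Nilakantha series."""
    approx, sign, k = 3.0, 1.0, 2
    while True:
        approx += sign * 4.0 / (k * (k + 1) * (k + 2))
        sign, k = -sign, k + 2
        yield approx

for i, value in zip(range(6), pi_approximations()):
    print(f"after {i + 1} correction terms: {value:.10f}")
```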
And it looks complicated to people, but it's far less complicated than what our brain is casually doing all the time when it makes sense of reality. And philosophy is coming from the top. So it's mostly starting from natural languages with vaguely defined concepts. And the hope is that mathematics and philosophy can meet at some point. And Wittgenstein was trying to make them meet. And he already understood that, for instance, you could express everything with the NAND calculus, that you could reduce the entire logic to NAND gates, as we do in our modern computers. So in some sense, he already understood Turing universality before Turing spelled it out. I think when he wrote the Tractatus, he didn't understand yet that the idea was so important and significant. And I suspect that when Turing wrote it out, nobody cared that much. Turing was not that famous when he lived. It was mostly his work in decrypting the German codes that made him famous, or gave him some notoriety. But this saint status that he has in computer science right now and in AI is something that I think he only acquired later. That's kind of interesting. Do you think of computation and computer science — and you kind of represent that to me — as maybe the modern day... You, in a sense, are the new philosopher: the computer scientist who dares to ask the bigger questions that philosophy originally started with is the new philosopher. Certainly not me. I think I'm mostly still this child that grows up in a very beautiful valley and looks at the world from the outside and tries to understand what's going on. And my teachers tell me things, and they largely don't make sense. So I have to make my own models. I have to discover the foundations of what the others are saying. I have to try to fix them to be charitable. I try to understand what they must have thought originally, or what their teachers or their teacher's teachers must have thought, until everything got lost in translation, and how to make sense of the reality that we are in. And whenever I have an original idea, I'm usually late to the party by, say, 400 years. And the only thing that's good is that the parties get smaller and smaller the older I get and the more I explore. The parties get smaller and more exclusive and more exclusive. So it seems like one of the key qualities of your upbringing was that you were not tethered — whether it's because of your parents or in general, maybe something within your mind, some genetic material — you were not tethered to the ideas of the general populace, which is actually a unique property. The education system, and whatever — not just the education system, just existing in this world — forces certain sets of ideas onto you. Can you disentangle that? Why are you not so tethered? Even in your work today, you seem to not care about, perhaps, a best paper at NeurIPS, right? Being tethered to particular things that currently, today, in this year, people seem to value as a thing you put on your CV and resume. You're a little bit more outside of that world, outside of the world of ideas that people are especially focused on, the benchmarks of today, those things. Can you disentangle that? Because I think that's inspiring. And if there were more people like that, we might be able to solve some of the bigger problems that AI dreams to solve. And there's a big danger in this, because in a way you are expected to marry into an intellectual tradition and, within this tradition, into a particular school.
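As an aside on the NAND remark a little earlier: the claim that all of logic can be reduced to NAND gates is easy to verify directly. The gate names below are the obvious ones; nothing here is drawn from the conversation beyond the idea itself.

```python
# Everything in Boolean logic can be built from NAND alone.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

# Quick truth table check against Python's built-in operators.
for a in (False, True):
    for b in (False, True):
        assert not_(a)    == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b)  == (a or b)
        assert xor_(a, b) == (a != b)
print("NOT, AND, OR and XOR all recovered from NAND alone")
```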
If everybody comes up with their own paradigms, the whole thing is not cumulative as an enterprise. So in some sense, you need a healthy balance. You need paradigmatic thinkers and you need people that work within given paradigms. Basically, scientists today define themselves largely by methods. And it's almost a disease that we think of a scientist as somebody who was convinced by their guidance counselor that they should join a particular discipline, and then they find a good mentor to learn the right methods, and then they are lucky enough and privileged enough to join the right team, and then their name will show up on influential papers. But we also see that there are diminishing returns with this approach. And when our field, computer science and AI, started, most of the people that joined this field had interesting opinions. And today's thinkers in AI either don't have interesting opinions at all, or these opinions are inconsequential for what they're actually doing, because what they're doing is they apply the state of the art methods with a small epsilon. And this is often a good idea if you think that this is the best way to make progress. And for me, it's, first of all, very boring. If somebody else can do it, why should I do it? If the current methods of machine learning lead to strong AI, why should I be doing it? I will just wait until they're done — and do that waiting on the beach, or read interesting books, or write some, and have fun. But if you don't think that we are currently doing the right thing, if we are missing some perspectives, then it's required to think outside of the box. It's also required to understand the boxes. It's necessary to understand what worked and what didn't work and for what reasons. So you have to be willing to ask new questions and design new methods whenever you want to answer them. And you have to be willing to dismiss the existing methods if you think that they're not going to yield the right answers. It's very bad career advice to do that. So maybe to briefly stay, for one more time, in the early days: when would you say, for you, was the dream — before we dive into the discussions that we just almost started — when was the dream to understand or maybe to create human level intelligence born for you? I think that you can see AI largely today as advanced information processing. If you would change the acronym of AI into that, most people in the field would be happy. It would not change anything about what they're doing. We're automating statistics, and many of the statistical models are more advanced than what statisticians had in the past. And it's pretty good work. It's very productive. And the other aspect of AI is a philosophical project. And this philosophical project is very risky, and very few people work on it, and it's not clear if it succeeds. So first of all, you keep throwing out a lot of really interesting ideas, and I have to pick which ones we go with. But first of all, you use the term information processing — just information processing — as if it's the mere muck of existence, and yet it might be the epitome of existence: the entirety of the universe might be information processing, consciousness and intelligence might be information processing. So maybe you can comment on whether that advanced information processing view is a limiting realm of ideas. And then the other one is: what do you mean by the philosophical project? So I suspect that general intelligence is the result of trying to solve general problems.
So intelligence, I think, is the ability to model. It's not necessarily goal directed rationality or something; many intelligent people are bad at this. But it's the ability to be presented with a number of patterns and see a structure in those patterns, and be able to predict the next set of patterns — to make sense of things. And some problems are very general. Usually intelligence serves control, so you make these models for the particular purpose of interacting as an agent with the world and getting certain results. But intelligence itself is in this sense instrumental to something; by itself it's just the ability to make models. And some of the problems are so general that the system that makes them needs to understand what it itself is and how it relates to the environment. So as a child, for instance, you notice you do certain things despite perceiving yourself as wanting different things. So you become aware of your own psychology. You become aware of the fact that you have complex structure in yourself, and you need to model yourself, to reverse engineer yourself, to be able to predict how you will react to certain situations and how you deal with yourself in relationship to your environment. And this process, this project — if you reverse engineer yourself and your relationship to reality and the nature of a universe that can contain you — if you go all the way, this is basically the project of AI. Or you could say the project of AI is a very important component of it. The Turing test, in a way, is: you ask a system, what is intelligence? If that system is able to explain what it is, how it works, then you should assign it the property of being intelligent in this general sense. So the test that Turing was administering, in a way — I think he could see it, but he didn't express it yet in the original 1950 paper — is that he was trying to find out whether he himself was generally intelligent. Because in order to take this test, the rub is, of course, you need to be able to understand what that system is saying. And we don't yet know if we can build an AI. We don't yet know if we are generally intelligent. Basically, you win the Turing test by building an AI. Yes. So in a sense, hidden within the Turing test is a kind of recursive test. Yes, it's a test on us. The Turing test is basically a test of the conjecture of whether people are intelligent enough to understand themselves. Okay. But you also mentioned a little bit of self awareness, and then the project of AI. Do you think this kind of emergent self awareness is one of the fundamental aspects of intelligence? So as opposed to goal oriented, as you said, kind of puzzle solving, is it coming to grips with the idea that you're an agent in the world? I find that many highly intelligent people are not very self aware, right? So self awareness and intelligence are not the same thing. And you can also be self aware, if you have good priors, without being especially intelligent. So you don't need to be very good at solving puzzles if the system that you are already implements the solution. But I do find intelligence — you kind of mentioned children, right? Is the fundamental project of AI to create the learning system that's able to exist in the world? So you kind of drew a difference between self awareness and intelligence, and yet you said that self awareness seems to be important for children. So I call this ability to make sense of the world and your own place in it —
so, the ability to understand what you're doing in this world: sentience. And I would distinguish sentience from intelligence, because sentience is possessing certain classes of models, and intelligence is a way to get to these models if you don't already have them. I see. So can you maybe pause a bit and try to answer the question that we just said we may not be able to answer? And it might be a recursive meta question: what is intelligence? I think that intelligence is the ability to make models. So, models. I think it's useful to give examples. Very popular now, neural networks form representations of a large scale data set. They form models of those data sets. When you say models, and look at today's neural networks, what is the difference in how you're thinking about what is intelligent when saying that intelligence is the process of making models? There are two aspects to this question. One is the representation: is the representation adequate for the domain that we want to represent? The other one is: is the type of model that you arrive at adequate? So basically, are you modeling the correct domain? I think in both of these cases, modern AI is lacking still. And I'm not saying anything new here. I'm not criticizing the field. Most of the people that design our paradigms are aware of that. One aspect that we're missing is unified learning. When we learn, we at some point discover that everything that we sense is part of the same object, which means we learn it all into one model, and we call this model the universe. So the experience of the world that we are embedded in is not a secret direct wire to physical reality. Physical reality is a weird quantum graph that we can never experience or get access to. But it has these properties: it can create certain patterns at our systemic interface to the world. And we make sense of these patterns, and the relationships between the patterns that we discover are what we call the physical universe. So at some point in our development as a nervous system, we discover that everything that we relate to in the world can be mapped to a region in the same three dimensional space, by and large. We now know in physics that this is not quite true. The world is not actually three dimensional, but the world that we are entangled with, at the level at which we are entangled with it, is largely a flat three dimensional space. And so this is the model that our brain is intuitively making. And this is, I think, what gave rise to this intuition of res extensa, of this material world, this material domain. It's one of the mental domains, but it's just the class of all models that relate to this environment, this three dimensional physics engine in which we are embedded. A physics engine in which we're embedded — I love that. Let's slowly pause on that. So this quantum graph, I think you called it, which is the real world, which you can never get access to — there's a bunch of questions I want to sort of disentangle there. But maybe one useful one, from one of your recent talks I looked at: can you just describe the basics? Can you talk about what is dualism, what is idealism, what is materialism, what is functionalism, and what connects with you most? Because you just mentioned there's a reality we don't have access to. Okay. What does that even mean? And why don't we get access to it? Aren't we part of that reality? Why can't we access it?
So the particular trajectory that mostly exists in the West is the result of our indoctrination by a cult for 2000 years. A cult? Which one? Oh, 2000 years — the Catholic cult, mostly. And for better or worse, it has created or defined many of the modes of interaction that we have, that have created this society. But it has also, in some sense, scarred our rationality. And the intuition that exists, if you would translate the mythology of the Catholic church into the modern world, is that the world in which you and me interact is something like a multiplayer role playing adventure. And the money and the objects that we have in this world — this is all not real. Or as Eastern philosophers would say, it's Maya. It's just stuff that appears to be meaningful. And this embedding in this meaning, if you believe in it, is samsara. It's basically the identification with the needs of the mundane, secular, everyday existence. And the Catholics also introduced the notion of higher meaning, the sacred. And this existed before, but eventually the natural shape of God is the Platonic form of the civilization that you're part of. It's basically the superorganism that is formed by the individuals as an intentional agent. And basically, the Catholics used a relatively crude mythology to implement software on the minds of people and get the software synchronized, to make them walk in lockstep, to basically get this God online and to make it efficient and effective. And I think God, technically, is just a self that spans multiple brains, as opposed to your self and my self, which mostly exist just on one brain each. Right? And so in some sense, you can construct a self functionally, as a function that is implemented across brains. And this is a God with a small g. That's one of the things Yuval Harari talks about — this is one of the nice features of our brains. It seems that we can all download the same piece of software, like God in this case, and kind of share it. Yeah. So basically you give everybody a spec, and the mathematical constraints that are intrinsic to information processing make sure that, given the same spec, you come up with a compatible structure. Okay. So that's the space of ideas that we all share. And we think that's kind of the mind. But that's separate from the idea, from Christianity, from religion, that there's a separate thing beyond the mind: there is a real world. And this real world is the world in which God exists. God is the coder of the multiplayer adventure, so to speak. And we are all players in this game. And that's dualism. Yes. But the idea is that the mental realm exists in a different implementation than the physical realm. And the mental realm is real. And a lot of people have this intuition that there is this real realm in which you and me talk and speak right now; then comes a layer of physics and abstract rules and so on; and then comes another real realm where our souls are and our true form is — a thing that gives us phenomenal experience. And this is, of course, a very confused notion that you would get. And it's basically the result of connecting materialism and idealism in the wrong way. So, okay. I apologize, but I think it's really helpful if we just try to define terms. Like, what is dualism? What is idealism? What is materialism? For people that don't know. So the idea of dualism in our cultural tradition is that there are two substances: a mental substance and a physical substance.
And they interact by different rules. And the physical world is basically causally closed and is built on a low level causal structure. So they're basically a bottom level that is causally closed. That's entirely mechanical and mechanical in the widest sense. So it's computational. There's basically a physical world in which information flows around and physics describes the laws of how information flows around in this world. Would you compare it to like a computer where you have hardware and software? The computer is a generalization of information flowing around. Basically, but if you want to discover that there is a universal principle, you can define this universal machine that is able to perform all the computations. So all these machines have the same power. This means that you can always define a translation between them, as long as they have unlimited memory to be able to perform each other's computations. So would you then say that materialism is this whole world is just the hardware and idealism is this whole world is just the software? Not quite. I think that most idealists don't have a notion of software yet because software also comes down to information processing. So what you notice is the only thing that is real to you and me is this experiential world in which things matter, in which things have taste, in which things have color, phenomenal content, and so on. You are bringing up consciousness. Okay. This is distinct from the physical world in which things have values only in an abstract sense. And you only look at cold patterns moving around. So how does anything feel like something? And this connection between the two things is very puzzling to a lot of people, of course, too many philosophers. So idealism starts out with the notion that mind is primary, materialism, things that matter is primary. And so for the idealist, the material patterns that we see playing out are part of the dream that the mind is dreaming. And we exist in a mind on a higher plane of existence, if you want. And for the materialist, there is only this material thing, and that generates some models, and we are the result of these models. And in some sense, I don't think that we should understand, if we understand it properly, materialism and idealism as a dichotomy, but as two different aspects of the same thing. So the weird thing is we don't exist in the physical world. We do exist inside of a story that the brain tells itself. Okay. Let my information processing take that in. We don't exist in the physical world. We exist in the narrative. Basically, a brain cannot feel anything. A neuron cannot feel anything. They're physical things. Physical systems are unable to experience anything. But it would be very useful for the brain or for the organism to know what it would be like to be a person and to feel something. Yeah. So the brain creates a simulacrum of such a person that it uses to model the interactions of the person. It's the best model of what that brain, this organism thinks it is in relationship to its environment. So it creates that model. It's a story, a multimedia novel that the brain is continuously writing and updating. But you also kind of said that you said that we kind of exist in that story. Yes. In that story. What is real in any of this? So again, these terms are... You kind of said there's a quantum graph. I mean, what is this whole thing running on then? Is the story... And is it completely fundamentally impossible to get access to it? Because isn't the story supposed to... 
Isn't the brain itself something existing in some kind of context? So what we can do as computer scientists: we can engineer systems and test our theories this way, systems that might have the necessary and sufficient properties to produce the phenomena that we are observing, which is the self in a virtual world that is generated in somebody's neocortex, that is contained in the skull of this primate here. And when I point at this, this indexicality is of course wrong. But I do create something that is likely to give rise to patterns on your retina that allow you to interpret what I'm saying. But we both know that the world that you and me are seeing is not the real physical world. What we are seeing is a virtual reality generated in your brain to explain the patterns on your retina. How close is it to the real world? That's kind of the question. You have people like Donald Hoffman who say that we're really far away. The thing we're seeing, you and I now, that interface we have, is very far away from anything. We don't even have anything close to a sense of what the real world is. Or is it just a very surface piece of the architecture? Imagine you look at the Mandelbrot fractal, this famous thing that Benoit Mandelbrot discovered. You see an overall shape in there. But if you truly understand it, you know it's two lines of code. It's basically a series that is being tested for divergence for complex numbers, for every point in the complex number plane. And for those points where the series is diverging, you paint this black. And where it's converging, you don't. And you get the intermediate colors by taking how fast it diverges. This gives you the shape of this fractal. But imagine you live inside of this fractal and you don't have access to where you are in the fractal. Or you have not even discovered the generator function. So what you see is: all I can see right now is a spiral. And this spiral moves a little bit to the right. Is this an accurate model of reality? Yes, it is. It is an adequate description. You know that there is actually no spiral in the Mandelbrot fractal. It only appears like this to an observer that is interpreting things as a two dimensional space and then defines certain regularities in there, at a certain scale that it currently observes. Because if you zoom in, the spiral might disappear and turn out to be something different at a different resolution. So at this level, you have the spiral. And then you discover the spiral moves to the right and at some point it disappears. So you have a singularity. At this point, your model is no longer valid. You cannot predict what happens beyond the singularity. But you can observe again and you will see that it hits another spiral, and at this point it disappears. So now you have a second order law. And if you make 30 layers of these laws, then you have a description of the world that is similar to the one that we come up with when we describe the reality around us. It's reasonably predictive. It does not cut to the core of it. It does not explain how it's being generated, how it actually works. But it's relatively good to explain the universe that we are entangled with. But you don't think the tools of computer science, the tools of physics, could step outside, see the whole drawing, and get at the basic mechanism of how the pattern, the spirals, are generated? Imagine you would find yourself embedded into a Mandelbrot fractal and you try to figure out how it works, and you somehow have a Turing machine with enough memory to think.
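As an aside, the "two lines of code" alluded to here are the standard escape-time test for the Mandelbrot set; a minimal sketch in Python follows, where the grid resolution, iteration cap, and ASCII rendering are arbitrary illustrative choices, not anything specified in the conversation:

# Escape-time view of the Mandelbrot set: for each point c in the complex
# plane, iterate z -> z*z + c and record how quickly |z| diverges.
def mandelbrot_escape(c, max_iter=100):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:        # once |z| exceeds 2 the series provably diverges
            return n            # "intermediate colors": how fast it diverged
    return max_iter             # treated as non-diverging -> painted black

# Sample a coarse grid of the complex plane and print an ASCII picture.
for im in range(20, -21, -2):
    row = ""
    for re in range(-40, 21):
        n = mandelbrot_escape(complex(re / 20.0, im / 20.0))
        row += "#" if n == 100 else " .:-=+*"[min(n // 4, 6)]
    print(row)

An observer living inside that picture would see spirals and bulbs and formulate local laws about them, while the whole thing is generated by the short loop above.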
And as a result, you come to this idea that it must be some kind of automaton. And maybe you just enumerate all the possible automata until you get to the one that produces your reality. So you can identify necessary and sufficient conditions. For instance, we discover that mathematics itself is the domain of all languages. And then we see that most of the domains of mathematics that we have discovered are in some sense describing the same fractals. This is what category theory is obsessed about, that you can map these different domains to each other. So there are not that many fractals. And some of these have interesting structure and symmetry breaks. And so you can discover what region of this global fractal you might be embedded in from first principles. But the only way you can get there is from first principles. So basically your understanding of the universe has to start with automata, and then number theory, and then spaces, and so on. Yeah. I think Stephen Wolfram still dreams that he'll be able to arrive at the fundamental rules of the cellular automaton, or the generalization of it, that is behind our universe. Yeah. You've said on this topic, in a recent conversation, that, quote, some people think that a simulation can't be conscious and only a physical system can, but they got it completely backward: a physical system cannot be conscious, only a simulation can be conscious; consciousness is a simulated property that simulates itself. Just like you said, the mind is kind of, we'll call it, a story, a narrative. There's a simulation. So our mind is essentially a simulation? Usually I try to use the terminology so that the mind is basically the principle that produces the simulation. It's the software that is implemented by your brain. And the mind is creating both the universe that we are in and the self, the idea of a person that is on the other side of attention and is embedded in this world. Why is that important, that idea of a self? Why is that an important feature in the simulation? It's basically a result of the purpose that the mind has. It's a tool for modeling, right? We are not actually monkeys. We are side effects of the regulation needs of monkeys. And what the monkey has to regulate is the relationship of an organism to an outside world that is in large part also consisting of other organisms. And as a result, it basically has regulation targets that it tries to get to. These regulation targets start with priors. They're basically like unconditional reflexes that we are more or less born with. And then we can reverse engineer them to make them more consistent. And then we get more detailed models about how the world works and how to interact with it. And so these priors that you commit to are largely target values, set points, that our needs should approach. And the deviation from the set point creates some urge, some tension. And we find ourselves living inside of feedback loops, right? Consciousness emerges over dimensions of disagreement with the universe: things that you care about that are not the way they should be but that you need to regulate. And so in some sense, the sense of self is the result of all the identifications that you're having. And an identification is a regulation target that you're committing to. It's a dimension that you care about, that you think is important. And this is also what locks you in. If you let go of these commitments, of these identifications, you get free. There's nothing that you have to do anymore.
And if you let go of all of them, you're completely free and you can enter nirvana, because you're done. And actually, this is a good time to pause and say thank you to a friend of mine, Gustav Soderström, who introduced me to your work. I wanted to give him a shout out. He's a brilliant guy, and I think the AI community is actually quite amazing, and Gustav is a good representative of that. You are as well. So I'm glad, first of all, I'm glad the internet exists, YouTube exists, where I can watch your talks and then get to your book and study your writing and think about it. You know, that's amazing. Okay. But you've kind of described this emergent phenomenon of consciousness from the simulation. So what about the hard problem of consciousness? Can you just linger on it? Why does it still feel like... I understand the self is an important part of the simulation, but why does the simulation feel like something? So if you look at a book by, say, George R. R. Martin, where the characters have plausible psychology, and they stand on a hill because they want to conquer the city below the hill and they're determined to do it, and they look at the color of the sky and they are apprehensive and feel empowered and all these things: why do they have these emotions? It's because it's written into the story, right? And it's written into the story because there's an adequate model of the person that predicts what they're going to do next. And the same thing is true for us. So it's basically a story that our brain is writing. It's not written in words. It's written in perceptual content, basically multimedia content. And it's a model of what the person would feel if it existed. So it's a virtual person. And you and me happen to be this virtual person. So this virtual person gets access to the language center and talks about the sky being blue. And this is us. But hold on a second. Do I exist in your simulation? You do exist, in an almost similar way as me. So there are internal states that are less accessible for me that you have, and so on. And my model might not be completely adequate. There are also things that I might perceive about you that you don't perceive. But in some sense, both you and me are puppets, two puppets that enact a play in my mind. And I identify with one of them, because I can control one of the puppets directly. And with the other one, I can create interactions in between. So for instance, we can have an interaction that even leads to a coupling, to a feedback loop. So we can think things together in a certain way, or feel things together. But this coupling is itself not a physical phenomenon. It's entirely a software phenomenon. It's the result of two different implementations interacting with each other. So that's interesting. So are you suggesting, like, the way you think about it, is the entirety of existence a simulation, where each mind is a little subsimulation? Like, why doesn't your mind have access to my mind's full state? Like, for the same reason that my mind doesn't have access to its own full state. So what, I mean... There is no trick involved. So basically, when I know something about myself, it's because I made a model. So one part of your brain is tasked with modeling what other parts of your brain are doing. Yes. But there seems to be an incredible consistency about this world in the physical sense, in that there are repeatable experiments and so on. How does that fit into our silly descendant-of-apes simulation of the world?
So why is it so repeatable? Why is everything so repeatable? Well, not everything. There are a lot of fundamental physics experiments that are repeatable for a long time, all over the place and so on. Laws of physics. How does that fit in? It seems that the parts of the world that are not deterministic are not long lived. So if you build a system, any kind of automaton, if you build simulations of something, you'll notice that the phenomena that endure are those that give rise to stable dynamics. So basically, if you see anything that is complex in the world, it's usually the result of some control, of some feedback, that keeps it stable around certain attractors. And the things that are not stable, that don't give rise to certain harmonic patterns and so on, they tend to get weeded out over time. So if we are in a region of the universe that sustains complexity, which is required to implement minds like ours, this is going to be a region of the universe that is very tightly controlled and controllable. So it's going to have lots of interesting symmetries, and also symmetry breaks that allow the creation of structure. But they exist where? So there's such an interesting idea that our mind is a simulation that's constructing the narrative. But my question is, just to try to understand how that fits with the entirety of the universe: you're saying that there's a region of this universe that allows enough complexity to create creatures like us. But what's the connection between the brain, the mind, and the broader universe? Which comes first? Which is more fundamental? Is the mind the starting point and the universe emergent? Or is the universe the starting point and the minds emergent? I think quite clearly the latter. That's at least a much easier explanation, because it allows us to make causal models, and I don't see any way to construct an inverse causality. So what happens when you die, to your mind simulation? My implementation ceases. So basically the thing that implements myself will no longer be present, which means if I am not implemented on the minds of other people, the thing that I identify with... The weird thing is, I don't actually have an identity beyond the identity that I construct. Take the Dalai Lama: he identifies as a form of government. So basically the Dalai Lama gets reborn, not because he's confused, but because he is not identifying as a human being. He runs on a human being. He's basically a governmental software that is instantiated anew in every generation. So his advisors will pick someone who does this in the next generation. So if you identify with this, you are no longer a human and you don't die, in the sense that what dies is only the body of the human that you run on. To kill the Dalai Lama, you would have to kill his tradition. And if we look at ourselves, we realize that we are, to a small part, like this, most of us. So for instance, if you have children, you realize something lives on in them. Or if you spark an idea in the world, something lives on. Or if you identify with the society around you, because you are in part that; you're not just this human being. Yeah. So in a sense, you are kind of like a Dalai Lama, in the sense that you, Joscha Bach, are just a collection of ideas. So like, you have this operating system on which a bunch of ideas live and interact, and then once you die, they kind of part, some of them jump off the ship. I would put it the other way: identity is a software state. It's a construction. It's not physically real.
Identity is not a physical concept. It's basically a representation of different objects on the same world line. But identity lives and dies. Are you attached... what's the fundamental thing? Is it the ideas that come together to form an identity? Or is each individual identity actually a fundamental thing? It's a representation that you can get agency over if you care. So basically, you can choose what you identify with if you want to. No, but it just seems, if the mind is not real, that birth and death are not a crucial part of it. Well, maybe I'm silly. Maybe I'm attached to this whole biological organism. But it seems that being a physical object in this world is an important aspect of birth and death. Like, it feels like it has to be physical to die. It feels like simulations don't have to die. The physics that we experience is not the real physics. There is no color and sound in the real world. Color and sound are types of representations that you get if you want to model reality with oscillators. So colors and sounds in some sense have octaves, and it's because they are probably represented with oscillators. That's why colors form a circle of hues. And colors have harmonics, sounds have harmonics, as a result of synchronizing oscillators in the brain. So the world that we subjectively interact with is fundamentally the result of the representation mechanisms in our brain. They are, mathematically, to some degree universal. There are certain regularities that you can discover in the patterns and not others. But the patterns that we get, this is not the real world. The world that we interact with is always made of too many parts to count. So when you look at this table and so on, it's consisting of so many molecules and atoms that you cannot count them. So you only look at the aggregate dynamics, at limit dynamics: if you had almost infinitely many particles, what would be the dynamics of the table? And this is roughly what you get. So the geometry that we are interacting with is the result of discovering those operators that work in the limit, that you get by building an infinite series that converges. For those parts where it converges, it's geometry. For those parts where it doesn't converge, it's chaos. Right. And then all of that is filtered through the consciousness that's emergent in our narrative. The consciousness gives it color, gives it feeling, gives it flavor. So I think the feeling, flavor and so on is given by the relationship that a feature has to all the other features. It's basically a giant relational graph that is our subjective universe. The color, this experiential color, is given by those aspects of the representation where you care about something, where you have identifications, where something means something, where you are the inside of a feedback loop. And the dimensions of caring are basically dimensions of this motivational system that we emerge over. The meaning is in the relations of the graph. Can you elaborate on that a little bit? Maybe we can even step back and ask the question of what is consciousness, to be sort of more systematic. How do you think about consciousness? I think that consciousness is largely a model of the contents of your attention. It's a mechanism that has evolved for a certain type of learning. At the moment, our machine learning systems largely work by building chains of weighted sums of real numbers with some nonlinearity.
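As an aside, a minimal sketch of such a chain of weighted sums with a nonlinearity, trained by pushing an error signal backwards through the layers; the tiny network, toy data, and learning rate below are invented for illustration and are not taken from the conversation:

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 2))            # toy inputs (made up)
y = (X[:, :1] * X[:, 1:]) > 0              # toy target: do the two signs agree?
W1 = rng.normal(0, 0.5, (2, 8))            # first layer of weighted sums
W2 = rng.normal(0, 0.5, (8, 1))            # second layer of weighted sums

for step in range(2000):
    # forward pass: chained weighted sums with a nonlinearity in between
    h = np.tanh(X @ W1)
    p = 1 / (1 + np.exp(-(h @ W2)))        # sigmoid output
    # backward pass: pipe the error signal back through the layers (chain rule)
    dp = (p - y) / len(X)
    dW2 = h.T @ dp
    dh = dp @ W2.T * (1 - h ** 2)          # derivative of tanh
    dW1 = X.T @ dh
    # adjust a lot of weights by a small amount on every step
    W1 -= 0.5 * dW1
    W2 -= 0.5 * dW2

Every weight is nudged a little on every step, which is the slow, data-hungry procedure that the attention-based learning described next is being contrasted with.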
And you learn by piping an error signal through these different chained layers and adjusting the weights in these weighted sums. And you can approximate most polynomials with this if you have enough training data. But the price is that you need to change a lot of these weights. Basically, the error is piped backwards into the system until it accumulates at certain junctures in the network, and everything else evens out statistically. And only at these junctures, this is where you had the actual error in the network, do you make the change there. This is a very slow process, and our brains don't have enough time for that, because we don't get old enough to play Go the way that our machines learn to play Go. So instead, what we do is attention based learning. We pinpoint the probable region in the network where we can make an improvement, and then we store this binding state together with the expected outcome in a protocol. And this ability to make indexed memories, for the purpose of learning, to revisit these commitments later, this requires a memory of the contents of our attention. Another aspect is that when I construct my reality, I make mistakes. So I see things that turn out to be reflections or shadows and so on, which means I have to be able to point out which features of my perception gave rise to the present construction of reality. So the system needs to pay attention to the features that are currently in its focus. And it also needs to pay attention to whether it pays attention itself, in part because the attentional system gets trained with the same mechanism, so it's reflexive, but also in part because your attention lapses if you don't pay attention to the attention itself. So: is the thing that I'm currently seeing just a dream that my brain has spun off, some kind of daydream, or am I still paying attention to my percept? So you have to periodically go back and check whether you're still paying attention. And if you have this loop, and you make it tight enough, between the system becoming aware of the contents of its attention, and the fact that it's paying attention itself and makes attention the object of its attention, I think this is the loop over which we wake up. So there's this attentional mechanism that's somehow self referential, that's fundamental to what consciousness is. So let me ask you a question. I don't know how much you're familiar with the recent breakthroughs in natural language processing: they use an attentional mechanism, they use something called transformers, to learn patterns in sentences by allowing the network to focus its attention on particular parts of the sentence, each individual word. So, like, to parameterize and make learnable the dynamics of a sentence by having a little window into the sentence. Do you think that's a little step that eventually will take us to the attentional mechanisms from which consciousness can emerge? Not quite. I think it models only one aspect of attention. In the early days of automated language translation, there was an example that I found particularly funny, where somebody tried to translate a text from English into German, and the text was: a bat broke the window. And the translation in German, translated back into English, was: a bat, this flying mammal, broke the window with a baseball bat. Yes. And this seemed to be the most plausible reading to the program, because it somehow maximized the probability of translating the concept bat into German in the same sentence.
And this is a mistake that the transformer model is not making, because it's tracking identity. And the attentional mechanism in the transformer model is basically putting its finger on individual concepts, making sure that these concepts pop up later in the text, and it basically tracks the individuals through the text. And this is why the system can learn things that other systems couldn't before it, which makes it, for instance, possible to write a text where it talks about the scientist, then the scientist has a name and has a pronoun, and it keeps a consistent story about that thing. What it does not do: it doesn't fully integrate this. So the meaning falls apart at some point; it loses track of the context. It does not yet understand that everything that it says has to refer to the same universe. And this is where this thing falls apart. But the attention in the transformer model does not go beyond tracking identity. And tracking identity is an important part of attention, but it's a different, very specific attentional mechanism. And it's not the one that gives rise to the type of consciousness that we have. Just to linger on it, what do you mean by identity in the context of language? So when you talk about language, you have different words that can refer to the same concept. Got it. And in the sense that... The space of concepts. So... Yes. And it can also be in a nominal sense or in an indexical sense, where you say this word does not only refer to this class of objects, but it refers to a definite object, to some kind of agent that weaves their way through the story, and it's only referred to in different ways in the language. So the language is basically a projection from a conceptual representation, from a scene that is evolving, into a discrete string of symbols. And what the transformer is able to do: it learns aspects of this projection mechanism that other models couldn't learn. So have you ever seen an artificial intelligence, or any kind of construction idea, unlike neural networks or perhaps within neural networks, that's able to form something where the space of concepts continues to be integrated? So what you're describing: building a knowledge base, building this consistent, larger and larger set of ideas that would then allow for deeper understanding. Wittgenstein thought that we can build everything from language, from basically a logical, grammatical construct. And I think to some degree this was also what Minsky believed, so that's why he focused so much on common sense reasoning and so on. And a project that was inspired by him was Cyc. That's still basically going on. Yes. Of course, ideas don't die. Only people die. That's true. And Cyc is a productive project. It's just probably not one that is going to converge to general intelligence. The thing that Wittgenstein couldn't solve, and he looked at this in his book at the end of his life, the Philosophical Investigations, was the notion of images. Images play an important role in the Tractatus. The Tractatus is an attempt to basically turn philosophy into a logical programming language, to design a logical language in which you can do actual philosophy, that's rich enough for doing this. And the difficulty was to deal with perceptual content. And eventually, I think, he decided that he was not able to solve it. And I think this preempted the failure of the logicist program in AI. And the solution, as we see it today, is that we need more general function approximation.
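For readers who want the mechanism in code: a hedged sketch of scaled dot-product attention, the operation at the core of the transformer being discussed. The token list, dimensions, and random weights are made-up stand-ins, and real models learn the projection matrices rather than drawing them at random:

import numpy as np

def attention(Q, K, V):
    # each position puts weight ("its finger") on the positions it finds relevant
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V, w

rng = np.random.default_rng(0)
tokens = ["the", "scientist", "published", "her", "paper"]   # invented sentence
E = rng.normal(size=(len(tokens), 16))                       # toy embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))   # untrained projections
out, w = attention(E @ Wq, E @ Wk, E @ Wv)

# w[3] is how strongly the pronoun "her" attends to each other token; with
# trained weights, this is where "tracking identity" (coreference) shows up.
print(dict(zip(tokens, np.round(w[3], 2))))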
There are geometric functions that we learn to approximate that cannot be efficiently expressed and computed in a grammatical language. We can, of course, build automata that go via number theory and so on to learn an algebra and then compute an approximation of this geometry. But to equate language and geometry is not an efficient way to think about it. So, well, you kind of just said that the approach that neural networks take is actually more general than what can be expressed through language. Yes, than what can be efficiently expressed through language at the data rates at which we process grammatical language. Okay. So you don't think language... so you disagree with Wittgenstein, that language is not fundamental to... I agree with Wittgenstein. I just agree with the late Wittgenstein. And I also appreciate the beauty of the early Wittgenstein. I think that the Tractatus itself is probably the most beautiful philosophical text that was written in the 20th century. But language is not fundamental to cognition and intelligence and consciousness. So I think that language, or the natural language that we're using, is a particular level of abstraction that we use to communicate with each other. But the languages in which we express geometry are not grammatical languages in the same sense. They work slightly differently; they are more general expressions of functions. And I think the general nature of a model is: you have a bunch of parameters, these have a range, these are the variances of the world, and you have relationships between them, which are constraints, which say if certain parameters have these values, then other parameters have to have the following values. And this is a very early insight in computer science, and I think one of the earliest formulations of it is the Boltzmann machine. And the problem with the Boltzmann machine is that, while it has a measure of whether it's good, which is basically the energy of the system, the amount of tension that you have left in the constraints where the constraints don't quite match, it's very difficult, despite having this global measure, to train it. Because as soon as you add more than trivially few elements, parameters, into the system, it's very difficult to get it to settle into the right architecture. And so the solution that Hinton and Sejnowski found was to use a restricted Boltzmann machine, which drops the lateral links, the links inside the layers of the Boltzmann machine, and basically only has an input and an output layer. But this limits the expressivity of the Boltzmann machine. So then he builds a network of these primitive Boltzmann machines. And in some sense, you can see an almost continuous development from this to the deep learning models that we're using today, even though we don't use Boltzmann machines at this point. But the idea of the Boltzmann machine is: you take this model, you clamp some of the values to perception, and this forces the entire machine to go into a state that is compatible with the states that you currently perceive. And this state is your model of the world. I think it's a very general way of thinking about models, but we have to use a different approach to make it work. We have to find different ways to train the Boltzmann machine. So the mechanism that trains the Boltzmann machine, and the mechanism that makes the Boltzmann machine settle into its state, are distinct from the constraint architecture of the Boltzmann machine itself.
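To make the energy and "clamping" language concrete, here is a small sketch of a restricted Boltzmann machine; the layer sizes, random weights, and single sampling step are illustrative assumptions, not a description of Hinton and Sejnowski's actual experiments:

import numpy as np

rng = np.random.default_rng(0)
nv, nh = 6, 4                        # visible and hidden units (arbitrary sizes)
W = rng.normal(0, 0.1, (nv, nh))     # only links between the two layers
a, b = np.zeros(nv), np.zeros(nh)    # biases

def energy(v, h):
    # lower energy = less tension left in the constraints between the layers
    return -(v @ a + h @ b + v @ W @ h)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

v = np.array([1, 0, 1, 1, 0, 0])     # clamp the visible units to a "percept"
p_h = sigmoid(b + v @ W)             # hidden states compatible with that percept
h = (rng.random(nh) < p_h).astype(int)
print("energy of the settled state:", energy(v, h))

The "restriction" is visible in the shape of W: there are no visible-visible or hidden-hidden links, which is what makes the settling step above a single cheap computation.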
The kind of mechanisms that we want to develop, you're saying? Yes. So the direction in which I think our research is going to go is, for instance... what you notice in perception is that our perceptual models of the world are not probabilistic, but possibilistic, which means you should be able to perceive things that are improbable, but possible. A perceptual state is valid not if it's probable, but if it's possible, if it's coherent. So if you see a tiger coming after you, you should be able to see this even if it's unlikely. And the probability is only necessary for the convergence of the model. So given the space of possibilities, which is very, very large, and a set of perceptual features, how should you change the states of the model to get it to converge with your perception? But the space of ideas that are coherent with the context that you're sensing is perhaps not as large. I mean, that's perhaps pretty small. The degree of coherence that you need to achieve depends, of course, on how deep your models go. For instance, politics is very simple when you know very little about game theory and human nature. So the younger you are, the more obvious it is how politics would work, right? Because you get a coherent aesthetics from relatively few inputs. And the more layers of reality you model, the harder it gets to satisfy all the constraints. So, you know, the current neural networks are fundamentally supervised learning systems, feed forward neural networks with backpropagation to learn. What's your intuition about what kind of mechanisms we might move towards to improve the learning procedure? I think one big aspect is going to be meta learning, and architecture search is a step in this direction. In some sense, the first wave of classical AI worked by identifying a problem and a possible solution and implementing the solution, right? A program that plays chess. And right now we are in the second wave of AI. So instead of writing the algorithm that implements the solution, we write an algorithm that automatically searches for an algorithm that implements the solution. So the learning system in some sense is an algorithm that itself discovers the algorithm that solves the problem, like Go. Go is too hard to implement the solution by hand, but we can implement an algorithm that finds the solution. Yeah. So now let's move to the third stage, right? The third stage would be meta learning: find an algorithm that discovers a learning algorithm for the given domain. Our brain is probably not a learning system, but a meta learning system. This is one way of looking at what we are doing. There is another way. If you look at the way our brain is implemented, for instance, there is no central control that tells all the neurons how to wire up. Instead, every neuron is an individual reinforcement learning agent. Every neuron is a single celled organism that is quite complicated and in some sense quite motivated to get fed. And it gets fed if it fires on average at the right time. And the right time depends on the context that the neuron exists in, which is the electrical and chemical environment that it has. So it basically has to learn a function over its environment that tells it when to fire to get fed. Or, if you see it as a reinforcement learning agent: every neuron is in some sense making a hypothesis when it sends a signal, and it tries to pipe a signal through the universe and get positive feedback for it.
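The "third stage" described a little earlier in this turn, an algorithm that searches for a learning algorithm, can be caricatured as an outer loop wrapped around inner training runs; everything concrete below (random search, the toy task, the hyperparameter ranges) is an invented stand-in, not a claim about how brains or any particular AutoML system work:

import random

def make_task():
    # toy inner task (made up): fit y = 3x + 1 from noisy samples
    return [(x, 3 * x + 1 + random.gauss(0, 0.1)) for x in range(-5, 6)]

def inner_learner(task, lr, steps):
    # "second wave": an algorithm that searches for a solution to the task
    w, b = 0.0, 0.0
    for _ in range(steps):
        x, y = random.choice(task)
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err
    return sum(((w * x + b) - y) ** 2 for x, y in task) / len(task)

# "third stage": an outer loop that searches over the learning algorithm itself
# (here only its hyperparameters, by blind random search)
random.seed(0)
best = min(
    ((random.uniform(0.001, 0.05), random.randrange(50, 500)) for _ in range(30)),
    key=lambda cfg: inner_learner(make_task(), *cfg),
)
print("discovered learning configuration (lr, steps):", best)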
And the entire thing is set up in such a way that it's robustly self organizing into a brain, which means you start out with different neuron types that have different priors on which hypotheses to test and how to get their reward, and you put them into different concentrations in a certain spatial alignment, and then you entrain it in a particular order. And as a result, you get a well organized brain. Yeah, so okay, so the brain is a meta learning system with a bunch of reinforcement learning agents. And what I think you said, but just to clarify: there's no centralized government that tells you, here's a loss function, here's a loss function. Who says what's the objective? There are also governments, which impose loss functions on different parts of the brain. So we have differential attention. Some areas in your brain get specially rewarded when you look at faces. If you don't have that, you will get prosopagnosia, which is basically the inability to tell people apart by their faces. And the reason that happens is because it had an evolutionary advantage. So, like, evolution comes into play here. But it's basically an extraordinary attention that we have for faces. I don't think that people with prosopagnosia have, per se, a defective brain; the brain just has an average attention for faces. So people with prosopagnosia don't look at faces more than they look at cups. So the level at which they resolve the geometry of faces is not higher than for cups. And people that don't have prosopagnosia look obsessively at faces, right? For you and me, it's impossible to move through a crowd without scanning the faces. And as a result, we make insanely detailed models of faces that allow us to discern the mental states of people. So obviously, we don't know 99% of the details of this meta learning system that is our mind. Okay. But still, we took a leap from something much dumber to that, through the evolutionary process. Can you, first of all, maybe say how big of a leap that is, from our ancestors, from multi cell organisms, to our brain? And is there something we can think about? As we start to think about how to engineer intelligence, is there something we can learn from evolution? In some sense, life exists because of the market opportunity of controlled chemical reactions. We compete with dumb chemical reactions, and we win in some areas against this dumb combustion, because we can harness those entropy gradients where you need to add a little bit of energy in a specific way to harvest more energy. So we out competed combustion. Yes, in many regions we do, and we have to try very hard, because when we are in direct competition, we lose, right? Because the combustion is going to close the entropy gradients much faster than we can run. So basically we do this because every cell has a Turing machine built into it. It's literally like a read write head on a tape. So everything that's more complicated than a molecule, that isn't just a vortex around attractors, needs a Turing machine for its regulation. And then you bind cells together and you get the next level of organization, the organism, where the cells together implement some kind of software. For me, a very interesting discovery in the last year was the word spirit, because I realized that what spirit actually means is: an operating system for an autonomous robot. And when the word was invented, people needed this word, but they didn't have robots that they built themselves yet.
The only autonomous robots that were known were people, animals, plants, ecosystems, cities, and so on. And they all had spirits. And it makes sense to say that the plant has an operating system, right? If you pinch the plant in one area, then it's going to have repercussions throughout the plant. Everything in the plant is in some sense connected into some global aesthetics like in other organisms. An organism is not a collection of cells, it's a function that tells cells how to behave. And this function is not implemented as some kind of supernatural thing, like some morphogenetic field. It is an emergent result of the interactions of each cell with each other cell. Oh my God. So what you're saying is the organism is a function that tells what to do and the function emerges from the interaction of the cells. Yes. So it's basically a description of what the plant is doing in terms of microstates. And the microstates, the physical implementation are too many of them to describe them. So the software that we use to describe what the plant is doing, the spirit of the plant is the software, the operating system of the plant, right? This is a way in which we, the observers, make sense of the plant. And the same is true for people. So people have spirits, which is their operating system in a way, right? And there's aspects of that operating system that relate to how your body functions and others, how you socially interact, how you interact with yourself and so on. And we make models of that spirit. And we think it's a loaded term because it's from a pre scientific age. But it took the scientific age a long time to rediscover a term that is pretty much the same thing. And I suspect that the differences that we still see between the old word and the new word are translation errors that have happened over the centuries. Can you actually linger on that? Why do you say that spirit, just to clarify, because I'm a little bit confused. So the word spirit is a powerful thing. But why did you say in the last year or so that you discovered this? Do you mean the same old traditional idea of a spirit? I try to find out what people mean by spirit. When people say spirituality in the US, it usually refers to the phantom limb that they develop in the absence of culture. And a culture is in some sense, you could say the spirit of a society that is long game. This thing that is become self aware at a level above the individuals where you say, if you don't do the following things, then the grand, grand, grand grandchildren of our children will have nothing to eat. So if you take this long scope, where you try to maximize the length of the game that you are playing as a species, you realize that you're part of a larger thing that you cannot fully control. You probably need to submit to the ecosphere instead of trying to completely control it. There needs to be a certain level at which we can exist as a species if you want to endure. And our culture is not sustaining this anymore. We basically made this bet with the industrial revolution that we can control everything. And the modernist societies with basically unfettered growth led to a situation in which we depend on the ability to control the entire planet. And since we are not able to do that, as it seems, this culture will die. And we realize that it doesn't have a future, right? We called our children generation Z. That's a very optimistic thing to do. Yeah. 
So you have this kind of intuition that our civilization, you said culture, but you really mean the spirit of the civilization, the entirety of the civilization, may not exist for long. Yeah. Can you untangle that? What's your intuition behind that? You kind of mentioned to me offline that the industrial revolution was the moment we agreed to accept the offer, signed on the dotted line; with the industrial revolution, we doomed ourselves. Can you elaborate on that? This is a suspicion. I, of course, don't know how it plays out. But it seems to me that in a society in which you leverage yourself very far over an entropic abyss without land on the other side, it's relatively clear that your cantilever is at some point going to break down into this entropic abyss. And you have to pay the bill. Okay. Russian is my first language, and I'm also an idiot. Me too. This is just two apes, instead of playing with a banana, trying to have fun by talking. Okay. Anthropic what? And what's entropic? Entropic, in the sense of entropy. Oh, entropic. Got it. And entropic what, what was the other word you used? Abyss. What's that? It's a big gorge. Oh, abyss. Abyss, yes. Entropic abyss. So many of the things you say are poetic. It's hurting my ears. And this one is amazing, right? Mispronounced, which makes it even more poetic. Wittgenstein would be proud. So, entropic abyss. Okay, let's rewind then. The industrial revolution. So how does that get us into the entropic abyss? So in some sense, we burned a hundred million years worth of trees to get everybody plumbing. Yes. And the society that we had before that had a very limited number of people. So basically since zero BC, we hovered between 300 and 400 million people. Yes. And this only changed with the enlightenment and the subsequent industrial revolution. And in some sense, the enlightenment freed our rationality and also gradually freed our norms from the preexisting order. It was a process that basically happened in feedback loops, so it was not that one thing just caused the other. It was a dynamic that started. And the dynamic worked by basically increasing productivity to such a degree that we could feed all our children. And I think the definition of poverty is that you have as many children as you can feed before they die, which is, in some sense, the state that all animals on earth are in. So the definition of poverty is having only as many children as you can feed, and if you have more, they die. Yes. And in our societies, you can basically have as many children as you want, and they don't die. Right. So the reason why we don't have as many children as we want is because we also have to pay a price, in terms of having to insert ourselves into a lower social stratum if we have too many children. So basically everybody in the middle class and the lower upper class has only a limited number of children, because having more of them would mean a big economic hit to the individual families. Yes. Because children, especially in the US, are super expensive to have. And you are only taken out of this if you are basically super rich or if you are super poor. If you're super poor, it doesn't matter how many kids you have, because your status is not going to change, and these children also are not going to die of hunger. So how does this lead to self destruction? There are a lot of unpleasant properties to this process. So basically what we try to do is we try to let our children survive, even if they have diseases.
Like, I would have died before my mid twenties without modern medicine, and most of my friends would have as well. And so many of us wouldn't live without the advantages of modern medicine and modern industrialized society. We get our protein largely by subduing the entirety of nature. Imagine there would be some very clever microbe that would live in our organisms and would completely harvest them and change them into a thing that is necessary to sustain itself. And it would discover that, for instance, brain cells are kind of edible, but they're not quite nice, so you need to have more fat in them, and you turn them into more fat cells. And basically this big organism would become a vegetable that is barely alive, and it's going to be very brittle and not resilient when the environment changes. Yeah, but some part of that organism, the one that's actually doing all the using, there will still be somebody thriving. So this relates back to the original question. I suspect that we are not the smartest thing on this planet. I suspect that basically every complex system has to have some complex regulation if it depends on feedback loops. And so, for instance, it's likely that we should ascribe a certain degree of intelligence to plants. The problem is that plants don't have a nervous system, so they don't have a way to telegraph messages over large distances almost instantly in the plant. Instead, they have to rely on chemicals between adjacent cells, which means the signal processing happens at a rate of a few millimeters per second. And as a result, if the plant is intelligent, it's not going to be intelligent at similar timescales to ours. Yeah, the timescale is different. So you suspect we might not be the most intelligent, but we're the most intelligent at this spatial scale, at our timescale. So basically, if you would zoom out very far, we might discover that there have been intelligent ecosystems on the planet that existed for thousands of years in an almost undisturbed state. And it could be that these ecosystems actively regulated their environment, so basically changed the course of the evolution within this ecosystem to make it more efficient in the future. So it's possible that something like plants is actually a set of living organisms, an ecosystem of living organisms, that are just operating at a different timescale and are far superior in intelligence to human beings. And then human beings will die out, and the plants will still be there, and they'll be thriving. Yeah, there's an evolutionary adaptation playing a role at all of these levels. For instance, if mice don't get enough food and get stressed, the next generation of mice will be more sparse and more scrawny. And the reason for this is because, in a natural environment, the mice have probably hit a drought or something else, and if they overgraze, then all the things that sustain them might go extinct, and there will be no mice a few generations from now. So to make sure that there will be mice five generations from now, the mice basically scale back. And a similar thing happens with the predators of mice. They should make sure that the mice don't completely go extinct. So in some sense, if the predators are smart enough, they will be tasked with shepherding their food supply. Maybe the reason why lions have much larger brains than antelopes is not so much because it's so hard to catch an antelope, as opposed to running away from the lion.
But the lions need to make complex models of their environment, more complex than the antelopes. So first of all, just describing that there are a bunch of complex systems, and that human beings may not even be the most special or intelligent of those complex systems, even on Earth, makes me feel a little better about the extinction of the human species that we're talking about. Yes, maybe we're just Gaia's ploy to put the carbon back into the atmosphere. Yeah, this is just... nice, we tried it out. The big stain on evolution is not us, it was trees. Earth evolved trees before they could be digested again. There were no insects that could break them all apart. Cellulose is so robust that you cannot get all of it with microorganisms. So many of these trees fell into swamps, and all this carbon became inert and could no longer be recycled into organisms. And we are the species that is destined to take care of that. So this is kind of... To get it out of the ground, put it back into the atmosphere, and the Earth is already greening. So within a million years or so, when the ecosystems have recovered from the rapid changes that they're not compatible with right now, the Earth is going to be awesome again. And there won't even be a memory of us, of us little apes. I think there will be memories of us. I suspect we are the first generally intelligent species, in a sense. We are the first species with an industrial society, because we will leave more phones than bones in the strata. Phones than bones. I like it. But then let me push back. You've kind of suggested that we have a very narrow definition of... I mean, why aren't trees a higher level of general intelligence? If trees were intelligent, then they would be intelligent at different timescales, which means within a hundred years the tree is probably not going to make models that are as complex as the ones that we make in ten years. But maybe the trees are the ones that made the phones, right? You could say the entirety of life did it. The first cell never died. The first cell only split, right? And every cell in our body is still an instance of the first cell that split off from that very first cell. There was only one cell on this planet, as far as we know. And so the cell is not just a building block of life. It's a hyperorganism, and we are part of this hyperorganism. So nevertheless, this hyperorganism, no, this little particular branch of it, which is us humans, because of the industrial revolution and maybe the exponential growth of technology, might somehow destroy ourselves. So what do you think is the most likely way we might destroy ourselves? Some people worry about genetic manipulation. Some people, as we've talked about, worry about either dumb artificial intelligence or super intelligent artificial intelligence destroying us. Some people worry about nuclear weapons and weapons of war in general. What do you think? If you were a betting man, what would you bet on, in terms of self destruction? And would it be higher than 50%? It's very likely that nothing that we bet on matters after we win our bets. So I don't think that bets are literally the right way to go about this. I mean, once you're dead, you won't be there to collect the winnings. So it's also not clear if we as a species go extinct. But I think that our present civilization is not sustainable.
So the thing that will change is that there will probably be fewer people on the planet than there are today. And even if not, then still most of the people that are alive today will not have offspring 100 years from now, because of the geographic changes and so on, and the changes in the food supply. It's quite likely that many areas of the planet will only be livable with a closed cooling chain 100 years from now. So many of the areas around the equator and in subtropical climates that are now quite pleasant to live in will cease to be habitable without air conditioning. So you honestly... wow, cooling chain, closed cooling chain communities. So you have a strong worry about the effects of global warming? By itself, it's not a big issue. If you live in Arizona right now, you have basically three months in the summer in which you cannot be outside. And so you have a closed cooling chain: you have air conditioning in your car and in your home, and you're fine. But if the air conditioning would stop for a few days, then in many areas you would not be able to survive. Can we just pause for a second? You say so many brilliant, poetic things. Do people use that term, closed cooling chain? I imagine that people use it when they describe how they get meat into a supermarket, right? If you break the cooling chain and this thing starts to thaw, you're in trouble and you have to throw it away. That's such a beautiful way to put it. It's like calling a city a closed social chain or something like that. I mean, that's right. The locality of it is really important. It basically means you wake up in a climatized room, you go to work in a climatized car, you work in a climatized office, you shop in a climatized supermarket, and in between you have very short distances in which you run from your car to the supermarket, but you have to make sure that your body temperature does not approach the temperature of the environment. The crucial thing is the wet bulb temperature. The wet bulb temperature? It's what you get when you take a wet cloth and you put it around your thermometer, and then you move it very quickly through the air, so you get the evaporation heat. And as soon as you can no longer cool your body temperature via evaporation to a temperature below something like, I think, 35 degrees, you die. Which means if the outside world is dry, you can still cool yourself down by sweating. But if it has a certain degree of humidity, or if it goes over a certain temperature, then sweating will not save you. And this means that even if you're a healthy, fit individual, within a few hours, even if you try to stay in the shade and so on, you'll die unless you have some climatizing equipment. And this by itself, as long as you maintain civilization and you have an energy supply and you have food trucks coming to your home that are climatized, everything is fine. But what if you lose large scale open agriculture at the same time? So basically you run into food insecurity, because the climate becomes very irregular, or the weather becomes very irregular, and you have a lot of extreme weather events. So you need to grow most of your food maybe indoors, or you need to import your food from certain regions. And maybe you're not able to maintain the civilization throughout the planet to get the infrastructure to get the food to your home. Right. But there could be significant impacts in the sense that people begin to suffer. There could be wars over resources and so on.
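For reference, the wet bulb temperature combines heat and humidity, and a sustained value around 35 degrees Celsius is the survivability limit being mentioned. The conversation gives no formula; one commonly cited empirical fit is Stull's 2011 approximation, valid only near standard surface pressure and moderate conditions, and the coefficients below should be checked against the original paper before being relied on:

import math

def wet_bulb_stull(t_c, rh_pct):
    # Stull (2011) empirical fit: wet bulb temperature in deg C from air
    # temperature in deg C and relative humidity in percent.
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# 40 deg C at 60% relative humidity: sweating still works, but the margin shrinks
print(round(wet_bulb_stull(40.0, 60.0), 1), "deg C wet bulb")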
But ultimately, do you not have a, not a faith, but what do you make of the capacity of technological innovation to help us prevent some of the worst damages that this condition can create? So as an example, as an almost out there example, is the work that SpaceX and Elon Musk is doing of trying to also consider our propagation throughout the universe in deep space to colonize other planets. That's one technological step. But of course, what Elon Musk is trying on Mars is not to save us from global warming, because Mars looks much worse than Earth will look like after the worst outcomes of global warming imaginable, right? Mars is essentially not habitable. It's exceptionally harsh environment, yes. But what he is doing, what a lot of people throughout history since the Industrial Revolution are doing, are just doing a lot of different technological innovation with some kind of target. And when it ends up happening, it's totally unexpected new things come up. So trying to terraform or trying to colonize Mars, extremely harsh environment, might give us totally new ideas of how to expand or increase the power of this closed cooling circuit that empowers the community. So it seems like there's a little bit of a race between our open ended technological innovation of this communal operating system that we have and our general tendency to want to overuse resources and thereby destroy ourselves. You don't think technology can win that race? I think the probability is relatively low, given that our technology is, for instance, the US is stagnating since the 1970s roughly, in terms of technology. Most of the things that we do are the result of incremental processes. What about Intel? What about Moore's Law? It's basically, it's very incremental. The things that we're doing is, so the invention of the microprocessor was a major thing, right? The miniaturization of transistors was really major. But the things that we did afterwards largely were not that innovative. We had gradual changes of scaling things from CPUs into GPUs and things like that. But I don't think that there are, basically there are not many things. If you take a person that died in the 70s and was at the top of their game, they would not need to read that many books to be current again. But it's all about books. Who cares about books? There might be things that are beyond books. Or say papers. No, papers. Forget papers. There might be things that are, so papers and books and knowledge, that's a concept of a time when you were sitting there by candlelight and individual consumers of knowledge. What about the impact that we're not in the middle of, might not be understanding of Twitter, of YouTube? The reason you and I are sitting here today is because of Twitter and YouTube. So the ripple effect, and there's two minds, sort of two dumb apes coming up with a new, perhaps a new clean insights, and there's 200 other apes listening right now, 200,000 other apes listening right now. And that effect, it's very difficult to understand what that effect will have. That might be bigger than any of the advancements of the microprocessor or any of the industrial revolution, the ability of spread knowledge. And that knowledge, like it allows good ideas to reach millions much faster. And the effect of that, that might be the new, that might be the 21st century, is the multiplying of ideas, of good ideas. 
Because if you say one good thing today, that will multiply across huge amounts of people, and then they will say something, and then they will have another podcast, and they'll say something, and then they'll write a paper. That could be a huge... you don't think that? Yeah, we should have billions of von Neumanns right now, and Turings, and we don't for some reason. I suspect the reason is that we destroy our attention span. Also the incentives, of course, are different. Yeah, we have the Kardashians instead, yeah. So the reason why we're sitting here and doing this as a YouTube video is because you and me don't have the attention span to write a book together right now. And you guys probably don't have the attention span to read it. So let me tell you, it's very short. But we're an hour and 40 minutes in, and I guarantee you that 80% of the people are still listening. So there is an attention span. It's just the form. Who said that the book is the optimal way to transfer information? This is still an open question. That's what we're... It's something that social media could be doing that other forms could not be doing. I think the end game of social media is a global brain. And Twitter is, in some sense, a global brain that is completely hooked on dopamine, doesn't have any kind of inhibition, and as a result is caught in a permanent seizure. It's also, in some sense, a multiplayer role playing game. And people use it to play an avatar that is not like them, as they would be in a sane world, and they look at the world through the lens of their phones and think it's the real world. But it's the Twitter world, distorted by the popularity incentives of Twitter. Yeah, the incentives, and just our natural biological, the dopamine rush of a like. No matter how much I try to be very kind of Zen like and minimalist and not be influenced by likes and so on, it's probably very difficult to avoid that to some degree. Speaking on a small tangent of Twitter, how can Twitter be done better? I think it's an incredible mechanism that has a huge impact on society by doing exactly what you're doing. Sorry, doing exactly what you described, which is having this... It's like, is this some kind of game, and we're kind of individual RL agents in this game, and it's uncontrollable because there's not really a centralized control. Neither Jack Dorsey nor the engineers at Twitter seem to be able to control this game. Or can they? That's sort of the question. Is there any advice you would give on how to control this game? I wouldn't give advice, because I am certainly not an expert, but I can give my thoughts on this. Our brain has solved this problem to some degree. Our brain has lots of individual agents that manage to play together in a way. And we have also many contexts in which other organisms have found ways to solve the problems of cooperation that we don't solve on Twitter. And maybe the solution is to go for an evolutionary approach. So imagine that you have something like Reddit or something like Facebook and something like Twitter, and you think about what they have in common. What they have in common is that they are companies that, in some sense, own a protocol. And this protocol is imposed on a community, and the protocol has different components: for monetization, for user management, for user display, for rating, for anonymity, for import of other content, and so on.
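To make the idea of a protocol built from separable components concrete, here is a minimal sketch; every name and field in it is an illustrative assumption made up for this example, not anything an existing platform actually exposes.

```python
from dataclasses import dataclass, replace

# Purely illustrative: a community "protocol" as a bundle of swappable
# components, in the spirit of the mix-and-match idea described here.

@dataclass(frozen=True)
class Protocol:
    monetization: str      # e.g. "ads", "subscriptions", "donations"
    user_management: str   # e.g. "single-admin", "elected-mods", "weighted-vote"
    display: str           # e.g. "chronological", "ranked"
    rating: str            # e.g. "likes", "up/down", "no-scores"
    anonymity: str         # e.g. "real-name", "pseudonymous", "anonymous"
    content_import: str    # e.g. "open-federation", "closed"

# One community starts from some default protocol...
default = Protocol("ads", "single-admin", "ranked", "likes", "real-name", "closed")

# ...and another community mixes and matches: it keeps most components but
# redefines rating, anonymity, and user management for its own purposes.
research_forum = replace(default, rating="no-scores", anonymity="pseudonymous",
                         user_management="weighted-vote")

if __name__ == "__main__":
    print(default)
    print(research_forum)
```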
And now imagine that you take these components of the protocol apart, and you give them, in some sense, to the communities within this social network. And these communities are allowed to mix and match their protocols and design new ones. So for instance, the UI and the UX can be defined by the community. The rules for sharing content across communities can be defined. The monetization can be redefined. The way you reward individual users for what they do can be redefined. The way users can represent themselves to each other can be redefined. Who could be the redefiner? So can individual human beings build enough intuition to redefine those things? This itself can become part of the protocol. So for instance, it could be that in some communities, it will be a single person that comes up with these things. In others, it's a group of friends. Some might implement a voting scheme that has some interesting weighted voting. Who knows? Who knows what will be the best self organizing principle for this? But the process can't be automated. I mean, it seems like the brain... It can be automated, so people can write software for this. And eventually the idea is, let's not make an assumption about this thing if we don't know what the right solution is. In those areas, we have no idea whether the right solution will be people designing this ad hoc, or machines doing this. Whether you want to enforce compliance by social norms, like Wikipedia, or with software solutions, or with AI that goes through the posts of people, or with a legal principle, and so on, this is something you maybe need to find out. And so the idea would be, if you let the communities evolve, you just control it in such a way that you are incentivizing the most sentient communities, the ones that produce the most interesting behaviors, that allow you to interact in the most helpful ways with the individuals. You have a network that gives you information that is relevant to you. It helps you to maintain relationships to others in healthy ways. It allows you to build teams. It allows you to basically bring the best of you into this thing, and to go into a coupling, into a relationship with others, in which you produce things that you would be unable to produce alone. Yes, beautifully put. But the key process of that, with incentives and evolution, is that things that don't adapt themselves to effectively get the incentives have to die. And the thing about social media is that communities that are unhealthy, or whatever it is that defines the incentives, really don't like dying. One of the things that people really get aggressive about, protest aggressively about, is when they're censored. Especially in America. I don't know much about the rest of the world, but the idea of freedom of speech, the idea of censorship, is really painful in America. And so what do you think about that? Having grown up in East Germany, do you think censorship is an important tool in our brain and the intelligence and in social networks? So basically, if you're not a good member of the entirety of the system, you should be blocked away. Well, locked away, blocked. An important thing is who decides that you are a good member. Who? Is it distributed? And what is the outcome of the process that decides it, both for the individual and for society at large? For instance, if you have a high trust society, you don't need a lot of surveillance. And the surveillance is even, in some sense, undermining trust.
Because it's basically punishing people that look suspicious when surveilled, but do the right thing anyway. And the opposite: if you have a low trust society, then surveillance can be a better trade off. And the US is currently making a transition from a relatively high trust or mixed trust society to a low trust society. So surveillance will increase. Another thing is that beliefs are not just inert representations. They are implementations that run code on your brain and change your reality and change the way you interact with each other at some level. And some of the beliefs are just public opinions that we use to display our alignment. So for instance, people might say, all cultures are the same and equally good, but still they prefer to live in some cultures over others, very, very strongly so. And it turns out that the cultures are defined by certain rules of interaction. And these rules of interaction lead to different results when you implement them. So if you adhere to certain rules, you get different outcomes in different societies. And this all leads to very tricky situations when people do not have a commitment to a shared purpose. And our societies probably need to rediscover what it means to have a shared purpose and how to make this compatible with a non totalitarian view. So in some sense, the US is caught in a conundrum between totalitarianism and diversity, and doesn't really know how to resolve this. And the solutions that the US has found so far are very crude, because it's a very young society that is also under a lot of tension. It seems to me that the US will have to reinvent itself. What do you think, just philosophizing, what kind of mechanisms of government do you think we as a species should be involved with, US or broadly? What do you think will work well as a system? Of course, we don't know. It all seems to work pretty crappily, some things worse than others. Some people argue that communism is the best. Others say, yeah, look at the Soviet Union. Some people argue that anarchy is the best, and then completely discard the positive effects of government. There's a lot of arguments. The US seems to be doing pretty damn well in the span of history. There's a respect for human rights, which seems to be a nice feature, not a bug. And economically, a lot of growth, a lot of technological development. People seem to be relatively kind on the grand scheme of things. What lessons do you draw from that? What kind of government system do you think is good? Ideally, a government should not be perceivable. It should be frictionless. The more you notice the influence of the government, the more friction you experience, the less effective and efficient the government probably is. A government, game theoretically, is an agent that imposes an offset on your payoff metrics to make your Nash equilibrium compatible with the common good. You have these situations where people act on local incentives, and because everybody does the thing that's locally best for them, the global outcome is not good. And this is even the case when people care about the global outcome, because no regulation mechanism exists that creates a causal relationship between what I want to have for the global good and what I do. For instance, if I think that we should fly less and I stay at home, there's not a single plane that is going to not start because of me, right? It's not going to have an influence, but I don't get from A to B.
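To make the payoff-offset idea concrete with the flying example that follows: a minimal sketch with made-up numbers, where flying is individually dominant until a government-imposed offset (a tax on flying) moves the Nash equilibrium to the outcome everyone actually prefers. The payoffs here are invented for illustration only.

```python
from itertools import product

# Invented numbers, purely for illustration: each of two players chooses to
# "fly" (personally convenient) or "stay" (better for the shared environment).
# Each flight imposes a cost on both players; a government can add an offset
# (a tax) to the payoff for flying.

def payoff(my_choice, other_choice, tax=0.0):
    convenience = {"fly": 3.0, "stay": 0.0}[my_choice]
    shared_harm = 2.0 * [my_choice, other_choice].count("fly")  # borne by everyone
    penalty = tax if my_choice == "fly" else 0.0
    return convenience - shared_harm - penalty

def nash_equilibria(tax=0.0):
    choices = ("fly", "stay")
    equilibria = []
    for a, b in product(choices, repeat=2):
        a_is_best = all(payoff(a, b, tax) >= payoff(x, b, tax) for x in choices)
        b_is_best = all(payoff(b, a, tax) >= payoff(x, a, tax) for x in choices)
        if a_is_best and b_is_best:
            equilibria.append((a, b))
    return equilibria

if __name__ == "__main__":
    print("no offset:  ", nash_equilibria(tax=0.0))  # [('fly', 'fly')]
    print("with offset:", nash_equilibria(tax=2.0))  # [('stay', 'stay')]
```

With these invented payoffs, the only equilibrium without the offset is that both fly; with a large enough offset, the only equilibrium is that both stay, which is the sense in which the government changes the payoff metrics rather than the preferences.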
So the way to implement this would be to have a government that is sharing this idea that we should fly less and is then imposing a regulation that, for instance, makes flying more expensive and gives incentives for inventing other forms of transportation that put less strain on the environment, for instance. So there's so much optimism in so many things you describe, and yet there's the pessimism of you thinking our civilization is going to come to an end. So that's not a hundred percent probability. Nothing in this world is. So what's the trajectory out of self destruction, do you think? I suspect that in some sense, we are both too smart and not smart enough, which means we are very good at solving near term problems, and at the same time, we are unwilling to submit to the imperatives that we would have to follow if we want to stick around. So that makes it difficult. If you were unable to solve everything technologically, you could probably work out how high the child mortality needs to be to absorb the mutation rate, and how high the mutation rate needs to be to adapt to a slowly changing ecosystemic environment. So you could in principle compute all these things game theoretically and adapt to it. But if you cannot do this, because you are like me and you have children and you don't want them to die, you will use any kind of medical information to keep mortality low. Even if it means that within a few generations, we have enormous genetic drift, and most of us have allergies as a result of not being adapted to the changes that we made to our food supply. That's for now, I'd say. Technologically speaking, we're just very young, 300 years since the industrial revolution, we're very new to this idea. So you're attached to your kids being alive and not being murdered for the good of society. But that might be a very temporary moment in time; we might evolve in our thinking. So like you said, we're both smart and not smart enough. We are probably not the first human civilization that has discovered technology that allows us to efficiently overgraze our resources. And with this overgrazing, at some point we think we can compensate for it, because if we have eaten all the grass, we will find a way to grow mushrooms. But it could also be that the ecosystems tip. And so what really concerns me is not so much the end of the civilization, because we will invent a new one. But what concerns me is the fact that, for instance, the oceans might tip. So for instance, maybe the plankton dies because of ocean acidification and cyanobacteria take over, and as a result, we can no longer breathe the atmosphere. This would be really concerning. So basically a major reboot of most complex organisms on Earth. And I think this is a possibility. I don't know what the percentage for this possibility is, but it doesn't seem to be outlandish to me if you look at the scale of the changes that we've already triggered on this planet. And so Danny Hillis suggests that, for instance, we may be able to put chalk into the stratosphere to limit solar radiation. Maybe it works. Maybe this is sufficient to counter the effects of what we've done. Maybe it won't be. Maybe we won't be able to implement it by the time it's relevant. I have no idea how the future is going to play out in this regard. It's just, I think it's quite likely that we cannot continue like this. All our cousin species, the other hominids, are gone. So the right step would be to what?
To rewind to before the industrial revolution and to slow, so to try to contain, the technological process that leads to the overconsumption of resources? Imagine you get to choose, you have one lifetime. You get born into a sustainable agricultural civilization, 300, maybe 400 million people on the planet tops. Or before this, we were some kind of nomadic species with like a million or 2 million people. And so you don't meet new people unless you give birth to them. You cannot travel to other places in the world. There is no internet. There is no interesting intellectual tradition that reaches considerably deep. So you would not discover Turing completeness, probably, and so on. We wouldn't exist. And the alternative is you get born into an insane world, one that is doomed to die because it has just burned a hundred million years worth of trees in a single century. Which one do you like? I think I like this one. It's a very weird thing that you find yourself on the Titanic, you see this iceberg, and it looks like we are not going to miss it, and a lot of people are in denial. And most of the counter arguments sound like denial to me. They don't seem to be rational arguments. And the other thing is, we are born on this Titanic. Without this Titanic, we wouldn't have been born. We wouldn't be here. We wouldn't be talking. We wouldn't be on the internet. We wouldn't do all the things that we enjoy. And we are not responsible for this happening. If we had the choice, we would probably try to prevent it. But when we were born, we were never asked when we want to be born, in which society we want to be born, what incentive structures we want to be exposed to. We have relatively little agency in the entire thing. Humanity has relatively little agency in the whole thing. It's basically a giant machine that's tumbling down a hill, and everybody is frantically trying to push some buttons. Nobody knows what these buttons mean, what they connect to, and most of them are not stopping this tumbling down the hill. Is it possible that artificial intelligence will give us an escape hatch somehow? So there's a lot of worry about existential threats of artificial intelligence. But what AI also allows, in general forms of automation, is the potential of extreme productivity growth that will also, perhaps in a positive way, transform society, that may allow us, inadvertently, to return to the same kind of ideals of being closer to nature that's represented in hunter gatherer societies. That's not destroying the planet, that's not doing overconsumption and so on. I mean, generally speaking, do you have hope that AI can help somehow? I think it's not fun to be very close to nature until you completely subdue nature. So our idea of being close to nature means being close to agriculture, basically forests that don't have anything in them that eats us. See, I want to disagree with that. I think the niceness of being close to nature is being fully present, when survival becomes not just your goal but your whole existence. I'm not just romanticizing, I can only speak for myself. I am self aware enough to know that that is a fulfilling existence. I personally prefer to be in nature and not fight for my survival. I think fighting for your survival while being in the cold and in the rain and being hunted by animals and having open wounds is very unpleasant. There's a contradiction in there. Yes, I, and you, just as you said, would not choose it.
But if I was forced into it, it would be a fulfilling existence. Yes, if you are adapted to it, basically, if your brain is wired up in such a way that you get rewards optimally in such an environment. And there's some evidence for this, that for a certain degree of complexity, basically, people are more happy in such an environment, because it's what you largely have evolved for. In between, we had a few thousand years in which I think we have evolved for a slightly more comfortable environment. So there is probably something like an intermediate stage in which people would be more happy than they would be if they had to fend for themselves in small groups in the forest and often die. Versus something like this, where we now have basically a big machine, a big Mordor, in which we run through concrete boxes and press buttons on machines, and largely don't feel well cared for as the monkeys that we are. So returning briefly to, not briefly, but returning to AI, let me ask a romanticized question: what is the most beautiful to you, silly ape, the most beautiful or surprising idea in the development of artificial intelligence, whether in your own life or in the history of artificial intelligence, that you've come across? If you build an AI, it probably can make models of the world at an arbitrary degree of detail, right? And then it would try to understand its own nature. It's tempting to think that at some point when we have general intelligence, we will have competitions where we let the AIs wake up in different kinds of physical universes, and we measure how many movements of the Rubik's cube it takes until it figures out what's going on in its universe, and what it is in its own nature and its own physics, and so on, right? So what if we exist in the memory of an AI that is trying to understand its own nature, and remembers its own genesis, and remembers Lex and Joscha sitting in a hotel room, sparking off some of the ideas that led to the development of general intelligence? So we're a kind of simulation that's running in an AI system that's trying to understand itself. It's not that I believe that, but I think it's a beautiful idea. I mean, you kind of returned to this idea with the Turing test of intelligence being the process of asking and answering what is intelligence. I mean, do you think there is an answer? Why is there such a search for an answer? So does there have to be like an answer? You just said an AI system that's trying to understand the why of what, you know, understand itself. Is that a fundamental process, that greater and greater complexity, greater and greater intelligence, is the continuous trying to understand itself? No, I think you will find that most people don't care about that, because they're well adjusted enough to not care. And the reason why people like you and me care about it probably has to do with the need to understand ourselves. It's because we are in fundamental disagreement with the universe that we wake up in. You look down at yourself and you see, oh my God, I'm caught in a monkey. What's that? Some people are unhappy with the government, and I'm unhappy with the entire universe that I find myself in. Oh, so you don't think that's a fundamental aspect of human nature, that some people are just suppressing? That they wake up shocked they're in the body of a monkey? No, there is a clear adaptive value to not be confused by that and by... Well, no, that's not what I asked.
So there's a clear adaptive value, while fundamentally your brain is confused by that, to creating an illusion, another layer of narrative, that tries to suppress that and instead says that, you know, what's going on with the government right now is the most important thing, what's going on with my football team is the most important thing. But it seems to me, like for me, it was a really interesting moment reading Ernest Becker's Denial of Death. You know, this kind of idea that the fundamental thing from which most of our human mind springs is this fear of mortality, being cognizant of your mortality and the fear of that mortality, and then you construct illusions on top of that. I guess, just to push on it, you really don't think it's possible that this worry about the big existential questions is actually fundamental, as the existentialists thought, to our existence? I think that the fear of death only plays a role as long as you don't see the big picture. The thing is that minds are software states, right? Software doesn't have identity. Software in some sense is a physical law. But it feels like there's an identity. I thought that for this particular piece of software, and the narrative it tells, that's a fundamental property of it. The maintenance of the identity is not terminal. It's instrumental to something else. You maintain your identity so you can serve your meaning, so you can do the things that you're supposed to do before you die. And I suspect that for most people, the fear of death is the fear of dying before they are done with the things that they feel they have to do, even though they cannot quite put their finger on what that is. Right. But in the software world, to return to the question, then what happens after we die? Why would you care? You will no longer be there. The point of dying is that you are gone. Well, maybe I'm not. This is what, you know, it seems like there's so much in the idea that the mind is just a simulation that's constructing a narrative around some particular aspects of the quantum mechanical wave function world that we can't quite get direct access to. Then the idea of mortality seems to be a little fuzzy as well. Maybe there's not a clear answer. The fuzzy idea is the one of continuous existence. We don't have continuous existence. How do you know that? Because it's not computable. Because you're saying it would have to be infinite? There is no continuous process. The only thing that binds you together with the Lex Friedman from yesterday is the illusion that you have memories about him. So if you want to upload, it's very easy. You make a machine that thinks it's you, because this is the same thing that you are. You are a machine that thinks it's you. But that's immortality. Yeah, but it's just a belief. You can create this belief very easily once you realize that the question of whether you are immortal or not depends entirely on your beliefs and your own continuity. But then you can be immortal by the continuity of the belief. You cannot be immortal, but you can stop being afraid of your mortality, because you realize you were never continuously existing in the first place. Well, I don't know if I'd be more terrified or less terrified by that. It seems like the fact that I existed... You don't know this state in which you don't have a self. You can't turn off yourself.
I can't turn off myself? You can't turn it off. You can't turn it off? I can. Yes. And you can basically meditate yourself into a state where you are still conscious, where things are still happening, where you know everything that you knew before, but you're no longer identified with changing anything. And this means that your self, in a way, dissolves. There is no longer this person. You know that this person construct exists in other states, and it runs on this brain of Lex Friedman, but it's not a real thing. It's a construct. It's an idea. And you can change that idea. And if you let go of this idea, if you don't think that you are special, you realize it's just one of many people, and it's not your favorite person even. It's just one of many. And it's the one that you are doomed to control for the most part, and that is basically informing the actions of this organism as a control model. And this is all there is. And you are somehow afraid that this control model gets interrupted or loses its identity or continuity. Yeah. So I'm attached. I mean, yeah, it's a very popular, somehow compelling notion that there's no need to be attached to this idea of an identity. But that in itself could be an illusion that you construct. So the process of meditation, while popular, is thought of as getting under the concept of identity. It could be just putting a cloak over it, just telling it to be quiet for the moment. I think that meditation is eventually just a bunch of techniques that let you control attention. And when you can control attention, you can get access to your own source code, hopefully not before you understand what you're doing. And then you can change the way it works, temporarily or permanently. So yeah, meditation is to get a glimpse at the source code, to get under, so basically to control or turn off the attention. The entire thing is that you learn to control attention. So everything else is downstream from controlling attention. And controlling the attention that's looking at the attention. Normally we only get attention in the parts of our mind that create heat, where you have a mismatch between the model and the results that are happening. And so most people are not self aware, because their control is too good. If everything works out roughly the way you want, and the only thing that doesn't work out is whether your football team wins, then you will mostly have models about these domains. And it's only when, for instance, your fundamental relationships to the world around you don't work, because the ideology of your country is insane, and you don't understand why it's insane, and the other kids are not nerds and don't understand why you want to understand physics, and you don't understand why somebody would not want to understand physics. So we kind of brought up neurons in the brain as reinforcement learning agents. And there have been some successes, as you brought up, with Go, with AlphaGo, AlphaZero, with ideas which I think are incredibly interesting, of systems playing each other in an automated way to improve, by playing other systems in a particular construct of a game that are a little bit better than themselves, and thereby improving continuously. All the competitors in the game are improving gradually, so being challenged just enough, and learning from the process of the competition. Do you have hope for that reinforcement learning process to achieve greater and greater levels of intelligence?
So we talked about different ideas in AI that need to be solved. Is RL a part of that process of trying to create an AGI system? What do you think? Definitely forms of unsupervised learning, but there are many algorithms that can achieve that. And I suspect that ultimately the algorithms that work, there will be a class of them or many of them. And they might have small differences of like a magnitude and efficiency, but eventually what matters is the type of model that you form and the types of models that we form right now are not sparse enough. What does it mean to be sparse? It means that ideally every potential model state should correspond to a potential world state. So basically if you vary states in your model, you always end up with valid world states and our mind is not quite there. So an indication is basically what we see in dreams. The older we get, the more boring our dreams become because we incorporate more and more constraints that we learned about how the world works. So many of the things that we imagine to be possible as children turn out to be constrained by physical and social dynamics. And as a result, fewer and fewer things remain possible. It's not because our imagination scales back, but the constraints under which it operates become tighter and tighter. And so the constraints under which our neural networks operate are almost limitless, which means it's very difficult to get a neural network to imagine things that look real. So I suspect part of what we need to do is we probably need to build dreaming systems. I suspect that part of the purpose of dreams is similar to a generative adversarial network, we learn certain constraints and then it produces alternative perspectives on the same set of constraints. So you can recognize it under different circumstances. Maybe we have flying dreams as children because we recreate the objects that we know and the maps that we know from different perspectives, which also means from a bird's eye perspective. So I mean, aren't we doing that anyway? I mean, not with our eyes closed and when we're sleeping, aren't we just constantly running dreams and simulations in our mind as we try to interpret the environment? I mean, sort of considering all the different possibilities, the way we interact with the environment seems like, essentially, like you said, sort of creating a bunch of simulations that are consistent with our expectations, with our previous experiences, with the things we just saw recently. And through that hallucination process, we are able to then somehow stitch together what actually we see in the world with the simulations that match it well and thereby interpret it. I suspect that you and my brain are slightly unusual in this regard, which is probably what got you into MIT. So this obsession of constantly pondering possibilities and solutions to problems. Oh, stop it. I think I'm not talking about intellectual stuff. I'm talking about just doing the kind of stuff it takes to walk and not fall. Yes, this is largely automatic. Yes, but the process is, I mean... It's not complicated. It's relatively easy to build a neural network that, in some sense, learns the dynamics. The fact that we haven't done it right so far doesn't mean it's hard, because you can see that a biological organism does it with relatively few neurons. 
So basically, you build a bunch of neural oscillators that entrain themselves with the dynamics of your body, in such a way that the regulator becomes isomorphic in its model to the dynamics that it regulates, and then it's automatic. And it's only interesting in the sense that it captures attention when the system is off. See, but thinking of the kind of mechanism that's required to do walking as a controller, as a neural network, I think it's a compelling notion, but it discards quietly, or at least makes implicit, the fact that you need to have something like common sense reasoning to walk. It's an open question whether you do or not. But my intuition is that to act in this world, there's a huge knowledge base that's underlying it somehow. There's so much information of the kind we have never been able to construct in neural networks or in artificial intelligence systems, period. It's humbling, at least in my imagination; the amount of information required to act in this world humbles me. And I think saying that neural networks can accomplish it is missing the fact that we don't yet have a mechanism for constructing something like common sense reasoning. I mean, what's your sense, to linger on the idea of what kind of mechanism would be effective at walking? You said just a neural network, not maybe the kind we have, but something a little bit better, would be able to walk easily. Don't you think it also needs to know a huge amount of knowledge that's represented under the flag of common sense reasoning? How much common sense knowledge do we actually have? Imagine that you are really hardworking for all your life and you form two new concepts every half hour or so. You end up with something like a million concepts, because you don't get that old. So a million concepts, that's not a lot. So, it's just a million concepts? I think it would be a lot. I personally think it might be much more than a million. But if you think just about the numbers, you don't live that long. If you think about how many cycles your neurons have in your life, it's quite limited. You don't get that old. Yeah, but the powerful thing is the number of concepts, and they're probably deeply hierarchical in nature. The relations, as you described, between them is the key thing. So even if it's a million concepts, the graph of relations that's formed, and perhaps some kind of probabilistic relationships, that's what common sense reasoning is: the relationships between things. Yeah, so in some sense, I think of the concepts as the address space for our behavior programs. And the behavior programs allow us to recognize objects and interact with them, also mental objects. And a large part of that is the physical world that we interact with, which is this res extensa thing, which is basically the navigation of information in space. And basically, it's similar to a game engine. It's a physics engine that you can use to describe and predict how things that look in a particular way, that feel a particular way when you touch them, that have proprioceptive and auditory properties, for example, how they work out. So basically, the geometry of all these things. And this is probably 80% of what our brain is doing: dealing with that, with this real time simulation. And by itself, a game engine is fascinating, but it's not that hard to understand what it's doing.
And our game engines are already, in some sense, approximating the fidelity of what we can perceive. So if we put on an Oculus Quest, we get something that is still relatively crude with respect to what we can perceive, but it's also in the same ballpark already. It's just a couple of orders of magnitude away from saturating our perception in terms of the complexity that it can produce. So in some sense, it's reasonable to say that the computer that you can buy and put into your home is able to give you a perceptual reality that has a detail that is already in the same ballpark as what your brain can process. And everything else are ideas about the world. And I suspect that they are relatively sparse, as are the intuitive models that we form about social interaction. Social interaction is not so hard. It's just hard for us nerds, because we all have our wires crossed, so we need to deduce them. But the wires are present in most social animals. So it's an interesting thing to notice that many domestic social animals, like cats and dogs, have better social cognition than children. Right. I hope so. I hope it's not that many concepts, fundamentally, that you need to exist in this world. For me, it's more like I'm afraid so, because the idea that we only appear to be so complex to each other because we are so stupid is a little bit depressing. Yeah, to me that's inspiring, if we're indeed as stupid as it seems. The thing is, our brains don't scale, and the information processing systems that we build tend to scale very well. Yeah, but I mean, one of the things that worries me is that the fact that the brain doesn't scale means that that's actually a fundamental feature of the brain. All the flaws of the brain, everything we see as limitations, perhaps the constraints on the system could be a requirement of its power, which is different from our current understanding of intelligent systems, where scale, especially with deep learning, especially with reinforcement learning, is the hope behind OpenAI and DeepMind; all the major results really have to do with huge compute. It could also be that our brains are so small not just because they take up so much glucose in our body, like 20% of the glucose; they just don't arbitrarily scale. There are some animals, like elephants, which have larger brains than us, and they don't seem to be smarter. Elephants seem to be autistic. They have very, very good motor control and they're really good with details, but they really struggle to see the big picture. So you can make them recreate drawings stroke by stroke, they can do that, but they cannot reproduce a still life. So they cannot make a drawing of a scene that they see. They will always only be able to reproduce the line drawing, at least as far as I could see in the experiments. So why is that? Maybe smarter elephants would meditate themselves out of existence because their brains are too large. So basically the elephants that were not autistic didn't reproduce. Yeah. So we have to remember that the brain is fundamentally interlinked with the body in our human and biological system. Do you think that the AGI systems that we try to create, or greater intelligence systems, would need to have a body? I think they should be able to make use of a body if you give it to them. But I don't think that they fundamentally need a body. So I suspect if you can interact with the world by moving your eyes and your head, you can make controlled experiments.
And this allows you to have many orders of magnitude fewer observations in order to reduce the uncertainty in your models. So you can pinpoint the areas in your models where you're not quite sure, and you just move your head and see what's going on over there, and you get additional information. If you just have to use YouTube as an input and you cannot do anything beyond this, you probably need much more data. But we have much more data. So if you can build a system that has enough time and attention to browse all of YouTube and extract all the information that there is to be found, I don't think there's an obvious limit to what it can do. Yeah, but it seems that the interactivity is a fundamental thing that the physical body allows you to do. But let me ask on that topic: that's what a body is, allowing the brain to touch things and move things and interact with, whether the physical world exists or not, whatever, some interface to the physical world. What about a virtual world? Do you think we can do the same kind of reasoning, consciousness, intelligence, if we put on a VR headset and move over to that world? Do you think there's any fundamental difference between the interface to the physical world that is here in this hotel, and if we were sitting in the same hotel in a virtual world? The question is, does this nonphysical world, or this other environment, entice you to solve problems that require general intelligence? If it doesn't, then you probably will not develop general intelligence, and arguably most people are not generally intelligent, because they don't have to solve problems that make them generally intelligent. And even for us, it's not yet clear if we are smart enough to build AI and understand our own nature to this degree. So it could be a matter of capacity, and for most people, it's in the first place a matter of interest. They don't see the point, because the benefits of attempting this project are marginal, because you're probably not going to succeed in it, and the cost of trying to do it requires complete dedication of your entire life. Right? But it seems like the possibilities of what you can do in the virtual world, you can imagine, are much greater than what you can do in the real world. So imagine a situation, maybe an interesting option for me: if somebody came to me and offered, what we'll do is, from now on, you can only exist in the virtual world. So you put on this headset, and when you eat, we'll make sure to connect your body up in a way that when you eat in the virtual world, your body will be nourished in the same way in the real world. So you're aligning the incentives between our common sort of real world and the virtual world, but then the possibilities become much bigger. Like I could be other kinds of creatures. I could break the laws of physics as we know them. I could do a lot. I mean, the possibilities are endless, right? It's an interesting thought, what existence would be like, what kind of intelligence would emerge there, what kind of consciousness, what kind of maybe greater intelligence, even in me, Lex, even at this stage in my life, if I spent the next 20 years in that world, to see how that intelligence emerges. And if that happened at the very beginning, before I was even cognizant of my existence in this physical world, it's interesting to think how that child would develop.
And the way virtual reality and digitization of everything is moving, it's not completely out of the realm of possibility that we're all, that some part of our lives will, if not entirety of it, will live in a virtual world to a greater degree than we currently have living on Twitter and social media and so on. Do you have, I mean, does something draw you intellectually or naturally in terms of thinking about AI to this virtual world where more possibilities are? I think that currently it's a waste of time to deal with the physical world before we have mechanisms that can automatically learn how to deal with it. The body gives you second order agency, but what constitutes the body is the things that you can indirectly control. The third order are tools, and the second order is the things that are basically always present, but you operate on them with first order things, which are mental operators. And the zero order is in some sense, the direct sense of what you're deciding. Right. So you observe yourself initiating an action, there are features that you interpret as the initiation of an action. Then you perform the operations that you perform to make that happen. And then you see the movement of your limbs and you learn to associate those and thereby model your own agency over this feedback, right? But the first feedback that you get is from this first order thing already. Basically, you decide to think a thought and the thought is being thought. You decide to change the thought and you observe how the thought is being changed. And in some sense, this is, you could say, an embodiment already, right? And I suspect it's sufficient as an embodiment for intelligence. And so it's not that important at least at this time to consider variations in the second order. Yes. But the thing that you also put mentioned just now is physics that you could change in any way you want. So you need an environment that puts up resistance against you. If there's nothing to control, you cannot make models, right? There needs to be a particular way that resists you. And by the way, your motivation is usually outside of your mind. It resists you. Motivation is what gets you up in the morning even though it would be much less work to stay in bed. So it's basically forcing you to resist the environment and it forces your mind to serve it, to serve this resistance to the environment. So in some sense, it is also putting up resistance against the natural tendency of the mind to not do anything. Yeah. So some of that resistance, just like you described with motivation is like in the first order, it's in the mind. Some resistance is in the second order, like actual physical objects pushing against you and so on. It seems that the second order stuff in virtual reality could be recreated. Of course. But it might be sufficient that you just do mathematics and mathematics is already putting up enough resistance against you. So basically just with an aesthetic motive, this could maybe sufficient to form a type of intelligence. It would probably not be a very human intelligence, but it might be one that is already general. So to mess with this zero order, maybe first order, what do you think about ideas of brain computer interfaces? 
So again, returning to our friend Elon Musk and Neuralink, a company that, of course, in the near term is trying to cure diseases and so on, but the long term vision is to add an extra layer, to basically expand the capacity of the brain by connecting it to the computational world. Do you think, one, that's possible? And two, how does that change the fundamentals of the zeroth order and the first order? It's technically possible, but I don't see that the FDA would ever allow me to drill holes in my skull to interface my neocortex the way Elon Musk envisions. So at the moment, I can do horrible things to mice, but I'm not able to do useful things to people, except maybe at some point down the line in medical applications. So this thing that we are envisioning, which means recreational and creational brain computer interfaces, is probably not going to happen in the present legal system. I love it how I'm asking you out there philosophical and sort of engineering questions, and for the first time ever, you jumped to the legal, to the FDA. There would be enough people that would be crazy enough to have holes drilled in their skull to try a new type of brain computer interface. But also, if it works, the FDA will approve it. I mean, yes, it's like, you know, I work a lot with autonomous vehicles. Yes, you can say that it's going to be a very difficult regulatory process of approving autonomous vehicles, but it doesn't mean autonomous vehicles are never going to happen. No, they will totally happen as soon as we create jobs for at least two lawyers and one regulator per car. Yes, lawyers. It's like lawyers are the fundamental substrate of reality. In the US, it's a very weird system. It's not universal in the world. The law is a very interesting kind of software once you realize it, right? These circuits are in some sense streams of software, and it largely works by exception handling. So you make decisions on the ground, and they get synchronized with the next level structure as soon as an exception is being thrown. So it escalates the exception handling. The process is very expensive, especially since it incentivizes lawyers to produce work for lawyers. Yes, so the exceptions are actually incentivized to fire often. But to return, outside of lawyers, is there anything interesting, insightful about the possibility of this extra layer of intelligence added to the brain? I do think so, but I don't think that you need technically invasive procedures to do so. We can already interface with other people by observing them very, very closely and getting into some kind of empathetic resonance. And I'm not very good at this, but I notice that people are able to do this to some degree. And it basically means that we model an interface layer of the other person in real time. And it works despite our neurons being slow, because most of the things that we do are built on periodic processes. So you just need to entrain yourself with the oscillation that happens. And if the oscillation itself changes slowly enough, you can basically follow along. Right. But the bandwidth of the interaction, it seems like you can do a lot more computation when there's... Of course. But the other thing is that the bandwidth that our brain, our own mind, is running on is actually quite slow. So the number of thoughts that I can productively think in any given day is quite limited. If I had the discipline to write them down, and the speed to write them down, maybe it would be a book every day or so.
But if you think about the computers that we can build, the magnitudes at which they operate, this would be nothing. It's something that they can put out in a second. Well, I don't know. So it's possible the number of thoughts you have in your brain is... It could be several orders of magnitude higher than what you're possibly able to express through your fingers or through your voice. Most of them are going to be repetitive, because they... How do you know that? Because they have to solve the same problems every day. When I walk, there are going to be processes in my brain that model my walking pattern and regulate it and so on. But it's going to be pretty much the same every day. But that could be... Every step. But I'm talking about intellectual reasoning, thinking. So take the question, what is the best system of government? You sit down and start thinking about that. One of the constraints is that you don't have access to a lot of facts, a lot of studies. You always have to interface with something else to learn more, to aid in your reasoning process. If you can directly access all of Wikipedia in trying to understand what is the best form of government, then every thought won't be stuck in a loop. Every thought that requires some extra piece of information will be able to grab it really quickly. That's the possibility: if the bottleneck is literally the information, if the bottleneck of breakthrough ideas is just being able to quickly access huge amounts of information, then the possibility of connecting your brain to the computer could lead to totally new breakthroughs. You can think of mathematicians being able to just up the orders of magnitude of power in their reasoning about mathematical proofs. What if humanity has already discovered the optimal form of government through an evolutionary process? There is an evolution going on. So what we discover is that maybe the problem of government doesn't have stable solutions for us as a species, because we are not designed in such a way that we can make everybody conform to them. But there could be solutions that work under given circumstances, or that are the best for a certain environment, and it depends on, for instance, the primary forms of ownership and the means of production. So if the main means of production is land, then the forms of government will be regulated by the landowners and you get a monarchy. If you also want to have a form of government in which you depend on some form of slavery, for instance, where the peasants have to work very long hours for very little gain so that a few people can have plumbing, then maybe you need to promise them that they get paid the overtime in the afterlife. So you need a theocracy. And so for much of human history in the West, we had a combination of monarchy and theocracy that was our form of governance. At the same time, the Catholic Church implemented game theoretic principles. I recently reread Thomas Aquinas. It's very interesting to see this, because he was not a dualist. He was translating Aristotle in a particular way, designing an operating system for the Catholic society. And he says that basically people are animals in very much the same way as Aristotle envisions, which is basically organisms with cybernetic control. And then he says that there are additional rational principles that humans can discover, and everybody can discover them, so they are universal. If you are sane, you should understand them, you should submit to them, because you can rationally deduce them.
And these principles are roughly: you should be willing to self regulate correctly. You should be willing to do correct social regulation, interorganismically. You should be willing to act on your models, so you have skin in the game. And you should have goal rationality, you should be choosing the right goals to work on. So basically these four rational principles: goal rationality he calls prudence or wisdom; the correct social regulation is justice; the internal regulation is temperance; and the willingness to act on your models is courage. And then he says that, in addition to these four cardinal virtues, there are three divine virtues. And these three divine virtues cannot be rationally deduced, but they reveal themselves by the harmony, which means if you assume them and you extrapolate what's going to happen, you will see that they make sense. And it's often been misunderstood as: God has to tell you that these are the things, so basically there's something nefarious going on, a Christian conspiracy that forces you to believe that some guy with a long beard discovered this. But these principles are relatively simple. Again, they are for the high level organization, for the resulting civilization that you form. A commitment to unity: so basically you serve this higher, larger thing, this structural principle on the next level. And he calls that faith. Then there needs to be a commitment to shared purpose. This is basically this global reward, where you try to figure out what it should be and how you can facilitate it. And this is love. The commitment to shared purpose is the core of love, right? You see the sacred thing that is more important than your own organismic interests in the other, and you serve this together. And this is how you see the sacred in the other. And the last one is hope, which means you need to be willing to act on that principle without getting rewards in the here and now, because it doesn't exist yet when you start out building the civilization, right? So you need to be able to do this in the absence of its actual existence, so it can come into being. So the way it comes into being is by you accepting those notions, and then you see these three divine concepts and you see them realized. Divine is a loaded concept in our world, because we are outside of this cult and we are still scarred from breaking free of it. But the idea is basically we need to have a civilization that acts as an intentional agent, like an insect state. And we are not actually a tribal species, we are a state building species. And what enables state building is basically the formation of religious states and other forms of rule based administration in which the individual doesn't matter as much as the rule or the higher goal. We got here from the question, what's the optimal form of governance? So I don't think that Catholicism is the optimal form of governance, because it's obviously on the way out, right? For the present type of society that we are in, religious institutions don't seem to be optimal to organize it. So what we discovered, what we live in right now in the West, is democracy. And democracy is the rule of oligarchs, that is, the people that currently own the means of production, administered not by the oligarchs themselves, because there's too much disruption. We have so much innovation that in every generation we invent new means of production.
And corporations die usually after 30 years or so, and something else takes the leading role in our societies. So it's administered by institutions, and these institutions themselves are not elected, but they provide continuity, and they are led by electable politicians. And this makes it possible that you can adapt to change without having to kill people, right? So you can, for instance, have a change in government: if people think that the current government is too corrupt or is not up to date, you can just elect new people. Or if a journalist finds out something inconvenient about the institution, and the institution has no plan B, like in Russia, the journalist has to die. This is when you run society by the deep state. So ideally you have an administration layer that you can change if something bad happens, right? So you will have continuity in the whole thing. And this is the system that we came up with in the West. And the way it's set up in the US is largely a result of low level models. So it's mostly just second, third order consequences that people are modeling in the design of these institutions. So it's a relatively young society that doesn't really take care of the downstream effects of many of the decisions that are being made. And I suspect that AI can help us here in a way, if you can fix the incentives. The society of the US is a society of cheaters. It's basically that cheating is often indistinguishable from innovation, and we want to encourage innovation. Can you elaborate on what you mean by cheating? It's basically that people do things that they know are wrong. It's acceptable to do things that you know are wrong in this society, to a certain degree. You can, for instance, suggest some non sustainable business models and implement them. Right. But you're always pushing the boundaries. I mean, yes, this is largely seen as a good thing. Yes. And this is different from other societies. So for instance, social mobility is an aspect of this. Social mobility is the result of individual innovation that would not be sustainable at scale for everybody else. Right. Normally you should not go up, you should go deep, right? We need bakers, and it's fine if we are very, very good bakers, but in a society that innovates, maybe you can replace all the bakers with a really good machine. Right. And that's not a bad thing. And it's a thing that made the US so successful, right? But it also means that the US is not optimizing for sustainability, but for innovation. And so, as the evolutionary process is unrolling, it's not obvious that that will be better in the long term. It has side effects. So basically, if you cheat, you will have a certain layer of toxic sludge that covers everything, which is the result of cheating. And we have to unroll this evolutionary process to figure out if these side effects are so damaging that the system is horrible, or if the benefits actually outweigh the negative effects. How did we get to which system of government is best? I'm trying to trace back the last, like, five minutes. I suspect that we can find a way back to AI by thinking about the way in which our brain has to organize itself. In some sense, our brain is a society of neurons, and our mind is a society of behaviors. And they need to be organizing themselves into a structure that implements regulation, and government is social regulation. We often see government as the manifestation of power or local interests, but it's actually a platform for negotiating the conditions of human survival.
And this platform emerges over the current needs and possibilities and the trajectory that we have. So given the present state, there are only so many options for how we can move into the next state without completely disrupting everything. And we mostly agree that it's a bad idea to disrupt everything, because it will endanger our food supply for a while, and the entire infrastructure and fabric of society. So we do try to find natural transitions, and there are not that many natural transitions available at any given point. What do you mean by natural transitions? We try not to have revolutions if we can help it. Right. So speaking of revolutions and the connection between government systems and the mind, you've also said that in some sense, becoming an adult means you take charge of your emotions. Maybe you never said that. Maybe I just made that up. But in the context of the mind, what's the role of emotion? And what is it? First of all, what is emotion? What's its role? It's several things. So psychologists often distinguish between emotion and feeling, and in everyday parlance we don't. I think that emotion is a configuration of the cognitive system. And that's especially true for the lowest level, for the affective state. So when you have an affect, it's the configuration of certain modulation parameters like arousal, valence, your attentional focus, whether it's wide or narrow, interoception or exteroception, and so on. And all these parameters together put you in a certain way of relating to the environment and to yourself, and this is in some sense an emotional configuration. In the more narrow sense, an emotion is an affective state. It has an object, and the relevance of that object is given by motivation. And motivation is a bunch of needs that are associated with rewards, things that give you pleasure and pain. And you don't actually act on your needs, you act on models of your needs, because when the pleasure and pain manifest, it's too late, you've already done everything. So you act on expectations of what will give you pleasure and pain. And these are your purposes. The needs don't form a hierarchy, they just coexist and compete, and your brain has to find a dynamic homeostasis between them. But the purposes need to be consistent, so you can basically create a story for your life and make plans. And so we organize them all into hierarchies. And there is not a unique solution for this. Some people eat to make art and other people make art to eat. They might end up doing the same things, but they cooperate in very different ways, because their ultimate goals are different. And we cooperate based on shared purpose. Everything else that is not cooperation on shared purpose is transactional. I don't think I understood that last piece of achieving the homeostasis. Are you distinguishing between the experience of emotion and the expression of emotion? Of course. So the experience of emotion is a feeling. And in this sense, what you feel is an appraisal that your perceptual system has made of the situation at hand. And it makes this based on your motivation and on your estimates, not your conscious estimates, but those of the subconscious, geometric parts of your mind that assess the situation in the world with something like a neural network. And this neural network is making itself known to the symbolic parts of your mind, to your conscious attention, by mapping these assessments as features into a space. So what you feel about your emotion is a projection, usually into your body map.
So you might feel anxiety in your solar plexus, and you might feel it as a contraction, which is all geometry. Your body map is the space that is always instantiated and always available. So it's a very obvious cheat if the non-symbolic parts of your brain, trying to talk to the symbolic parts of your brain, map the feelings into the body map. And then you perceive them as pleasant and unpleasant, depending on whether the appraisal has a negative or positive valence. And then you have different features of them that give you more knowledge about the nature of what you're feeling. So for instance, when you feel connected to other people, you typically feel this in your chest region, around the heart, and you feel this as an expansive feeling in which you're reaching out, right? And it's very intuitive to encode it like this; that's why it's encoded like this. It's a code in which the non-symbolic parts of your mind talk to the symbolic ones. And then the expression of emotion is the final step, which could be gestural or visual and so on. That's part of the communication. This probably evolved as part of an adversarial communication: as soon as you started to observe the facial expression and posture of others to understand what emotional state they're in, others started to use this as signaling, and also to subvert your model of their emotional state. So we now look at the inflections, at the difference from the standard face that somebody would make in this situation. When you are at a funeral, everybody expects you to make a solemn face, but the solemn face doesn't express whether you're sad or not. It just expresses that you understand what face you have to make at a funeral; nobody should know that you are triumphant. So when you try to read the emotion of another person, you try to look at the delta between a truly sad expression and the things that are animating this face behind the curtain. So the interesting thing is, having done this podcast and the video component, one of the things I've learned is that, well, I'm Russian and I just don't know how to express emotion on my face. One, I see that as weakness, but whatever. People look to me after you say something, they look at my face to help them see how they should feel about what you said, which is fascinating, because then they'll often comment on why did you look bored, or why did you particularly enjoy that part, or why did you whatever. It's kind of interesting. It makes me cognizant that, like, you're basically saying a bunch of brilliant things, but I'm part of the play that you're the key actor in, by making my facial expressions and thereby telling the narrative of what the big point is, which is fascinating. It makes me cognizant that I'm supposed to be making facial expressions. Even this conversation is hard, because my preference would be to wear a mask with sunglasses where I could just listen. Yes, I understand this, because it's intrusive to interact with others this way. And basically Eastern European societies have a taboo against that, especially Russia, the further you go to the east; and in the US it's the opposite. You're expected to be hyperanimated in your face, and you're also expected to show positive affect. Yes. And if you show positive affect without a good reason in Russia, people will think you are a stupid, unsophisticated person. Exactly. And here, positive affect without reason either goes appreciated or goes unnoticed. No, it's the default.
It's being expected. Everything is amazing. Have you seen this? The Lego Movie? No, there was a diagram where somebody compared the appraisals that exist in the US and in Russia, so you have your bell curve. In the US, the lower 10%, it's a good start, and everything above the lowest 10%, it's amazing, it's amazing. And for Russians, everything below the top 10%, it's terrible, and then everything except the top percent is, I don't like it, and the top percent is, eh, so-so. It's funny, but it's kind of true. There's a deeper aspect to this. It's also how we construct meaning in the US. Usually you focus on the positive aspects and you just suppress the negative aspects. And in our Eastern European traditions, we emphasize the fact that if you hold something above the waterline, you also need to put something below the waterline, because existence by itself is at best neutral. Right. That's the basic intuition, at best neutral. Or it could be just suffering; the default is suffering. There are moments of beauty, but these moments of beauty are inextricably linked to the reality of suffering. And to not acknowledge the reality of suffering means that you are really stupid and unaware of the fact that basically every conscious being spends most of its time suffering. Yeah. You just summarized the ethos of Eastern Europe. Yeah. Most of life is suffering, with occasional moments of beauty. And if your facial expressions don't acknowledge the abundance of suffering in the world and in existence itself, then you must be an idiot. It's an interesting thing when you raise children in the US and you, in some sense, preserve the identity of the intellectual and cultural traditions that are embedded in your own family. And your daughter asks you about Ariel the mermaid and asks you, why is Ariel not allowed to play with the humans? And you tell her the truth: she's a siren, sirens eat people, you don't play with your food, it does not end well. And then you tell her the original story, which is not the one by Andersen, which is the romantic one. There's a much darker one, which is Undine's story. What happened? So Undine is a mermaid, or a water woman. She lives at the bottom of a river, and she meets this prince and they fall in love. And the prince really, really wants to be with her. And she says, okay, but the deal is you cannot have any other woman. If you marry somebody else, even though you cannot be with me, because obviously you cannot breathe underwater and have other things to do than managing your kingdom down here, you will die. And eventually, after a few years, he falls in love with some princess and marries her. And she shows up and quietly goes into his chamber, and nobody is able to stop her or willing to do so, because she is fierce. And she comes quietly and sad out of his chamber, and they ask her, what has happened, what did you do? And she says, I kissed him to death. All done. And you know the Andersen story, right? In the Andersen story, the mermaid is playing with this prince that she saves, and she falls in love with him, and she cannot live out there. So she gives up her voice and her tail for a human-like appearance, so she can walk among the humans. But this guy does not recognize that she is the one he should marry. Instead, he marries somebody who has a kingdom and economic and political relationships to his own kingdom and so on, as he should. And she dies. Yeah. Instead, the Disney Little Mermaid story has a little bit of a happy ending.
That's the Western, that's the American way. My own problem with this, of course, is that I read Oscar Wilde before I read the other things, so I'm indoctrinated, inoculated with this romanticism. And I think that the mermaid is right: you sacrifice your life for romantic love, that's what you do. Because if you are confronted with the choice between serving the machine, doing the obviously right thing under the economic and social and other human incentives, and following your heart, serving the machine is wrong. You should follow your heart. So do you think suffering is fundamental to happiness, along these lines? Suffering is the result of caring about things that you cannot change. And if you are able to change what you care about, to those things that you can change, you will not suffer. But would you then be able to experience happiness? Yes. But happiness itself is not important. Happiness is like a cookie. When you are a child, you think cookies are very important and you want to have all the cookies in the world; you look forward to being an adult, because then you can have as many cookies as you want. But as an adult, you realize a cookie is a tool. It's a tool to make you eat your vegetables. And once you eat your vegetables anyway, you stop eating cookies for the most part, because otherwise you will get diabetes and will not be around for your kids. Yes, but then the cookie, the scarcity of the cookie, if scarcity is enforced, nevertheless, the pleasure comes from the scarcity. Yes, but the happiness is a cookie that your brain bakes for itself. It's not made by the environment. The environment cannot make you happy. It's your appraisal of the environment that makes you happy. And if you can change the appraisal of the environment, which you can learn to do, then you can create arbitrary states of happiness. And some meditators fall into this trap. They discover the room, this basement room in their brain where the cookies are made, and they indulge and stuff themselves. And after a few months, it gets really old, and the big crisis of meaning comes, because they thought before that their unhappiness was the result of not being happy enough. So they fixed this, right? They can release the neurotransmitters at will if they train. And then the crisis of meaning pops up in a deeper layer. And the question is, why do I live? How can I make a sustainable civilization that is meaningful to me? How can I insert myself into this? And this was the problem that they couldn't solve in the first place. But at the end of all this, let me then ask that same question. What is the answer to that? What could a possible answer to the meaning of life be? What is it to you? I think that if you look at the meaning of life, you look at what the cell is. Life is the cell, or this principle, the cell. It's this self-organizing thing that can participate in evolution. In order to make it work, it's a molecular machine; it needs a self-replicator, an entropy extractor, and a Turing machine. If any of these parts is missing, you don't have a cell and it is not living. And life is basically the emergent complexity over that principle. Once you have this intelligent super-molecule, the cell, there is very little that you cannot make it do. It's probably the optimal computronium, especially in terms of resilience. It's very hard to sterilize a planet once it's infected with life. So this active function of these three components, this super-cell, the cell, is present in the cell, it's present in us, and it's just...
We are just an expression of the cell. It's a certain layer of complexity in the organization of cells. So in a way, it's tempting to think of the cell as a von Neumann probe. If you want to build intelligence on other planets, the best way to do this is to infect them with cells, and wait for long enough, and there's a reasonable chance the stuff is going to evolve into an information processing principle that is general enough to become sentient. That idea is very akin to the same dream and the beautiful ideas that are expressed in cellular automata in their most simple mathematical form: if you just inject the system with some basic mechanisms of replication and so on, basic rules, amazing things will emerge. The cell is able to do something that James Trardy calls existential design. He points out that in technical design, we go from the outside in. We work in a highly controlled environment in which everything is deterministic, like our computers, our labs, or our engineering workshops. And then we use this determinism to implement a particular kind of function that we dream up, and that seamlessly interfaces with all the other deterministic functions that we already have in our world. So it's basically from the outside in. Biological systems design from the inside out: a seed will become a seedling by taking some of the relatively unorganized matter around it and turning it into its own structure, and thereby subdues the environment. Cells can cooperate if they can rely on other cells having a similar organization that is already compatible. But unless that's there, the cell needs to divide to create that structure by itself. So it's a self-organizing principle that works on a somewhat chaotic environment. And the purpose of life, in this sense, is to produce complexity. And the complexity allows you to harvest entropy gradients that you couldn't harvest without the complexity. And in this sense, intelligence and life are very strongly connected, because the purpose of intelligence is to allow control under conditions of complexity. So basically, you shift the boundary between the ordered systems into the realm of chaos. You build bridgeheads into chaos with complexity. And this is what we are doing. This is not necessarily a deeper meaning. I think the meaning we have is what we have priors for; outside of the priors, there is no meaning. Meaning only exists if a mind projects it. And that is probably civilization. I think that what feels most meaningful to me is to try to build and maintain a sustainable civilization. And taking a slight step outside of that, we talked about a man with a beard and God, but something, some mechanism, perhaps, must have planted the seed, the initial seed of the cell. Do you think there is a God? What is a God? And what would that look like? Well, if there was no spontaneous biogenesis, in the sense that the first cell formed by some happy random accident where the molecules just happened to be in the right constellation to each other... But there could also be a mechanism that allows for the random. I mean, it's like turtles all the way down; there seems to be, there has to be a head turtle at the bottom. Let's consider something really wild. Imagine, is it possible that a gas giant could become intelligent? What would that involve? So imagine you have vortices that spontaneously emerge on the gas giants, like big storm systems that endure for thousands of years.
And some of these storm systems produce electromagnetic fields, because some of the clouds are ferromagnetic or something. And as a result, they can change how certain clouds react rather than other clouds, and thereby produce some self-stabilizing patterns that eventually lead to regulation, feedback loops, nested feedback loops and control. So imagine you have such a thing that basically has emergent, self-sustaining, self-organizing complexity. And at some point, this thing wakes up and realizes, basically like Lem's Solaris: I am a thinking planet, but I will not replicate, because I cannot recreate the conditions of my own existence somewhere else. I'm just basically an intelligence that has spontaneously formed because it could. And now it builds a von Neumann probe, and the best von Neumann probe for such a thing might be the cell. So maybe it, because it's very, very clever and very enduring, creates cells and sends them out. And one of them has infected our planet. And I'm not suggesting that this is the case, but it would be compatible with the panspermia hypothesis. And it would fit my intuition that our biogenesis is very unlikely. It's possible, but you probably need to roll the cosmic dice very often, maybe more often than there are planetary surfaces. I don't know. So God is just a system that's large enough that it allows randomness. No, I don't think that God has anything to do with creation. I think it's a mistranslation of the Talmud into the Catholic mythology. I think that Genesis is actually the childhood memories of a God. So the, when... Sorry, Genesis is the...? The childhood memories of a God. It's basically a mind that is remembering how it came into being. Wow. And we typically interpret Genesis as the creation of a physical universe by a supernatural being. Yes. And I think when you read it, there is light and darkness that is being created. And then you discover sky and ground and create them. You construct the plants and the animals, and you give everything their names and so on. That's basically cognitive development. It's a sequence of steps that every mind has to go through when it makes sense of the world. And when you have children, you can see how initially they distinguish light and darkness, and then they make out directions in it, and they discover sky and ground, and they discover the plants and the animals, and they give everything their name. And it's a creative process that happens in every mind, because it's not given. Your mind has to invent these structures to make sense of the patterns on your retina. Also, if there was some big nerd who set up a server and runs this world on it, this would not create a special relationship between us and the nerd. This nerd would not have the magical power to give meaning to our existence. So this equation of a creator god with the god of meaning is a sleight of hand. You shouldn't do it. The other one that is done in Catholicism is the equation with the first mover, the prime mover of Aristotle, which is basically the automaton that runs the universe. Aristotle says, if things are moving, and things seem to be moving here, something must move them. If something moves them, something must move the thing that is moving it. So there must be a prime mover. The idea to say that this prime mover is a supernatural being is complete nonsense. It's an automaton in the simplest case. So we have to explain the enormity that this automaton exists at all.
But again, we don't have any possibility to infer anything about its properties, except that it's able to produce change in information. So there needs to be some kind of computational principle. This is all there is. But to say that this automaton is identical with the creator, or the first cause, or with the thing that gives meaning to our life, is a confusion. No, I think that what we perceive is the higher being that we are part of. And the higher being that we are part of is the civilization. It's the thing in which we have a similar relationship as the cell has to our body. And we have this prior because we have evolved to organize in these structures. So basically, the Christian God in its natural form, without the mythology, if you undress it, is basically the Platonic form of the civilization. Is the ideal? Yes, it's this ideal that you try to approximate when you interact with others, not based on your incentives, but on what you think is right. Wow, we covered a lot of ground. And we're left with one of my favorite lines, and there are many, which is: happiness is a cookie that the brain bakes itself. It's been a huge honor and a pleasure to talk to you. I'm sure our paths will cross many times again. Joscha, thank you so much for talking today. I really appreciate it. Thank you, Lex. It was so much fun. I enjoyed it. Awesome. Thanks for listening to this conversation with Joscha Bach. And thank you to our sponsors, ExpressVPN and Cash App. Please consider supporting this podcast by getting ExpressVPN at expressvpn.com slash lexpod and downloading Cash App and using code lexpodcast. If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at lexfridman. And yes, try to figure out how to spell it without the E. And now let me leave you with some words of wisdom from Joscha Bach. If you take this as a computer game metaphor, this is the best level for humanity to play. And this best level happens to be the last level, as it happens against the backdrop of a dying world. But it's still the best level. Thank you for listening, and hope to see you next time.